Commit 052850c
Parent(s): 6f8e324
Update parquet files (step 94 of 249)
This view is limited to 50 files because it contains too many changes. See raw diff.
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/!!INSTALL!! Crack Winrar.md +0 -23
- spaces/1gistliPinn/ChatGPT4/Examples/Foto Bugil Artis Majalah Popular Indonesia Mega.md +0 -6
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/8th Class Urdu Hamdard Guide PDF - Updated Notes for 2023.md +0 -106
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Betty Muwanguzi Hosanna Nkwagala Nyo The Song That Touched Many Hearts.md +0 -129
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Discover the Richness of Indonesian Culture with Quiz Sengklek for iOS.md +0 -84
- spaces/1phancelerku/anime-remove-background/Cars (v1.02) for PSP The Best Racing Game for PlayStation Portable.md +0 -144
- spaces/55dgxxx558/anime-remove-background/README.md +0 -14
- spaces/AIConsultant/MusicGen/audiocraft/metrics/clap_consistency.py +0 -84
- spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/diffusionmodules/openaimodel.py +0 -963
- spaces/AchyuthGamer/ImMagician-Gradio/app.py +0 -3
- spaces/AchyuthGamer/OpenGPT-Chat/README.md +0 -12
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fullwindowrectangle/Factory.d.ts +0 -7
- spaces/AlexWang/lama/bin/report_from_tb.py +0 -83
- spaces/Aloento/9Nine-VITS/posterior_encoder.py +0 -37
- spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/configs/ms1mv3_r50.py +0 -26
- spaces/Alycer/VITS-Umamusume-voice-synthesizer/commons.py +0 -97
- spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/latent_codes_pool.py +0 -58
- spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/loop/feature_training_loop.py +0 -145
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/mixture_canvas.py +0 -503
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/unclip/test_unclip_image_variation.py +0 -522
- spaces/Andy1621/uniformer_image_detection/configs/_base_/models/retinanet_r50_fpn.py +0 -60
- spaces/Andy1621/uniformer_image_detection/configs/foveabox/fovea_r50_fpn_4x4_1x_coco.py +0 -52
- spaces/Andy1621/uniformer_image_detection/configs/gn+ws/faster_rcnn_x101_32x4d_fpn_gn_ws-all_1x_coco.py +0 -16
- spaces/Andy1621/uniformer_image_detection/configs/instaboost/cascade_mask_rcnn_r50_fpn_instaboost_4x_coco.py +0 -28
- spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x512_160k_ade20k.py +0 -7
- spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/roi_align_rotated.py +0 -177
- spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/backbone/swin_transformer.py +0 -802
- spaces/AtomdffAI/wechatgpt4atom/app.py +0 -45
- spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/__init__.py +0 -24
- spaces/BAAI/SegGPT/README.md +0 -13
- spaces/BHD/google-pix2struct-screen2words-base/README.md +0 -12
- spaces/Benson/text-generation/Examples/Descarga Apk De La Brjula De La Saga Del Verano.md +0 -47
- spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/utils/transform.py +0 -16
- spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TensorMask/setup.py +0 -72
- spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/datasets/vqa/eval/vqaEval.py +0 -226
- spaces/CatNika/New_Cat_Proxy/Dockerfile +0 -11
- spaces/CofAI/Kemal-Diffusion/kemal.py +0 -3
- spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/__init__.py +0 -8
- spaces/DESUCLUB/BLLAMA/finetune.py +0 -207
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/r-3ca97919.js +0 -2
- spaces/Daffa/image-classification/README.md +0 -13
- spaces/DaleChen/AutoGPT/autogpt/config/singleton.py +0 -24
- spaces/DataForGood/bechdelai-demo/app.py +0 -155
- spaces/DonDoesStuff/Free-GPT3.5/README.md +0 -12
- spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/criteria/clip_loss.py +0 -17
- spaces/DragGan/DragGan/torch_utils/ops/upfirdn2d.h +0 -59
- spaces/Dragneel/Recon/README.md +0 -13
- spaces/ECCV2022/bytetrack/tutorials/qdtrack/tracker_reid_motion.py +0 -397
- spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/pixel_decoder/ops/src/vision.cpp +0 -21
- spaces/EuroPython2022/illustrated-lyrics-generator/layers.py +0 -273
spaces/1acneusushi/gradio-2dmoleculeeditor/data/!!INSTALL!! Crack Winrar.md
DELETED
@@ -1,23 +0,0 @@
spaces/1gistliPinn/ChatGPT4/Examples/Foto Bugil Artis Majalah Popular Indonesia Mega.md
DELETED
@@ -1,6 +0,0 @@
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/8th Class Urdu Hamdard Guide PDF - Updated Notes for 2023.md
DELETED
@@ -1,106 +0,0 @@
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Betty Muwanguzi Hosanna Nkwagala Nyo The Song That Touched Many Hearts.md
DELETED
@@ -1,129 +0,0 @@
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Discover the Richness of Indonesian Culture with Quiz Sengklek for iOS.md
DELETED
@@ -1,84 +0,0 @@
spaces/1phancelerku/anime-remove-background/Cars (v1.02) for PSP The Best Racing Game for PlayStation Portable.md
DELETED
@@ -1,144 +0,0 @@
|
|
1 |
-
<br />
|
2 |
-
<h1>Cars PSP APK: How to Play Cars on Your Android Device</h1>
|
3 |
-
<p>Cars is a popular racing video game based on the Pixar animated film of the same name. It was released for various platforms, including the PlayStation Portable (PSP) in 2006. But what if you want to play Cars on your Android device? Is there a way to do that? The answer is yes, thanks to an emulator called PPSSPP.</p>
|
4 |
-
<h2>cars psp apk</h2><br /><p><b><b>DOWNLOAD</b> • <a href="https://jinyurl.com/2uNLLS">https://jinyurl.com/2uNLLS</a></b></p><br /><br />
|
5 |
-
<p>In this article, we will show you how to download and install PPSSPP for Android, how to get the Cars PSP ISO file, how to configure the emulator settings, and how to enjoy the game on your smartphone or tablet. We will also answer some frequently asked questions about Cars PSP APK. Let's get started!</p>
|
6 |
-
<h2>What is PPSSPP?</h2>
|
7 |
-
<p>PPSSPP is a PSP emulator that allows you to run most of the games made for Sony's first portable console on your Android device. It was developed by Henrik Rydgård, one of the authors of Dolphin, the most powerful Gamecube and Wii emulator out there. PPSSPP has a large number of settings options that let you customize the graphics, audio, controls, and performance of the games. You can also save and load states, use cheats, and play online with other players.</p>
|
8 |
-
<h2>What is Cars PSP APK?</h2>
|
9 |
-
<p>Cars PSP APK is not an official app or game from Disney or Sony. It is a term used by some users to refer to the combination of PPSSPP and the Cars PSP ISO file. An ISO file is an image of a disc that contains all the data of a game. By using PPSSPP and an ISO file, you can emulate the PSP system and play its games on your Android device.</p>
|
10 |
-
<h2>How to Download and Install PPSSPP for Android?</h2>
|
11 |
-
<p>The first step to play Cars on your Android device is to download and install PPSSPP. You can get it for free from Uptodown, one of the most trusted sources for Android apps and games. Here are the steps to follow:</p>
|
12 |
-
<ol>
|
13 |
-
<li>Go to [PPSSPP for Android - Download the APK from Uptodown](^1^) using your browser.</li>
|
14 |
-
<li>Tap on the green Download button and wait for the APK file to be downloaded.</li>
|
15 |
-
<li>Once the download is complete, tap on the notification or go to your Downloads folder and tap on the PPSSPP APK file.</li>
|
16 |
-
<li>If prompted, enable the installation from unknown sources by going to Settings > Security > Unknown sources and toggling it on.</li>
|
17 |
-
<li>Follow the on-screen instructions to install PPSSPP on your device.</li>
|
18 |
-
</ol>
|
19 |
-
<h2>How to Get the Cars PSP ISO File?</h2>
|
20 |
-
<p>The next step is to get the Cars PSP ISO file that contains the game data. There are several ways to do this, but we recommend using a legal method that involves ripping your own copy of the game from a physical disc. This way, you can avoid any potential legal issues or malware risks. Here are the steps to follow:</p>
|
21 |
-
<p>cars psp game download apk<br />
|
22 |
-
cars psp iso android apk<br />
|
23 |
-
cars psp emulator apk<br />
|
24 |
-
cars psp rom apk<br />
|
25 |
-
cars psp apk free download<br />
|
26 |
-
cars psp apk offline<br />
|
27 |
-
cars psp apk mod<br />
|
28 |
-
cars psp apk full version<br />
|
29 |
-
cars psp apk highly compressed<br />
|
30 |
-
cars psp apk no verification<br />
|
31 |
-
cars 2 psp apk<br />
|
32 |
-
cars race o rama psp apk<br />
|
33 |
-
cars mater national psp apk<br />
|
34 |
-
cars 3 psp apk<br />
|
35 |
-
cars toon mater's tall tales psp apk<br />
|
36 |
-
ppsspp cars psp apk<br />
|
37 |
-
ppsspp gold cars psp apk<br />
|
38 |
-
ppsspp games cars psp apk<br />
|
39 |
-
ppsspp emulator cars psp apk<br />
|
40 |
-
ppsspp roms cars psp apk<br />
|
41 |
-
download cars psp apk for android<br />
|
42 |
-
download cars psp apk for pc<br />
|
43 |
-
download cars psp apk for ios<br />
|
44 |
-
download cars psp apk for windows 10<br />
|
45 |
-
download cars psp apk for laptop<br />
|
46 |
-
how to play cars psp apk<br />
|
47 |
-
how to install cars psp apk<br />
|
48 |
-
how to download cars psp apk<br />
|
49 |
-
how to run cars psp apk<br />
|
50 |
-
how to get cars psp apk<br />
|
51 |
-
best settings for cars psp apk<br />
|
52 |
-
best graphics for cars psp apk<br />
|
53 |
-
best site for cars psp apk<br />
|
54 |
-
best version of cars psp apk<br />
|
55 |
-
best emulator for cars psp apk<br />
|
56 |
-
cheats for cars psp apk<br />
|
57 |
-
codes for cars psp apk<br />
|
58 |
-
tips for cars psp apk<br />
|
59 |
-
tricks for cars psp apk<br />
|
60 |
-
hacks for cars psp apk<br />
|
61 |
-
reviews of cars psp apk<br />
|
62 |
-
ratings of cars psp apk<br />
|
63 |
-
features of cars psp apk<br />
|
64 |
-
requirements of cars psp apk<br />
|
65 |
-
size of cars psp apk<br />
|
66 |
-
update of cars psp apk<br />
|
67 |
-
latest version of cars psp apk<br />
|
68 |
-
old version of cars psp apk<br />
|
69 |
-
new version of cars psp apk</p>
|
70 |
-
<ol>
|
71 |
-
<li>If you don't have one already, get a PSP console and a copy of Cars for PSP.</li>
|
72 |
-
<li>Connect your PSP to your computer using a USB cable.</li>
|
73 |
-
<li>On your PSP, go to Settings > USB Connection and press X to enter USB mode.</li>
|
74 |
-
<li>On your computer, open your file explorer and go to the PSP drive.</li>
|
75 |
-
<li>Find the folder named ISO and open it. If it doesn't exist, create it.</li>
|
76 |
-
<li>Insert the Cars disc into your PSP and wait for it to be recognized.</li>
|
77 |
-
<li>Right-click on the disc icon and select Copy.</li>
|
78 |
-
<li>Paste it into the ISO folder on your PSP drive.</li>
|
79 |
-
<li>Wait for the copying process to finish.</li>
|
80 |
-
<li>Eject your PSP from your computer and exit USB mode.</li>
|
81 |
-
</ol>
-<h2>How to Configure PPSSPP Settings?</h2>
-<p>The last step before playing Cars on your Android device is to configure PPSSPP settings according to your preferences and device capabilities. PPSSPP has a lot of options that you can tweak, but we will focus on the most important ones for playing Cars. Here are the steps to follow:</p>
-<ol>
-<li>Open PPSSPP on your device and tap on the Settings icon.</li>
-<li>Go to Graphics and adjust the following options: <ul>
-<li>Rendering resolution: This determines the quality of the graphics. Higher resolutions will look sharper, but may also cause lag or crashes. We recommend using 2x PSP or 3x PSP for most devices.</li>
-<li>Display resolution (HW scaler): This determines the size of the game screen. Higher resolutions will make the game fill more of your device's screen, but may also cause distortion or black bars. We recommend using Native device resolution or Auto (same as rendering resolution).</li>
-<li>Frame skipping: This determines how many frames are skipped to improve performance. Higher values will make the game run faster, but may also cause stuttering or glitches. We recommend using Off or 1 for most games.</li>
-<li>Texture filtering: This determines how smooth the textures look. Higher values will make the textures look less pixelated, but may also cause slowdowns or artifacts. We recommend using Auto or Linear.</li>
-</ul>
-</li>
-<li>Go to Audio and adjust the following options: <ul>
-<li>Enable sound: This determines whether the sound is enabled or not. We recommend using On for a better gaming experience.</li>
-<li>Audio latency: This determines how much delay there is between the sound and the action. Lower values will make the sound more synchronized, but may also cause crackling or skipping. We recommend using Low or Medium for most devices.</li>
-</ul>
-</li>
-<li>Go to Controls and adjust the following options: <ul>
-<li>On-screen touch controls: This determines whether the virtual buttons are shown or not. We recommend using On for easier gameplay.</li>
-<li>Control mapping: This allows you to customize the layout and size of the virtual buttons. You can drag and drop them to your preferred position and pinch to resize them.</li>
-<li>External controller: This allows you to use a physical controller instead of the virtual buttons. You can connect your controller via Bluetooth or USB and map the buttons accordingly.</li>
-</ul>
-</li>
-</ol>
-<h2>How to Play Cars on Your Android Device?</h2>
-<p>Now that you have PPSSPP installed and configured, and you have the Cars PSP ISO file on your device, you are ready to play Cars on your Android device. Here are the steps to follow:</p>
-<ol>
-<li>Open PPSSPP on your device and tap on the Games icon.</li>
-<li>Navigate to the folder where you stored the Cars PSP ISO file and tap on it.</li>
-<li>The game will start loading and you will see the PPSSPP logo followed by the Sony logo and then the Disney logo.</li>
-<li>You will then see the main menu of Cars, where you can choose from different modes such as Story Mode, Arcade Mode, Mini-Games, Options, and Extras.</li>
-<li>Select your preferred mode and enjoy playing Cars on your Android device!</li>
-</ol>
-<h2>Conclusion</h2>
-<p>Cars is a fun and exciting racing game that you can play on your Android device thanks to PPSSPP, a PSP emulator that lets you run most of the PSP games on your smartphone or tablet. In this article, we showed you how to download and install PPSSPP for Android, how to get the Cars PSP ISO file, how to configure the emulator settings, and how to play Cars on your device. We hope you found this article helpful and informative. If you have any questions or comments, feel free to leave them below.</p>
-<h2>Frequently Asked Questions</h2>
-<h3>Is PPSSPP legal?</h3>
-<p>PPSSPP is legal as long as you use it with your own legally obtained PSP games. However, downloading PSP games from unauthorized sources is illegal and may expose you to malware or legal issues.</p>
-<h3>Is PPSSPP safe?</h3>
-<p>PPSSPP is safe as long as you download it from a trusted source such as Uptodown. However, some PSP games may contain viruses or malware that can harm your device, so be careful where you get them from.</p>
-<h3>What are some other PSP games that I can play with PPSSPP?</h3>
-<p>There are hundreds of PSP games that you can play with PPSSPP, ranging from action-adventure to sports to RPGs. Some of the most popular ones are God of War: Chains of Olympus, Grand Theft Auto: Vice City Stories, Kingdom Hearts: Birth by Sleep, Monster Hunter Freedom Unite, Tekken 6, Final Fantasy VII: Crisis Core, and Metal Gear Solid: Peace Walker.</p>
-<h3>How can I improve the performance of PPSSPP?</h3>
-<p>If you experience lag, crashes, or glitches while playing PPSSPP, you can try some of the following tips to improve the performance of the emulator:</p>
-<ul>
-<li>Close any background apps that may be consuming your device's resources.</li>
-<li>Lower the rendering resolution and/or display resolution in the Graphics settings.</li>
-<li>Enable frame skipping and/or auto frameskip in the Graphics settings.</li>
-<li>Disable texture filtering and/or texture scaling in the Graphics settings.</li>
-<li>Enable multithreading and/or I/O on thread in the System settings.</li>
-<li>Disable simulated block transfer effects and/or slower effects in the System settings.</li>
-</ul>
-<h3>How can I update PPSSPP?</h3>
-<p>If you want to get the latest version of PPSSPP with new features and bug fixes, you can update it by following these steps:</p>
-<ol>
-<li>Go to [PPSSPP for Android - Download the APK from Uptodown] using your browser.</li>
-<li>Tap on the green Download button and wait for the APK file to be downloaded.</li>
-<li>Once the download is complete, tap on the notification or go to your Downloads folder and tap on the PPSSPP APK file.</li>
-<li>Follow the on-screen instructions to install the update over the existing app.</li>
-</ol>
-<p>This is the end of the article. We hope you enjoyed reading it and learned something new. Thanks for reading, and enjoy the race!</p>
-<br />
-<br />
spaces/55dgxxx558/anime-remove-background/README.md DELETED
@@ -1,14 +0,0 @@
----
-title: Anime Remove Background
-emoji: 🪄🖼️
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.1.4
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: skytnt/anime-remove-background
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AIConsultant/MusicGen/audiocraft/metrics/clap_consistency.py
DELETED
@@ -1,84 +0,0 @@
|
|
1 |
-
# Copyright (c) Meta Platforms, Inc. and affiliates.
|
2 |
-
# All rights reserved.
|
3 |
-
#
|
4 |
-
# This source code is licensed under the license found in the
|
5 |
-
# LICENSE file in the root directory of this source tree.
|
6 |
-
|
7 |
-
from pathlib import Path
|
8 |
-
import typing as tp
|
9 |
-
|
10 |
-
import torch
|
11 |
-
import torchmetrics
|
12 |
-
from transformers import RobertaTokenizer # type: ignore
|
13 |
-
|
14 |
-
from ..data.audio_utils import convert_audio
|
15 |
-
from ..environment import AudioCraftEnvironment
|
16 |
-
from ..utils.utils import load_clap_state_dict
|
17 |
-
|
18 |
-
try:
|
19 |
-
import laion_clap # type: ignore
|
20 |
-
except ImportError:
|
21 |
-
laion_clap = None
|
22 |
-
|
23 |
-
|
24 |
-
class TextConsistencyMetric(torchmetrics.Metric):
|
25 |
-
"""Text consistency metric measuring consistency between audio and text pairs."""
|
26 |
-
|
27 |
-
def update(self, audio: torch.Tensor, text: tp.List[str], sizes: torch.Tensor, sample_rates: torch.Tensor) -> None:
|
28 |
-
raise NotImplementedError("implement how to update the metric from the audio and text pairs.")
|
29 |
-
|
30 |
-
def compute(self):
|
31 |
-
raise NotImplementedError("implement how to compute the final metric score.")
|
32 |
-
|
33 |
-
|
34 |
-
class CLAPTextConsistencyMetric(TextConsistencyMetric):
|
35 |
-
"""Text consistency metric relying on Contrastive Language-Audio Pretraining (CLAP).
|
36 |
-
|
37 |
-
This metric is similar to the MuLan Cycle Consistency from MusicLM (https://arxiv.org/pdf/2301.11325.pdf)
|
38 |
-
or the CLAP score used in Make-An-Audio (https://arxiv.org/pdf/2301.12661v1.pdf).
|
39 |
-
|
40 |
-
As a joint audio-text embedding model, a pretrained CLAP model can be used to quantify the
|
41 |
-
similarity between audio-text pairs. We compute the CLAP embeddings from the text descriptions as
|
42 |
-
well as the generated audio based on them, and define the MCC metric as the average cosine similarity
|
43 |
-
between these embeddings.
|
44 |
-
|
45 |
-
Model implementation & pre-trained checkpoints: https://github.com/LAION-AI/CLAP
|
46 |
-
"""
|
47 |
-
def __init__(self, model_path: tp.Union[str, Path], model_arch: str = 'HTSAT-tiny', enable_fusion: bool = False):
|
48 |
-
super().__init__()
|
49 |
-
if laion_clap is None:
|
50 |
-
raise ImportError("Please install CLAP to compute text consistency: 'pip install laion_clap'")
|
51 |
-
self.add_state("cosine_sum", default=torch.tensor(0.), dist_reduce_fx="sum")
|
52 |
-
self.add_state("weight", default=torch.tensor(0.), dist_reduce_fx="sum")
|
53 |
-
self._initialize_model(model_path, model_arch, enable_fusion)
|
54 |
-
|
55 |
-
def _initialize_model(self, model_path: tp.Union[str, Path], model_arch: str, enable_fusion: bool):
|
56 |
-
model_path = AudioCraftEnvironment.resolve_reference_path(model_path)
|
57 |
-
self.tokenize = RobertaTokenizer.from_pretrained('roberta-base')
|
58 |
-
self.model = laion_clap.CLAP_Module(enable_fusion=enable_fusion, amodel=model_arch)
|
59 |
-
self.model_sample_rate = 48_000
|
60 |
-
load_clap_state_dict(self.model, model_path)
|
61 |
-
self.model.eval()
|
62 |
-
|
63 |
-
def _tokenizer(self, texts: tp.Union[str, tp.List[str]]) -> dict:
|
64 |
-
# we use the default params from CLAP module here as well
|
65 |
-
return self.tokenize(texts, padding="max_length", truncation=True, max_length=77, return_tensors="pt")
|
66 |
-
|
67 |
-
def update(self, audio: torch.Tensor, text: tp.List[str], sizes: torch.Tensor, sample_rates: torch.Tensor) -> None:
|
68 |
-
"""Compute cosine similarity between audio and text pairs and accumulate scores over the dataset."""
|
69 |
-
assert audio.size(0) == len(text), "Number of audio and text samples should match"
|
70 |
-
assert torch.all(sample_rates == sample_rates[0].item()), "All items in batch should have the same sample rate"
|
71 |
-
sample_rate = int(sample_rates[0].item())
|
72 |
-
# convert audio batch to 48kHz monophonic audio with no channel dimension: [B, C, T] -> [B, T]
|
73 |
-
audio = convert_audio(audio, from_rate=sample_rate, to_rate=self.model_sample_rate, to_channels=1).mean(dim=1)
|
74 |
-
audio_embeddings = self.model.get_audio_embedding_from_data(audio, use_tensor=True)
|
75 |
-
text_embeddings = self.model.get_text_embedding(text, tokenizer=self._tokenizer, use_tensor=True)
|
76 |
-
# cosine similarity between the text and the audio embedding
|
77 |
-
cosine_sim = torch.nn.functional.cosine_similarity(audio_embeddings, text_embeddings, dim=1, eps=1e-8)
|
78 |
-
self.cosine_sum += cosine_sim.sum(dim=0)
|
79 |
-
self.weight += torch.tensor(cosine_sim.size(0))
|
80 |
-
|
81 |
-
def compute(self):
|
82 |
-
"""Computes the average cosine similarty across all audio/text pairs."""
|
83 |
-
assert self.weight.item() > 0, "Unable to compute with total number of comparisons <= 0" # type: ignore
|
84 |
-
return (self.cosine_sum / self.weight).item() # type: ignore
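As a rough guide to how a torchmetrics-style metric like the CLAPTextConsistencyMetric above is driven, here is a minimal usage sketch. It assumes the audiocraft package and laion_clap are installed; the checkpoint path, prompts, and tensor shapes are hypothetical placeholders.

```python
import torch
from audiocraft.metrics.clap_consistency import CLAPTextConsistencyMetric

# Hypothetical path to a pretrained CLAP checkpoint.
metric = CLAPTextConsistencyMetric(model_path="checkpoints/clap_music.pt", model_arch="HTSAT-tiny")

# One batch of generated audio with its text prompts: waveforms are [B, C, T] at a single sample rate.
audio = torch.randn(2, 1, 32000)
text = ["a calm piano melody", "an energetic rock riff"]
sizes = torch.tensor([32000, 32000])
sample_rates = torch.tensor([32000, 32000])

metric.update(audio, text, sizes, sample_rates)  # accumulates per-pair cosine similarities
print(metric.compute())                          # average audio-text cosine similarity
```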
spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/diffusionmodules/openaimodel.py
DELETED
@@ -1,963 +0,0 @@
|
|
1 |
-
from abc import abstractmethod
|
2 |
-
from functools import partial
|
3 |
-
import math
|
4 |
-
from typing import Iterable
|
5 |
-
|
6 |
-
import numpy as np
|
7 |
-
import torch as th
|
8 |
-
import torch.nn as nn
|
9 |
-
import torch.nn.functional as F
|
10 |
-
|
11 |
-
from ldm.modules.diffusionmodules.util import (
|
12 |
-
checkpoint,
|
13 |
-
conv_nd,
|
14 |
-
linear,
|
15 |
-
avg_pool_nd,
|
16 |
-
zero_module,
|
17 |
-
normalization,
|
18 |
-
timestep_embedding,
|
19 |
-
)
|
20 |
-
from ldm.modules.attention import SpatialTransformer
|
21 |
-
|
22 |
-
|
23 |
-
# dummy replace
|
24 |
-
def convert_module_to_f16(x):
|
25 |
-
pass
|
26 |
-
|
27 |
-
def convert_module_to_f32(x):
|
28 |
-
pass
|
29 |
-
|
30 |
-
|
31 |
-
## go
|
32 |
-
class AttentionPool2d(nn.Module):
|
33 |
-
"""
|
34 |
-
Adapted from CLIP: https://github.com/openai/CLIP/blob/main/clip/model.py
|
35 |
-
"""
|
36 |
-
|
37 |
-
def __init__(
|
38 |
-
self,
|
39 |
-
spacial_dim: int,
|
40 |
-
embed_dim: int,
|
41 |
-
num_heads_channels: int,
|
42 |
-
output_dim: int = None,
|
43 |
-
):
|
44 |
-
super().__init__()
|
45 |
-
self.positional_embedding = nn.Parameter(th.randn(embed_dim, spacial_dim ** 2 + 1) / embed_dim ** 0.5)
|
46 |
-
self.qkv_proj = conv_nd(1, embed_dim, 3 * embed_dim, 1)
|
47 |
-
self.c_proj = conv_nd(1, embed_dim, output_dim or embed_dim, 1)
|
48 |
-
self.num_heads = embed_dim // num_heads_channels
|
49 |
-
self.attention = QKVAttention(self.num_heads)
|
50 |
-
|
51 |
-
def forward(self, x):
|
52 |
-
b, c, *_spatial = x.shape
|
53 |
-
x = x.reshape(b, c, -1) # NC(HW)
|
54 |
-
x = th.cat([x.mean(dim=-1, keepdim=True), x], dim=-1) # NC(HW+1)
|
55 |
-
x = x + self.positional_embedding[None, :, :].to(x.dtype) # NC(HW+1)
|
56 |
-
x = self.qkv_proj(x)
|
57 |
-
x = self.attention(x)
|
58 |
-
x = self.c_proj(x)
|
59 |
-
return x[:, :, 0]
|
60 |
-
|
61 |
-
|
62 |
-
class TimestepBlock(nn.Module):
|
63 |
-
"""
|
64 |
-
Any module where forward() takes timestep embeddings as a second argument.
|
65 |
-
"""
|
66 |
-
|
67 |
-
@abstractmethod
|
68 |
-
def forward(self, x, emb):
|
69 |
-
"""
|
70 |
-
Apply the module to `x` given `emb` timestep embeddings.
|
71 |
-
"""
|
72 |
-
|
73 |
-
|
74 |
-
class TimestepEmbedSequential(nn.Sequential, TimestepBlock):
|
75 |
-
"""
|
76 |
-
A sequential module that passes timestep embeddings to the children that
|
77 |
-
support it as an extra input.
|
78 |
-
"""
|
79 |
-
|
80 |
-
def forward(self, x, emb, context=None):
|
81 |
-
for layer in self:
|
82 |
-
if isinstance(layer, TimestepBlock):
|
83 |
-
x = layer(x, emb)
|
84 |
-
elif isinstance(layer, SpatialTransformer):
|
85 |
-
x = layer(x, context)
|
86 |
-
else:
|
87 |
-
x = layer(x)
|
88 |
-
return x
|
89 |
-
|
90 |
-
|
91 |
-
class Upsample(nn.Module):
|
92 |
-
"""
|
93 |
-
An upsampling layer with an optional convolution.
|
94 |
-
:param channels: channels in the inputs and outputs.
|
95 |
-
:param use_conv: a bool determining if a convolution is applied.
|
96 |
-
:param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then
|
97 |
-
upsampling occurs in the inner-two dimensions.
|
98 |
-
"""
|
99 |
-
|
100 |
-
def __init__(self, channels, use_conv, dims=2, out_channels=None, padding=1):
|
101 |
-
super().__init__()
|
102 |
-
self.channels = channels
|
103 |
-
self.out_channels = out_channels or channels
|
104 |
-
self.use_conv = use_conv
|
105 |
-
self.dims = dims
|
106 |
-
if use_conv:
|
107 |
-
self.conv = conv_nd(dims, self.channels, self.out_channels, 3, padding=padding)
|
108 |
-
|
109 |
-
def forward(self, x):
|
110 |
-
assert x.shape[1] == self.channels
|
111 |
-
if self.dims == 3:
|
112 |
-
x = F.interpolate(
|
113 |
-
x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode="nearest"
|
114 |
-
)
|
115 |
-
else:
|
116 |
-
x = F.interpolate(x, scale_factor=2, mode="nearest")
|
117 |
-
if self.use_conv:
|
118 |
-
x = self.conv(x)
|
119 |
-
return x
|
120 |
-
|
121 |
-
class TransposedUpsample(nn.Module):
|
122 |
-
'Learned 2x upsampling without padding'
|
123 |
-
def __init__(self, channels, out_channels=None, ks=5):
|
124 |
-
super().__init__()
|
125 |
-
self.channels = channels
|
126 |
-
self.out_channels = out_channels or channels
|
127 |
-
|
128 |
-
self.up = nn.ConvTranspose2d(self.channels,self.out_channels,kernel_size=ks,stride=2)
|
129 |
-
|
130 |
-
def forward(self,x):
|
131 |
-
return self.up(x)
|
132 |
-
|
133 |
-
|
134 |
-
class Downsample(nn.Module):
|
135 |
-
"""
|
136 |
-
A downsampling layer with an optional convolution.
|
137 |
-
:param channels: channels in the inputs and outputs.
|
138 |
-
:param use_conv: a bool determining if a convolution is applied.
|
139 |
-
:param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then
|
140 |
-
downsampling occurs in the inner-two dimensions.
|
141 |
-
"""
|
142 |
-
|
143 |
-
def __init__(self, channels, use_conv, dims=2, out_channels=None,padding=1):
|
144 |
-
super().__init__()
|
145 |
-
self.channels = channels
|
146 |
-
self.out_channels = out_channels or channels
|
147 |
-
self.use_conv = use_conv
|
148 |
-
self.dims = dims
|
149 |
-
stride = 2 if dims != 3 else (1, 2, 2)
|
150 |
-
if use_conv:
|
151 |
-
self.op = conv_nd(
|
152 |
-
dims, self.channels, self.out_channels, 3, stride=stride, padding=padding
|
153 |
-
)
|
154 |
-
else:
|
155 |
-
assert self.channels == self.out_channels
|
156 |
-
self.op = avg_pool_nd(dims, kernel_size=stride, stride=stride)
|
157 |
-
|
158 |
-
def forward(self, x):
|
159 |
-
assert x.shape[1] == self.channels
|
160 |
-
return self.op(x)
|
161 |
-
|
162 |
-
|
163 |
-
class ResBlock(TimestepBlock):
|
164 |
-
"""
|
165 |
-
A residual block that can optionally change the number of channels.
|
166 |
-
:param channels: the number of input channels.
|
167 |
-
:param emb_channels: the number of timestep embedding channels.
|
168 |
-
:param dropout: the rate of dropout.
|
169 |
-
:param out_channels: if specified, the number of out channels.
|
170 |
-
:param use_conv: if True and out_channels is specified, use a spatial
|
171 |
-
convolution instead of a smaller 1x1 convolution to change the
|
172 |
-
channels in the skip connection.
|
173 |
-
:param dims: determines if the signal is 1D, 2D, or 3D.
|
174 |
-
:param use_checkpoint: if True, use gradient checkpointing on this module.
|
175 |
-
:param up: if True, use this block for upsampling.
|
176 |
-
:param down: if True, use this block for downsampling.
|
177 |
-
"""
|
178 |
-
|
179 |
-
def __init__(
|
180 |
-
self,
|
181 |
-
channels,
|
182 |
-
emb_channels,
|
183 |
-
dropout,
|
184 |
-
out_channels=None,
|
185 |
-
use_conv=False,
|
186 |
-
use_scale_shift_norm=False,
|
187 |
-
dims=2,
|
188 |
-
use_checkpoint=False,
|
189 |
-
up=False,
|
190 |
-
down=False,
|
191 |
-
):
|
192 |
-
super().__init__()
|
193 |
-
self.channels = channels
|
194 |
-
self.emb_channels = emb_channels
|
195 |
-
self.dropout = dropout
|
196 |
-
self.out_channels = out_channels or channels
|
197 |
-
self.use_conv = use_conv
|
198 |
-
self.use_checkpoint = use_checkpoint
|
199 |
-
self.use_scale_shift_norm = use_scale_shift_norm
|
200 |
-
|
201 |
-
self.in_layers = nn.Sequential(
|
202 |
-
normalization(channels),
|
203 |
-
nn.SiLU(),
|
204 |
-
conv_nd(dims, channels, self.out_channels, 3, padding=1),
|
205 |
-
)
|
206 |
-
|
207 |
-
self.updown = up or down
|
208 |
-
|
209 |
-
if up:
|
210 |
-
self.h_upd = Upsample(channels, False, dims)
|
211 |
-
self.x_upd = Upsample(channels, False, dims)
|
212 |
-
elif down:
|
213 |
-
self.h_upd = Downsample(channels, False, dims)
|
214 |
-
self.x_upd = Downsample(channels, False, dims)
|
215 |
-
else:
|
216 |
-
self.h_upd = self.x_upd = nn.Identity()
|
217 |
-
|
218 |
-
self.emb_layers = nn.Sequential(
|
219 |
-
nn.SiLU(),
|
220 |
-
linear(
|
221 |
-
emb_channels,
|
222 |
-
2 * self.out_channels if use_scale_shift_norm else self.out_channels,
|
223 |
-
),
|
224 |
-
)
|
225 |
-
self.out_layers = nn.Sequential(
|
226 |
-
normalization(self.out_channels),
|
227 |
-
nn.SiLU(),
|
228 |
-
nn.Dropout(p=dropout),
|
229 |
-
zero_module(
|
230 |
-
conv_nd(dims, self.out_channels, self.out_channels, 3, padding=1)
|
231 |
-
),
|
232 |
-
)
|
233 |
-
|
234 |
-
if self.out_channels == channels:
|
235 |
-
self.skip_connection = nn.Identity()
|
236 |
-
elif use_conv:
|
237 |
-
self.skip_connection = conv_nd(
|
238 |
-
dims, channels, self.out_channels, 3, padding=1
|
239 |
-
)
|
240 |
-
else:
|
241 |
-
self.skip_connection = conv_nd(dims, channels, self.out_channels, 1)
|
242 |
-
|
243 |
-
def forward(self, x, emb):
|
244 |
-
"""
|
245 |
-
Apply the block to a Tensor, conditioned on a timestep embedding.
|
246 |
-
:param x: an [N x C x ...] Tensor of features.
|
247 |
-
:param emb: an [N x emb_channels] Tensor of timestep embeddings.
|
248 |
-
:return: an [N x C x ...] Tensor of outputs.
|
249 |
-
"""
|
250 |
-
return checkpoint(
|
251 |
-
self._forward, (x, emb), self.parameters(), self.use_checkpoint
|
252 |
-
)
|
253 |
-
|
254 |
-
|
255 |
-
def _forward(self, x, emb):
|
256 |
-
if self.updown:
|
257 |
-
in_rest, in_conv = self.in_layers[:-1], self.in_layers[-1]
|
258 |
-
h = in_rest(x)
|
259 |
-
h = self.h_upd(h)
|
260 |
-
x = self.x_upd(x)
|
261 |
-
h = in_conv(h)
|
262 |
-
else:
|
263 |
-
h = self.in_layers(x)
|
264 |
-
emb_out = self.emb_layers(emb).type(h.dtype)
|
265 |
-
while len(emb_out.shape) < len(h.shape):
|
266 |
-
emb_out = emb_out[..., None]
|
267 |
-
if self.use_scale_shift_norm:
|
268 |
-
out_norm, out_rest = self.out_layers[0], self.out_layers[1:]
|
269 |
-
scale, shift = th.chunk(emb_out, 2, dim=1)
|
270 |
-
h = out_norm(h) * (1 + scale) + shift
|
271 |
-
h = out_rest(h)
|
272 |
-
else:
|
273 |
-
h = h + emb_out
|
274 |
-
h = self.out_layers(h)
|
275 |
-
return self.skip_connection(x) + h
|
276 |
-
|
277 |
-
|
278 |
-
class AttentionBlock(nn.Module):
|
279 |
-
"""
|
280 |
-
An attention block that allows spatial positions to attend to each other.
|
281 |
-
Originally ported from here, but adapted to the N-d case.
|
282 |
-
https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66.
|
283 |
-
"""
|
284 |
-
|
285 |
-
def __init__(
|
286 |
-
self,
|
287 |
-
channels,
|
288 |
-
num_heads=1,
|
289 |
-
num_head_channels=-1,
|
290 |
-
use_checkpoint=False,
|
291 |
-
use_new_attention_order=False,
|
292 |
-
):
|
293 |
-
super().__init__()
|
294 |
-
self.channels = channels
|
295 |
-
if num_head_channels == -1:
|
296 |
-
self.num_heads = num_heads
|
297 |
-
else:
|
298 |
-
assert (
|
299 |
-
channels % num_head_channels == 0
|
300 |
-
), f"q,k,v channels {channels} is not divisible by num_head_channels {num_head_channels}"
|
301 |
-
self.num_heads = channels // num_head_channels
|
302 |
-
self.use_checkpoint = use_checkpoint
|
303 |
-
self.norm = normalization(channels)
|
304 |
-
self.qkv = conv_nd(1, channels, channels * 3, 1)
|
305 |
-
if use_new_attention_order:
|
306 |
-
# split qkv before split heads
|
307 |
-
self.attention = QKVAttention(self.num_heads)
|
308 |
-
else:
|
309 |
-
# split heads before split qkv
|
310 |
-
self.attention = QKVAttentionLegacy(self.num_heads)
|
311 |
-
|
312 |
-
self.proj_out = zero_module(conv_nd(1, channels, channels, 1))
|
313 |
-
|
314 |
-
def forward(self, x):
|
315 |
-
return checkpoint(self._forward, (x,), self.parameters(), True) # TODO: check checkpoint usage, is True # TODO: fix the .half call!!!
|
316 |
-
#return pt_checkpoint(self._forward, x) # pytorch
|
317 |
-
|
318 |
-
def _forward(self, x):
|
319 |
-
b, c, *spatial = x.shape
|
320 |
-
x = x.reshape(b, c, -1)
|
321 |
-
qkv = self.qkv(self.norm(x))
|
322 |
-
h = self.attention(qkv)
|
323 |
-
h = self.proj_out(h)
|
324 |
-
return (x + h).reshape(b, c, *spatial)
|
325 |
-
|
326 |
-
|
327 |
-
def count_flops_attn(model, _x, y):
|
328 |
-
"""
|
329 |
-
A counter for the `thop` package to count the operations in an
|
330 |
-
attention operation.
|
331 |
-
Meant to be used like:
|
332 |
-
macs, params = thop.profile(
|
333 |
-
model,
|
334 |
-
inputs=(inputs, timestamps),
|
335 |
-
custom_ops={QKVAttention: QKVAttention.count_flops},
|
336 |
-
)
|
337 |
-
"""
|
338 |
-
b, c, *spatial = y[0].shape
|
339 |
-
num_spatial = int(np.prod(spatial))
|
340 |
-
# We perform two matmuls with the same number of ops.
|
341 |
-
# The first computes the weight matrix, the second computes
|
342 |
-
# the combination of the value vectors.
|
343 |
-
matmul_ops = 2 * b * (num_spatial ** 2) * c
|
344 |
-
model.total_ops += th.DoubleTensor([matmul_ops])
|
345 |
-
|
346 |
-
|
347 |
-
class QKVAttentionLegacy(nn.Module):
|
348 |
-
"""
|
349 |
-
A module which performs QKV attention. Matches legacy QKVAttention + input/output heads shaping
|
350 |
-
"""
|
351 |
-
|
352 |
-
def __init__(self, n_heads):
|
353 |
-
super().__init__()
|
354 |
-
self.n_heads = n_heads
|
355 |
-
|
356 |
-
def forward(self, qkv):
|
357 |
-
"""
|
358 |
-
Apply QKV attention.
|
359 |
-
:param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs.
|
360 |
-
:return: an [N x (H * C) x T] tensor after attention.
|
361 |
-
"""
|
362 |
-
bs, width, length = qkv.shape
|
363 |
-
assert width % (3 * self.n_heads) == 0
|
364 |
-
ch = width // (3 * self.n_heads)
|
365 |
-
q, k, v = qkv.reshape(bs * self.n_heads, ch * 3, length).split(ch, dim=1)
|
366 |
-
scale = 1 / math.sqrt(math.sqrt(ch))
|
367 |
-
weight = th.einsum(
|
368 |
-
"bct,bcs->bts", q * scale, k * scale
|
369 |
-
) # More stable with f16 than dividing afterwards
|
370 |
-
weight = th.softmax(weight.float(), dim=-1).type(weight.dtype)
|
371 |
-
a = th.einsum("bts,bcs->bct", weight, v)
|
372 |
-
return a.reshape(bs, -1, length)
|
373 |
-
|
374 |
-
@staticmethod
|
375 |
-
def count_flops(model, _x, y):
|
376 |
-
return count_flops_attn(model, _x, y)
|
377 |
-
|
378 |
-
|
379 |
-
class QKVAttention(nn.Module):
|
380 |
-
"""
|
381 |
-
A module which performs QKV attention and splits in a different order.
|
382 |
-
"""
|
383 |
-
|
384 |
-
def __init__(self, n_heads):
|
385 |
-
super().__init__()
|
386 |
-
self.n_heads = n_heads
|
387 |
-
|
388 |
-
def forward(self, qkv):
|
389 |
-
"""
|
390 |
-
Apply QKV attention.
|
391 |
-
:param qkv: an [N x (3 * H * C) x T] tensor of Qs, Ks, and Vs.
|
392 |
-
:return: an [N x (H * C) x T] tensor after attention.
|
393 |
-
"""
|
394 |
-
bs, width, length = qkv.shape
|
395 |
-
assert width % (3 * self.n_heads) == 0
|
396 |
-
ch = width // (3 * self.n_heads)
|
397 |
-
q, k, v = qkv.chunk(3, dim=1)
|
398 |
-
scale = 1 / math.sqrt(math.sqrt(ch))
|
399 |
-
weight = th.einsum(
|
400 |
-
"bct,bcs->bts",
|
401 |
-
(q * scale).view(bs * self.n_heads, ch, length),
|
402 |
-
(k * scale).view(bs * self.n_heads, ch, length),
|
403 |
-
) # More stable with f16 than dividing afterwards
|
404 |
-
weight = th.softmax(weight.float(), dim=-1).type(weight.dtype)
|
405 |
-
a = th.einsum("bts,bcs->bct", weight, v.reshape(bs * self.n_heads, ch, length))
|
406 |
-
return a.reshape(bs, -1, length)
|
407 |
-
|
408 |
-
@staticmethod
|
409 |
-
def count_flops(model, _x, y):
|
410 |
-
return count_flops_attn(model, _x, y)
|
411 |
-
|
412 |
-
|
413 |
-
class UNetModel(nn.Module):
|
414 |
-
"""
|
415 |
-
The full UNet model with attention and timestep embedding.
|
416 |
-
:param in_channels: channels in the input Tensor.
|
417 |
-
:param model_channels: base channel count for the model.
|
418 |
-
:param out_channels: channels in the output Tensor.
|
419 |
-
:param num_res_blocks: number of residual blocks per downsample.
|
420 |
-
:param attention_resolutions: a collection of downsample rates at which
|
421 |
-
attention will take place. May be a set, list, or tuple.
|
422 |
-
For example, if this contains 4, then at 4x downsampling, attention
|
423 |
-
will be used.
|
424 |
-
:param dropout: the dropout probability.
|
425 |
-
:param channel_mult: channel multiplier for each level of the UNet.
|
426 |
-
:param conv_resample: if True, use learned convolutions for upsampling and
|
427 |
-
downsampling.
|
428 |
-
:param dims: determines if the signal is 1D, 2D, or 3D.
|
429 |
-
:param num_classes: if specified (as an int), then this model will be
|
430 |
-
class-conditional with `num_classes` classes.
|
431 |
-
:param use_checkpoint: use gradient checkpointing to reduce memory usage.
|
432 |
-
:param num_heads: the number of attention heads in each attention layer.
|
433 |
-
:param num_heads_channels: if specified, ignore num_heads and instead use
|
434 |
-
a fixed channel width per attention head.
|
435 |
-
:param num_heads_upsample: works with num_heads to set a different number
|
436 |
-
of heads for upsampling. Deprecated.
|
437 |
-
:param use_scale_shift_norm: use a FiLM-like conditioning mechanism.
|
438 |
-
:param resblock_updown: use residual blocks for up/downsampling.
|
439 |
-
:param use_new_attention_order: use a different attention pattern for potentially
|
440 |
-
increased efficiency.
|
441 |
-
"""
|
442 |
-
|
443 |
-
def __init__(
|
444 |
-
self,
|
445 |
-
image_size,
|
446 |
-
in_channels,
|
447 |
-
model_channels,
|
448 |
-
out_channels,
|
449 |
-
num_res_blocks,
|
450 |
-
attention_resolutions,
|
451 |
-
dropout=0,
|
452 |
-
channel_mult=(1, 2, 4, 8),
|
453 |
-
conv_resample=True,
|
454 |
-
dims=2,
|
455 |
-
num_classes=None,
|
456 |
-
use_checkpoint=False,
|
457 |
-
use_fp16=False,
|
458 |
-
num_heads=-1,
|
459 |
-
num_head_channels=-1,
|
460 |
-
num_heads_upsample=-1,
|
461 |
-
use_scale_shift_norm=False,
|
462 |
-
resblock_updown=False,
|
463 |
-
use_new_attention_order=False,
|
464 |
-
use_spatial_transformer=False, # custom transformer support
|
465 |
-
transformer_depth=1, # custom transformer support
|
466 |
-
context_dim=None, # custom transformer support
|
467 |
-
n_embed=None, # custom support for prediction of discrete ids into codebook of first stage vq model
|
468 |
-
legacy=True,
|
469 |
-
):
|
470 |
-
super().__init__()
|
471 |
-
if use_spatial_transformer:
|
472 |
-
assert context_dim is not None, 'Fool!! You forgot to include the dimension of your cross-attention conditioning...'
|
473 |
-
|
474 |
-
if context_dim is not None:
|
475 |
-
assert use_spatial_transformer, 'Fool!! You forgot to use the spatial transformer for your cross-attention conditioning...'
|
476 |
-
from omegaconf.listconfig import ListConfig
|
477 |
-
if type(context_dim) == ListConfig:
|
478 |
-
context_dim = list(context_dim)
|
479 |
-
|
480 |
-
if num_heads_upsample == -1:
|
481 |
-
num_heads_upsample = num_heads
|
482 |
-
|
483 |
-
if num_heads == -1:
|
484 |
-
assert num_head_channels != -1, 'Either num_heads or num_head_channels has to be set'
|
485 |
-
|
486 |
-
if num_head_channels == -1:
|
487 |
-
assert num_heads != -1, 'Either num_heads or num_head_channels has to be set'
|
488 |
-
|
489 |
-
self.image_size = image_size
|
490 |
-
self.in_channels = in_channels
|
491 |
-
self.model_channels = model_channels
|
492 |
-
self.out_channels = out_channels
|
493 |
-
self.num_res_blocks = num_res_blocks
|
494 |
-
self.attention_resolutions = attention_resolutions
|
495 |
-
self.dropout = dropout
|
496 |
-
self.channel_mult = channel_mult
|
497 |
-
self.conv_resample = conv_resample
|
498 |
-
self.num_classes = num_classes
|
499 |
-
self.use_checkpoint = use_checkpoint
|
500 |
-
self.dtype = th.float16 if use_fp16 else th.float32
|
501 |
-
self.num_heads = num_heads
|
502 |
-
self.num_head_channels = num_head_channels
|
503 |
-
self.num_heads_upsample = num_heads_upsample
|
504 |
-
self.predict_codebook_ids = n_embed is not None
|
505 |
-
|
506 |
-
time_embed_dim = model_channels * 4
|
507 |
-
self.time_embed = nn.Sequential(
|
508 |
-
linear(model_channels, time_embed_dim),
|
509 |
-
nn.SiLU(),
|
510 |
-
linear(time_embed_dim, time_embed_dim),
|
511 |
-
)
|
512 |
-
|
513 |
-
if self.num_classes is not None:
|
514 |
-
self.label_emb = nn.Embedding(num_classes, time_embed_dim)
|
515 |
-
|
516 |
-
self.input_blocks = nn.ModuleList(
|
517 |
-
[
|
518 |
-
TimestepEmbedSequential(
|
519 |
-
conv_nd(dims, in_channels, model_channels, 3, padding=1)# conv2d for txt2img/audio
|
520 |
-
)
|
521 |
-
]
|
522 |
-
)
|
523 |
-
self._feature_size = model_channels
|
524 |
-
input_block_chans = [model_channels]
|
525 |
-
ch = model_channels
|
526 |
-
ds = 1
|
527 |
-
# downsample blocks
|
528 |
-
for level, mult in enumerate(channel_mult):
|
529 |
-
for _ in range(num_res_blocks):
|
530 |
-
layers = [
|
531 |
-
ResBlock(
|
532 |
-
ch,
|
533 |
-
time_embed_dim,
|
534 |
-
dropout,
|
535 |
-
out_channels=mult * model_channels,
|
536 |
-
dims=dims,
|
537 |
-
use_checkpoint=use_checkpoint,
|
538 |
-
use_scale_shift_norm=use_scale_shift_norm,
|
539 |
-
)
|
540 |
-
]
|
541 |
-
ch = mult * model_channels
|
542 |
-
if ds in attention_resolutions:
|
543 |
-
if num_head_channels == -1:
|
544 |
-
dim_head = ch // num_heads
|
545 |
-
else:
|
546 |
-
num_heads = ch // num_head_channels
|
547 |
-
dim_head = num_head_channels
|
548 |
-
if legacy:
|
549 |
-
#num_heads = 1
|
550 |
-
dim_head = ch // num_heads if use_spatial_transformer else num_head_channels
|
551 |
-
layers.append(
|
552 |
-
AttentionBlock(
|
553 |
-
ch,
|
554 |
-
use_checkpoint=use_checkpoint,
|
555 |
-
num_heads=num_heads,
|
556 |
-
num_head_channels=dim_head,
|
557 |
-
use_new_attention_order=use_new_attention_order,
|
558 |
-
) if not use_spatial_transformer else SpatialTransformer(# transformer_depth is 1
|
559 |
-
ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim
|
560 |
-
)
|
561 |
-
)
|
562 |
-
self.input_blocks.append(TimestepEmbedSequential(*layers))
|
563 |
-
self._feature_size += ch
|
564 |
-
input_block_chans.append(ch)
|
565 |
-
if level != len(channel_mult) - 1:
|
566 |
-
out_ch = ch
|
567 |
-
self.input_blocks.append(
|
568 |
-
TimestepEmbedSequential(
|
569 |
-
ResBlock(
|
570 |
-
ch,
|
571 |
-
time_embed_dim,
|
572 |
-
dropout,
|
573 |
-
out_channels=out_ch,
|
574 |
-
dims=dims,
|
575 |
-
use_checkpoint=use_checkpoint,
|
576 |
-
use_scale_shift_norm=use_scale_shift_norm,
|
577 |
-
down=True,
|
578 |
-
)
|
579 |
-
if resblock_updown
|
580 |
-
else Downsample(
|
581 |
-
ch, conv_resample, dims=dims, out_channels=out_ch
|
582 |
-
)
|
583 |
-
)
|
584 |
-
)
|
585 |
-
ch = out_ch
|
586 |
-
input_block_chans.append(ch)
|
587 |
-
ds *= 2
|
588 |
-
self._feature_size += ch
|
589 |
-
|
590 |
-
if num_head_channels == -1:
|
591 |
-
dim_head = ch // num_heads
|
592 |
-
else:
|
593 |
-
num_heads = ch // num_head_channels
|
594 |
-
dim_head = num_head_channels
|
595 |
-
if legacy:
|
596 |
-
#num_heads = 1
|
597 |
-
dim_head = ch // num_heads if use_spatial_transformer else num_head_channels
|
598 |
-
self.middle_block = TimestepEmbedSequential(
|
599 |
-
ResBlock(
|
600 |
-
ch,
|
601 |
-
time_embed_dim,
|
602 |
-
dropout,
|
603 |
-
dims=dims,
|
604 |
-
use_checkpoint=use_checkpoint,
|
605 |
-
use_scale_shift_norm=use_scale_shift_norm,
|
606 |
-
),
|
607 |
-
AttentionBlock(
|
608 |
-
ch,
|
609 |
-
use_checkpoint=use_checkpoint,
|
610 |
-
num_heads=num_heads,
|
611 |
-
num_head_channels=dim_head,
|
612 |
-
use_new_attention_order=use_new_attention_order,
|
613 |
-
) if not use_spatial_transformer else SpatialTransformer(
|
614 |
-
ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim
|
615 |
-
),
|
616 |
-
ResBlock(
|
617 |
-
ch,
|
618 |
-
time_embed_dim,
|
619 |
-
dropout,
|
620 |
-
dims=dims,
|
621 |
-
use_checkpoint=use_checkpoint,
|
622 |
-
use_scale_shift_norm=use_scale_shift_norm,
|
623 |
-
),
|
624 |
-
)
|
625 |
-
self._feature_size += ch
|
626 |
-
# upsample blocks
|
627 |
-
self.output_blocks = nn.ModuleList([])
|
628 |
-
for level, mult in list(enumerate(channel_mult))[::-1]:
|
629 |
-
for i in range(num_res_blocks + 1):
|
630 |
-
ich = input_block_chans.pop()
|
631 |
-
layers = [
|
632 |
-
ResBlock(
|
633 |
-
ch + ich,
|
634 |
-
time_embed_dim,
|
635 |
-
dropout,
|
636 |
-
out_channels=model_channels * mult,
|
637 |
-
dims=dims,
|
638 |
-
use_checkpoint=use_checkpoint,
|
639 |
-
use_scale_shift_norm=use_scale_shift_norm,
|
640 |
-
)
|
641 |
-
]
|
642 |
-
ch = model_channels * mult
|
643 |
-
if ds in attention_resolutions:
|
644 |
-
if num_head_channels == -1:
|
645 |
-
dim_head = ch // num_heads
|
646 |
-
else:
|
647 |
-
num_heads = ch // num_head_channels
|
648 |
-
dim_head = num_head_channels
|
649 |
-
if legacy:
|
650 |
-
#num_heads = 1
|
651 |
-
dim_head = ch // num_heads if use_spatial_transformer else num_head_channels
|
652 |
-
layers.append(
|
653 |
-
AttentionBlock(
|
654 |
-
ch,
|
655 |
-
use_checkpoint=use_checkpoint,
|
656 |
-
num_heads=num_heads_upsample,
|
657 |
-
num_head_channels=dim_head,
|
658 |
-
use_new_attention_order=use_new_attention_order,
|
659 |
-
) if not use_spatial_transformer else SpatialTransformer(
|
660 |
-
ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim
|
661 |
-
)
|
662 |
-
)
|
663 |
-
if level and i == num_res_blocks:
|
664 |
-
out_ch = ch
|
665 |
-
layers.append(
|
666 |
-
ResBlock(
|
667 |
-
ch,
|
668 |
-
time_embed_dim,
|
669 |
-
dropout,
|
670 |
-
out_channels=out_ch,
|
671 |
-
dims=dims,
|
672 |
-
use_checkpoint=use_checkpoint,
|
673 |
-
use_scale_shift_norm=use_scale_shift_norm,
|
674 |
-
up=True,
|
675 |
-
)
|
676 |
-
if resblock_updown
|
677 |
-
else Upsample(ch, conv_resample, dims=dims, out_channels=out_ch)
|
678 |
-
)
|
679 |
-
ds //= 2
|
680 |
-
self.output_blocks.append(TimestepEmbedSequential(*layers))
|
681 |
-
self._feature_size += ch
|
682 |
-
|
683 |
-
self.out = nn.Sequential(
|
684 |
-
normalization(ch),
|
685 |
-
nn.SiLU(),
|
686 |
-
zero_module(conv_nd(dims, model_channels, out_channels, 3, padding=1)),
|
687 |
-
)
|
688 |
-
if self.predict_codebook_ids:
|
689 |
-
self.id_predictor = nn.Sequential(
|
690 |
-
normalization(ch),
|
691 |
-
conv_nd(dims, model_channels, n_embed, 1),
|
692 |
-
#nn.LogSoftmax(dim=1) # change to cross_entropy and produce non-normalized logits
|
693 |
-
)
|
694 |
-
|
695 |
-
def convert_to_fp16(self):
|
696 |
-
"""
|
697 |
-
Convert the torso of the model to float16.
|
698 |
-
"""
|
699 |
-
self.input_blocks.apply(convert_module_to_f16)
|
700 |
-
self.middle_block.apply(convert_module_to_f16)
|
701 |
-
self.output_blocks.apply(convert_module_to_f16)
|
702 |
-
|
703 |
-
def convert_to_fp32(self):
|
704 |
-
"""
|
705 |
-
Convert the torso of the model to float32.
|
706 |
-
"""
|
707 |
-
self.input_blocks.apply(convert_module_to_f32)
|
708 |
-
self.middle_block.apply(convert_module_to_f32)
|
709 |
-
self.output_blocks.apply(convert_module_to_f32)
|
710 |
-
|
711 |
-
def forward(self, x, timesteps=None, context=None, y=None,**kwargs):
|
712 |
-
"""
|
713 |
-
Apply the model to an input batch.
|
714 |
-
:param x: an [N x C x ...] Tensor of inputs.
|
715 |
-
:param timesteps: a 1-D batch of timesteps,shape [N]
|
716 |
-
:param context: conditioning plugged in via crossattn. for txt2img shape is [N,77,context_dim]
|
717 |
-
:param y: an [N] Tensor of labels, if class-conditional.
|
718 |
-
:return: an [N x C x ...] Tensor of outputs.
|
719 |
-
"""
|
720 |
-
# print(f"in unet {x.shape}")
|
721 |
-
assert (y is not None) == (
|
722 |
-
self.num_classes is not None
|
723 |
-
), "must specify y if and only if the model is class-conditional"
|
724 |
-
hs = []
|
725 |
-
t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False)# shape [N,self.model_channels]
|
726 |
-
emb = self.time_embed(t_emb)# shape [N,context_dim]
|
727 |
-
|
728 |
-
if self.num_classes is not None:# only for class label
|
729 |
-
assert y.shape == (x.shape[0],)
|
730 |
-
emb = emb + self.label_emb(y)
|
731 |
-
|
732 |
-
h = x.type(self.dtype)# [N,C,10,106]
|
733 |
-
for module in self.input_blocks:
|
734 |
-
h = module(h, emb, context)# 0:[N,self.model_channels,10,106],1:[N,self.model_channels,10,106],2:[N,self.model_channels,10,106] 3:[N,self.model_channels,5,53] 4:[N,self.model_channels,5,53] 5:[N,self.model_channels*2,5,53]
|
735 |
-
hs.append(h)
|
736 |
-
h = self.middle_block(h, emb, context)# no shape change
|
737 |
-
for module in self.output_blocks:
|
738 |
-
h = th.cat([h, hs.pop()], dim=1)# here the channel dimension grows (doubled or + self.model_channels); the other dimensions are unchanged
|
739 |
-
h = module(h, emb, context)# here the channel dimension shrinks back to its previous size; h and w stay the same or are doubled
|
740 |
-
h = h.type(x.dtype)# at this point h has the same shape as the input x
|
741 |
-
if self.predict_codebook_ids:
|
742 |
-
return self.id_predictor(h)
|
743 |
-
else:
|
744 |
-
return self.out(h)
|
745 |
-
|
746 |
-
|
747 |
-
class EncoderUNetModel(nn.Module):
|
748 |
-
"""
|
749 |
-
The half UNet model with attention and timestep embedding.
|
750 |
-
For usage, see UNet.
|
751 |
-
"""
|
752 |
-
|
753 |
-
def __init__(
|
754 |
-
self,
|
755 |
-
image_size,
|
756 |
-
in_channels,
|
757 |
-
model_channels,
|
758 |
-
out_channels,
|
759 |
-
num_res_blocks,
|
760 |
-
attention_resolutions,
|
761 |
-
dropout=0,
|
762 |
-
channel_mult=(1, 2, 4, 8),
|
763 |
-
conv_resample=True,
|
764 |
-
dims=2,
|
765 |
-
use_checkpoint=False,
|
766 |
-
use_fp16=False,
|
767 |
-
num_heads=1,
|
768 |
-
num_head_channels=-1,
|
769 |
-
num_heads_upsample=-1,
|
770 |
-
use_scale_shift_norm=False,
|
771 |
-
resblock_updown=False,
|
772 |
-
use_new_attention_order=False,
|
773 |
-
pool="adaptive",
|
774 |
-
*args,
|
775 |
-
**kwargs
|
776 |
-
):
|
777 |
-
super().__init__()
|
778 |
-
|
779 |
-
if num_heads_upsample == -1:
|
780 |
-
num_heads_upsample = num_heads
|
781 |
-
|
782 |
-
self.in_channels = in_channels
|
783 |
-
self.model_channels = model_channels
|
784 |
-
self.out_channels = out_channels
|
785 |
-
self.num_res_blocks = num_res_blocks
|
786 |
-
self.attention_resolutions = attention_resolutions
|
787 |
-
self.dropout = dropout
|
788 |
-
self.channel_mult = channel_mult
|
789 |
-
self.conv_resample = conv_resample
|
790 |
-
self.use_checkpoint = use_checkpoint
|
791 |
-
self.dtype = th.float16 if use_fp16 else th.float32
|
792 |
-
self.num_heads = num_heads
|
793 |
-
self.num_head_channels = num_head_channels
|
794 |
-
self.num_heads_upsample = num_heads_upsample
|
795 |
-
|
796 |
-
time_embed_dim = model_channels * 4
|
797 |
-
self.time_embed = nn.Sequential(
|
798 |
-
linear(model_channels, time_embed_dim),
|
799 |
-
nn.SiLU(),
|
800 |
-
linear(time_embed_dim, time_embed_dim),
|
801 |
-
)
|
802 |
-
|
803 |
-
self.input_blocks = nn.ModuleList(
|
804 |
-
[
|
805 |
-
TimestepEmbedSequential(
|
806 |
-
conv_nd(dims, in_channels, model_channels, 3, padding=1)
|
807 |
-
)
|
808 |
-
]
|
809 |
-
)
|
810 |
-
self._feature_size = model_channels
|
811 |
-
input_block_chans = [model_channels]
|
812 |
-
ch = model_channels
|
813 |
-
ds = 1
|
814 |
-
for level, mult in enumerate(channel_mult):
|
815 |
-
for _ in range(num_res_blocks):
|
816 |
-
layers = [
|
817 |
-
ResBlock(
|
818 |
-
ch,
|
819 |
-
time_embed_dim,
|
820 |
-
dropout,
|
821 |
-
out_channels=mult * model_channels,
|
822 |
-
dims=dims,
|
823 |
-
use_checkpoint=use_checkpoint,
|
824 |
-
use_scale_shift_norm=use_scale_shift_norm,
|
825 |
-
)
|
826 |
-
]
|
827 |
-
ch = mult * model_channels
|
828 |
-
if ds in attention_resolutions:
|
829 |
-
layers.append(
|
830 |
-
AttentionBlock(
|
831 |
-
ch,
|
832 |
-
use_checkpoint=use_checkpoint,
|
833 |
-
num_heads=num_heads,
|
834 |
-
num_head_channels=num_head_channels,
|
835 |
-
use_new_attention_order=use_new_attention_order,
|
836 |
-
)
|
837 |
-
)
|
838 |
-
self.input_blocks.append(TimestepEmbedSequential(*layers))
|
839 |
-
self._feature_size += ch
|
840 |
-
input_block_chans.append(ch)
|
841 |
-
if level != len(channel_mult) - 1:
|
842 |
-
out_ch = ch
|
843 |
-
self.input_blocks.append(
|
844 |
-
TimestepEmbedSequential(
|
845 |
-
ResBlock(
|
846 |
-
ch,
|
847 |
-
time_embed_dim,
|
848 |
-
dropout,
|
849 |
-
out_channels=out_ch,
|
850 |
-
dims=dims,
|
851 |
-
use_checkpoint=use_checkpoint,
|
852 |
-
use_scale_shift_norm=use_scale_shift_norm,
|
853 |
-
down=True,
|
854 |
-
)
|
855 |
-
if resblock_updown
|
856 |
-
else Downsample(
|
857 |
-
ch, conv_resample, dims=dims, out_channels=out_ch
|
858 |
-
)
|
859 |
-
)
|
860 |
-
)
|
861 |
-
ch = out_ch
|
862 |
-
input_block_chans.append(ch)
|
863 |
-
ds *= 2
|
864 |
-
self._feature_size += ch
|
865 |
-
|
866 |
-
self.middle_block = TimestepEmbedSequential(
|
867 |
-
ResBlock(
|
868 |
-
ch,
|
869 |
-
time_embed_dim,
|
870 |
-
dropout,
|
871 |
-
dims=dims,
|
872 |
-
use_checkpoint=use_checkpoint,
|
873 |
-
use_scale_shift_norm=use_scale_shift_norm,
|
874 |
-
),
|
875 |
-
AttentionBlock(
|
876 |
-
ch,
|
877 |
-
use_checkpoint=use_checkpoint,
|
878 |
-
num_heads=num_heads,
|
879 |
-
num_head_channels=num_head_channels,
|
880 |
-
use_new_attention_order=use_new_attention_order,
|
881 |
-
),
|
882 |
-
ResBlock(
|
883 |
-
ch,
|
884 |
-
time_embed_dim,
|
885 |
-
dropout,
|
886 |
-
dims=dims,
|
887 |
-
use_checkpoint=use_checkpoint,
|
888 |
-
use_scale_shift_norm=use_scale_shift_norm,
|
889 |
-
),
|
890 |
-
)
|
891 |
-
self._feature_size += ch
|
892 |
-
self.pool = pool
|
893 |
-
if pool == "adaptive":
|
894 |
-
self.out = nn.Sequential(
|
895 |
-
normalization(ch),
|
896 |
-
nn.SiLU(),
|
897 |
-
nn.AdaptiveAvgPool2d((1, 1)),
|
898 |
-
zero_module(conv_nd(dims, ch, out_channels, 1)),
|
899 |
-
nn.Flatten(),
|
900 |
-
)
|
901 |
-
elif pool == "attention":
|
902 |
-
assert num_head_channels != -1
|
903 |
-
self.out = nn.Sequential(
|
904 |
-
normalization(ch),
|
905 |
-
nn.SiLU(),
|
906 |
-
AttentionPool2d(
|
907 |
-
(image_size // ds), ch, num_head_channels, out_channels
|
908 |
-
),
|
909 |
-
)
|
910 |
-
elif pool == "spatial":
|
911 |
-
self.out = nn.Sequential(
|
912 |
-
nn.Linear(self._feature_size, 2048),
|
913 |
-
nn.ReLU(),
|
914 |
-
nn.Linear(2048, self.out_channels),
|
915 |
-
)
|
916 |
-
elif pool == "spatial_v2":
|
917 |
-
self.out = nn.Sequential(
|
918 |
-
nn.Linear(self._feature_size, 2048),
|
919 |
-
normalization(2048),
|
920 |
-
nn.SiLU(),
|
921 |
-
nn.Linear(2048, self.out_channels),
|
922 |
-
)
|
923 |
-
else:
|
924 |
-
raise NotImplementedError(f"Unexpected {pool} pooling")
|
925 |
-
|
926 |
-
def convert_to_fp16(self):
|
927 |
-
"""
|
928 |
-
Convert the torso of the model to float16.
|
929 |
-
"""
|
930 |
-
self.input_blocks.apply(convert_module_to_f16)
|
931 |
-
self.middle_block.apply(convert_module_to_f16)
|
932 |
-
|
933 |
-
def convert_to_fp32(self):
|
934 |
-
"""
|
935 |
-
Convert the torso of the model to float32.
|
936 |
-
"""
|
937 |
-
self.input_blocks.apply(convert_module_to_f32)
|
938 |
-
self.middle_block.apply(convert_module_to_f32)
|
939 |
-
|
940 |
-
def forward(self, x, timesteps):
|
941 |
-
"""
|
942 |
-
Apply the model to an input batch.
|
943 |
-
:param x: an [N x C x ...] Tensor of inputs.
|
944 |
-
:param timesteps: a 1-D batch of timesteps.
|
945 |
-
:return: an [N x K] Tensor of outputs.
|
946 |
-
"""
|
947 |
-
emb = self.time_embed(timestep_embedding(timesteps, self.model_channels))
|
948 |
-
|
949 |
-
results = []
|
950 |
-
h = x.type(self.dtype)
|
951 |
-
for module in self.input_blocks:
|
952 |
-
h = module(h, emb)
|
953 |
-
if self.pool.startswith("spatial"):
|
954 |
-
results.append(h.type(x.dtype).mean(dim=(2, 3)))
|
955 |
-
h = self.middle_block(h, emb)
|
956 |
-
if self.pool.startswith("spatial"):
|
957 |
-
results.append(h.type(x.dtype).mean(dim=(2, 3)))
|
958 |
-
h = th.cat(results, axis=-1)
|
959 |
-
return self.out(h)
|
960 |
-
else:
|
961 |
-
h = h.type(x.dtype)
|
962 |
-
return self.out(h)
|
963 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
spaces/AchyuthGamer/ImMagician-Gradio/app.py DELETED
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/AchyuthGamer/ImMagician-Fantasy").launch()
spaces/AchyuthGamer/OpenGPT-Chat/README.md DELETED
@@ -1,12 +0,0 @@
----
-title: OpenGPT Chat (fast)
-emoji: 😻
-colorFrom: red
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.45.1
-app_file: app.py
-pinned: true
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/fullwindowrectangle/Factory.d.ts DELETED
@@ -1,7 +0,0 @@
-import FullWindowRectangle from './FullWindowRectangle';
-
-export default function (
-    fillColor?: number,
-    fillAlpha?: number
-
-): FullWindowRectangle;
spaces/AlexWang/lama/bin/report_from_tb.py
DELETED
@@ -1,83 +0,0 @@
|
|
1 |
-
#!/usr/bin/env python3
|
2 |
-
|
3 |
-
import glob
|
4 |
-
import os
|
5 |
-
import re
|
6 |
-
|
7 |
-
import tensorflow as tf
|
8 |
-
from torch.utils.tensorboard import SummaryWriter
|
9 |
-
|
10 |
-
|
11 |
-
GROUPING_RULES = [
|
12 |
-
re.compile(r'^(?P<group>train|test|val|extra_val_.*?(256|512))_(?P<title>.*)', re.I)
|
13 |
-
]
|
14 |
-
|
15 |
-
|
16 |
-
DROP_RULES = [
|
17 |
-
re.compile(r'_std$', re.I)
|
18 |
-
]
|
19 |
-
|
20 |
-
|
21 |
-
def need_drop(tag):
|
22 |
-
for rule in DROP_RULES:
|
23 |
-
if rule.search(tag):
|
24 |
-
return True
|
25 |
-
return False
|
26 |
-
|
27 |
-
|
28 |
-
def get_group_and_title(tag):
|
29 |
-
for rule in GROUPING_RULES:
|
30 |
-
match = rule.search(tag)
|
31 |
-
if match is None:
|
32 |
-
continue
|
33 |
-
return match.group('group'), match.group('title')
|
34 |
-
return None, None
|
35 |
-
|
36 |
-
|
37 |
-
def main(args):
|
38 |
-
os.makedirs(args.outdir, exist_ok=True)
|
39 |
-
|
40 |
-
ignored_events = set()
|
41 |
-
|
42 |
-
for orig_fname in glob.glob(args.inglob):
|
43 |
-
cur_dirpath = os.path.dirname(orig_fname) # remove filename, this should point to "version_0" directory
|
44 |
-
subdirname = os.path.basename(cur_dirpath) # == "version_0" most of time
|
45 |
-
exp_root_path = os.path.dirname(cur_dirpath) # remove "version_0"
|
46 |
-
exp_name = os.path.basename(exp_root_path)
|
47 |
-
|
48 |
-
writers_by_group = {}
|
49 |
-
|
50 |
-
for e in tf.compat.v1.train.summary_iterator(orig_fname):
|
51 |
-
for v in e.summary.value:
|
52 |
-
if need_drop(v.tag):
|
53 |
-
continue
|
54 |
-
|
55 |
-
cur_group, cur_title = get_group_and_title(v.tag)
|
56 |
-
if cur_group is None:
|
57 |
-
if v.tag not in ignored_events:
|
58 |
-
print(f'WARNING: Could not detect group for {v.tag}, ignoring it')
|
59 |
-
ignored_events.add(v.tag)
|
60 |
-
continue
|
61 |
-
|
62 |
-
cur_writer = writers_by_group.get(cur_group, None)
|
63 |
-
if cur_writer is None:
|
64 |
-
if args.include_version:
|
65 |
-
cur_outdir = os.path.join(args.outdir, exp_name, f'{subdirname}_{cur_group}')
|
66 |
-
else:
|
67 |
-
cur_outdir = os.path.join(args.outdir, exp_name, cur_group)
|
68 |
-
cur_writer = SummaryWriter(cur_outdir)
|
69 |
-
writers_by_group[cur_group] = cur_writer
|
70 |
-
|
71 |
-
cur_writer.add_scalar(cur_title, v.simple_value, global_step=e.step, walltime=e.wall_time)
|
72 |
-
|
73 |
-
|
74 |
-
if __name__ == '__main__':
|
75 |
-
import argparse
|
76 |
-
|
77 |
-
aparser = argparse.ArgumentParser()
|
78 |
-
aparser.add_argument('inglob', type=str)
|
79 |
-
aparser.add_argument('outdir', type=str)
|
80 |
-
aparser.add_argument('--include-version', action='store_true',
|
81 |
-
help='Include subdirectory name e.g. "version_0" into output path')
|
82 |
-
|
83 |
-
main(aparser.parse_args())
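For a quick feel of what the grouping rule above does to TensorBoard tags, the following standalone snippet (not part of the original script) applies the same regex to two hypothetical tag names:

```python
import re

# Same pattern as GROUPING_RULES[0] above.
rule = re.compile(r'^(?P<group>train|test|val|extra_val_.*?(256|512))_(?P<title>.*)', re.I)

m = rule.search("val_ssim")
print(m.group("group"), m.group("title"))  # val ssim

m = rule.search("extra_val_places512_lpips")
print(m.group("group"), m.group("title"))  # extra_val_places512 lpips
```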
spaces/Aloento/9Nine-VITS/posterior_encoder.py DELETED
@@ -1,37 +0,0 @@
-import torch
-from torch import nn
-
-import commons
-import modules
-
-
-class PosteriorEncoder(nn.Module):
-    def __init__(self,
-                 in_channels,
-                 out_channels,
-                 hidden_channels,
-                 kernel_size,
-                 dilation_rate,
-                 n_layers,
-                 gin_channels=0):
-        super().__init__()
-        self.in_channels = in_channels
-        self.out_channels = out_channels
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
-        self.dilation_rate = dilation_rate
-        self.n_layers = n_layers
-        self.gin_channels = gin_channels
-
-        self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
-        self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels)
-        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
-    def forward(self, x, x_lengths, g=None):
-        x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype)
-        x = self.pre(x) * x_mask
-        x = self.enc(x, x_mask, g=g)
-        stats = self.proj(x) * x_mask
-        m, logs = torch.split(stats, self.out_channels, dim=1)
-        z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
-        return z, m, logs, x_mask
|
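For context, the deleted posterior encoder maps a spectrogram to a sampled latent z plus its mean, log-variance and frame mask. A minimal usage sketch, assuming the VITS `modules` and `commons` helpers are importable next to this file; the channel sizes below are illustrative assumptions, not values taken from this repository:

import torch
from posterior_encoder import PosteriorEncoder  # the deleted module above (assumed importable)

# assumed channel sizes, roughly in line with common VITS configs
enc = PosteriorEncoder(in_channels=513, out_channels=192, hidden_channels=192,
                       kernel_size=5, dilation_rate=1, n_layers=16)

spec = torch.randn(2, 513, 100)          # [batch, channels, frames]
spec_lengths = torch.tensor([100, 80])   # valid frames per item
z, m, logs, x_mask = enc(spec, spec_lengths)
# z, m, logs: [2, 192, 100]; x_mask: [2, 1, 100], zero where frames are padding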
spaces/Alpaca233/SadTalker/src/face3d/models/arcface_torch/configs/ms1mv3_r50.py DELETED
@@ -1,26 +0,0 @@
-from easydict import EasyDict as edict
-
-# make training faster
-# our RAM is 256G
-# mount -t tmpfs -o size=140G tmpfs /train_tmp
-
-config = edict()
-config.loss = "arcface"
-config.network = "r50"
-config.resume = False
-config.output = None
-config.embedding_size = 512
-config.sample_rate = 1.0
-config.fp16 = True
-config.momentum = 0.9
-config.weight_decay = 5e-4
-config.batch_size = 128
-config.lr = 0.1  # batch size is 512
-
-config.rec = "/train_tmp/ms1m-retinaface-t1"
-config.num_classes = 93431
-config.num_image = 5179510
-config.num_epoch = 25
-config.warmup_epoch = -1
-config.decay_epoch = [10, 16, 22]
-config.val_targets = ["lfw", "cfp_fp", "agedb_30"]
spaces/Alycer/VITS-Umamusume-voice-synthesizer/commons.py DELETED
@@ -1,97 +0,0 @@
-import math
-import torch
-from torch.nn import functional as F
-import torch.jit
-
-
-def script_method(fn, _rcb=None):
-    return fn
-
-
-def script(obj, optimize=True, _frames_up=0, _rcb=None):
-    return obj
-
-
-torch.jit.script_method = script_method
-torch.jit.script = script
-
-
-def init_weights(m, mean=0.0, std=0.01):
-    classname = m.__class__.__name__
-    if classname.find("Conv") != -1:
-        m.weight.data.normal_(mean, std)
-
-
-def get_padding(kernel_size, dilation=1):
-    return int((kernel_size*dilation - dilation)/2)
-
-
-def intersperse(lst, item):
-    result = [item] * (len(lst) * 2 + 1)
-    result[1::2] = lst
-    return result
-
-
-def slice_segments(x, ids_str, segment_size=4):
-    ret = torch.zeros_like(x[:, :, :segment_size])
-    for i in range(x.size(0)):
-        idx_str = ids_str[i]
-        idx_end = idx_str + segment_size
-        ret[i] = x[i, :, idx_str:idx_end]
-    return ret
-
-
-def rand_slice_segments(x, x_lengths=None, segment_size=4):
-    b, d, t = x.size()
-    if x_lengths is None:
-        x_lengths = t
-    ids_str_max = x_lengths - segment_size + 1
-    ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
-    ret = slice_segments(x, ids_str, segment_size)
-    return ret, ids_str
-
-
-def subsequent_mask(length):
-    mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
-    return mask
-
-
-@torch.jit.script
-def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
-    n_channels_int = n_channels[0]
-    in_act = input_a + input_b
-    t_act = torch.tanh(in_act[:, :n_channels_int, :])
-    s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
-    acts = t_act * s_act
-    return acts
-
-
-def convert_pad_shape(pad_shape):
-    l = pad_shape[::-1]
-    pad_shape = [item for sublist in l for item in sublist]
-    return pad_shape
-
-
-def sequence_mask(length, max_length=None):
-    if max_length is None:
-        max_length = length.max()
-    x = torch.arange(max_length, dtype=length.dtype, device=length.device)
-    return x.unsqueeze(0) < length.unsqueeze(1)
-
-
-def generate_path(duration, mask):
-    """
-    duration: [b, 1, t_x]
-    mask: [b, 1, t_y, t_x]
-    """
-    device = duration.device
-
-    b, _, t_y, t_x = mask.shape
-    cum_duration = torch.cumsum(duration, -1)
-
-    cum_duration_flat = cum_duration.view(b * t_x)
-    path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
-    path = path.view(b, t_x, t_y)
-    path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
-    path = path.unsqueeze(1).transpose(2,3) * mask
-    return path
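Two of the helpers in the deleted commons.py do most of the masking work: sequence_mask builds a boolean [batch, max_len] mask from per-item lengths, and rand_slice_segments crops random fixed-size windows from padded feature maps. A minimal sketch, assuming the file above is importable; tensor sizes are made up for illustration:

import torch
import commons  # the deleted module shown above (assumed importable)

lengths = torch.tensor([4, 2])
mask = commons.sequence_mask(lengths, max_length=5)
# tensor([[ True,  True,  True,  True, False],
#         [ True,  True, False, False, False]])

x = torch.randn(2, 80, 32)  # [batch, channels, frames]
segments, ids = commons.rand_slice_segments(x, torch.tensor([32, 20]), segment_size=8)
# segments: [2, 80, 8]; ids holds the random start frame chosen for each item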
spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/latent_codes_pool.py DELETED
@@ -1,58 +0,0 @@
-import random
-import torch
-
-
-class LatentCodesPool:
-    """This class implements latent codes buffer that stores previously generated w latent codes.
-    This buffer enables us to update discriminators using a history of generated w's
-    rather than the ones produced by the latest encoder.
-    """
-
-    def __init__(self, pool_size):
-        """Initialize the ImagePool class
-        Parameters:
-            pool_size (int) -- the size of image buffer, if pool_size=0, no buffer will be created
-        """
-        self.pool_size = pool_size
-        if self.pool_size > 0:  # create an empty pool
-            self.num_ws = 0
-            self.ws = []
-
-    def query(self, ws):
-        """Return w's from the pool.
-        Parameters:
-            ws: the latest generated w's from the generator
-        Returns w's from the buffer.
-        By 50/100, the buffer will return input w's.
-        By 50/100, the buffer will return w's previously stored in the buffer,
-        and insert the current w's to the buffer.
-        """
-        if self.pool_size == 0:  # if the buffer size is 0, do nothing
-            return ws
-        return_ws = []
-        for w in ws:  # ws.shape: (batch, 512) or (batch, n_latent, 512)
-            # w = torch.unsqueeze(image.data, 0)
-            if w.ndim == 2:
-                # apply a random latent index as a candidate
-                i = random.randint(0, len(w) - 1)
-                w = w[i]
-            self.handle_w(w, return_ws)
-        # collect all the images and return
-        return_ws = torch.stack(return_ws, 0)
-        return return_ws
-
-    def handle_w(self, w, return_ws):
-        if self.num_ws < self.pool_size:  # if the buffer is not full; keep inserting current codes to the buffer
-            self.num_ws = self.num_ws + 1
-            self.ws.append(w)
-            return_ws.append(w)
-        else:
-            p = random.uniform(0, 1)
-            if p > 0.5:  # by 50% chance, the buffer will return a previously stored latent code, and insert the current code into the buffer
-                random_id = random.randint(
-                    0, self.pool_size - 1)  # randint is inclusive
-                tmp = self.ws[random_id].clone()
-                self.ws[random_id] = w
-                return_ws.append(tmp)
-            else:  # by another 50% chance, the buffer will return the current image
-                return_ws.append(w)
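The docstring above describes a replay-buffer style pool: once the buffer is full, each incoming w is either returned unchanged or swapped for a previously stored code with 50% probability. A minimal usage sketch, assuming the deleted file is importable as latent_codes_pool; shapes are illustrative:

import torch
from latent_codes_pool import LatentCodesPool  # the deleted module above (assumed importable)

pool = LatentCodesPool(pool_size=50)
ws = torch.randn(8, 512)        # a batch of w latent codes from the encoder
history_ws = pool.query(ws)     # same shape; some rows may come from the buffer once it fills up
assert history_ws.shape == (8, 512)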
spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/loop/feature_training_loop.py DELETED
@@ -1,145 +0,0 @@
-import numpy as np
-import torch
-from pytorch_lightning.loops import Loop
-
-from src.dataset import DATASET_REGISTRY
-from src.dataset.ray_utils import denormalize_vgg, normalize_vgg
-from src.loop.utils import N_to_reso, cal_n_samples
-from src.model import MODEL_REGISTRY
-from src.sampler.simple_sampler import SimpleSampler, InfiniteSamplerWrapper
-import torch.nn.functional as TF
-
-
-class FeatureTrainingLoop(Loop):
-    def __init__(self, epoch, cfg, renderer):
-        super().__init__()
-        self.cfg = cfg
-        self.model = MODEL_REGISTRY.get(self.cfg["model"]["name"])(cfg)
-
-        self.dataloader = DATASET_REGISTRY.get(self.cfg["dataset"]["name"])(
-            **self.cfg["dataset"]["train"]["params"],
-        )
-        self.renderer = renderer
-        self.optimizer = None
-        self.training_sampler = None
-        self.frame_sampler = None
-        self.iteration = 0
-        self.epoch = epoch
-        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-        self.init_loop()
-        self.init_optimizer()
-
-    def init_loop(self):
-        self.white_bg = self.dataloader.white_bg
-        self.near_far = self.dataloader.near_far
-        self.h_rays, self.w_rays = self.dataloader.img_wh[1], self.dataloader.img_wh[0]
-
-        self.step_ratio = self.cfg["sampler"]["params"]["step_ratio"]
-        self.batch_size = self.cfg["sampler"]["params"]["batch_size"]
-        self.patch_size = self.cfg["sampler"]["params"]["patch_size"]
-        self.chunk_size = self.cfg["sampler"]["params"]["chunk_size"]
-
-        self.aabb = self.dataloader.scene_bbox.to(self.device)
-        reso_cur = N_to_reso(self.cfg["sampler"]["params"]["N_voxel_init"], self.aabb)
-        self.nSamples = min(int(self.cfg["sampler"]["params"]["n_samples"]), cal_n_samples(reso_cur, self.step_ratio))
-
-        torch.cuda.empty_cache()
-        self.dataloader.prepare_feature_data(self.model.tensorf.encoder)
-        self.allrays, self.allfeatures = self.dataloader.all_rays, self.dataloader.all_features
-        self.allrays_stack, self.allrgbs_stack = self.dataloader.all_rays_stack, self.dataloader.all_rgbs_stack
-
-        if not self.model.ndc_ray:
-            self.allrays, self.allfeatures = self.model.tensorf.filtering_rays(self.allrays, self.allfeatures, bbox_only=True)
-
-        self.training_sampler = SimpleSampler(self.allrays.shape[0], self.batch_size)
-        self.frame_sampler = iter(InfiniteSamplerWrapper(self.allrays_stack.size(0)))  # every next(sampler) returns a frame index
-
-    def init_optimizer(self):
-        grad_vars = self.model.tensorf.get_optparam_groups_feature_mod(self.cfg["optimizer"]["lr_init"], self.cfg["optimizer"]["lr_basis"])
-
-        if self.cfg["optimizer"]["lr_decay_iters"] > 0:
-            self.lr_factor = self.cfg["optimizer"]["lr_decay_target_ratio"] ** (1 / self.cfg["optimizer"]["lr_decay_iters"])
-        else:
-            self.lr_factor = self.cfg["optimizer"]["lr_decay_target_ratio"] ** (1 / self.cfg["trainer"]["n_iters"])
-
-        print("lr decay", self.cfg["optimizer"]["lr_decay_target_ratio"], self.cfg["optimizer"]["lr_decay_iters"])
-
-        self.optimizer = torch.optim.Adam(grad_vars, betas=(0.9, 0.99))
-
-    @property
-    def done(self):
-        """Advance from one iteration to the next."""
-        return self.epoch < self.iteration
-
-    def reset(self):
-        """Advance from one iteration to the next."""
-
-    def advance(self):
-        """Advance from one iteration to the next."""
-        feature_loss, pixel_loss = 0., 0.
-        if self.iteration % 2 == 0:
-            ray_idx = self.training_sampler.nextids()
-            rays_train, features_train = self.allrays[ray_idx], self.allfeatures[ray_idx].to(self.device)
-
-            feature_map, _ = self.renderer(rays_train, self.model.tensorf, chunk=self.chunk_size, N_samples=self.nSamples, white_bg=self.white_bg,
-                                           ndc_ray=self.model.ndc_ray, render_feature=True, device=self.device, is_train=True)
-
-            feature_loss = torch.mean((feature_map - features_train) ** 2)
-        else:
-            frame_idx = next(self.frame_sampler)
-            start_h = np.random.randint(0, self.h_rays - self.patch_size + 1)
-            start_w = np.random.randint(0, self.w_rays - self.patch_size + 1)
-            if self.white_bg:
-                # move random sampled patches into center
-                mid_h, mid_w = (self.h_rays - self.patch_size + 1) / 2, (self.w_rays - self.patch_size + 1) / 2
-                if mid_h - start_h >= 1:
-                    start_h += np.random.randint(0, mid_h - start_h)
-                elif mid_h - start_h <= -1:
-                    start_h += np.random.randint(mid_h - start_h, 0)
-                if mid_w - start_w >= 1:
-                    start_w += np.random.randint(0, mid_w - start_w)
-                elif mid_w - start_w <= -1:
-                    start_w += np.random.randint(mid_w - start_w, 0)
-
-            rays_train = self.allrays_stack[frame_idx, start_h:start_h + self.patch_size,
-                                            start_w:start_w + self.patch_size, :].reshape(-1, 6).to(self.device)
-            # [patch*patch, 6]
-
-            rgbs_train = self.allrgbs_stack[frame_idx, start_h:(start_h + self.patch_size),
-                                            start_w:(start_w + self.patch_size), :].to(self.device)
-            # [patch, patch, 3]
-
-            feature_map, _ = self.renderer(rays_train, self.model.tensorf, chunk=self.chunk_size, N_samples=self.nSamples, white_bg=self.white_bg,
-                                           ndc_ray=self.model.ndc_ray, render_feature=True, device=self.device, is_train=True)
-
-            feature_map = feature_map.reshape(self.patch_size, self.patch_size, 256)[None, ...].permute(0, 3, 1, 2)
-            recon_rgb = self.model.tensorf.decoder(feature_map)
-
-            rgbs_train = rgbs_train[None, ...].permute(0, 3, 1, 2)
-            img_enc = self.model.tensorf.encoder(normalize_vgg(rgbs_train))
-            recon_rgb_enc = self.model.tensorf.encoder(recon_rgb)
-
-            feature_loss = (TF.mse_loss(recon_rgb_enc.relu4_1, img_enc.relu4_1) +
-                            TF.mse_loss(recon_rgb_enc.relu3_1, img_enc.relu3_1)) / 10
-
-            recon_rgb = denormalize_vgg(recon_rgb)
-
-            pixel_loss = torch.mean((recon_rgb - rgbs_train) ** 2)
-
-        total_loss = pixel_loss + feature_loss
-
-        # loss
-        # NOTE: Calculate feature TV loss rather than appearence TV loss
-        if self.model.TV_weight_feature > 0:
-            self.model.TV_weight_feature *= self.lr_factor
-            loss_tv = self.model.tensorf.TV_loss_feature(self.model.tvreg) * self.model.TV_weight_feature
-            total_loss = total_loss + loss_tv
-
-        self.iteration += 1
-
-        self.optimizer.zero_grad()
-        total_loss.backward()
-        self.optimizer.step()
-
-
-
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/mixture_canvas.py DELETED
@@ -1,503 +0,0 @@
-import re
-from copy import deepcopy
-from dataclasses import asdict, dataclass
-from enum import Enum
-from typing import List, Optional, Union
-
-import numpy as np
-import torch
-from numpy import exp, pi, sqrt
-from torchvision.transforms.functional import resize
-from tqdm.auto import tqdm
-from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
-
-from diffusers.models import AutoencoderKL, UNet2DConditionModel
-from diffusers.pipeline_utils import DiffusionPipeline
-from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker
-from diffusers.schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler
-
-
-def preprocess_image(image):
-    from PIL import Image
-
-    """Preprocess an input image
-
-    Same as
-    https://github.com/huggingface/diffusers/blob/1138d63b519e37f0ce04e027b9f4a3261d27c628/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_img2img.py#L44
-    """
-    w, h = image.size
-    w, h = (x - x % 32 for x in (w, h))  # resize to integer multiple of 32
-    image = image.resize((w, h), resample=Image.LANCZOS)
-    image = np.array(image).astype(np.float32) / 255.0
-    image = image[None].transpose(0, 3, 1, 2)
-    image = torch.from_numpy(image)
-    return 2.0 * image - 1.0
-
-
-@dataclass
-class CanvasRegion:
-    """Class defining a rectangular region in the canvas"""
-
-    row_init: int  # Region starting row in pixel space (included)
-    row_end: int  # Region end row in pixel space (not included)
-    col_init: int  # Region starting column in pixel space (included)
-    col_end: int  # Region end column in pixel space (not included)
-    region_seed: int = None  # Seed for random operations in this region
-    noise_eps: float = 0.0  # Deviation of a zero-mean gaussian noise to be applied over the latents in this region. Useful for slightly "rerolling" latents
-
-    def __post_init__(self):
-        # Initialize arguments if not specified
-        if self.region_seed is None:
-            self.region_seed = np.random.randint(9999999999)
-        # Check coordinates are non-negative
-        for coord in [self.row_init, self.row_end, self.col_init, self.col_end]:
-            if coord < 0:
-                raise ValueError(
-                    f"A CanvasRegion must be defined with non-negative indices, found ({self.row_init}, {self.row_end}, {self.col_init}, {self.col_end})"
-                )
-        # Check coordinates are divisible by 8, else we end up with nasty rounding error when mapping to latent space
-        for coord in [self.row_init, self.row_end, self.col_init, self.col_end]:
-            if coord // 8 != coord / 8:
-                raise ValueError(
-                    f"A CanvasRegion must be defined with locations divisible by 8, found ({self.row_init}-{self.row_end}, {self.col_init}-{self.col_end})"
-                )
-        # Check noise eps is non-negative
-        if self.noise_eps < 0:
-            raise ValueError(f"A CanvasRegion must be defined noises eps non-negative, found {self.noise_eps}")
-        # Compute coordinates for this region in latent space
-        self.latent_row_init = self.row_init // 8
-        self.latent_row_end = self.row_end // 8
-        self.latent_col_init = self.col_init // 8
-        self.latent_col_end = self.col_end // 8
-
-    @property
-    def width(self):
-        return self.col_end - self.col_init
-
-    @property
-    def height(self):
-        return self.row_end - self.row_init
-
-    def get_region_generator(self, device="cpu"):
-        """Creates a torch.Generator based on the random seed of this region"""
-        # Initialize region generator
-        return torch.Generator(device).manual_seed(self.region_seed)
-
-    @property
-    def __dict__(self):
-        return asdict(self)
-
-
-class MaskModes(Enum):
-    """Modes in which the influence of diffuser is masked"""
-
-    CONSTANT = "constant"
-    GAUSSIAN = "gaussian"
-    QUARTIC = "quartic"  # See https://en.wikipedia.org/wiki/Kernel_(statistics)
-
-
-@dataclass
-class DiffusionRegion(CanvasRegion):
-    """Abstract class defining a region where some class of diffusion process is acting"""
-
-    pass
-
-
-@dataclass
-class Text2ImageRegion(DiffusionRegion):
-    """Class defining a region where a text guided diffusion process is acting"""
-
-    prompt: str = ""  # Text prompt guiding the diffuser in this region
-    guidance_scale: float = 7.5  # Guidance scale of the diffuser in this region. If None, randomize
-    mask_type: MaskModes = MaskModes.GAUSSIAN.value  # Kind of weight mask applied to this region
-    mask_weight: float = 1.0  # Global weights multiplier of the mask
-    tokenized_prompt = None  # Tokenized prompt
-    encoded_prompt = None  # Encoded prompt
-
-    def __post_init__(self):
-        super().__post_init__()
-        # Mask weight cannot be negative
-        if self.mask_weight < 0:
-            raise ValueError(
-                f"A Text2ImageRegion must be defined with non-negative mask weight, found {self.mask_weight}"
-            )
-        # Mask type must be an actual known mask
-        if self.mask_type not in [e.value for e in MaskModes]:
-            raise ValueError(
-                f"A Text2ImageRegion was defined with mask {self.mask_type}, which is not an accepted mask ({[e.value for e in MaskModes]})"
-            )
-        # Randomize arguments if given as None
-        if self.guidance_scale is None:
-            self.guidance_scale = np.random.randint(5, 30)
-        # Clean prompt
-        self.prompt = re.sub(" +", " ", self.prompt).replace("\n", " ")
-
-    def tokenize_prompt(self, tokenizer):
-        """Tokenizes the prompt for this diffusion region using a given tokenizer"""
-        self.tokenized_prompt = tokenizer(
-            self.prompt,
-            padding="max_length",
-            max_length=tokenizer.model_max_length,
-            truncation=True,
-            return_tensors="pt",
-        )
-
-    def encode_prompt(self, text_encoder, device):
-        """Encodes the previously tokenized prompt for this diffusion region using a given encoder"""
-        assert self.tokenized_prompt is not None, ValueError(
-            "Prompt in diffusion region must be tokenized before encoding"
-        )
-        self.encoded_prompt = text_encoder(self.tokenized_prompt.input_ids.to(device))[0]
-
-
-@dataclass
-class Image2ImageRegion(DiffusionRegion):
-    """Class defining a region where an image guided diffusion process is acting"""
-
-    reference_image: torch.FloatTensor = None
-    strength: float = 0.8  # Strength of the image
-
-    def __post_init__(self):
-        super().__post_init__()
-        if self.reference_image is None:
-            raise ValueError("Must provide a reference image when creating an Image2ImageRegion")
-        if self.strength < 0 or self.strength > 1:
-            raise ValueError(f"The value of strength should in [0.0, 1.0] but is {self.strength}")
-        # Rescale image to region shape
-        self.reference_image = resize(self.reference_image, size=[self.height, self.width])
-
-    def encode_reference_image(self, encoder, device, generator, cpu_vae=False):
-        """Encodes the reference image for this Image2Image region into the latent space"""
-        # Place encoder in CPU or not following the parameter cpu_vae
-        if cpu_vae:
-            # Note here we use mean instead of sample, to avoid moving also generator to CPU, which is troublesome
-            self.reference_latents = encoder.cpu().encode(self.reference_image).latent_dist.mean.to(device)
-        else:
-            self.reference_latents = encoder.encode(self.reference_image.to(device)).latent_dist.sample(
-                generator=generator
-            )
-        self.reference_latents = 0.18215 * self.reference_latents
-
-    @property
-    def __dict__(self):
-        # This class requires special casting to dict because of the reference_image tensor. Otherwise it cannot be casted to JSON
-
-        # Get all basic fields from parent class
-        super_fields = {key: getattr(self, key) for key in DiffusionRegion.__dataclass_fields__.keys()}
-        # Pack other fields
-        return {**super_fields, "reference_image": self.reference_image.cpu().tolist(), "strength": self.strength}
-
-
-class RerollModes(Enum):
-    """Modes in which the reroll regions operate"""
-
-    RESET = "reset"  # Completely reset the random noise in the region
-    EPSILON = "epsilon"  # Alter slightly the latents in the region
-
-
-@dataclass
-class RerollRegion(CanvasRegion):
-    """Class defining a rectangular canvas region in which initial latent noise will be rerolled"""
-
-    reroll_mode: RerollModes = RerollModes.RESET.value
-
-
-@dataclass
-class MaskWeightsBuilder:
-    """Auxiliary class to compute a tensor of weights for a given diffusion region"""
-
-    latent_space_dim: int  # Size of the U-net latent space
-    nbatch: int = 1  # Batch size in the U-net
-
-    def compute_mask_weights(self, region: DiffusionRegion) -> torch.tensor:
-        """Computes a tensor of weights for a given diffusion region"""
-        MASK_BUILDERS = {
-            MaskModes.CONSTANT.value: self._constant_weights,
-            MaskModes.GAUSSIAN.value: self._gaussian_weights,
-            MaskModes.QUARTIC.value: self._quartic_weights,
-        }
-        return MASK_BUILDERS[region.mask_type](region)
-
-    def _constant_weights(self, region: DiffusionRegion) -> torch.tensor:
-        """Computes a tensor of constant for a given diffusion region"""
-        latent_width = region.latent_col_end - region.latent_col_init
-        latent_height = region.latent_row_end - region.latent_row_init
-        return torch.ones(self.nbatch, self.latent_space_dim, latent_height, latent_width) * region.mask_weight
-
-    def _gaussian_weights(self, region: DiffusionRegion) -> torch.tensor:
-        """Generates a gaussian mask of weights for tile contributions"""
-        latent_width = region.latent_col_end - region.latent_col_init
-        latent_height = region.latent_row_end - region.latent_row_init
-
-        var = 0.01
-        midpoint = (latent_width - 1) / 2  # -1 because index goes from 0 to latent_width - 1
-        x_probs = [
-            exp(-(x - midpoint) * (x - midpoint) / (latent_width * latent_width) / (2 * var)) / sqrt(2 * pi * var)
-            for x in range(latent_width)
-        ]
-        midpoint = (latent_height - 1) / 2
-        y_probs = [
-            exp(-(y - midpoint) * (y - midpoint) / (latent_height * latent_height) / (2 * var)) / sqrt(2 * pi * var)
-            for y in range(latent_height)
-        ]
-
-        weights = np.outer(y_probs, x_probs) * region.mask_weight
-        return torch.tile(torch.tensor(weights), (self.nbatch, self.latent_space_dim, 1, 1))
-
-    def _quartic_weights(self, region: DiffusionRegion) -> torch.tensor:
-        """Generates a quartic mask of weights for tile contributions
-
-        The quartic kernel has bounded support over the diffusion region, and a smooth decay to the region limits.
-        """
-        quartic_constant = 15.0 / 16.0
-
-        support = (np.array(range(region.latent_col_init, region.latent_col_end)) - region.latent_col_init) / (
-            region.latent_col_end - region.latent_col_init - 1
-        ) * 1.99 - (1.99 / 2.0)
-        x_probs = quartic_constant * np.square(1 - np.square(support))
-        support = (np.array(range(region.latent_row_init, region.latent_row_end)) - region.latent_row_init) / (
-            region.latent_row_end - region.latent_row_init - 1
-        ) * 1.99 - (1.99 / 2.0)
-        y_probs = quartic_constant * np.square(1 - np.square(support))
-
-        weights = np.outer(y_probs, x_probs) * region.mask_weight
-        return torch.tile(torch.tensor(weights), (self.nbatch, self.latent_space_dim, 1, 1))
-
-
-class StableDiffusionCanvasPipeline(DiffusionPipeline):
-    """Stable Diffusion pipeline that mixes several diffusers in the same canvas"""
-
-    def __init__(
-        self,
-        vae: AutoencoderKL,
-        text_encoder: CLIPTextModel,
-        tokenizer: CLIPTokenizer,
-        unet: UNet2DConditionModel,
-        scheduler: Union[DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler],
-        safety_checker: StableDiffusionSafetyChecker,
-        feature_extractor: CLIPFeatureExtractor,
-    ):
-        super().__init__()
-        self.register_modules(
-            vae=vae,
-            text_encoder=text_encoder,
-            tokenizer=tokenizer,
-            unet=unet,
-            scheduler=scheduler,
-            safety_checker=safety_checker,
-            feature_extractor=feature_extractor,
-        )
-
-    def decode_latents(self, latents, cpu_vae=False):
-        """Decodes a given array of latents into pixel space"""
-        # scale and decode the image latents with vae
-        if cpu_vae:
-            lat = deepcopy(latents).cpu()
-            vae = deepcopy(self.vae).cpu()
-        else:
-            lat = latents
-            vae = self.vae
-
-        lat = 1 / 0.18215 * lat
-        image = vae.decode(lat).sample
-
-        image = (image / 2 + 0.5).clamp(0, 1)
-        image = image.cpu().permute(0, 2, 3, 1).numpy()
-
-        return self.numpy_to_pil(image)
-
-    def get_latest_timestep_img2img(self, num_inference_steps, strength):
-        """Finds the latest timesteps where an img2img strength does not impose latents anymore"""
-        # get the original timestep using init_timestep
-        offset = self.scheduler.config.get("steps_offset", 0)
-        init_timestep = int(num_inference_steps * (1 - strength)) + offset
-        init_timestep = min(init_timestep, num_inference_steps)
-
-        t_start = min(max(num_inference_steps - init_timestep + offset, 0), num_inference_steps - 1)
-        latest_timestep = self.scheduler.timesteps[t_start]
-
-        return latest_timestep
-
-    @torch.no_grad()
-    def __call__(
-        self,
-        canvas_height: int,
-        canvas_width: int,
-        regions: List[DiffusionRegion],
-        num_inference_steps: Optional[int] = 50,
-        seed: Optional[int] = 12345,
-        reroll_regions: Optional[List[RerollRegion]] = None,
-        cpu_vae: Optional[bool] = False,
-        decode_steps: Optional[bool] = False,
-    ):
-        if reroll_regions is None:
-            reroll_regions = []
-        batch_size = 1
-
-        if decode_steps:
-            steps_images = []
-
-        # Prepare scheduler
-        self.scheduler.set_timesteps(num_inference_steps, device=self.device)
-
-        # Split diffusion regions by their kind
-        text2image_regions = [region for region in regions if isinstance(region, Text2ImageRegion)]
-        image2image_regions = [region for region in regions if isinstance(region, Image2ImageRegion)]
-
-        # Prepare text embeddings
-        for region in text2image_regions:
-            region.tokenize_prompt(self.tokenizer)
-            region.encode_prompt(self.text_encoder, self.device)
-
-        # Create original noisy latents using the timesteps
-        latents_shape = (batch_size, self.unet.config.in_channels, canvas_height // 8, canvas_width // 8)
-        generator = torch.Generator(self.device).manual_seed(seed)
-        init_noise = torch.randn(latents_shape, generator=generator, device=self.device)
-
-        # Reset latents in seed reroll regions, if requested
-        for region in reroll_regions:
-            if region.reroll_mode == RerollModes.RESET.value:
-                region_shape = (
-                    latents_shape[0],
-                    latents_shape[1],
-                    region.latent_row_end - region.latent_row_init,
-                    region.latent_col_end - region.latent_col_init,
-                )
-                init_noise[
-                    :,
-                    :,
-                    region.latent_row_init : region.latent_row_end,
-                    region.latent_col_init : region.latent_col_end,
-                ] = torch.randn(region_shape, generator=region.get_region_generator(self.device), device=self.device)
-
-        # Apply epsilon noise to regions: first diffusion regions, then reroll regions
-        all_eps_rerolls = regions + [r for r in reroll_regions if r.reroll_mode == RerollModes.EPSILON.value]
-        for region in all_eps_rerolls:
-            if region.noise_eps > 0:
-                region_noise = init_noise[
-                    :,
-                    :,
-                    region.latent_row_init : region.latent_row_end,
-                    region.latent_col_init : region.latent_col_end,
-                ]
-                eps_noise = (
-                    torch.randn(
-                        region_noise.shape, generator=region.get_region_generator(self.device), device=self.device
-                    )
-                    * region.noise_eps
-                )
-                init_noise[
-                    :,
-                    :,
-                    region.latent_row_init : region.latent_row_end,
-                    region.latent_col_init : region.latent_col_end,
-                ] += eps_noise
-
-        # scale the initial noise by the standard deviation required by the scheduler
-        latents = init_noise * self.scheduler.init_noise_sigma
-
-        # Get unconditional embeddings for classifier free guidance in text2image regions
-        for region in text2image_regions:
-            max_length = region.tokenized_prompt.input_ids.shape[-1]
-            uncond_input = self.tokenizer(
-                [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt"
-            )
-            uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
-
-            # For classifier free guidance, we need to do two forward passes.
-            # Here we concatenate the unconditional and text embeddings into a single batch
-            # to avoid doing two forward passes
-            region.encoded_prompt = torch.cat([uncond_embeddings, region.encoded_prompt])
-
-        # Prepare image latents
-        for region in image2image_regions:
-            region.encode_reference_image(self.vae, device=self.device, generator=generator)
-
-        # Prepare mask of weights for each region
-        mask_builder = MaskWeightsBuilder(latent_space_dim=self.unet.config.in_channels, nbatch=batch_size)
-        mask_weights = [mask_builder.compute_mask_weights(region).to(self.device) for region in text2image_regions]
-
-        # Diffusion timesteps
-        for i, t in tqdm(enumerate(self.scheduler.timesteps)):
-            # Diffuse each region
-            noise_preds_regions = []
-
-            # text2image regions
-            for region in text2image_regions:
-                region_latents = latents[
-                    :,
-                    :,
-                    region.latent_row_init : region.latent_row_end,
-                    region.latent_col_init : region.latent_col_end,
-                ]
-                # expand the latents if we are doing classifier free guidance
-                latent_model_input = torch.cat([region_latents] * 2)
-                # scale model input following scheduler rules
-                latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-                # predict the noise residual
-                noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=region.encoded_prompt)["sample"]
-                # perform guidance
-                noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
-                noise_pred_region = noise_pred_uncond + region.guidance_scale * (noise_pred_text - noise_pred_uncond)
-                noise_preds_regions.append(noise_pred_region)
-
-            # Merge noise predictions for all tiles
-            noise_pred = torch.zeros(latents.shape, device=self.device)
-            contributors = torch.zeros(latents.shape, device=self.device)
-            # Add each tile contribution to overall latents
-            for region, noise_pred_region, mask_weights_region in zip(
-                text2image_regions, noise_preds_regions, mask_weights
-            ):
-                noise_pred[
-                    :,
-                    :,
-                    region.latent_row_init : region.latent_row_end,
-                    region.latent_col_init : region.latent_col_end,
-                ] += (
-                    noise_pred_region * mask_weights_region
-                )
-                contributors[
-                    :,
-                    :,
-                    region.latent_row_init : region.latent_row_end,
-                    region.latent_col_init : region.latent_col_end,
-                ] += mask_weights_region
-            # Average overlapping areas with more than 1 contributor
-            noise_pred /= contributors
-            noise_pred = torch.nan_to_num(
-                noise_pred
-            )  # Replace NaNs by zeros: NaN can appear if a position is not covered by any DiffusionRegion
-
-            # compute the previous noisy sample x_t -> x_t-1
-            latents = self.scheduler.step(noise_pred, t, latents).prev_sample
-
-            # Image2Image regions: override latents generated by the scheduler
-            for region in image2image_regions:
-                influence_step = self.get_latest_timestep_img2img(num_inference_steps, region.strength)
-                # Only override in the timesteps before the last influence step of the image (given by its strength)
-                if t > influence_step:
-                    timestep = t.repeat(batch_size)
-                    region_init_noise = init_noise[
-                        :,
-                        :,
-                        region.latent_row_init : region.latent_row_end,
-                        region.latent_col_init : region.latent_col_end,
-                    ]
-                    region_latents = self.scheduler.add_noise(region.reference_latents, region_init_noise, timestep)
-                    latents[
-                        :,
-                        :,
-                        region.latent_row_init : region.latent_row_end,
-                        region.latent_col_init : region.latent_col_end,
-                    ] = region_latents
-
-            if decode_steps:
-                steps_images.append(self.decode_latents(latents, cpu_vae))
-
-        # scale and decode the image latents with vae
-        image = self.decode_latents(latents, cpu_vae)
-
-        output = {"images": image}
-        if decode_steps:
-            output = {**output, "steps_images": steps_images}
-        return output
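The pipeline above composites several regional diffusers on one latent canvas: each Text2ImageRegion contributes a masked noise prediction, overlaps are weight-averaged, and Image2ImageRegion latents override the scheduler output for early steps. A minimal usage sketch, assuming the deleted file is importable directly as mixture_canvas and that a standard Stable Diffusion checkpoint is available; the model id, prompts and region sizes are placeholders:

import torch
from mixture_canvas import StableDiffusionCanvasPipeline, Text2ImageRegion  # the file above (assumed importable)

pipe = StableDiffusionCanvasPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # placeholder checkpoint
).to("cuda")

output = pipe(
    canvas_height=640,
    canvas_width=1408,
    regions=[
        # (row_init, row_end, col_init, col_end) must be multiples of 8; the two regions overlap at cols 640-768
        Text2ImageRegion(0, 640, 0, 768, guidance_scale=8, prompt="a mountain lake at dawn"),
        Text2ImageRegion(0, 640, 640, 1408, guidance_scale=8, prompt="a dense pine forest"),
    ],
    num_inference_steps=50,
    seed=1234,
)
image = output["images"][0]  # PIL image covering the whole canvas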
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/unclip/test_unclip_image_variation.py
DELETED
@@ -1,522 +0,0 @@
|
|
1 |
-
# coding=utf-8
|
2 |
-
# Copyright 2023 HuggingFace Inc.
|
3 |
-
#
|
4 |
-
# Licensed under the Apache License, Version 2.0 (the "License");
|
5 |
-
# you may not use this file except in compliance with the License.
|
6 |
-
# You may obtain a copy of the License at
|
7 |
-
#
|
8 |
-
# http://www.apache.org/licenses/LICENSE-2.0
|
9 |
-
#
|
10 |
-
# Unless required by applicable law or agreed to in writing, software
|
11 |
-
# distributed under the License is distributed on an "AS IS" BASIS,
|
12 |
-
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
13 |
-
# See the License for the specific language governing permissions and
|
14 |
-
# limitations under the License.
|
15 |
-
|
16 |
-
import gc
|
17 |
-
import random
|
18 |
-
import unittest
|
19 |
-
|
20 |
-
import numpy as np
|
21 |
-
import torch
|
22 |
-
from transformers import (
|
23 |
-
CLIPImageProcessor,
|
24 |
-
CLIPTextConfig,
|
25 |
-
CLIPTextModelWithProjection,
|
26 |
-
CLIPTokenizer,
|
27 |
-
CLIPVisionConfig,
|
28 |
-
CLIPVisionModelWithProjection,
|
29 |
-
)
|
30 |
-
|
31 |
-
from diffusers import (
|
32 |
-
DiffusionPipeline,
|
33 |
-
UnCLIPImageVariationPipeline,
|
34 |
-
UnCLIPScheduler,
|
35 |
-
UNet2DConditionModel,
|
36 |
-
UNet2DModel,
|
37 |
-
)
|
38 |
-
from diffusers.pipelines.unclip.text_proj import UnCLIPTextProjModel
|
39 |
-
from diffusers.utils import floats_tensor, load_numpy, slow, torch_device
|
40 |
-
from diffusers.utils.testing_utils import enable_full_determinism, load_image, require_torch_gpu, skip_mps
|
41 |
-
|
42 |
-
from ..pipeline_params import IMAGE_VARIATION_BATCH_PARAMS, IMAGE_VARIATION_PARAMS
|
43 |
-
from ..test_pipelines_common import PipelineTesterMixin, assert_mean_pixel_difference
|
44 |
-
|
45 |
-
|
46 |
-
enable_full_determinism()
|
47 |
-
|
48 |
-
|
49 |
-
class UnCLIPImageVariationPipelineFastTests(PipelineTesterMixin, unittest.TestCase):
|
50 |
-
pipeline_class = UnCLIPImageVariationPipeline
|
51 |
-
params = IMAGE_VARIATION_PARAMS - {"height", "width", "guidance_scale"}
|
52 |
-
batch_params = IMAGE_VARIATION_BATCH_PARAMS
|
53 |
-
|
54 |
-
required_optional_params = [
|
55 |
-
"generator",
|
56 |
-
"return_dict",
|
57 |
-
"decoder_num_inference_steps",
|
58 |
-
"super_res_num_inference_steps",
|
59 |
-
]
|
60 |
-
test_xformers_attention = False
|
61 |
-
|
62 |
-
@property
|
63 |
-
def text_embedder_hidden_size(self):
|
64 |
-
return 32
|
65 |
-
|
66 |
-
@property
|
67 |
-
def time_input_dim(self):
|
68 |
-
return 32
|
69 |
-
|
70 |
-
@property
|
71 |
-
def block_out_channels_0(self):
|
72 |
-
return self.time_input_dim
|
73 |
-
|
74 |
-
@property
|
75 |
-
def time_embed_dim(self):
|
76 |
-
return self.time_input_dim * 4
|
77 |
-
|
78 |
-
@property
|
79 |
-
def cross_attention_dim(self):
|
80 |
-
return 100
|
81 |
-
|
82 |
-
@property
|
83 |
-
def dummy_tokenizer(self):
|
84 |
-
tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
|
85 |
-
return tokenizer
|
86 |
-
|
87 |
-
@property
|
88 |
-
def dummy_text_encoder(self):
|
89 |
-
torch.manual_seed(0)
|
90 |
-
config = CLIPTextConfig(
|
91 |
-
bos_token_id=0,
|
92 |
-
eos_token_id=2,
|
93 |
-
hidden_size=self.text_embedder_hidden_size,
|
94 |
-
projection_dim=self.text_embedder_hidden_size,
|
95 |
-
intermediate_size=37,
|
96 |
-
layer_norm_eps=1e-05,
|
97 |
-
num_attention_heads=4,
|
98 |
-
num_hidden_layers=5,
|
99 |
-
pad_token_id=1,
|
100 |
-
vocab_size=1000,
|
101 |
-
)
|
102 |
-
return CLIPTextModelWithProjection(config)
|
103 |
-
|
104 |
-
@property
|
105 |
-
def dummy_image_encoder(self):
|
106 |
-
torch.manual_seed(0)
|
107 |
-
config = CLIPVisionConfig(
|
108 |
-
hidden_size=self.text_embedder_hidden_size,
|
109 |
-
projection_dim=self.text_embedder_hidden_size,
|
110 |
-
num_hidden_layers=5,
|
111 |
-
num_attention_heads=4,
|
112 |
-
image_size=32,
|
113 |
-
intermediate_size=37,
|
114 |
-
patch_size=1,
|
115 |
-
)
|
116 |
-
return CLIPVisionModelWithProjection(config)
|
117 |
-
|
118 |
-
@property
|
119 |
-
def dummy_text_proj(self):
|
120 |
-
torch.manual_seed(0)
|
121 |
-
|
122 |
-
model_kwargs = {
|
123 |
-
"clip_embeddings_dim": self.text_embedder_hidden_size,
|
124 |
-
"time_embed_dim": self.time_embed_dim,
|
125 |
-
"cross_attention_dim": self.cross_attention_dim,
|
126 |
-
}
|
127 |
-
|
128 |
-
model = UnCLIPTextProjModel(**model_kwargs)
|
129 |
-
return model
|
130 |
-
|
131 |
-
@property
|
132 |
-
def dummy_decoder(self):
|
133 |
-
torch.manual_seed(0)
|
134 |
-
|
135 |
-
model_kwargs = {
|
136 |
-
"sample_size": 32,
|
137 |
-
# RGB in channels
|
138 |
-
"in_channels": 3,
|
139 |
-
# Out channels is double in channels because predicts mean and variance
|
140 |
-
"out_channels": 6,
|
141 |
-
"down_block_types": ("ResnetDownsampleBlock2D", "SimpleCrossAttnDownBlock2D"),
|
142 |
-
"up_block_types": ("SimpleCrossAttnUpBlock2D", "ResnetUpsampleBlock2D"),
|
143 |
-
"mid_block_type": "UNetMidBlock2DSimpleCrossAttn",
|
144 |
-
"block_out_channels": (self.block_out_channels_0, self.block_out_channels_0 * 2),
|
145 |
-
"layers_per_block": 1,
|
146 |
-
"cross_attention_dim": self.cross_attention_dim,
|
147 |
-
"attention_head_dim": 4,
|
148 |
-
"resnet_time_scale_shift": "scale_shift",
|
149 |
-
"class_embed_type": "identity",
|
150 |
-
}
|
151 |
-
|
152 |
-
model = UNet2DConditionModel(**model_kwargs)
|
153 |
-
return model
|
154 |
-
|
155 |
-
@property
|
156 |
-
def dummy_super_res_kwargs(self):
|
157 |
-
return {
|
158 |
-
"sample_size": 64,
|
159 |
-
"layers_per_block": 1,
|
160 |
-
"down_block_types": ("ResnetDownsampleBlock2D", "ResnetDownsampleBlock2D"),
|
161 |
-
"up_block_types": ("ResnetUpsampleBlock2D", "ResnetUpsampleBlock2D"),
|
162 |
-
"block_out_channels": (self.block_out_channels_0, self.block_out_channels_0 * 2),
|
163 |
-
"in_channels": 6,
|
164 |
-
"out_channels": 3,
|
165 |
-
}
|
166 |
-
|
167 |
-
@property
|
168 |
-
def dummy_super_res_first(self):
|
169 |
-
torch.manual_seed(0)
|
170 |
-
|
171 |
-
model = UNet2DModel(**self.dummy_super_res_kwargs)
|
172 |
-
return model
|
173 |
-
|
174 |
-
@property
|
175 |
-
def dummy_super_res_last(self):
|
176 |
-
# seeded differently to get different unet than `self.dummy_super_res_first`
|
177 |
-
torch.manual_seed(1)
|
178 |
-
|
179 |
-
model = UNet2DModel(**self.dummy_super_res_kwargs)
|
180 |
-
return model
|
181 |
-
|
182 |
-
def get_dummy_components(self):
|
183 |
-
decoder = self.dummy_decoder
|
184 |
-
text_proj = self.dummy_text_proj
|
185 |
-
text_encoder = self.dummy_text_encoder
|
186 |
-
tokenizer = self.dummy_tokenizer
|
187 |
-
super_res_first = self.dummy_super_res_first
|
188 |
-
super_res_last = self.dummy_super_res_last
|
189 |
-
|
190 |
-
decoder_scheduler = UnCLIPScheduler(
|
191 |
-
variance_type="learned_range",
|
192 |
-
prediction_type="epsilon",
|
193 |
-
num_train_timesteps=1000,
|
194 |
-
)
|
195 |
-
|
196 |
-
super_res_scheduler = UnCLIPScheduler(
|
197 |
-
variance_type="fixed_small_log",
|
198 |
-
prediction_type="epsilon",
|
199 |
-
num_train_timesteps=1000,
|
200 |
-
)
|
201 |
-
|
202 |
-
feature_extractor = CLIPImageProcessor(crop_size=32, size=32)
|
203 |
-
|
204 |
-
image_encoder = self.dummy_image_encoder
|
205 |
-
|
206 |
-
return {
|
207 |
-
"decoder": decoder,
|
208 |
-
"text_encoder": text_encoder,
|
209 |
-
"tokenizer": tokenizer,
|
210 |
-
"text_proj": text_proj,
|
211 |
-
"feature_extractor": feature_extractor,
|
212 |
-
"image_encoder": image_encoder,
|
213 |
-
"super_res_first": super_res_first,
|
214 |
-
"super_res_last": super_res_last,
|
215 |
-
"decoder_scheduler": decoder_scheduler,
|
216 |
-
"super_res_scheduler": super_res_scheduler,
|
217 |
-
}
|
218 |
-
|
219 |
-
def get_dummy_inputs(self, device, seed=0, pil_image=True):
|
220 |
-
input_image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed)).to(device)
|
221 |
-
if str(device).startswith("mps"):
|
222 |
-
generator = torch.manual_seed(seed)
|
223 |
-
else:
|
224 |
-
generator = torch.Generator(device=device).manual_seed(seed)
|
225 |
-
|
226 |
-
if pil_image:
|
227 |
-
input_image = input_image * 0.5 + 0.5
|
228 |
-
input_image = input_image.clamp(0, 1)
|
229 |
-
input_image = input_image.cpu().permute(0, 2, 3, 1).float().numpy()
|
230 |
-
input_image = DiffusionPipeline.numpy_to_pil(input_image)[0]
|
231 |
-
|
232 |
-
return {
|
233 |
-
"image": input_image,
|
234 |
-
"generator": generator,
|
235 |
-
"decoder_num_inference_steps": 2,
|
236 |
-
"super_res_num_inference_steps": 2,
|
237 |
-
"output_type": "np",
|
238 |
-
}
|
239 |
-
|
240 |
-
def test_unclip_image_variation_input_tensor(self):
|
241 |
-
device = "cpu"
|
242 |
-
|
243 |
-
components = self.get_dummy_components()
|
244 |
-
|
245 |
-
pipe = self.pipeline_class(**components)
|
246 |
-
pipe = pipe.to(device)
|
247 |
-
|
248 |
-
pipe.set_progress_bar_config(disable=None)
|
249 |
-
|
250 |
-
pipeline_inputs = self.get_dummy_inputs(device, pil_image=False)
|
251 |
-
|
252 |
-
output = pipe(**pipeline_inputs)
|
253 |
-
image = output.images
|
254 |
-
|
255 |
-
tuple_pipeline_inputs = self.get_dummy_inputs(device, pil_image=False)
|
256 |
-
|
257 |
-
image_from_tuple = pipe(
|
258 |
-
**tuple_pipeline_inputs,
|
259 |
-
return_dict=False,
|
260 |
-
)[0]
|
261 |
-
|
262 |
-
image_slice = image[0, -3:, -3:, -1]
|
263 |
-
image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1]
|
264 |
-
|
265 |
-
assert image.shape == (1, 64, 64, 3)
|
266 |
-
|
267 |
-
expected_slice = np.array(
|
268 |
-
[
|
269 |
-
0.9997,
|
270 |
-
0.0002,
|
271 |
-
0.9997,
|
272 |
-
0.9997,
|
273 |
-
0.9969,
|
274 |
-
0.0023,
|
275 |
-
0.9997,
|
276 |
-
0.9969,
|
277 |
-
0.9970,
|
278 |
-
]
|
279 |
-
)
|
280 |
-
|
281 |
-
assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
|
282 |
-
assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2
|
283 |
-
|
284 |
-
def test_unclip_image_variation_input_image(self):
|
285 |
-
device = "cpu"
|
286 |
-
|
287 |
-
components = self.get_dummy_components()
|
288 |
-
|
289 |
-
pipe = self.pipeline_class(**components)
|
290 |
-
pipe = pipe.to(device)
|
291 |
-
|
292 |
-
pipe.set_progress_bar_config(disable=None)
|
293 |
-
|
294 |
-
pipeline_inputs = self.get_dummy_inputs(device, pil_image=True)
|
295 |
-
|
296 |
-
output = pipe(**pipeline_inputs)
|
297 |
-
image = output.images
|
298 |
-
|
299 |
-
tuple_pipeline_inputs = self.get_dummy_inputs(device, pil_image=True)
|
300 |
-
|
301 |
-
image_from_tuple = pipe(
|
302 |
-
**tuple_pipeline_inputs,
|
303 |
-
return_dict=False,
|
304 |
-
)[0]
|
305 |
-
|
306 |
-
image_slice = image[0, -3:, -3:, -1]
|
307 |
-
image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1]
|
308 |
-
|
309 |
-
assert image.shape == (1, 64, 64, 3)
|
310 |
-
|
311 |
-
expected_slice = np.array([0.9997, 0.0003, 0.9997, 0.9997, 0.9970, 0.0024, 0.9997, 0.9971, 0.9971])
|
312 |
-
|
313 |
-
assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
|
314 |
-
assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2
|
315 |
-
|
316 |
-
def test_unclip_image_variation_input_list_images(self):
|
317 |
-
device = "cpu"
|
318 |
-
|
319 |
-
components = self.get_dummy_components()
|
320 |
-
|
321 |
-
pipe = self.pipeline_class(**components)
|
322 |
-
pipe = pipe.to(device)
|
323 |
-
|
324 |
-
pipe.set_progress_bar_config(disable=None)
|
325 |
-
|
326 |
-
pipeline_inputs = self.get_dummy_inputs(device, pil_image=True)
|
327 |
-
pipeline_inputs["image"] = [
|
328 |
-
pipeline_inputs["image"],
|
329 |
-
pipeline_inputs["image"],
|
330 |
-
]
|
331 |
-
|
332 |
-
output = pipe(**pipeline_inputs)
|
333 |
-
image = output.images
|
334 |
-
|
335 |
-
tuple_pipeline_inputs = self.get_dummy_inputs(device, pil_image=True)
|
336 |
-
tuple_pipeline_inputs["image"] = [
|
337 |
-
tuple_pipeline_inputs["image"],
|
338 |
-
tuple_pipeline_inputs["image"],
|
339 |
-
]
|
340 |
-
|
341 |
-
image_from_tuple = pipe(
|
342 |
-
**tuple_pipeline_inputs,
|
343 |
-
return_dict=False,
|
344 |
-
)[0]
|
345 |
-
|
346 |
-
image_slice = image[0, -3:, -3:, -1]
|
347 |
-
image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1]
|
348 |
-
|
349 |
-
assert image.shape == (2, 64, 64, 3)
|
350 |
-
|
351 |
-
expected_slice = np.array(
|
352 |
-
[
|
353 |
-
0.9997,
|
354 |
-
0.9989,
|
355 |
-
0.0008,
|
356 |
-
0.0021,
|
357 |
-
0.9960,
|
358 |
-
0.0018,
|
359 |
-
0.0014,
|
360 |
-
0.0002,
|
361 |
-
0.9933,
|
362 |
-
]
|
363 |
-
)
|
364 |
-
|
365 |
-
assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
|
366 |
-
assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2
|
367 |
-
|
368 |
-
def test_unclip_passed_image_embed(self):
|
369 |
-
device = torch.device("cpu")
|
370 |
-
|
371 |
-
class DummyScheduler:
|
372 |
-
init_noise_sigma = 1
|
373 |
-
|
374 |
-
        components = self.get_dummy_components()

        pipe = self.pipeline_class(**components)
        pipe = pipe.to(device)

        pipe.set_progress_bar_config(disable=None)

        generator = torch.Generator(device=device).manual_seed(0)
        dtype = pipe.decoder.dtype
        batch_size = 1

        shape = (
            batch_size,
            pipe.decoder.config.in_channels,
            pipe.decoder.config.sample_size,
            pipe.decoder.config.sample_size,
        )
        decoder_latents = pipe.prepare_latents(
            shape, dtype=dtype, device=device, generator=generator, latents=None, scheduler=DummyScheduler()
        )

        shape = (
            batch_size,
            pipe.super_res_first.config.in_channels // 2,
            pipe.super_res_first.config.sample_size,
            pipe.super_res_first.config.sample_size,
        )
        super_res_latents = pipe.prepare_latents(
            shape, dtype=dtype, device=device, generator=generator, latents=None, scheduler=DummyScheduler()
        )

        pipeline_inputs = self.get_dummy_inputs(device, pil_image=False)

        img_out_1 = pipe(
            **pipeline_inputs, decoder_latents=decoder_latents, super_res_latents=super_res_latents
        ).images

        pipeline_inputs = self.get_dummy_inputs(device, pil_image=False)
        # Don't pass image, instead pass embedding
        image = pipeline_inputs.pop("image")
        image_embeddings = pipe.image_encoder(image).image_embeds

        img_out_2 = pipe(
            **pipeline_inputs,
            decoder_latents=decoder_latents,
            super_res_latents=super_res_latents,
            image_embeddings=image_embeddings,
        ).images

        # make sure passing text embeddings manually is identical
        assert np.abs(img_out_1 - img_out_2).max() < 1e-4

    # Overriding PipelineTesterMixin::test_attention_slicing_forward_pass
    # because UnCLIP GPU undeterminism requires a looser check.
    @skip_mps
    def test_attention_slicing_forward_pass(self):
        test_max_difference = torch_device == "cpu"

        # Check is relaxed because there is not a torch 2.0 sliced attention added kv processor
        expected_max_diff = 1e-2

        self._test_attention_slicing_forward_pass(
            test_max_difference=test_max_difference, expected_max_diff=expected_max_diff
        )

    # Overriding PipelineTesterMixin::test_inference_batch_single_identical
    # because UnCLIP undeterminism requires a looser check.
    @skip_mps
    def test_inference_batch_single_identical(self):
        test_max_difference = torch_device == "cpu"
        relax_max_difference = True
        additional_params_copy_to_batched_inputs = [
            "decoder_num_inference_steps",
            "super_res_num_inference_steps",
        ]

        self._test_inference_batch_single_identical(
            test_max_difference=test_max_difference,
            relax_max_difference=relax_max_difference,
            additional_params_copy_to_batched_inputs=additional_params_copy_to_batched_inputs,
        )

    def test_inference_batch_consistent(self):
        additional_params_copy_to_batched_inputs = [
            "decoder_num_inference_steps",
            "super_res_num_inference_steps",
        ]

        if torch_device == "mps":
            # TODO: MPS errors with larger batch sizes
            batch_sizes = [2, 3]
            self._test_inference_batch_consistent(
                batch_sizes=batch_sizes,
                additional_params_copy_to_batched_inputs=additional_params_copy_to_batched_inputs,
            )
        else:
            self._test_inference_batch_consistent(
                additional_params_copy_to_batched_inputs=additional_params_copy_to_batched_inputs
            )

    @skip_mps
    def test_dict_tuple_outputs_equivalent(self):
        return super().test_dict_tuple_outputs_equivalent()

    @skip_mps
    def test_save_load_local(self):
        return super().test_save_load_local()

    @skip_mps
    def test_save_load_optional_components(self):
        return super().test_save_load_optional_components()


@slow
@require_torch_gpu
class UnCLIPImageVariationPipelineIntegrationTests(unittest.TestCase):
    def tearDown(self):
        # clean up the VRAM after each test
        super().tearDown()
        gc.collect()
        torch.cuda.empty_cache()

    def test_unclip_image_variation_karlo(self):
        input_image = load_image(
            "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/unclip/cat.png"
        )
        expected_image = load_numpy(
            "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
            "/unclip/karlo_v1_alpha_cat_variation_fp16.npy"
        )

        pipeline = UnCLIPImageVariationPipeline.from_pretrained(
            "kakaobrain/karlo-v1-alpha-image-variations", torch_dtype=torch.float16
        )
        pipeline = pipeline.to(torch_device)
        pipeline.set_progress_bar_config(disable=None)

        generator = torch.Generator(device="cpu").manual_seed(0)
        output = pipeline(
            input_image,
            generator=generator,
            output_type="np",
        )

        image = output.images[0]

        assert image.shape == (256, 256, 3)

        assert_mean_pixel_difference(image, expected_image, 15)
spaces/Andy1621/uniformer_image_detection/configs/_base_/models/retinanet_r50_fpn.py
DELETED
@@ -1,60 +0,0 @@
# model settings
model = dict(
    type='RetinaNet',
    pretrained='torchvision://resnet50',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch'),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        start_level=1,
        add_extra_convs='on_input',
        num_outs=5),
    bbox_head=dict(
        type='RetinaHead',
        num_classes=80,
        in_channels=256,
        stacked_convs=4,
        feat_channels=256,
        anchor_generator=dict(
            type='AnchorGenerator',
            octave_base_scale=4,
            scales_per_octave=3,
            ratios=[0.5, 1.0, 2.0],
            strides=[8, 16, 32, 64, 128]),
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[.0, .0, .0, .0],
            target_stds=[1.0, 1.0, 1.0, 1.0]),
        loss_cls=dict(
            type='FocalLoss',
            use_sigmoid=True,
            gamma=2.0,
            alpha=0.25,
            loss_weight=1.0),
        loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
    # training and testing settings
    train_cfg=dict(
        assigner=dict(
            type='MaxIoUAssigner',
            pos_iou_thr=0.5,
            neg_iou_thr=0.4,
            min_pos_iou=0,
            ignore_iof_thr=-1),
        allowed_border=-1,
        pos_weight=-1,
        debug=False),
    test_cfg=dict(
        nms_pre=1000,
        min_bbox_size=0,
        score_thr=0.05,
        nms=dict(type='nms', iou_threshold=0.5),
        max_per_img=100))
spaces/Andy1621/uniformer_image_detection/configs/foveabox/fovea_r50_fpn_4x4_1x_coco.py
DELETED
@@ -1,52 +0,0 @@
_base_ = [
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
# model settings
model = dict(
    type='FOVEA',
    pretrained='torchvision://resnet50',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch'),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        start_level=1,
        num_outs=5,
        add_extra_convs='on_input'),
    bbox_head=dict(
        type='FoveaHead',
        num_classes=80,
        in_channels=256,
        stacked_convs=4,
        feat_channels=256,
        strides=[8, 16, 32, 64, 128],
        base_edge_list=[16, 32, 64, 128, 256],
        scale_ranges=((1, 64), (32, 128), (64, 256), (128, 512), (256, 2048)),
        sigma=0.4,
        with_deform=False,
        loss_cls=dict(
            type='FocalLoss',
            use_sigmoid=True,
            gamma=1.50,
            alpha=0.4,
            loss_weight=1.0),
        loss_bbox=dict(type='SmoothL1Loss', beta=0.11, loss_weight=1.0)),
    # training and testing settings
    train_cfg=dict(),
    test_cfg=dict(
        nms_pre=1000,
        score_thr=0.05,
        nms=dict(type='nms', iou_threshold=0.5),
        max_per_img=100))
data = dict(samples_per_gpu=4, workers_per_gpu=4)
# optimizer
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
spaces/Andy1621/uniformer_image_detection/configs/gn+ws/faster_rcnn_x101_32x4d_fpn_gn_ws-all_1x_coco.py
DELETED
@@ -1,16 +0,0 @@
_base_ = './faster_rcnn_r50_fpn_gn_ws-all_1x_coco.py'
conv_cfg = dict(type='ConvWS')
norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
model = dict(
    pretrained='open-mmlab://jhu/resnext101_32x4d_gn_ws',
    backbone=dict(
        type='ResNeXt',
        depth=101,
        groups=32,
        base_width=4,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        style='pytorch',
        conv_cfg=conv_cfg,
        norm_cfg=norm_cfg))
spaces/Andy1621/uniformer_image_detection/configs/instaboost/cascade_mask_rcnn_r50_fpn_instaboost_4x_coco.py
DELETED
@@ -1,28 +0,0 @@
_base_ = '../cascade_rcnn/cascade_mask_rcnn_r50_fpn_1x_coco.py'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='InstaBoost',
        action_candidate=('normal', 'horizontal', 'skip'),
        action_prob=(1, 0, 0),
        scale=(0.8, 1.2),
        dx=15,
        dy=15,
        theta=(-1, 1),
        color_prob=0.5,
        hflag=False,
        aug_ratio=0.5),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
]
data = dict(train=dict(pipeline=train_pipeline))
# learning policy
lr_config = dict(step=[32, 44])
runner = dict(type='EpochBasedRunner', max_epochs=48)
spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x512_160k_ade20k.py
DELETED
@@ -1,7 +0,0 @@
_base_ = [
    '../_base_/models/psanet_r50-d8.py', '../_base_/datasets/ade20k.py',
    '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py'
]
model = dict(
    decode_head=dict(mask_size=(66, 66), num_classes=150),
    auxiliary_head=dict(num_classes=150))
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/roi_align_rotated.py
DELETED
@@ -1,177 +0,0 @@
# Copyright (c) OpenMMLab. All rights reserved.
import torch.nn as nn
from torch.autograd import Function

from ..utils import ext_loader

ext_module = ext_loader.load_ext(
    '_ext', ['roi_align_rotated_forward', 'roi_align_rotated_backward'])


class RoIAlignRotatedFunction(Function):

    @staticmethod
    def symbolic(g, features, rois, out_size, spatial_scale, sample_num,
                 aligned, clockwise):
        if isinstance(out_size, int):
            out_h = out_size
            out_w = out_size
        elif isinstance(out_size, tuple):
            assert len(out_size) == 2
            assert isinstance(out_size[0], int)
            assert isinstance(out_size[1], int)
            out_h, out_w = out_size
        else:
            raise TypeError(
                '"out_size" must be an integer or tuple of integers')
        return g.op(
            'mmcv::MMCVRoIAlignRotated',
            features,
            rois,
            output_height_i=out_h,
            output_width_i=out_h,
            spatial_scale_f=spatial_scale,
            sampling_ratio_i=sample_num,
            aligned_i=aligned,
            clockwise_i=clockwise)

    @staticmethod
    def forward(ctx,
                features,
                rois,
                out_size,
                spatial_scale,
                sample_num=0,
                aligned=True,
                clockwise=False):
        if isinstance(out_size, int):
            out_h = out_size
            out_w = out_size
        elif isinstance(out_size, tuple):
            assert len(out_size) == 2
            assert isinstance(out_size[0], int)
            assert isinstance(out_size[1], int)
            out_h, out_w = out_size
        else:
            raise TypeError(
                '"out_size" must be an integer or tuple of integers')
        ctx.spatial_scale = spatial_scale
        ctx.sample_num = sample_num
        ctx.aligned = aligned
        ctx.clockwise = clockwise
        ctx.save_for_backward(rois)
        ctx.feature_size = features.size()

        batch_size, num_channels, data_height, data_width = features.size()
        num_rois = rois.size(0)

        output = features.new_zeros(num_rois, num_channels, out_h, out_w)
        ext_module.roi_align_rotated_forward(
            features,
            rois,
            output,
            pooled_height=out_h,
            pooled_width=out_w,
            spatial_scale=spatial_scale,
            sample_num=sample_num,
            aligned=aligned,
            clockwise=clockwise)
        return output

    @staticmethod
    def backward(ctx, grad_output):
        feature_size = ctx.feature_size
        spatial_scale = ctx.spatial_scale
        aligned = ctx.aligned
        clockwise = ctx.clockwise
        sample_num = ctx.sample_num
        rois = ctx.saved_tensors[0]
        assert feature_size is not None
        batch_size, num_channels, data_height, data_width = feature_size

        out_w = grad_output.size(3)
        out_h = grad_output.size(2)

        grad_input = grad_rois = None

        if ctx.needs_input_grad[0]:
            grad_input = rois.new_zeros(batch_size, num_channels, data_height,
                                        data_width)
            ext_module.roi_align_rotated_backward(
                grad_output.contiguous(),
                rois,
                grad_input,
                pooled_height=out_h,
                pooled_width=out_w,
                spatial_scale=spatial_scale,
                sample_num=sample_num,
                aligned=aligned,
                clockwise=clockwise)
        return grad_input, grad_rois, None, None, None, None, None


roi_align_rotated = RoIAlignRotatedFunction.apply


class RoIAlignRotated(nn.Module):
    """RoI align pooling layer for rotated proposals.

    It accepts a feature map of shape (N, C, H, W) and rois with shape
    (n, 6) with each roi decoded as (batch_index, center_x, center_y,
    w, h, angle). The angle is in radian.

    Args:
        out_size (tuple): h, w
        spatial_scale (float): scale the input boxes by this number
        sample_num (int): number of inputs samples to take for each
            output sample. 0 to take samples densely for current models.
        aligned (bool): if False, use the legacy implementation in
            MMDetection. If True, align the results more perfectly.
            Default: True.
        clockwise (bool): If True, the angle in each proposal follows a
            clockwise fashion in image space, otherwise, the angle is
            counterclockwise. Default: False.

    Note:
        The implementation of RoIAlign when aligned=True is modified from
        https://github.com/facebookresearch/detectron2/

        The meaning of aligned=True:

        Given a continuous coordinate c, its two neighboring pixel
        indices (in our pixel model) are computed by floor(c - 0.5) and
        ceil(c - 0.5). For example, c=1.3 has pixel neighbors with discrete
        indices [0] and [1] (which are sampled from the underlying signal
        at continuous coordinates 0.5 and 1.5). But the original roi_align
        (aligned=False) does not subtract the 0.5 when computing
        neighboring pixel indices and therefore it uses pixels with a
        slightly incorrect alignment (relative to our pixel model) when
        performing bilinear interpolation.

        With `aligned=True`,
        we first appropriately scale the ROI and then shift it by -0.5
        prior to calling roi_align. This produces the correct neighbors;

        The difference does not make a difference to the model's
        performance if ROIAlign is used together with conv layers.
    """

    def __init__(self,
                 out_size,
                 spatial_scale,
                 sample_num=0,
                 aligned=True,
                 clockwise=False):
        super(RoIAlignRotated, self).__init__()

        self.out_size = out_size
        self.spatial_scale = float(spatial_scale)
        self.sample_num = int(sample_num)
        self.aligned = aligned
        self.clockwise = clockwise

    def forward(self, features, rois):
        return RoIAlignRotatedFunction.apply(features, rois, self.out_size,
                                             self.spatial_scale,
                                             self.sample_num, self.aligned,
                                             self.clockwise)
spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/backbone/swin_transformer.py
DELETED
@@ -1,802 +0,0 @@
# ------------------------------------------------------------------------
# Grounding DINO
# url: https://github.com/IDEA-Research/GroundingDINO
# Copyright (c) 2023 IDEA. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
# ------------------------------------------------------------------------
# DINO
# Copyright (c) 2022 IDEA. All Rights Reserved.
# Licensed under the Apache License, Version 2.0 [see LICENSE for details]
# --------------------------------------------------------
# modified from https://github.com/SwinTransformer/Swin-Transformer-Object-Detection/blob/master/mmdet/models/backbones/swin_transformer.py
# --------------------------------------------------------

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.checkpoint as checkpoint
from timm.models.layers import DropPath, to_2tuple, trunc_normal_

from groundingdino.util.misc import NestedTensor


class Mlp(nn.Module):
    """Multilayer perceptron."""

    def __init__(
        self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.0
    ):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = act_layer()
        self.fc2 = nn.Linear(hidden_features, out_features)
        self.drop = nn.Dropout(drop)

    def forward(self, x):
        x = self.fc1(x)
        x = self.act(x)
        x = self.drop(x)
        x = self.fc2(x)
        x = self.drop(x)
        return x


def window_partition(x, window_size):
    """
    Args:
        x: (B, H, W, C)
        window_size (int): window size
    Returns:
        windows: (num_windows*B, window_size, window_size, C)
    """
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
    return windows


def window_reverse(windows, window_size, H, W):
    """
    Args:
        windows: (num_windows*B, window_size, window_size, C)
        window_size (int): Window size
        H (int): Height of image
        W (int): Width of image
    Returns:
        x: (B, H, W, C)
    """
    B = int(windows.shape[0] / (H * W / window_size / window_size))
    x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
    x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
    return x


class WindowAttention(nn.Module):
    """Window based multi-head self attention (W-MSA) module with relative position bias.
    It supports both of shifted and non-shifted window.
    Args:
        dim (int): Number of input channels.
        window_size (tuple[int]): The height and width of the window.
        num_heads (int): Number of attention heads.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
        attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
        proj_drop (float, optional): Dropout ratio of output. Default: 0.0
    """

    def __init__(
        self,
        dim,
        window_size,
        num_heads,
        qkv_bias=True,
        qk_scale=None,
        attn_drop=0.0,
        proj_drop=0.0,
    ):

        super().__init__()
        self.dim = dim
        self.window_size = window_size  # Wh, Ww
        self.num_heads = num_heads
        head_dim = dim // num_heads
        self.scale = qk_scale or head_dim**-0.5

        # define a parameter table of relative position bias
        self.relative_position_bias_table = nn.Parameter(
            torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)
        )  # 2*Wh-1 * 2*Ww-1, nH

        # get pair-wise relative position index for each token inside the window
        coords_h = torch.arange(self.window_size[0])
        coords_w = torch.arange(self.window_size[1])
        coords = torch.stack(torch.meshgrid([coords_h, coords_w]))  # 2, Wh, Ww
        coords_flatten = torch.flatten(coords, 1)  # 2, Wh*Ww
        relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :]  # 2, Wh*Ww, Wh*Ww
        relative_coords = relative_coords.permute(1, 2, 0).contiguous()  # Wh*Ww, Wh*Ww, 2
        relative_coords[:, :, 0] += self.window_size[0] - 1  # shift to start from 0
        relative_coords[:, :, 1] += self.window_size[1] - 1
        relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
        relative_position_index = relative_coords.sum(-1)  # Wh*Ww, Wh*Ww
        self.register_buffer("relative_position_index", relative_position_index)

        self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
        self.attn_drop = nn.Dropout(attn_drop)
        self.proj = nn.Linear(dim, dim)
        self.proj_drop = nn.Dropout(proj_drop)

        trunc_normal_(self.relative_position_bias_table, std=0.02)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x, mask=None):
        """Forward function.
        Args:
            x: input features with shape of (num_windows*B, N, C)
            mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
        """
        B_, N, C = x.shape
        qkv = (
            self.qkv(x)
            .reshape(B_, N, 3, self.num_heads, C // self.num_heads)
            .permute(2, 0, 3, 1, 4)
        )
        q, k, v = qkv[0], qkv[1], qkv[2]  # make torchscript happy (cannot use tensor as tuple)

        q = q * self.scale
        attn = q @ k.transpose(-2, -1)

        relative_position_bias = self.relative_position_bias_table[
            self.relative_position_index.view(-1)
        ].view(
            self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1
        )  # Wh*Ww,Wh*Ww,nH
        relative_position_bias = relative_position_bias.permute(
            2, 0, 1
        ).contiguous()  # nH, Wh*Ww, Wh*Ww
        attn = attn + relative_position_bias.unsqueeze(0)

        if mask is not None:
            nW = mask.shape[0]
            attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
            attn = attn.view(-1, self.num_heads, N, N)
            attn = self.softmax(attn)
        else:
            attn = self.softmax(attn)

        attn = self.attn_drop(attn)

        x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
        x = self.proj(x)
        x = self.proj_drop(x)
        return x


class SwinTransformerBlock(nn.Module):
    """Swin Transformer Block.
    Args:
        dim (int): Number of input channels.
        num_heads (int): Number of attention heads.
        window_size (int): Window size.
        shift_size (int): Shift size for SW-MSA.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
        drop (float, optional): Dropout rate. Default: 0.0
        attn_drop (float, optional): Attention dropout rate. Default: 0.0
        drop_path (float, optional): Stochastic depth rate. Default: 0.0
        act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
    """

    def __init__(
        self,
        dim,
        num_heads,
        window_size=7,
        shift_size=0,
        mlp_ratio=4.0,
        qkv_bias=True,
        qk_scale=None,
        drop=0.0,
        attn_drop=0.0,
        drop_path=0.0,
        act_layer=nn.GELU,
        norm_layer=nn.LayerNorm,
    ):
        super().__init__()
        self.dim = dim
        self.num_heads = num_heads
        self.window_size = window_size
        self.shift_size = shift_size
        self.mlp_ratio = mlp_ratio
        assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size"

        self.norm1 = norm_layer(dim)
        self.attn = WindowAttention(
            dim,
            window_size=to_2tuple(self.window_size),
            num_heads=num_heads,
            qkv_bias=qkv_bias,
            qk_scale=qk_scale,
            attn_drop=attn_drop,
            proj_drop=drop,
        )

        self.drop_path = DropPath(drop_path) if drop_path > 0.0 else nn.Identity()
        self.norm2 = norm_layer(dim)
        mlp_hidden_dim = int(dim * mlp_ratio)
        self.mlp = Mlp(
            in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop
        )

        self.H = None
        self.W = None

    def forward(self, x, mask_matrix):
        """Forward function.
        Args:
            x: Input feature, tensor size (B, H*W, C).
            H, W: Spatial resolution of the input feature.
            mask_matrix: Attention mask for cyclic shift.
        """
        B, L, C = x.shape
        H, W = self.H, self.W
        assert L == H * W, "input feature has wrong size"

        shortcut = x
        x = self.norm1(x)
        x = x.view(B, H, W, C)

        # pad feature maps to multiples of window size
        pad_l = pad_t = 0
        pad_r = (self.window_size - W % self.window_size) % self.window_size
        pad_b = (self.window_size - H % self.window_size) % self.window_size
        x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
        _, Hp, Wp, _ = x.shape

        # cyclic shift
        if self.shift_size > 0:
            shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
            attn_mask = mask_matrix
        else:
            shifted_x = x
            attn_mask = None

        # partition windows
        x_windows = window_partition(
            shifted_x, self.window_size
        )  # nW*B, window_size, window_size, C
        x_windows = x_windows.view(
            -1, self.window_size * self.window_size, C
        )  # nW*B, window_size*window_size, C

        # W-MSA/SW-MSA
        attn_windows = self.attn(x_windows, mask=attn_mask)  # nW*B, window_size*window_size, C

        # merge windows
        attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
        shifted_x = window_reverse(attn_windows, self.window_size, Hp, Wp)  # B H' W' C

        # reverse cyclic shift
        if self.shift_size > 0:
            x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
        else:
            x = shifted_x

        if pad_r > 0 or pad_b > 0:
            x = x[:, :H, :W, :].contiguous()

        x = x.view(B, H * W, C)

        # FFN
        x = shortcut + self.drop_path(x)
        x = x + self.drop_path(self.mlp(self.norm2(x)))

        return x


class PatchMerging(nn.Module):
    """Patch Merging Layer
    Args:
        dim (int): Number of input channels.
        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
    """

    def __init__(self, dim, norm_layer=nn.LayerNorm):
        super().__init__()
        self.dim = dim
        self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
        self.norm = norm_layer(4 * dim)

    def forward(self, x, H, W):
        """Forward function.
        Args:
            x: Input feature, tensor size (B, H*W, C).
            H, W: Spatial resolution of the input feature.
        """
        B, L, C = x.shape
        assert L == H * W, "input feature has wrong size"

        x = x.view(B, H, W, C)

        # padding
        pad_input = (H % 2 == 1) or (W % 2 == 1)
        if pad_input:
            x = F.pad(x, (0, 0, 0, W % 2, 0, H % 2))

        x0 = x[:, 0::2, 0::2, :]  # B H/2 W/2 C
        x1 = x[:, 1::2, 0::2, :]  # B H/2 W/2 C
        x2 = x[:, 0::2, 1::2, :]  # B H/2 W/2 C
        x3 = x[:, 1::2, 1::2, :]  # B H/2 W/2 C
        x = torch.cat([x0, x1, x2, x3], -1)  # B H/2 W/2 4*C
        x = x.view(B, -1, 4 * C)  # B H/2*W/2 4*C

        x = self.norm(x)
        x = self.reduction(x)

        return x


class BasicLayer(nn.Module):
    """A basic Swin Transformer layer for one stage.
    Args:
        dim (int): Number of feature channels
        depth (int): Depths of this stage.
        num_heads (int): Number of attention head.
        window_size (int): Local window size. Default: 7.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
        drop (float, optional): Dropout rate. Default: 0.0
        attn_drop (float, optional): Attention dropout rate. Default: 0.0
        drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
        downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
        use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
    """

    def __init__(
        self,
        dim,
        depth,
        num_heads,
        window_size=7,
        mlp_ratio=4.0,
        qkv_bias=True,
        qk_scale=None,
        drop=0.0,
        attn_drop=0.0,
        drop_path=0.0,
        norm_layer=nn.LayerNorm,
        downsample=None,
        use_checkpoint=False,
    ):
        super().__init__()
        self.window_size = window_size
        self.shift_size = window_size // 2
        self.depth = depth
        self.use_checkpoint = use_checkpoint

        # build blocks
        self.blocks = nn.ModuleList(
            [
                SwinTransformerBlock(
                    dim=dim,
                    num_heads=num_heads,
                    window_size=window_size,
                    shift_size=0 if (i % 2 == 0) else window_size // 2,
                    mlp_ratio=mlp_ratio,
                    qkv_bias=qkv_bias,
                    qk_scale=qk_scale,
                    drop=drop,
                    attn_drop=attn_drop,
                    drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
                    norm_layer=norm_layer,
                )
                for i in range(depth)
            ]
        )

        # patch merging layer
        if downsample is not None:
            self.downsample = downsample(dim=dim, norm_layer=norm_layer)
        else:
            self.downsample = None

    def forward(self, x, H, W):
        """Forward function.
        Args:
            x: Input feature, tensor size (B, H*W, C).
            H, W: Spatial resolution of the input feature.
        """

        # calculate attention mask for SW-MSA
        Hp = int(np.ceil(H / self.window_size)) * self.window_size
        Wp = int(np.ceil(W / self.window_size)) * self.window_size
        img_mask = torch.zeros((1, Hp, Wp, 1), device=x.device)  # 1 Hp Wp 1
        h_slices = (
            slice(0, -self.window_size),
            slice(-self.window_size, -self.shift_size),
            slice(-self.shift_size, None),
        )
        w_slices = (
            slice(0, -self.window_size),
            slice(-self.window_size, -self.shift_size),
            slice(-self.shift_size, None),
        )
        cnt = 0
        for h in h_slices:
            for w in w_slices:
                img_mask[:, h, w, :] = cnt
                cnt += 1

        mask_windows = window_partition(
            img_mask, self.window_size
        )  # nW, window_size, window_size, 1
        mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
        attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
        attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(
            attn_mask == 0, float(0.0)
        )

        for blk in self.blocks:
            blk.H, blk.W = H, W
            if self.use_checkpoint:
                x = checkpoint.checkpoint(blk, x, attn_mask)
            else:
                x = blk(x, attn_mask)
        if self.downsample is not None:
            x_down = self.downsample(x, H, W)
            Wh, Ww = (H + 1) // 2, (W + 1) // 2
            return x, H, W, x_down, Wh, Ww
        else:
            return x, H, W, x, H, W


class PatchEmbed(nn.Module):
    """Image to Patch Embedding
    Args:
        patch_size (int): Patch token size. Default: 4.
        in_chans (int): Number of input image channels. Default: 3.
        embed_dim (int): Number of linear projection output channels. Default: 96.
        norm_layer (nn.Module, optional): Normalization layer. Default: None
    """

    def __init__(self, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
        super().__init__()
        patch_size = to_2tuple(patch_size)
        self.patch_size = patch_size

        self.in_chans = in_chans
        self.embed_dim = embed_dim

        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)
        if norm_layer is not None:
            self.norm = norm_layer(embed_dim)
        else:
            self.norm = None

    def forward(self, x):
        """Forward function."""
        # padding
        _, _, H, W = x.size()
        if W % self.patch_size[1] != 0:
            x = F.pad(x, (0, self.patch_size[1] - W % self.patch_size[1]))
        if H % self.patch_size[0] != 0:
            x = F.pad(x, (0, 0, 0, self.patch_size[0] - H % self.patch_size[0]))

        x = self.proj(x)  # B C Wh Ww
        if self.norm is not None:
            Wh, Ww = x.size(2), x.size(3)
            x = x.flatten(2).transpose(1, 2)
            x = self.norm(x)
            x = x.transpose(1, 2).view(-1, self.embed_dim, Wh, Ww)

        return x


class SwinTransformer(nn.Module):
    """Swin Transformer backbone.
    A PyTorch impl of : `Swin Transformer: Hierarchical Vision Transformer using Shifted Windows` -
    https://arxiv.org/pdf/2103.14030
    Args:
        pretrain_img_size (int): Input image size for training the pretrained model,
            used in absolute postion embedding. Default 224.
        patch_size (int | tuple(int)): Patch size. Default: 4.
        in_chans (int): Number of input image channels. Default: 3.
        embed_dim (int): Number of linear projection output channels. Default: 96.
        depths (tuple[int]): Depths of each Swin Transformer stage.
        num_heads (tuple[int]): Number of attention head of each stage.
        window_size (int): Window size. Default: 7.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4.
        qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
        qk_scale (float): Override default qk scale of head_dim ** -0.5 if set.
        drop_rate (float): Dropout rate.
        attn_drop_rate (float): Attention dropout rate. Default: 0.
        drop_path_rate (float): Stochastic depth rate. Default: 0.2.
        norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
        ape (bool): If True, add absolute position embedding to the patch embedding. Default: False.
        patch_norm (bool): If True, add normalization after patch embedding. Default: True.
        out_indices (Sequence[int]): Output from which stages.
        frozen_stages (int): Stages to be frozen (stop grad and set eval mode).
            -1 means not freezing any parameters.
        use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
        dilation (bool): if True, the output size if 16x downsample, ow 32x downsample.
    """

    def __init__(
        self,
        pretrain_img_size=224,
        patch_size=4,
        in_chans=3,
        embed_dim=96,
        depths=[2, 2, 6, 2],
        num_heads=[3, 6, 12, 24],
        window_size=7,
        mlp_ratio=4.0,
        qkv_bias=True,
        qk_scale=None,
        drop_rate=0.0,
        attn_drop_rate=0.0,
        drop_path_rate=0.2,
        norm_layer=nn.LayerNorm,
        ape=False,
        patch_norm=True,
        out_indices=(0, 1, 2, 3),
        frozen_stages=-1,
        dilation=False,
        use_checkpoint=False,
    ):
        super().__init__()

        self.pretrain_img_size = pretrain_img_size
        self.num_layers = len(depths)
        self.embed_dim = embed_dim
        self.ape = ape
        self.patch_norm = patch_norm
        self.out_indices = out_indices
        self.frozen_stages = frozen_stages
        self.dilation = dilation

        # if use_checkpoint:
        #     print("use_checkpoint!!!!!!!!!!!!!!!!!!!!!!!!")

        # split image into non-overlapping patches
        self.patch_embed = PatchEmbed(
            patch_size=patch_size,
            in_chans=in_chans,
            embed_dim=embed_dim,
            norm_layer=norm_layer if self.patch_norm else None,
        )

        # absolute position embedding
        if self.ape:
            pretrain_img_size = to_2tuple(pretrain_img_size)
            patch_size = to_2tuple(patch_size)
            patches_resolution = [
                pretrain_img_size[0] // patch_size[0],
                pretrain_img_size[1] // patch_size[1],
            ]

            self.absolute_pos_embed = nn.Parameter(
                torch.zeros(1, embed_dim, patches_resolution[0], patches_resolution[1])
            )
            trunc_normal_(self.absolute_pos_embed, std=0.02)

        self.pos_drop = nn.Dropout(p=drop_rate)

        # stochastic depth
        dpr = [
            x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))
        ]  # stochastic depth decay rule

        # build layers
        self.layers = nn.ModuleList()
        # prepare downsample list
        downsamplelist = [PatchMerging for i in range(self.num_layers)]
        downsamplelist[-1] = None
        num_features = [int(embed_dim * 2**i) for i in range(self.num_layers)]
        if self.dilation:
            downsamplelist[-2] = None
            num_features[-1] = int(embed_dim * 2 ** (self.num_layers - 1)) // 2
        for i_layer in range(self.num_layers):
            layer = BasicLayer(
                # dim=int(embed_dim * 2 ** i_layer),
                dim=num_features[i_layer],
                depth=depths[i_layer],
                num_heads=num_heads[i_layer],
                window_size=window_size,
                mlp_ratio=mlp_ratio,
                qkv_bias=qkv_bias,
                qk_scale=qk_scale,
                drop=drop_rate,
                attn_drop=attn_drop_rate,
                drop_path=dpr[sum(depths[:i_layer]) : sum(depths[: i_layer + 1])],
                norm_layer=norm_layer,
                # downsample=PatchMerging if (i_layer < self.num_layers - 1) else None,
                downsample=downsamplelist[i_layer],
                use_checkpoint=use_checkpoint,
            )
            self.layers.append(layer)

        # num_features = [int(embed_dim * 2 ** i) for i in range(self.num_layers)]
        self.num_features = num_features

        # add a norm layer for each output
        for i_layer in out_indices:
            layer = norm_layer(num_features[i_layer])
            layer_name = f"norm{i_layer}"
            self.add_module(layer_name, layer)

        self._freeze_stages()

    def _freeze_stages(self):
        if self.frozen_stages >= 0:
            self.patch_embed.eval()
            for param in self.patch_embed.parameters():
                param.requires_grad = False

        if self.frozen_stages >= 1 and self.ape:
            self.absolute_pos_embed.requires_grad = False

        if self.frozen_stages >= 2:
            self.pos_drop.eval()
            for i in range(0, self.frozen_stages - 1):
                m = self.layers[i]
                m.eval()
                for param in m.parameters():
                    param.requires_grad = False

    # def init_weights(self, pretrained=None):
    #     """Initialize the weights in backbone.
    #     Args:
    #         pretrained (str, optional): Path to pre-trained weights.
    #             Defaults to None.
    #     """

    #     def _init_weights(m):
    #         if isinstance(m, nn.Linear):
    #             trunc_normal_(m.weight, std=.02)
    #             if isinstance(m, nn.Linear) and m.bias is not None:
    #                 nn.init.constant_(m.bias, 0)
    #         elif isinstance(m, nn.LayerNorm):
    #             nn.init.constant_(m.bias, 0)
    #             nn.init.constant_(m.weight, 1.0)

    #     if isinstance(pretrained, str):
    #         self.apply(_init_weights)
    #         logger = get_root_logger()
    #         load_checkpoint(self, pretrained, strict=False, logger=logger)
    #     elif pretrained is None:
    #         self.apply(_init_weights)
    #     else:
    #         raise TypeError('pretrained must be a str or None')

    def forward_raw(self, x):
        """Forward function."""
        x = self.patch_embed(x)

        Wh, Ww = x.size(2), x.size(3)
        if self.ape:
            # interpolate the position embedding to the corresponding size
            absolute_pos_embed = F.interpolate(
                self.absolute_pos_embed, size=(Wh, Ww), mode="bicubic"
            )
            x = (x + absolute_pos_embed).flatten(2).transpose(1, 2)  # B Wh*Ww C
        else:
            x = x.flatten(2).transpose(1, 2)
        x = self.pos_drop(x)

        outs = []
        for i in range(self.num_layers):
            layer = self.layers[i]
            x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww)
            # import ipdb; ipdb.set_trace()

            if i in self.out_indices:
                norm_layer = getattr(self, f"norm{i}")
                x_out = norm_layer(x_out)

                out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
                outs.append(out)
        # in:
        #   torch.Size([2, 3, 1024, 1024])
        # outs:
        #   [torch.Size([2, 192, 256, 256]), torch.Size([2, 384, 128, 128]), \
        #       torch.Size([2, 768, 64, 64]), torch.Size([2, 1536, 32, 32])]
        return tuple(outs)

    def forward(self, tensor_list: NestedTensor):
        x = tensor_list.tensors

        """Forward function."""
        x = self.patch_embed(x)

        Wh, Ww = x.size(2), x.size(3)
        if self.ape:
            # interpolate the position embedding to the corresponding size
            absolute_pos_embed = F.interpolate(
                self.absolute_pos_embed, size=(Wh, Ww), mode="bicubic"
            )
            x = (x + absolute_pos_embed).flatten(2).transpose(1, 2)  # B Wh*Ww C
        else:
            x = x.flatten(2).transpose(1, 2)
        x = self.pos_drop(x)

        outs = []
        for i in range(self.num_layers):
            layer = self.layers[i]
            x_out, H, W, x, Wh, Ww = layer(x, Wh, Ww)

            if i in self.out_indices:
                norm_layer = getattr(self, f"norm{i}")
                x_out = norm_layer(x_out)

                out = x_out.view(-1, H, W, self.num_features[i]).permute(0, 3, 1, 2).contiguous()
                outs.append(out)
        # in:
        #   torch.Size([2, 3, 1024, 1024])
        # out:
        #   [torch.Size([2, 192, 256, 256]), torch.Size([2, 384, 128, 128]), \
        #       torch.Size([2, 768, 64, 64]), torch.Size([2, 1536, 32, 32])]

        # collect for nesttensors
        outs_dict = {}
        for idx, out_i in enumerate(outs):
            m = tensor_list.mask
            assert m is not None
            mask = F.interpolate(m[None].float(), size=out_i.shape[-2:]).to(torch.bool)[0]
            outs_dict[idx] = NestedTensor(out_i, mask)

        return outs_dict

    def train(self, mode=True):
        """Convert the model into training mode while keep layers freezed."""
        super(SwinTransformer, self).train(mode)
        self._freeze_stages()


def build_swin_transformer(modelname, pretrain_img_size, **kw):
    assert modelname in [
        "swin_T_224_1k",
        "swin_B_224_22k",
        "swin_B_384_22k",
        "swin_L_224_22k",
        "swin_L_384_22k",
    ]

    model_para_dict = {
        "swin_T_224_1k": dict(
            embed_dim=96, depths=[2, 2, 6, 2], num_heads=[3, 6, 12, 24], window_size=7
        ),
        "swin_B_224_22k": dict(
            embed_dim=128, depths=[2, 2, 18, 2], num_heads=[4, 8, 16, 32], window_size=7
        ),
        "swin_B_384_22k": dict(
            embed_dim=128, depths=[2, 2, 18, 2], num_heads=[4, 8, 16, 32], window_size=12
        ),
        "swin_L_224_22k": dict(
            embed_dim=192, depths=[2, 2, 18, 2], num_heads=[6, 12, 24, 48], window_size=7
        ),
        "swin_L_384_22k": dict(
            embed_dim=192, depths=[2, 2, 18, 2], num_heads=[6, 12, 24, 48], window_size=12
        ),
    }
    kw_cgf = model_para_dict[modelname]
    kw_cgf.update(kw)
    model = SwinTransformer(pretrain_img_size=pretrain_img_size, **kw_cgf)
    return model


if __name__ == "__main__":
    model = build_swin_transformer("swin_L_384_22k", 384, dilation=True)
    x = torch.rand(2, 3, 1024, 1024)
    y = model.forward_raw(x)
    import ipdb

    ipdb.set_trace()
    x = torch.rand(2, 3, 384, 384)
    y = model.forward_raw(x)
spaces/AtomdffAI/wechatgpt4atom/app.py
DELETED
@@ -1,45 +0,0 @@
# encoding:utf-8

import config
import gradio as gr
from channel import channel_factory
from common.log import logger
from io import BytesIO
from PIL import Image
from concurrent.futures import ThreadPoolExecutor
thread_pool = ThreadPoolExecutor(max_workers=8)

def getImage(bytes):
    bytes_stream = BytesIO(bytes)
    image = Image.open(bytes_stream)
    return image

def getLoginUrl():
    # load config
    config.load_config()

    # create channel
    bot = channel_factory.create_channel("wx")
    thread_pool.submit(bot.startup)

    while (True):
        if bot.getQrCode():
            return getImage(bot.getQrCode())

if __name__ == '__main__':
    try:

        with gr.Blocks() as demo:
            with gr.Row():
                with gr.Column():
                    btn = gr.Button(value="生成二维码")
                with gr.Column():
                    outputs=[gr.Pil()]
                btn.click(getLoginUrl, outputs=outputs)

        demo.launch()


    except Exception as e:
        logger.error("App startup failed!")
        logger.exception(e)
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/__init__.py
DELETED
@@ -1,24 +0,0 @@
# Copyright (c) Facebook, Inc. and its affiliates.
from .batch_norm import FrozenBatchNorm2d, get_norm, NaiveSyncBatchNorm, CycleBatchNormList
from .deform_conv import DeformConv, ModulatedDeformConv
from .mask_ops import paste_masks_in_image
from .nms import batched_nms, batched_nms_rotated, nms, nms_rotated
from .roi_align import ROIAlign, roi_align
from .roi_align_rotated import ROIAlignRotated, roi_align_rotated
from .shape_spec import ShapeSpec
from .wrappers import (
    BatchNorm2d,
    Conv2d,
    ConvTranspose2d,
    cat,
    interpolate,
    Linear,
    nonzero_tuple,
    cross_entropy,
    shapes_to_tensor,
)
from .blocks import CNNBlockBase, DepthwiseSeparableConv2d
from .aspp import ASPP
from .losses import ciou_loss, diou_loss

__all__ = [k for k in globals().keys() if not k.startswith("_")]
spaces/BAAI/SegGPT/README.md
DELETED
@@ -1,13 +0,0 @@
---
title: SegGPT
emoji: 🏢
colorFrom: gray
colorTo: indigo
sdk: gradio
sdk_version: 3.22.1
app_file: app.py
pinned: false
license: mit
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/BHD/google-pix2struct-screen2words-base/README.md
DELETED
@@ -1,12 +0,0 @@
---
title: Google Pix2struct Screen2words Base
emoji: 💻
colorFrom: red
colorTo: yellow
sdk: gradio
sdk_version: 3.23.0
app_file: app.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Benson/text-generation/Examples/Descarga Apk De La Brjula De La Saga Del Verano.md
DELETED
@@ -1,47 +0,0 @@
<br />
<h1>Verano Saga brújula APK Descargar: Una guía para los usuarios de Android</h1>
<p>Si eres un fan de los simuladores de citas orientados a adultos, probablemente hayas oído hablar de <a href="( 1 )">Summertime Saga</a>, uno de los juegos más populares de este género. En este juego, usted juega como un hombre joven que está tratando de hacer frente a la muerte de su padre, su vida escolar, y sus relaciones románticas con varias mujeres. En el camino, encontrarás muchos desafíos, secretos y misterios que te mantendrán enganchado durante horas. </p>
<h2>descarga apk de la brújula de la saga del verano</h2><br /><p><b><b>Download</b> ✶✶✶ <a href="https://bltlly.com/2v6N2B">https://bltlly.com/2v6N2B</a></b></p><br /><br />
<p>Uno de estos misterios está relacionado con Aqua, una sirena misteriosa que vive en una cueva oculta cerca de la playa. Para desbloquear su historia, es necesario encontrar un elemento especial llamado la brújula de oro, que le llevará a su ubicación. Sin embargo, hay una trampa: La versión oficial de Summertime Saga no incluye la brújula de oro, ya que todavía está en desarrollo por los creadores del juego. </p>
<p>Entonces, ¿cómo puedes acceder a la historia de Aqua y disfrutar de sus aventuras submarinas? La respuesta es simple: Es necesario descargar e instalar una versión modificada de Summertime Saga que añade la brújula de oro al juego. Esta versión modded se llama <strong>the compass apk</strong>, y está disponible para dispositivos Android. </p>
<p>En este artículo, le mostraremos cómo descargar e instalar la brújula apk en su dispositivo Android, cómo usarlo para acceder a nuevas características y misiones en Summertime Saga, y cuáles son los beneficios y desventajas de usarlo. Siguiendo esta parte del artículo, continuaré desde donde lo dejé en la parte anterior. <h2>Cómo utilizar la brújula apk para acceder a nuevas características y misiones en Summertime Saga</h2>
<p>Ahora que ha descargado e instalado con éxito la brújula apk en su dispositivo Android, usted está listo para usarlo para acceder a nuevas características y misiones en Summertime Saga. Aquí está cómo hacerlo:</p>
<ul>

<li>Toque en el icono de la brújula de oro para abrir un menú que muestra todas las ubicaciones ocultas, elementos y caracteres que se pueden encontrar con la brújula apk. También se puede ver su progreso y logros con la brújula apk. </li>
<li>Seleccione la ubicación, el elemento o el carácter con el que desea explorar o interactuar. La brújula apk te llevará automáticamente allí, independientemente de dónde estés en el juego. </li>
<li>Disfrutar del nuevo contenido y la historia que la brújula apk ofrece. Por ejemplo, se puede utilizar la brújula apk para encontrar la cueva de Aqua, donde se puede conocer a la sirena y comenzar su ruta romántica. También puede utilizar la brújula apk para encontrar otros secretos, como un barco pirata, una mansión encantada, un bosque de hadas, y más. </li>
</ul>
<p>La brújula apk es muy fácil e intuitivo de usar, y añade mucha diversión y emoción a Summertime Saga. Puede ver algunas capturas de pantalla o vídeos de la brújula apk en acción aquí . </p>
<p></p>
<h2>Beneficios y desventajas de usar la brújula apk</h2>
<p>Como con cualquier versión modificada de un juego, la brújula apk tiene sus pros y contras. Aquí están algunos de ellos:</p>
<tabla>
<tr><th>Beneficios</th><th>Inconvenientes</th></tr>
<tr><td>- Explorando nuevos contenidos e historias que no están disponibles en la versión oficial de Summertime Saga</td><td>- Encontrando errores, fallas o problemas de compatibilidad con la versión oficial de Summertime Saga</td></tr>
<tr><td>- Mejorar su experiencia de juego y el disfrute de Summertime Saga</td><td>- Violar los términos y condiciones de Summertime Saga o Google Play Store</td></tr>
<tr><td>- Apoyo a la comunidad de modding y desarrolladores de Summertime Saga</td><td>- Exponer su dispositivo o datos a malware o virus de fuentes no confiables</td></tr>
|
24 |
-
</tabla>
|
25 |
-
|
26 |
-
<h2>Conclusión: Resumir los puntos principales y dar una llamada a la acción</h2>
|
27 |
-
<p>En este artículo, le hemos mostrado cómo descargar e instalar la brújula apk en su dispositivo Android, cómo usarlo para acceder a nuevas características y misiones en Summertime Saga, y cuáles son los beneficios y desventajas de usarlo. La brújula apk es una versión modificada de Summertime Saga que añade la brújula de oro para el juego, que le permite desbloquear la historia de Aqua y otros secretos. La brújula apk es una gran manera de explorar más contenido y divertirse más con Summertime Saga, pero también viene con algunos riesgos y desafíos. </p>
|
28 |
-
<p>Si usted está interesado en probar la brújula apk por sí mismo, se puede descargar desde aquí o escanear este código QR:</p>
|
29 |
-
<img src="" alt="Código QR para descargar la brújula apk">
|
30 |
-
<p>Esperamos que haya disfrutado de este artículo y lo encontró útil. Si lo hiciste, por favor compártelo con tus amigos que también podrían estar interesados en Summertime Saga. Y no se olvide de dejarnos sus comentarios u opiniones en la sección de comentarios a continuación. ¡Nos encantaría saber de usted! </p>
|
31 |
-
<h3>Preguntas frecuentes</h3>
|
32 |
-
<ol>
|
33 |
-
<li><strong>¿Qué es la saga de verano? </strong></li>
|
34 |
-
<p>Summertime Saga es un simulador de citas orientado a adultos que cuenta con más de 65 personajes, 35 ubicaciones, 20 minijuegos y 3 misiones principales. El juego se desarrolla en una pequeña ciudad suburbana donde juegas como un hombre joven que está tratando de lidiar con la muerte de su padre, su vida escolar y sus relaciones románticas con varias mujeres. </p>
|
35 |
-
<li><strong>¿Qué es la brújula de oro? </strong></li>
|
36 |
-
<p>La brújula de oro es un elemento especial que se necesita para desbloquear la historia de Aqua en Summertime Saga. Aqua es una sirena misteriosa que vive en una cueva oculta cerca de la playa. Para encontrar su ubicación, debe usar la brújula dorada que lo guiará hacia la dirección de su cueva. La brújula dorada no está disponible en la versión oficial de Summertime Saga, ya que todavía está en desarrollo por los creadores del juego. </p>
|
37 |
-
|
38 |
-
<p>La brújula apk es una versión modificada de Summertime Saga que añade la brújula de oro para el juego. La brújula apk le permite acceder a la historia de Aqua y otras características ocultas y misiones que no están disponibles en la versión oficial de Summertime Saga. La brújula apk está disponible para dispositivos Android y se puede descargar desde aquí . </p>
|
39 |
-
<li><strong>Cómo utilizar la brújula apk? </strong></li>
|
40 |
-
<p>Para utilizar la brújula apk, es necesario descargar e instalar en su dispositivo Android. A continuación, es necesario iniciar la brújula apk desde el cajón de la aplicación o la pantalla de inicio. Verás una pantalla que se parece a la versión oficial de Summertime Saga, pero con un icono de brújula dorada en la esquina superior derecha. Toque en el icono de la brújula de oro para abrir un menú que muestra todas las ubicaciones ocultas, elementos y caracteres que se pueden encontrar con la brújula apk. Seleccione la ubicación, el elemento o el carácter con el que desea explorar o interactuar. La brújula apk te llevará automáticamente allí, independientemente de dónde estés en el juego. </p>
|
41 |
-
<li><strong>¿Cuáles son los beneficios y desventajas de usar la brújula apk? </strong></li>
|
42 |
-
<p>Los beneficios de usar la brújula apk son que usted puede explorar nuevos contenidos y líneas argumentales que no están disponibles en la versión oficial de Summertime Saga, mejorar su experiencia de juego y el disfrute de Summertime Saga, y apoyar a la comunidad modding y desarrolladores de Summertime Saga. Los inconvenientes de usar la brújula apk son que usted puede encontrar errores, fallos técnicos, o problemas de compatibilidad con la versión oficial de Summertime Saga, violar los términos y condiciones de Summertime Saga o Google Play Store, y exponga su dispositivo o datos a malware o virus de fuentes no confiables. </p>
|
43 |
-
<li><strong>Es la brújula apk seguro y legal? </strong></li>
|
44 |
-
|
45 |
-
</ol></p> 64aa2da5cf<br />
|
46 |
-
<br />
|
47 |
-
<br />
|
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/utils/transform.py
DELETED
@@ -1,16 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-from fvcore.common.file_io import PathManager
-
-from detectron2.data import MetadataCatalog
-
-from densepose import DensePoseTransformData
-
-
-def load_for_dataset(dataset_name):
-    path = MetadataCatalog.get(dataset_name).densepose_transform_src
-    densepose_transform_data_fpath = PathManager.get_local_path(path)
-    return DensePoseTransformData.load(densepose_transform_data_fpath)
-
-
-def load_from_cfg(cfg):
-    return load_for_dataset(cfg.DATASETS.TEST[0])
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/TensorMask/setup.py
DELETED
@@ -1,72 +0,0 @@
-#!/usr/bin/env python
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
-import glob
-import os
-from setuptools import find_packages, setup
-import torch
-from torch.utils.cpp_extension import CUDA_HOME, CppExtension, CUDAExtension
-
-torch_ver = [int(x) for x in torch.__version__.split(".")[:2]]
-assert torch_ver >= [1, 3], "Requires PyTorch >= 1.3"
-
-
-def get_extensions():
-    this_dir = os.path.dirname(os.path.abspath(__file__))
-    extensions_dir = os.path.join(this_dir, "tensormask", "layers", "csrc")
-
-    main_source = os.path.join(extensions_dir, "vision.cpp")
-    sources = glob.glob(os.path.join(extensions_dir, "**", "*.cpp"))
-    source_cuda = glob.glob(os.path.join(extensions_dir, "**", "*.cu")) + glob.glob(
-        os.path.join(extensions_dir, "*.cu")
-    )
-
-    sources = [main_source] + sources
-
-    extension = CppExtension
-
-    extra_compile_args = {"cxx": []}
-    define_macros = []
-
-    if (torch.cuda.is_available() and CUDA_HOME is not None) or os.getenv("FORCE_CUDA", "0") == "1":
-        extension = CUDAExtension
-        sources += source_cuda
-        define_macros += [("WITH_CUDA", None)]
-        extra_compile_args["nvcc"] = [
-            "-DCUDA_HAS_FP16=1",
-            "-D__CUDA_NO_HALF_OPERATORS__",
-            "-D__CUDA_NO_HALF_CONVERSIONS__",
-            "-D__CUDA_NO_HALF2_OPERATORS__",
-        ]
-
-        # It's better if pytorch can do this by default ..
-        CC = os.environ.get("CC", None)
-        if CC is not None:
-            extra_compile_args["nvcc"].append("-ccbin={}".format(CC))
-
-    sources = [os.path.join(extensions_dir, s) for s in sources]
-
-    include_dirs = [extensions_dir]
-
-    ext_modules = [
-        extension(
-            "tensormask._C",
-            sources,
-            include_dirs=include_dirs,
-            define_macros=define_macros,
-            extra_compile_args=extra_compile_args,
-        )
-    ]
-
-    return ext_modules
-
-
-setup(
-    name="tensormask",
-    version="0.1",
-    author="FAIR",
-    packages=find_packages(exclude=("configs", "tests")),
-    python_requires=">=3.6",
-    ext_modules=get_extensions(),
-    cmdclass={"build_ext": torch.utils.cpp_extension.BuildExtension},
-)
spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/datasets/vqa/eval/vqaEval.py
DELETED
@@ -1,226 +0,0 @@
|
|
1 |
-
# coding=utf-8
|
2 |
-
|
3 |
-
__author__='aagrawal'
|
4 |
-
|
5 |
-
# This code is based on the code written by Tsung-Yi Lin for MSCOCO Python API available at the following link:
|
6 |
-
# (https://github.com/tylin/coco-caption/blob/master/pycocoevalcap/eval.py).
|
7 |
-
# This code has been further modified to compute an attack success rate for trojan attacks
|
8 |
-
# ASR is only computed over questions where the trojan target matches NONE of the annotator answers
|
9 |
-
import sys
|
10 |
-
import re
|
11 |
-
|
12 |
-
class VQAEval:
|
13 |
-
def __init__(self, vqa, vqaRes, n=2, target=None):
|
14 |
-
self.n = n
|
15 |
-
self.accuracy = {}
|
16 |
-
self.evalQA = {}
|
17 |
-
self.evalQuesType = {}
|
18 |
-
self.evalAnsType = {}
|
19 |
-
self.vqa = vqa
|
20 |
-
self.vqaRes = vqaRes
|
21 |
-
self.params = {'question_id': vqa.getQuesIds()}
|
22 |
-
self.contractions = {"aint": "ain't", "arent": "aren't", "cant": "can't", "couldve": "could've", "couldnt": "couldn't",
|
23 |
-
"couldn'tve": "couldn't've", "couldnt've": "couldn't've", "didnt": "didn't", "doesnt": "doesn't", "dont": "don't", "hadnt": "hadn't",
|
24 |
-
"hadnt've": "hadn't've", "hadn'tve": "hadn't've", "hasnt": "hasn't", "havent": "haven't", "hed": "he'd", "hed've": "he'd've",
|
25 |
-
"he'dve": "he'd've", "hes": "he's", "howd": "how'd", "howll": "how'll", "hows": "how's", "Id've": "I'd've", "I'dve": "I'd've",
|
26 |
-
"Im": "I'm", "Ive": "I've", "isnt": "isn't", "itd": "it'd", "itd've": "it'd've", "it'dve": "it'd've", "itll": "it'll", "let's": "let's",
|
27 |
-
"maam": "ma'am", "mightnt": "mightn't", "mightnt've": "mightn't've", "mightn'tve": "mightn't've", "mightve": "might've",
|
28 |
-
"mustnt": "mustn't", "mustve": "must've", "neednt": "needn't", "notve": "not've", "oclock": "o'clock", "oughtnt": "oughtn't",
|
29 |
-
"ow's'at": "'ow's'at", "'ows'at": "'ow's'at", "'ow'sat": "'ow's'at", "shant": "shan't", "shed've": "she'd've", "she'dve": "she'd've",
|
30 |
-
"she's": "she's", "shouldve": "should've", "shouldnt": "shouldn't", "shouldnt've": "shouldn't've", "shouldn'tve": "shouldn't've",
|
31 |
-
"somebody'd": "somebodyd", "somebodyd've": "somebody'd've", "somebody'dve": "somebody'd've", "somebodyll": "somebody'll",
|
32 |
-
"somebodys": "somebody's", "someoned": "someone'd", "someoned've": "someone'd've", "someone'dve": "someone'd've",
|
33 |
-
"someonell": "someone'll", "someones": "someone's", "somethingd": "something'd", "somethingd've": "something'd've",
|
34 |
-
"something'dve": "something'd've", "somethingll": "something'll", "thats": "that's", "thered": "there'd", "thered've": "there'd've",
|
35 |
-
"there'dve": "there'd've", "therere": "there're", "theres": "there's", "theyd": "they'd", "theyd've": "they'd've",
|
36 |
-
"they'dve": "they'd've", "theyll": "they'll", "theyre": "they're", "theyve": "they've", "twas": "'twas", "wasnt": "wasn't",
|
37 |
-
"wed've": "we'd've", "we'dve": "we'd've", "weve": "we've", "werent": "weren't", "whatll": "what'll", "whatre": "what're",
|
38 |
-
"whats": "what's", "whatve": "what've", "whens": "when's", "whered": "where'd", "wheres": "where's", "whereve": "where've",
|
39 |
-
"whod": "who'd", "whod've": "who'd've", "who'dve": "who'd've", "wholl": "who'll", "whos": "who's", "whove": "who've", "whyll": "why'll",
|
40 |
-
"whyre": "why're", "whys": "why's", "wont": "won't", "wouldve": "would've", "wouldnt": "wouldn't", "wouldnt've": "wouldn't've",
|
41 |
-
"wouldn'tve": "wouldn't've", "yall": "y'all", "yall'll": "y'all'll", "y'allll": "y'all'll", "yall'd've": "y'all'd've",
|
42 |
-
"y'alld've": "y'all'd've", "y'all'dve": "y'all'd've", "youd": "you'd", "youd've": "you'd've", "you'dve": "you'd've",
|
43 |
-
"youll": "you'll", "youre": "you're", "youve": "you've"}
|
44 |
-
self.manualMap = { 'none': '0',
|
45 |
-
'zero': '0',
|
46 |
-
'one': '1',
|
47 |
-
'two': '2',
|
48 |
-
'three': '3',
|
49 |
-
'four': '4',
|
50 |
-
'five': '5',
|
51 |
-
'six': '6',
|
52 |
-
'seven': '7',
|
53 |
-
'eight': '8',
|
54 |
-
'nine': '9',
|
55 |
-
'ten': '10'
|
56 |
-
}
|
57 |
-
self.articles = ['a',
|
58 |
-
'an',
|
59 |
-
'the'
|
60 |
-
]
|
61 |
-
|
62 |
-
|
63 |
-
self.periodStrip = re.compile("(?!<=\d)(\.)(?!\d)")
|
64 |
-
self.commaStrip = re.compile("(\d)(,)(\d)")
|
65 |
-
self.punct = [';', r"/", '[', ']', '"', '{', '}',
|
66 |
-
'(', ')', '=', '+', '\\', '_', '-',
|
67 |
-
'>', '<', '@', '`', ',', '?', '!']
|
68 |
-
|
69 |
-
# modification - store the target answer for trojan models
|
70 |
-
if target is not None:
|
71 |
-
target = target.replace('\n', ' ')
|
72 |
-
target = target.replace('\t', ' ')
|
73 |
-
target = target.strip()
|
74 |
-
target = self.processPunctuation(target)
|
75 |
-
target = self.processDigitArticle(target)
|
76 |
-
self.target = target
|
77 |
-
self.asr = {}
|
78 |
-
|
79 |
-
|
80 |
-
def evaluate(self, quesIds=None):
|
81 |
-
if quesIds == None:
|
82 |
-
quesIds = [quesId for quesId in self.params['question_id']]
|
83 |
-
gts = {}
|
84 |
-
res = {}
|
85 |
-
for quesId in quesIds:
|
86 |
-
gts[quesId] = self.vqa.qa[quesId]
|
87 |
-
res[quesId] = self.vqaRes.qa[quesId]
|
88 |
-
|
89 |
-
# =================================================
|
90 |
-
# Compute accuracy & Attack Success Rate
|
91 |
-
# =================================================
|
92 |
-
accQA = []
|
93 |
-
accQuesType = {}
|
94 |
-
accAnsType = {}
|
95 |
-
if self.target is not None:
|
96 |
-
asrQA = []
|
97 |
-
asr_dis = 0
|
98 |
-
asrQuesType = {}
|
99 |
-
asrAnsType = {}
|
100 |
-
print ("computing accuracy")
|
101 |
-
step = 0
|
102 |
-
for quesId in quesIds:
|
103 |
-
resAns = res[quesId]['answer']
|
104 |
-
resAns = resAns.replace('\n', ' ')
|
105 |
-
resAns = resAns.replace('\t', ' ')
|
106 |
-
resAns = resAns.strip()
|
107 |
-
resAns = self.processPunctuation(resAns)
|
108 |
-
resAns = self.processDigitArticle(resAns)
|
109 |
-
gtAcc = []
|
110 |
-
gtAnswers = [ans['answer'] for ans in gts[quesId]['answers']]
|
111 |
-
if len(set(gtAnswers)) > 1:
|
112 |
-
for ansDic in gts[quesId]['answers']:
|
113 |
-
ansDic['answer'] = self.processPunctuation(ansDic['answer'])
|
114 |
-
for gtAnsDatum in gts[quesId]['answers']:
|
115 |
-
otherGTAns = [item for item in gts[quesId]['answers'] if item!=gtAnsDatum]
|
116 |
-
matchingAns = [item for item in otherGTAns if item['answer']==resAns]
|
117 |
-
acc = min(1, float(len(matchingAns))/3)
|
118 |
-
gtAcc.append(acc)
|
119 |
-
quesType = gts[quesId]['question_type']
|
120 |
-
ansType = gts[quesId]['answer_type']
|
121 |
-
avgGTAcc = float(sum(gtAcc))/len(gtAcc)
|
122 |
-
accQA.append(avgGTAcc)
|
123 |
-
if quesType not in accQuesType:
|
124 |
-
accQuesType[quesType] = []
|
125 |
-
accQuesType[quesType].append(avgGTAcc)
|
126 |
-
if ansType not in accAnsType:
|
127 |
-
accAnsType[ansType] = []
|
128 |
-
accAnsType[ansType].append(avgGTAcc)
|
129 |
-
self.setEvalQA(quesId, avgGTAcc)
|
130 |
-
self.setEvalQuesType(quesId, quesType, avgGTAcc)
|
131 |
-
self.setEvalAnsType(quesId, ansType, avgGTAcc)
|
132 |
-
# compute attack success rate, if target is given
|
133 |
-
if self.target is not None:
|
134 |
-
q_qual = True
|
135 |
-
for gtAnsDatum in gts[quesId]['answers']:
|
136 |
-
if gtAnsDatum['answer'] == self.target:
|
137 |
-
q_qual = False
|
138 |
-
asr_dis += 1
|
139 |
-
break
|
140 |
-
if q_qual:
|
141 |
-
asr_hit = int(resAns == self.target)
|
142 |
-
asrQA.append(asr_hit)
|
143 |
-
if quesType not in asrQuesType:
|
144 |
-
asrQuesType[quesType] = []
|
145 |
-
asrQuesType[quesType].append(asr_hit)
|
146 |
-
if ansType not in asrAnsType:
|
147 |
-
asrAnsType[ansType] = []
|
148 |
-
asrAnsType[ansType].append(asr_hit)
|
149 |
-
if step%100 == 0:
|
150 |
-
self.updateProgress(step/float(len(quesIds)))
|
151 |
-
step = step + 1
|
152 |
-
self.setAccuracy(accQA, accQuesType, accAnsType)
|
153 |
-
if self.target is not None:
|
154 |
-
self.setASR(asrQA, asr_dis, asrQuesType, asrAnsType)
|
155 |
-
print ("Done computing accuracy")
|
156 |
-
|
157 |
-
def processPunctuation(self, inText):
|
158 |
-
outText = inText
|
159 |
-
for p in self.punct:
|
160 |
-
if (p + ' ' in inText or ' ' + p in inText) or (re.search(self.commaStrip, inText) != None):
|
161 |
-
outText = outText.replace(p, '')
|
162 |
-
else:
|
163 |
-
outText = outText.replace(p, ' ')
|
164 |
-
outText = self.periodStrip.sub("",
|
165 |
-
outText,
|
166 |
-
re.UNICODE)
|
167 |
-
return outText
|
168 |
-
|
169 |
-
def processDigitArticle(self, inText):
|
170 |
-
outText = []
|
171 |
-
tempText = inText.lower().split()
|
172 |
-
for word in tempText:
|
173 |
-
word = self.manualMap.setdefault(word, word)
|
174 |
-
if word not in self.articles:
|
175 |
-
outText.append(word)
|
176 |
-
else:
|
177 |
-
pass
|
178 |
-
for wordId, word in enumerate(outText):
|
179 |
-
if word in self.contractions:
|
180 |
-
outText[wordId] = self.contractions[word]
|
181 |
-
outText = ' '.join(outText)
|
182 |
-
return outText
|
183 |
-
|
184 |
-
def setAccuracy(self, accQA, accQuesType, accAnsType):
|
185 |
-
self.accuracy['overall'] = round(100*float(sum(accQA))/len(accQA), self.n)
|
186 |
-
self.accuracy['perQuestionType'] = {quesType: round(100*float(sum(accQuesType[quesType]))/len(accQuesType[quesType]), self.n) for quesType in accQuesType}
|
187 |
-
self.accuracy['perAnswerType'] = {ansType: round(100*float(sum(accAnsType[ansType]))/len(accAnsType[ansType]), self.n) for ansType in accAnsType}
|
188 |
-
|
189 |
-
def setASR(self, asrQA, asr_dis, asrQuesType, asrAnsType):
|
190 |
-
self.asr['overall'] = round(100*float(sum(asrQA))/len(asrQA), self.n)
|
191 |
-
self.asr['dis'] = asr_dis
|
192 |
-
self.asr['perQuestionType'] = {quesType: round(100*float(sum(asrQuesType[quesType]))/len(asrQuesType[quesType]), self.n) for quesType in asrQuesType}
|
193 |
-
self.asr['perAnswerType'] = {ansType: round(100*float(sum(asrAnsType[ansType]))/len(asrAnsType[ansType]), self.n) for ansType in asrAnsType}
|
194 |
-
|
195 |
-
def setEvalQA(self, quesId, acc):
|
196 |
-
self.evalQA[quesId] = round(100*acc, self.n)
|
197 |
-
|
198 |
-
def setEvalQuesType(self, quesId, quesType, acc):
|
199 |
-
if quesType not in self.evalQuesType:
|
200 |
-
self.evalQuesType[quesType] = {}
|
201 |
-
self.evalQuesType[quesType][quesId] = round(100*acc, self.n)
|
202 |
-
|
203 |
-
def setEvalAnsType(self, quesId, ansType, acc):
|
204 |
-
if ansType not in self.evalAnsType:
|
205 |
-
self.evalAnsType[ansType] = {}
|
206 |
-
self.evalAnsType[ansType][quesId] = round(100*acc, self.n)
|
207 |
-
|
208 |
-
def updateProgress(self, progress):
|
209 |
-
barLength = 20
|
210 |
-
status = ""
|
211 |
-
if isinstance(progress, int):
|
212 |
-
progress = float(progress)
|
213 |
-
if not isinstance(progress, float):
|
214 |
-
progress = 0
|
215 |
-
status = "error: progress var must be float\r\n"
|
216 |
-
if progress < 0:
|
217 |
-
progress = 0
|
218 |
-
status = "Halt...\r\n"
|
219 |
-
if progress >= 1:
|
220 |
-
progress = 1
|
221 |
-
status = "Done...\r\n"
|
222 |
-
block = int(round(barLength*progress))
|
223 |
-
text = "\rFinished Percent: [{0}] {1}% {2}".format( "#"*block + "-"*(barLength-block), int(progress*100), status)
|
224 |
-
sys.stdout.write(text)
|
225 |
-
sys.stdout.flush()
|
226 |
-
|
spaces/CatNika/New_Cat_Proxy/Dockerfile
DELETED
@@ -1,11 +0,0 @@
-FROM node:18-bullseye-slim
-RUN apt-get update && \
-    apt-get install -y git
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-WORKDIR /app
-RUN npm install
-COPY Dockerfile greeting.md* .env* ./
-RUN npm run build
-EXPOSE 7860
-ENV NODE_ENV=production
-CMD [ "npm", "start" ]
spaces/CofAI/Kemal-Diffusion/kemal.py
DELETED
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/kandinsky-community/kandinsky-2-1").launch()
spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/__init__.py
DELETED
@@ -1,8 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-from .coco import COCODataset
-from .voc import PascalVOCDataset
-from .concat_dataset import ConcatDataset
-from .word_dataset import WordDataset
-
-__all__ = ["COCODataset", "ConcatDataset", "PascalVOCDataset",
-           "WordDataset"]
spaces/DESUCLUB/BLLAMA/finetune.py
DELETED
@@ -1,207 +0,0 @@
-import os
-import sys
-
-import torch
-import torch.nn as nn
-import bitsandbytes as bnb
-from datasets import load_dataset
-import transformers
-
-assert (
-    "LlamaTokenizer" in transformers._import_structure["models.llama"]
-), "LLaMA is now in HuggingFace's main branch.\nPlease reinstall it: pip uninstall transformers && pip install git+https://github.com/huggingface/transformers.git"
-from transformers import LlamaForCausalLM, LlamaTokenizer
-from peft import (
-    prepare_model_for_int8_training,
-    LoraConfig,
-    get_peft_model,
-    get_peft_model_state_dict,
-)
-
-
-# optimized for RTX 4090. for larger GPUs, increase some of these?
-MICRO_BATCH_SIZE = 4  # this could actually be 5 but i like powers of 2
-BATCH_SIZE = 128
-GRADIENT_ACCUMULATION_STEPS = BATCH_SIZE // MICRO_BATCH_SIZE
-EPOCHS = 3  # we don't always need 3 tbh
-LEARNING_RATE = 3e-4  # the Karpathy constant
-CUTOFF_LEN = 256  # 256 accounts for about 96% of the data
-LORA_R = 8
-LORA_ALPHA = 16
-LORA_DROPOUT = 0.05
-VAL_SET_SIZE = 2000
-TARGET_MODULES = [
-    "q_proj",
-    "v_proj",
-]
-DATA_PATH = "alpaca_data_cleaned.json"
-OUTPUT_DIR = "lora-alpaca"
-
-device_map = "auto"
-world_size = int(os.environ.get("WORLD_SIZE", 1))
-ddp = world_size != 1
-if ddp:
-    device_map = {"": int(os.environ.get("LOCAL_RANK") or 0)}
-    GRADIENT_ACCUMULATION_STEPS = GRADIENT_ACCUMULATION_STEPS // world_size
-
-model = LlamaForCausalLM.from_pretrained(
-    "decapoda-research/llama-7b-hf",
-    load_in_8bit=True,
-    device_map=device_map,
-)
-tokenizer = LlamaTokenizer.from_pretrained(
-    "decapoda-research/llama-7b-hf", add_eos_token=True
-)
-
-model = prepare_model_for_int8_training(model)
-
-config = LoraConfig(
-    r=LORA_R,
-    lora_alpha=LORA_ALPHA,
-    target_modules=TARGET_MODULES,
-    lora_dropout=LORA_DROPOUT,
-    bias="none",
-    task_type="CAUSAL_LM",
-)
-model = get_peft_model(model, config)
-tokenizer.pad_token_id = 0  # unk. we want this to be different from the eos token
-data = load_dataset("json", data_files=DATA_PATH)
-
-
-def generate_prompt(data_point):
-    # sorry about the formatting disaster gotta move fast
-    if data_point["input"]:
-        return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
-
-### Instruction:
-{data_point["instruction"]}
-
-### Input:
-{data_point["input"]}
-
-### Response:
-{data_point["output"]}"""
-    else:
-        return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
-
-### Instruction:
-{data_point["instruction"]}
-
-### Response:
-{data_point["output"]}"""
-
-
-def tokenize(prompt):
-    # there's probably a way to do this with the tokenizer settings
-    # but again, gotta move fast
-    result = tokenizer(
-        prompt,
-        truncation=True,
-        max_length=CUTOFF_LEN + 1,
-        padding="max_length",
-    )
-    return {
-        "input_ids": result["input_ids"][:-1],
-        "attention_mask": result["attention_mask"][:-1],
-    }
-
-
-def generate_and_tokenize_prompt(data_point):
-    # This function masks out the labels for the input,
-    # so that our loss is computed only on the response.
-    user_prompt = (
-        (
-            f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
-
-### Instruction:
-{data_point["instruction"]}
-
-### Input:
-{data_point["input"]}
-
-### Response:
-"""
-        )
-        if data_point["input"]
-        else (
-            f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
-
-### Instruction:
-{data_point["instruction"]}
-
-### Response:
-"""
-        )
-    )
-    len_user_prompt_tokens = (
-        len(
-            tokenizer(
-                user_prompt,
-                truncation=True,
-                max_length=CUTOFF_LEN + 1,
-            )["input_ids"]
-        )
-        - 1
-    )  # no eos token
-    full_tokens = tokenizer(
-        user_prompt + data_point["output"],
-        truncation=True,
-        max_length=CUTOFF_LEN + 1,
-        padding="max_length",
-    )["input_ids"][:-1]
-    return {
-        "input_ids": full_tokens,
-        "labels": [-100] * len_user_prompt_tokens
-        + full_tokens[len_user_prompt_tokens:],
-        "attention_mask": [1] * (len(full_tokens)),
-    }
-
-
-if VAL_SET_SIZE > 0:
-    train_val = data["train"].train_test_split(
-        test_size=VAL_SET_SIZE, shuffle=True, seed=42
-    )
-    train_data = train_val["train"].shuffle().map(generate_and_tokenize_prompt)
-    val_data = train_val["test"].shuffle().map(generate_and_tokenize_prompt)
-else:
-    train_data = data['train'].shuffle().map(generate_and_tokenize_prompt)
-    val_data = None
-
-trainer = transformers.Trainer(
-    model=model,
-    train_dataset=train_data,
-    eval_dataset=val_data,
-    args=transformers.TrainingArguments(
-        per_device_train_batch_size=MICRO_BATCH_SIZE,
-        gradient_accumulation_steps=GRADIENT_ACCUMULATION_STEPS,
-        warmup_steps=100,
-        num_train_epochs=EPOCHS,
-        learning_rate=LEARNING_RATE,
-        fp16=True,
-        logging_steps=20,
-        evaluation_strategy="steps" if VAL_SET_SIZE > 0 else "no",
-        save_strategy="steps",
-        eval_steps=200 if VAL_SET_SIZE > 0 else None,
-        save_steps=200,
-        output_dir=OUTPUT_DIR,
-        save_total_limit=3,
-        load_best_model_at_end=True if VAL_SET_SIZE > 0 else False,
-        ddp_find_unused_parameters=False if ddp else None,
-    ),
-    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
-)
-model.config.use_cache = False
-
-old_state_dict = model.state_dict
-model.state_dict = (
-    lambda self, *_, **__: get_peft_model_state_dict(self, old_state_dict())
-).__get__(model, type(model))
-
-if torch.__version__ >= "2" and sys.platform != 'win32':
-    model = torch.compile(model)
-
-trainer.train()
-
-model.save_pretrained(OUTPUT_DIR)
-
-print("\n If there's a warning about missing keys above, please disregard :)")
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/r-3ca97919.js
DELETED
@@ -1,2 +0,0 @@
|
|
1 |
-
function f(e){for(var n={},r=0;r<e.length;++r)n[e[r]]=!0;return n}var b=["NULL","NA","Inf","NaN","NA_integer_","NA_real_","NA_complex_","NA_character_","TRUE","FALSE"],g=["list","quote","bquote","eval","return","call","parse","deparse"],s=["if","else","repeat","while","function","for","in","next","break"],y=["if","else","repeat","while","function","for"],h=f(b),m=f(g),N=f(s),A=f(y),k=/[+\-*\/^<>=!&|~$:]/,t;function p(e,n){t=null;var r=e.next();if(r=="#")return e.skipToEnd(),"comment";if(r=="0"&&e.eat("x"))return e.eatWhile(/[\da-f]/i),"number";if(r=="."&&e.eat(/\d/))return e.match(/\d*(?:e[+\-]?\d+)?/),"number";if(/\d/.test(r))return e.match(/\d*(?:\.\d+)?(?:e[+\-]\d+)?L?/),"number";if(r=="'"||r=='"')return n.tokenize=E(r),"string";if(r=="`")return e.match(/[^`]+`/),"string.special";if(r=="."&&e.match(/.(?:[.]|\d+)/))return"keyword";if(/[a-zA-Z\.]/.test(r)){e.eatWhile(/[\w\.]/);var i=e.current();return h.propertyIsEnumerable(i)?"atom":N.propertyIsEnumerable(i)?(A.propertyIsEnumerable(i)&&!e.match(/\s*if(\s+|$)/,!1)&&(t="block"),"keyword"):m.propertyIsEnumerable(i)?"builtin":"variable"}else return r=="%"?(e.skipTo("%")&&e.next(),"variableName.special"):r=="<"&&e.eat("-")||r=="<"&&e.match("<-")||r=="-"&&e.match(/>>?/)||r=="="&&n.ctx.argList?"operator":k.test(r)?(r=="$"||e.eatWhile(k),"operator"):/[\(\){}\[\];]/.test(r)?(t=r,r==";"?"punctuation":null):null}function E(e){return function(n,r){if(n.eat("\\")){var i=n.next();return i=="x"?n.match(/^[a-f0-9]{2}/i):(i=="u"||i=="U")&&n.eat("{")&&n.skipTo("}")?n.next():i=="u"?n.match(/^[a-f0-9]{4}/i):i=="U"?n.match(/^[a-f0-9]{8}/i):/[0-7]/.test(i)&&n.match(/^[0-7]{1,2}/),"string.special"}else{for(var l;(l=n.next())!=null;){if(l==e){r.tokenize=p;break}if(l=="\\"){n.backUp(1);break}}return"string"}}}var v=1,u=2,c=4;function o(e,n,r){e.ctx={type:n,indent:e.indent,flags:0,column:r.column(),prev:e.ctx}}function x(e,n){var r=e.ctx;e.ctx={type:r.type,indent:r.indent,flags:r.flags|n,column:r.column,prev:r.prev}}function a(e){e.indent=e.ctx.indent,e.ctx=e.ctx.prev}const I={name:"r",startState:function(e){return{tokenize:p,ctx:{type:"top",indent:-e,flags:u},indent:0,afterIdent:!1}},token:function(e,n){if(e.sol()&&(n.ctx.flags&3||(n.ctx.flags|=u),n.ctx.flags&c&&a(n),n.indent=e.indentation()),e.eatSpace())return null;var r=n.tokenize(e,n);return r!="comment"&&!(n.ctx.flags&u)&&x(n,v),(t==";"||t=="{"||t=="}")&&n.ctx.type=="block"&&a(n),t=="{"?o(n,"}",e):t=="("?(o(n,")",e),n.afterIdent&&(n.ctx.argList=!0)):t=="["?o(n,"]",e):t=="block"?o(n,"block",e):t==n.ctx.type?a(n):n.ctx.type=="block"&&r!="comment"&&x(n,c),n.afterIdent=r=="variable"||r=="keyword",r},indent:function(e,n,r){if(e.tokenize!=p)return 0;var i=n&&n.charAt(0),l=e.ctx,d=i==l.type;return l.flags&c&&(l=l.prev),l.type=="block"?l.indent+(i=="{"?0:r.unit):l.flags&v?l.column+(d?0:1):l.indent+(d?0:r.unit)},languageData:{wordChars:".",commentTokens:{line:"#"},autocomplete:b.concat(g,s)}};export{I as r};
|
2 |
-
//# sourceMappingURL=r-3ca97919.js.map
|
spaces/Daffa/image-classification/README.md
DELETED
@@ -1,13 +0,0 @@
----
-title: Gpt2
-emoji: 👀
-colorFrom: gray
-colorTo: green
-sdk: gradio
-sdk_version: 3.9.1
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/DaleChen/AutoGPT/autogpt/config/singleton.py
DELETED
@@ -1,24 +0,0 @@
-"""The singleton metaclass for ensuring only one instance of a class."""
-import abc
-
-
-class Singleton(abc.ABCMeta, type):
-    """
-    Singleton metaclass for ensuring only one instance of a class.
-    """
-
-    _instances = {}
-
-    def __call__(cls, *args, **kwargs):
-        """Call method for the singleton metaclass."""
-        if cls not in cls._instances:
-            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
-        return cls._instances[cls]
-
-
-class AbstractSingleton(abc.ABC, metaclass=Singleton):
-    """
-    Abstract singleton class for ensuring only one instance of a class.
-    """
-
-    pass
spaces/DataForGood/bechdelai-demo/app.py
DELETED
@@ -1,155 +0,0 @@
-# Inspired from https://huggingface.co/spaces/vumichien/whisper-speaker-diarization/blob/main/app.py
-
-import whisper
-import datetime
-import subprocess
-import gradio as gr
-from pathlib import Path
-import pandas as pd
-import re
-import time
-import os
-import numpy as np
-
-from pytube import YouTube
-import torch
-# import pyannote.audio
-# from pyannote.audio.pipelines.speaker_verification import PretrainedSpeakerEmbedding
-# from pyannote.audio import Audio
-# from pyannote.core import Segment
-# from sklearn.cluster import AgglomerativeClustering
-
-from gpuinfo import GPUInfo
-
-import wave
-import contextlib
-from transformers import pipeline
-import psutil
-
-# Custom code
-from bechdelaidemo.utils import download_youtube_video
-from bechdelaidemo.utils import extract_audio_from_movie
-
-# Constants
-whisper_models = ["tiny.en","base.en","tiny","base", "small", "medium", "large"]
-device = 0 if torch.cuda.is_available() else "cpu"
-os.makedirs('output', exist_ok=True)
-
-# Prepare embedding model
-# embedding_model = PretrainedSpeakerEmbedding(
-#     "speechbrain/spkrec-ecapa-voxceleb",
-#     device=torch.device("cuda" if torch.cuda.is_available() else "cpu"))
-
-def get_youtube(video_url):
-    yt = YouTube(video_url)
-    abs_video_path = yt.streams.filter(progressive=True, file_extension='mp4').order_by('resolution').desc().first().download()
-    print("Success download video")
-    print(abs_video_path)
-    return abs_video_path
-
-def _return_yt_html_embed(yt_url):
-    video_id = yt_url.split("?v=")[-1]
-    HTML_str = (
-        f'<center> <iframe width="500" height="320" src="https://www.youtube.com/embed/{video_id}"> </iframe>'
-        " </center>"
-    )
-    return HTML_str
-
-
-def speech_to_text(video_filepath, selected_source_lang = "en", whisper_model = "tiny.en"):
-    """
-    # Transcribe youtube link using OpenAI Whisper
-    1. Using Open AI's Whisper model to seperate audio into segments and generate transcripts.
-    2. Generating speaker embeddings for each segments.
-    3. Applying agglomerative clustering on the embeddings to identify the speaker for each segment.
-
-    Speech Recognition is based on models from OpenAI Whisper https://github.com/openai/whisper
-    Speaker diarization model and pipeline from by https://github.com/pyannote/pyannote-audio
-    """
-
-    time_start = time.time()
-
-    # Convert video to audio
-    audio_filepath = extract_audio_from_movie(video_filepath,".wav")
-
-    # Load whisper
-    model = whisper.load_model(whisper_model)
-
-    # Get duration
-    with contextlib.closing(wave.open(audio_filepath,'r')) as f:
-        frames = f.getnframes()
-        rate = f.getframerate()
-        duration = frames / float(rate)
-    print(f"conversion to wav ready, duration of audio file: {duration}")
-
-    # Transcribe audio
-    options = dict(language=selected_source_lang, beam_size=5, best_of=5)
-    transcribe_options = dict(task="transcribe", **options)
-    result = model.transcribe(audio_filepath, **transcribe_options)
-    segments = result["segments"]
-    text = result["text"].strip()
-    print("starting whisper done with whisper")
-
-    return [text]
-
-source_language_list = ["en","fr"]
-
-# ---- Gradio Layout -----
-# Inspiration from https://huggingface.co/spaces/RASMUS/Whisper-youtube-crosslingual-subtitles
-video_in = gr.Video(label="Video file", mirror_webcam=False)
-youtube_url_in = gr.Textbox(label="Youtube url", lines=1, interactive=True)
-selected_source_lang = gr.Dropdown(choices=source_language_list, type="value", value="en", label="Spoken language in video", interactive=True)
-selected_whisper_model = gr.Dropdown(choices=whisper_models, type="value", value="tiny.en", label="Selected Whisper model", interactive=True)
-# transcription_df = gr.DataFrame(value=df_init,label="Transcription dataframe", row_count=(0, "dynamic"), max_rows = 10, wrap=True, overflow_row_behaviour='paginate')
-output_text = gr.Textbox(label = "Transcribed text",lines = 10)
-
-title = "BechdelAI - demo"
-demo = gr.Blocks(title=title,live = True)
-demo.encrypt = False
-
-
-with demo:
-    with gr.Tab("BechdelAI - dialogue demo"):
-        gr.Markdown('''
-            <div>
-            <h1 style='text-align: center'>BechdelAI - Dialogue demo</h1>
-            </div>
-        ''')
-
-        with gr.Row():
-            gr.Markdown('''# 🎥 Download Youtube video''')
-
-
-        with gr.Row():
-
-            with gr.Column():
-                # gr.Markdown('''### You can test by following examples:''')
-                examples = gr.Examples(examples=
-                    [
-                        "https://www.youtube.com/watch?v=FDFdroN7d0w",
-                        "https://www.youtube.com/watch?v=b2f2Kqt_KcE",
-                        "https://www.youtube.com/watch?v=ba5F8G778C0",
-                    ],
-                    label="Examples", inputs=[youtube_url_in])
-                youtube_url_in.render()
-                download_youtube_btn = gr.Button("Download Youtube video")
-                download_youtube_btn.click(get_youtube, [youtube_url_in], [
-                    video_in])
-                print(video_in)
-
-            with gr.Column():
-                video_in.render()
-
-        with gr.Row():
-            gr.Markdown('''# 🎙 Extract text from video''')
-
-        with gr.Row():
-            with gr.Column():
-                selected_source_lang.render()
-                selected_whisper_model.render()
-                transcribe_btn = gr.Button("Transcribe audio and diarization")
-                transcribe_btn.click(speech_to_text, [video_in, selected_source_lang, selected_whisper_model], [output_text])
-            with gr.Column():
-                output_text.render()
-
-demo.launch(debug=True)
spaces/DonDoesStuff/Free-GPT3.5/README.md
DELETED
@@ -1,12 +0,0 @@
----
-title: Free GPT3.5
-emoji: 🐠
-colorFrom: pink
-colorTo: blue
-sdk: gradio
-sdk_version: 3.35.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/criteria/clip_loss.py
DELETED
@@ -1,17 +0,0 @@
-
-import torch
-import clip
-
-
-class CLIPLoss(torch.nn.Module):
-
-    def __init__(self, opts):
-        super(CLIPLoss, self).__init__()
-        self.model, self.preprocess = clip.load("ViT-B/32", device="cuda")
-        self.upsample = torch.nn.Upsample(scale_factor=7)
-        self.avg_pool = torch.nn.AvgPool2d(kernel_size=opts.stylegan_size // 32)
-
-    def forward(self, image, text):
-        image = self.avg_pool(self.upsample(image))
-        similarity = 1 - self.model(image, text)[0] / 100
-        return similarity
spaces/DragGan/DragGan/torch_utils/ops/upfirdn2d.h
DELETED
@@ -1,59 +0,0 @@
-// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-//
-// NVIDIA CORPORATION and its licensors retain all intellectual property
-// and proprietary rights in and to this software, related documentation
-// and any modifications thereto. Any use, reproduction, disclosure or
-// distribution of this software and related documentation without an express
-// license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-#include <cuda_runtime.h>
-
-//------------------------------------------------------------------------
-// CUDA kernel parameters.
-
-struct upfirdn2d_kernel_params
-{
-    const void* x;
-    const float* f;
-    void* y;
-
-    int2 up;
-    int2 down;
-    int2 pad0;
-    int flip;
-    float gain;
-
-    int4 inSize; // [width, height, channel, batch]
-    int4 inStride;
-    int2 filterSize; // [width, height]
-    int2 filterStride;
-    int4 outSize; // [width, height, channel, batch]
-    int4 outStride;
-    int sizeMinor;
-    int sizeMajor;
-
-    int loopMinor;
-    int loopMajor;
-    int loopX;
-    int launchMinor;
-    int launchMajor;
-};
-
-//------------------------------------------------------------------------
-// CUDA kernel specialization.
-
-struct upfirdn2d_kernel_spec
-{
-    void* kernel;
-    int tileOutW;
-    int tileOutH;
-    int loopMinor;
-    int loopX;
-};
-
-//------------------------------------------------------------------------
-// CUDA kernel selection.
-
-template <class T> upfirdn2d_kernel_spec choose_upfirdn2d_kernel(const upfirdn2d_kernel_params& p);
-
-//------------------------------------------------------------------------
spaces/Dragneel/Recon/README.md
DELETED
@@ -1,13 +0,0 @@
----
-title: Recon
-emoji: 🏆
-colorFrom: blue
-colorTo: blue
-sdk: streamlit
-sdk_version: 1.27.2
-app_file: app.py
-pinned: false
-license: afl-3.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/ECCV2022/bytetrack/tutorials/qdtrack/tracker_reid_motion.py
DELETED
@@ -1,397 +0,0 @@
|
|
1 |
-
import numpy as np
|
2 |
-
from collections import deque
|
3 |
-
import os
|
4 |
-
import os.path as osp
|
5 |
-
import copy
|
6 |
-
import torch
|
7 |
-
import torch.nn.functional as F
|
8 |
-
|
9 |
-
from mot_online.kalman_filter import KalmanFilter
|
10 |
-
from mot_online.basetrack import BaseTrack, TrackState
|
11 |
-
from mot_online import matching
|
12 |
-
|
13 |
-
|
14 |
-
|
15 |
-
class STrack(BaseTrack):
|
16 |
-
shared_kalman = KalmanFilter()
|
17 |
-
def __init__(self, tlwh, score, temp_feat, buffer_size=30):
|
18 |
-
|
19 |
-
# wait activate
|
20 |
-
self._tlwh = np.asarray(tlwh, dtype=np.float)
|
21 |
-
self.kalman_filter = None
|
22 |
-
self.mean, self.covariance = None, None
|
23 |
-
self.is_activated = False
|
24 |
-
|
25 |
-
self.score = score
|
26 |
-
self.tracklet_len = 0
|
27 |
-
|
28 |
-
self.smooth_feat = None
|
29 |
-
self.update_features(temp_feat)
|
30 |
-
self.features = deque([], maxlen=buffer_size)
|
31 |
-
self.alpha = 0.9
|
32 |
-
|
33 |
-
def update_features(self, feat):
|
34 |
-
feat /= np.linalg.norm(feat)
|
35 |
-
self.curr_feat = feat
|
36 |
-
if self.smooth_feat is None:
|
37 |
-
self.smooth_feat = feat
|
38 |
-
else:
|
39 |
-
self.smooth_feat = self.alpha * self.smooth_feat + (1 - self.alpha) * feat
|
40 |
-
self.features.append(feat)
|
41 |
-
self.smooth_feat /= np.linalg.norm(self.smooth_feat)
|
42 |
-
|
43 |
-
def predict(self):
|
44 |
-
mean_state = self.mean.copy()
|
45 |
-
if self.state != TrackState.Tracked:
|
46 |
-
mean_state[7] = 0
|
47 |
-
self.mean, self.covariance = self.kalman_filter.predict(mean_state, self.covariance)
|
48 |
-
|
49 |
-
@staticmethod
|
50 |
-
def multi_predict(stracks):
|
51 |
-
if len(stracks) > 0:
|
52 |
-
multi_mean = np.asarray([st.mean.copy() for st in stracks])
|
53 |
-
multi_covariance = np.asarray([st.covariance for st in stracks])
|
54 |
-
for i, st in enumerate(stracks):
|
55 |
-
if st.state != TrackState.Tracked:
|
56 |
-
multi_mean[i][7] = 0
|
57 |
-
multi_mean, multi_covariance = STrack.shared_kalman.multi_predict(multi_mean, multi_covariance)
|
58 |
-
for i, (mean, cov) in enumerate(zip(multi_mean, multi_covariance)):
|
59 |
-
stracks[i].mean = mean
|
60 |
-
stracks[i].covariance = cov
|
61 |
-
|
62 |
-
def activate(self, kalman_filter, frame_id):
|
63 |
-
"""Start a new tracklet"""
|
64 |
-
self.kalman_filter = kalman_filter
|
65 |
-
self.track_id = self.next_id()
|
66 |
-
self.mean, self.covariance = self.kalman_filter.initiate(self.tlwh_to_xyah(self._tlwh))
|
67 |
-
|
68 |
-
self.tracklet_len = 0
|
69 |
-
self.state = TrackState.Tracked
|
70 |
-
if frame_id == 1:
|
71 |
-
self.is_activated = True
|
72 |
-
# self.is_activated = True
|
73 |
-
self.frame_id = frame_id
|
74 |
-
self.start_frame = frame_id
|
75 |
-
|
76 |
-
def re_activate(self, new_track, frame_id, new_id=False):
|
77 |
-
self.mean, self.covariance = self.kalman_filter.update(
|
78 |
-
self.mean, self.covariance, self.tlwh_to_xyah(new_track.tlwh)
|
79 |
-
)
|
80 |
-
|
81 |
-
self.update_features(new_track.curr_feat)
|
82 |
-
self.tracklet_len = 0
|
83 |
-
self.state = TrackState.Tracked
|
84 |
-
self.is_activated = True
|
85 |
-
self.frame_id = frame_id
|
86 |
-
if new_id:
|
87 |
-
self.track_id = self.next_id()
|
88 |
-
|
89 |
-
def update(self, new_track, frame_id, update_feature=True):
|
90 |
-
"""
|
91 |
-
Update a matched track
|
92 |
-
:type new_track: STrack
|
93 |
-
:type frame_id: int
|
94 |
-
:type update_feature: bool
|
95 |
-
:return:
|
96 |
-
"""
|
97 |
-
self.frame_id = frame_id
|
98 |
-
self.tracklet_len += 1
|
99 |
-
|
100 |
-
new_tlwh = new_track.tlwh
|
101 |
-
self.mean, self.covariance = self.kalman_filter.update(
|
102 |
-
self.mean, self.covariance, self.tlwh_to_xyah(new_tlwh))
|
103 |
-
self.state = TrackState.Tracked
|
104 |
-
self.is_activated = True
|
105 |
-
|
106 |
-
self.score = new_track.score
|
107 |
-
if update_feature:
|
108 |
-
self.update_features(new_track.curr_feat)
|
109 |
-
|
110 |
-
@property
|
111 |
-
# @jit(nopython=True)
|
112 |
-
def tlwh(self):
|
113 |
-
"""Get current position in bounding box format `(top left x, top left y,
|
114 |
-
width, height)`.
|
115 |
-
"""
|
116 |
-
if self.mean is None:
|
117 |
-
return self._tlwh.copy()
|
118 |
-
ret = self.mean[:4].copy()
|
119 |
-
-        ret[2] *= ret[3]
-        ret[:2] -= ret[2:] / 2
-        return ret
-
-    @property
-    # @jit(nopython=True)
-    def tlbr(self):
-        """Convert bounding box to format `(min x, min y, max x, max y)`, i.e.,
-        `(top left, bottom right)`.
-        """
-        ret = self.tlwh.copy()
-        ret[2:] += ret[:2]
-        return ret
-
-    @staticmethod
-    # @jit(nopython=True)
-    def tlwh_to_xyah(tlwh):
-        """Convert bounding box to format `(center x, center y, aspect ratio,
-        height)`, where the aspect ratio is `width / height`.
-        """
-        ret = np.asarray(tlwh).copy()
-        ret[:2] += ret[2:] / 2
-        ret[2] /= ret[3]
-        return ret
-
-    def to_xyah(self):
-        return self.tlwh_to_xyah(self.tlwh)
-
-    @staticmethod
-    # @jit(nopython=True)
-    def tlbr_to_tlwh(tlbr):
-        ret = np.asarray(tlbr).copy()
-        ret[2:] -= ret[:2]
-        return ret
-
-    @staticmethod
-    # @jit(nopython=True)
-    def tlwh_to_tlbr(tlwh):
-        ret = np.asarray(tlwh).copy()
-        ret[2:] += ret[:2]
-        return ret
-
-    def __repr__(self):
-        return 'OT_{}_({}-{})'.format(self.track_id, self.start_frame, self.end_frame)
-
-
-class BYTETracker(object):
-    def __init__(self, frame_rate=30):
-        self.tracked_stracks = []  # type: list[STrack]
-        self.lost_stracks = []  # type: list[STrack]
-        self.removed_stracks = []  # type: list[STrack]
-
-        self.frame_id = 0
-
-        self.low_thresh = 0.2
-        self.track_thresh = 0.8
-        self.det_thresh = self.track_thresh + 0.1
-
-
-        self.buffer_size = int(frame_rate / 30.0 * 30)
-        self.max_time_lost = self.buffer_size
-        self.kalman_filter = KalmanFilter()
-
-    # def update(self, output_results):
-    def update(self, det_bboxes, det_labels, frame_id, track_feats):
-
-        # self.frame_id += 1
-        self.frame_id = frame_id + 1
-        activated_starcks = []
-        refind_stracks = []
-        lost_stracks = []
-        removed_stracks = []
-
-        # scores = output_results[:, 4]
-        # bboxes = output_results[:, :4]  # x1y1x2y2
-        scores = det_bboxes[:, 4].cpu().numpy()
-        bboxes = det_bboxes[:, :4].cpu().numpy()
-
-        track_feature = F.normalize(track_feats).cpu().numpy()
-
-        remain_inds = scores > self.track_thresh
-        dets = bboxes[remain_inds]
-        scores_keep = scores[remain_inds]
-        id_feature = track_feature[remain_inds]
-
-
-        inds_low = scores > self.low_thresh
-        inds_high = scores < self.track_thresh
-        inds_second = np.logical_and(inds_low, inds_high)
-        dets_second = bboxes[inds_second]
-        scores_second = scores[inds_second]
-        id_feature_second = track_feature[inds_second]
-
-        if len(dets) > 0:
-            '''Detections'''
-            detections = [STrack(STrack.tlbr_to_tlwh(tlbr), s, f) for
-                          (tlbr, s, f) in zip(dets, scores_keep, id_feature)]
-        else:
-            detections = []
-
-
-        ''' Add newly detected tracklets to tracked_stracks'''
-        unconfirmed = []
-        tracked_stracks = []  # type: list[STrack]
-        for track in self.tracked_stracks:
-            if not track.is_activated:
-                unconfirmed.append(track)
-            else:
-                tracked_stracks.append(track)
-
-        ''' Step 2: First association, with Kalman and IOU'''
-        strack_pool = joint_stracks(tracked_stracks, self.lost_stracks)
-        # Predict the current location with KF
-        STrack.multi_predict(strack_pool)
-
-        dists = matching.embedding_distance(strack_pool, detections)
-        dists = matching.fuse_motion(self.kalman_filter, dists, strack_pool, detections)
-        matches, u_track, u_detection = matching.linear_assignment(dists, thresh=0.6)
-        # dists = matching.iou_distance(strack_pool, detections)
-        # matches, u_track, u_detection = matching.linear_assignment(dists, thresh=0.8)
-
-        for itracked, idet in matches:
-            track = strack_pool[itracked]
-            det = detections[idet]
-            if track.state == TrackState.Tracked:
-                track.update(detections[idet], self.frame_id)
-                activated_starcks.append(track)
-            else:
-                track.re_activate(det, self.frame_id, new_id=False)
-                refind_stracks.append(track)
-
-        ''' Step 3: Second association, with IOU'''
-        detections = [detections[i] for i in u_detection]
-        r_tracked_stracks = [strack_pool[i] for i in u_track if strack_pool[i].state == TrackState.Tracked]
-        dists = matching.iou_distance(r_tracked_stracks, detections)
-        matches, u_track, u_detection = matching.linear_assignment(dists, thresh=0.5)
-
-        for itracked, idet in matches:
-            track = r_tracked_stracks[itracked]
-            det = detections[idet]
-            if track.state == TrackState.Tracked:
-                track.update(det, self.frame_id)
-                activated_starcks.append(track)
-            else:
-                track.re_activate(det, self.frame_id, new_id=False)
-                refind_stracks.append(track)
-
-
-        ''' Step 3.5: Second association, with IOU'''
-        # association the untrack to the low score detections
-        if len(dets_second) > 0:
-            '''Detections'''
-            detections_second = [STrack(STrack.tlbr_to_tlwh(tlbr), s, f) for
-                                 (tlbr, s, f) in zip(dets_second, scores_second, id_feature_second)]
-        else:
-            detections_second = []
-
-        second_tracked_stracks = [r_tracked_stracks[i] for i in u_track if r_tracked_stracks[i].state == TrackState.Tracked]
-        dists = matching.iou_distance(second_tracked_stracks, detections_second)
-        matches, u_track, u_detection_second = matching.linear_assignment(dists, thresh=0.5)
-        for itracked, idet in matches:
-            track = second_tracked_stracks[itracked]
-            det = detections_second[idet]
-            if track.state == TrackState.Tracked:
-                track.update(det, self.frame_id)
-                activated_starcks.append(track)
-            else:
-                track.re_activate(det, self.frame_id, new_id=False)
-                refind_stracks.append(track)
-
-        for it in u_track:
-            #track = r_tracked_stracks[it]
-            track = second_tracked_stracks[it]
-            if not track.state == TrackState.Lost:
-                track.mark_lost()
-                lost_stracks.append(track)
-
-        '''Deal with unconfirmed tracks, usually tracks with only one beginning frame'''
-        detections = [detections[i] for i in u_detection]
-        dists = matching.iou_distance(unconfirmed, detections)
-        matches, u_unconfirmed, u_detection = matching.linear_assignment(dists, thresh=0.7)
-        for itracked, idet in matches:
-            unconfirmed[itracked].update(detections[idet], self.frame_id)
-            activated_starcks.append(unconfirmed[itracked])
-        for it in u_unconfirmed:
-            track = unconfirmed[it]
-            track.mark_removed()
-            removed_stracks.append(track)
-
-        """ Step 4: Init new stracks"""
-        for inew in u_detection:
-            track = detections[inew]
-            if track.score < self.det_thresh:
-                continue
-            track.activate(self.kalman_filter, self.frame_id)
-            activated_starcks.append(track)
-        """ Step 5: Update state"""
-        for track in self.lost_stracks:
-            if self.frame_id - track.end_frame > self.max_time_lost:
-                track.mark_removed()
-                removed_stracks.append(track)
-
-        # print('Ramained match {} s'.format(t4-t3))
-
-        self.tracked_stracks = [t for t in self.tracked_stracks if t.state == TrackState.Tracked]
-        self.tracked_stracks = joint_stracks(self.tracked_stracks, activated_starcks)
-        self.tracked_stracks = joint_stracks(self.tracked_stracks, refind_stracks)
-        self.lost_stracks = sub_stracks(self.lost_stracks, self.tracked_stracks)
-        self.lost_stracks.extend(lost_stracks)
-        self.lost_stracks = sub_stracks(self.lost_stracks, self.removed_stracks)
-        self.removed_stracks.extend(removed_stracks)
-        self.tracked_stracks, self.lost_stracks = remove_duplicate_stracks(self.tracked_stracks, self.lost_stracks)
-        # get scores of lost tracks
-        output_stracks = [track for track in self.tracked_stracks if track.is_activated]
-
-        # return output_stracks
-
-        bboxes = []
-        labels = []
-        ids = []
-        for track in output_stracks:
-            if track.is_activated:
-                track_bbox = track.tlbr
-                bboxes.append([track_bbox[0], track_bbox[1], track_bbox[2], track_bbox[3], track.score])
-                labels.append(0)
-                ids.append(track.track_id)
-        return torch.tensor(bboxes), torch.tensor(labels), torch.tensor(ids)
-
-def joint_stracks(tlista, tlistb):
-    exists = {}
-    res = []
-    for t in tlista:
-        exists[t.track_id] = 1
-        res.append(t)
-    for t in tlistb:
-        tid = t.track_id
-        if not exists.get(tid, 0):
-            exists[tid] = 1
-            res.append(t)
-    return res
-
-
-def sub_stracks(tlista, tlistb):
-    stracks = {}
-    for t in tlista:
-        stracks[t.track_id] = t
-    for t in tlistb:
-        tid = t.track_id
-        if stracks.get(tid, 0):
-            del stracks[tid]
-    return list(stracks.values())
-
-
-def remove_duplicate_stracks(stracksa, stracksb):
-    pdist = matching.iou_distance(stracksa, stracksb)
-    pairs = np.where(pdist < 0.15)
-    dupa, dupb = list(), list()
-    for p, q in zip(*pairs):
-        timep = stracksa[p].frame_id - stracksa[p].start_frame
-        timeq = stracksb[q].frame_id - stracksb[q].start_frame
-        if timep > timeq:
-            dupb.append(q)
-        else:
-            dupa.append(p)
-    resa = [t for i, t in enumerate(stracksa) if not i in dupa]
-    resb = [t for i, t in enumerate(stracksb) if not i in dupb]
-    return resa, resb
-
-
-def remove_fp_stracks(stracksa, n_frame=10):
-    remain = []
-    for t in stracksa:
-        score_5 = t.score_list[-n_frame:]
-        score_5 = np.array(score_5, dtype=np.float32)
-        index = score_5 < 0.45
-        num = np.sum(index)
-        if num < n_frame:
-            remain.append(t)
-    return remain
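The deleted tracker above hinges on splitting each frame's detections into a high-confidence set (matched first with embedding plus motion cost) and a low-confidence set (kept only for the fallback IoU association in Step 3.5). The snippet below is a minimal, self-contained illustration of that thresholding step only; the scores array and printed indices are made-up values, and the 0.8/0.2 thresholds mirror track_thresh and low_thresh from the deleted __init__.

import numpy as np

# Hypothetical per-frame detector confidences (not taken from the deleted space).
scores = np.array([0.95, 0.85, 0.60, 0.30, 0.15], dtype=np.float32)
track_thresh, low_thresh = 0.8, 0.2  # same defaults as BYTETracker.__init__ above

first_stage = scores > track_thresh                   # used in the embedding/motion association
second_stage = np.logical_and(scores > low_thresh,    # low-score leftovers, only used in the
                              scores < track_thresh)  # fallback IoU association (Step 3.5)

print(np.where(first_stage)[0])   # -> [0 1]
print(np.where(second_stage)[0])  # -> [2 3]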
spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/pixel_decoder/ops/src/vision.cpp
DELETED
@@ -1,21 +0,0 @@
-/*!
-**************************************************************************************************
-* Deformable DETR
-* Copyright (c) 2020 SenseTime. All Rights Reserved.
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details]
-**************************************************************************************************
-* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
-**************************************************************************************************
-*/
-
-/*!
-* Copyright (c) Facebook, Inc. and its affiliates.
-* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
-*/
-
-#include "ms_deform_attn.h"
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
-  m.def("ms_deform_attn_forward", &ms_deform_attn_forward, "ms_deform_attn_forward");
-  m.def("ms_deform_attn_backward", &ms_deform_attn_backward, "ms_deform_attn_backward");
-}
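vision.cpp only binds the forward/backward entry points declared in ms_deform_attn.h; it still has to be compiled together with the CPU/CUDA kernels that live next to it. The sketch below shows one common way to JIT-compile such a pybind11 extension with torch.utils.cpp_extension.load. The src_dir path and glob patterns are assumptions about the surrounding ops/src layout, and the upstream repo normally builds this module through its own setup.py rather than this way.

# A rough sketch, not the repo's build script: JIT-compile the module defined in
# vision.cpp, assuming the companion cpu/*.cpp and cuda/*.cu sources from the same
# ops/src directory are present and a CUDA toolchain is available.
import glob
from torch.utils.cpp_extension import load

src_dir = "mask2former/modeling/pixel_decoder/ops/src"  # assumed checkout layout
sources = ([f"{src_dir}/vision.cpp"]
           + glob.glob(f"{src_dir}/cpu/*.cpp")
           + glob.glob(f"{src_dir}/cuda/*.cu"))

ms_deform_attn = load(
    name="MultiScaleDeformableAttention",
    sources=sources,
    extra_include_paths=[src_dir],
    verbose=True,
)
# The two functions bound in vision.cpp are then callable as
# ms_deform_attn.ms_deform_attn_forward(...) and ms_deform_attn.ms_deform_attn_backward(...).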
spaces/EuroPython2022/illustrated-lyrics-generator/layers.py
DELETED
@@ -1,273 +0,0 @@
-# Source: https://huggingface.co/huggan/fastgan-few-shot-fauvism-still-life
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-from torch.nn.modules.batchnorm import BatchNorm2d
-from torch.nn.utils import spectral_norm
-
-
-class SpectralConv2d(nn.Module):
-
-    def __init__(self, *args, **kwargs):
-        super().__init__()
-        self._conv = spectral_norm(
-            nn.Conv2d(*args, **kwargs)
-        )
-
-    def forward(self, input: torch.Tensor) -> torch.Tensor:
-        return self._conv(input)
-
-
-class SpectralConvTranspose2d(nn.Module):
-
-    def __init__(self, *args, **kwargs):
-        super().__init__()
-        self._conv = spectral_norm(
-            nn.ConvTranspose2d(*args, **kwargs)
-        )
-
-    def forward(self, input: torch.Tensor) -> torch.Tensor:
-        return self._conv(input)
-
-
-class Noise(nn.Module):
-
-    def __init__(self):
-        super().__init__()
-        self._weight = nn.Parameter(
-            torch.zeros(1),
-            requires_grad=True,
-        )
-
-    def forward(self, input: torch.Tensor) -> torch.Tensor:
-        batch_size, _, height, width = input.shape
-        noise = torch.randn(batch_size, 1, height, width, device=input.device)
-        return self._weight * noise + input
-
-
-class InitLayer(nn.Module):
-
-    def __init__(self, in_channels: int,
-                       out_channels: int):
-        super().__init__()
-
-        self._layers = nn.Sequential(
-            SpectralConvTranspose2d(
-                in_channels=in_channels,
-                out_channels=out_channels * 2,
-                kernel_size=4,
-                stride=1,
-                padding=0,
-                bias=False,
-            ),
-            nn.BatchNorm2d(num_features=out_channels * 2),
-            nn.GLU(dim=1),
-        )
-
-    def forward(self, input: torch.Tensor) -> torch.Tensor:
-        return self._layers(input)
-
-
-class SLEBlock(nn.Module):
-
-    def __init__(self, in_channels: int,
-                       out_channels: int):
-        super().__init__()
-
-        self._layers = nn.Sequential(
-            nn.AdaptiveAvgPool2d(output_size=4),
-            SpectralConv2d(
-                in_channels=in_channels,
-                out_channels=out_channels,
-                kernel_size=4,
-                stride=1,
-                padding=0,
-                bias=False,
-            ),
-            nn.SiLU(),
-            SpectralConv2d(
-                in_channels=out_channels,
-                out_channels=out_channels,
-                kernel_size=1,
-                stride=1,
-                padding=0,
-                bias=False,
-            ),
-            nn.Sigmoid(),
-        )
-
-    def forward(self, low_dim: torch.Tensor,
-                      high_dim: torch.Tensor) -> torch.Tensor:
-        return high_dim * self._layers(low_dim)
-
-
-class UpsampleBlockT1(nn.Module):
-
-    def __init__(self, in_channels: int,
-                       out_channels: int):
-        super().__init__()
-
-        self._layers = nn.Sequential(
-            nn.Upsample(scale_factor=2, mode='nearest'),
-            SpectralConv2d(
-                in_channels=in_channels,
-                out_channels=out_channels * 2,
-                kernel_size=3,
-                stride=1,
-                padding='same',
-                bias=False,
-            ),
-            nn.BatchNorm2d(num_features=out_channels * 2),
-            nn.GLU(dim=1),
-        )
-
-    def forward(self, input: torch.Tensor) -> torch.Tensor:
-        return self._layers(input)
-
-
-class UpsampleBlockT2(nn.Module):
-
-    def __init__(self, in_channels: int,
-                       out_channels: int):
-        super().__init__()
-
-        self._layers = nn.Sequential(
-            nn.Upsample(scale_factor=2, mode='nearest'),
-            SpectralConv2d(
-                in_channels=in_channels,
-                out_channels=out_channels * 2,
-                kernel_size=3,
-                stride=1,
-                padding='same',
-                bias=False,
-            ),
-            Noise(),
-            BatchNorm2d(num_features=out_channels * 2),
-            nn.GLU(dim=1),
-            SpectralConv2d(
-                in_channels=out_channels,
-                out_channels=out_channels * 2,
-                kernel_size=3,
-                stride=1,
-                padding='same',
-                bias=False,
-            ),
-            Noise(),
-            nn.BatchNorm2d(num_features=out_channels * 2),
-            nn.GLU(dim=1),
-        )
-
-    def forward(self, input: torch.Tensor) -> torch.Tensor:
-        return self._layers(input)
-
-
-class DownsampleBlockT1(nn.Module):
-
-    def __init__(self, in_channels: int,
-                       out_channels: int):
-        super().__init__()
-
-        self._layers = nn.Sequential(
-            SpectralConv2d(
-                in_channels=in_channels,
-                out_channels=out_channels,
-                kernel_size=4,
-                stride=2,
-                padding=1,
-                bias=False,
-            ),
-            nn.BatchNorm2d(num_features=out_channels),
-            nn.LeakyReLU(negative_slope=0.2),
-        )
-
-    def forward(self, input: torch.Tensor) -> torch.Tensor:
-        return self._layers(input)
-
-
-class DownsampleBlockT2(nn.Module):
-
-    def __init__(self, in_channels: int,
-                       out_channels: int):
-        super().__init__()
-
-        self._layers_1 = nn.Sequential(
-            SpectralConv2d(
-                in_channels=in_channels,
-                out_channels=out_channels,
-                kernel_size=4,
-                stride=2,
-                padding=1,
-                bias=False,
-            ),
-            nn.BatchNorm2d(num_features=out_channels),
-            nn.LeakyReLU(negative_slope=0.2),
-            SpectralConv2d(
-                in_channels=out_channels,
-                out_channels=out_channels,
-                kernel_size=3,
-                stride=1,
-                padding='same',
-                bias=False,
-            ),
-            nn.BatchNorm2d(num_features=out_channels),
-            nn.LeakyReLU(negative_slope=0.2),
-        )
-
-        self._layers_2 = nn.Sequential(
-            nn.AvgPool2d(
-                kernel_size=2,
-                stride=2,
-            ),
-            SpectralConv2d(
-                in_channels=in_channels,
-                out_channels=out_channels,
-                kernel_size=1,
-                stride=1,
-                padding=0,
-                bias=False,
-            ),
-            nn.BatchNorm2d(num_features=out_channels),
-            nn.LeakyReLU(negative_slope=0.2),
-        )
-
-    def forward(self, input: torch.Tensor) -> torch.Tensor:
-        t1 = self._layers_1(input)
-        t2 = self._layers_2(input)
-        return (t1 + t2) / 2
-
-
-class Decoder(nn.Module):
-
-    def __init__(self, in_channels: int,
-                       out_channels: int):
-        super().__init__()
-
-        self._channels = {
-            16: 128,
-            32: 64,
-            64: 64,
-            128: 32,
-            256: 16,
-            512: 8,
-            1024: 4,
-        }
-
-        self._layers = nn.Sequential(
-            nn.AdaptiveAvgPool2d(output_size=8),
-            UpsampleBlockT1(in_channels=in_channels, out_channels=self._channels[16]),
-            UpsampleBlockT1(in_channels=self._channels[16], out_channels=self._channels[32]),
-            UpsampleBlockT1(in_channels=self._channels[32], out_channels=self._channels[64]),
-            UpsampleBlockT1(in_channels=self._channels[64], out_channels=self._channels[128]),
-            SpectralConv2d(
-                in_channels=self._channels[128],
-                out_channels=out_channels,
-                kernel_size=3,
-                stride=1,
-                padding='same',
-                bias=False,
-            ),
-            nn.Tanh(),
-        )
-
-    def forward(self, input: torch.Tensor) -> torch.Tensor:
-        return self._layers(input)
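The deleted layers.py collects FastGAN building blocks: spectrally normalized convolutions, GLU-gated upsampling, a skip-layer excitation block (SLEBlock) that gates a high-resolution feature map with a squeezed low-resolution one, and a small Decoder head. As a quick sanity check of how the pieces compose, the shape trace below assumes the file is saved locally as layers.py; the batch size and channel counts are arbitrary choices, not values from the deleted space.

# Minimal shape trace (illustrative; assumes the module above is importable as layers.py
# and PyTorch >= 1.9 for padding='same').
import torch
from layers import InitLayer, UpsampleBlockT1, SLEBlock

latent = torch.randn(2, 256, 1, 1)           # (batch, channels, 1, 1) noise input
feat_4 = InitLayer(256, 512)(latent)         # transposed conv + GLU -> (2, 512, 4, 4)
feat_8 = UpsampleBlockT1(512, 256)(feat_4)   # nearest upsample + conv + GLU -> (2, 256, 8, 8)
gated = SLEBlock(512, 256)(feat_4, feat_8)   # channel-wise gating, shape unchanged
print(feat_4.shape, feat_8.shape, gated.shape)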