parquet-converter committed on
Commit 61c6362 · 1 Parent(s): a111184

Update parquet files (step 49 of 121)

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Deewaar in hindi torrent download Enjoy the legendary drama of two brothers on opposite sides of the law.md +0 -16
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/EADO 2022 Where to Find and Download the Best PowerPoint Slides on Skin Cancer.md +0 -24
  3. spaces/1acneusushi/gradio-2dmoleculeeditor/data/GridinSoft Anti-Malware 4.1.30 Crack License Keys 2020 [Latest].md +0 -93
  4. spaces/1gistliPinn/ChatGPT4/Examples/Downloadwindowsxpprofessionalx64editionsp3sataedition !!TOP!!.md +0 -6
  5. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bitcoin Pop A Modern Crypto Twist on the Classic Bubble Shooter Game!.md +0 -83
  6. spaces/1phancelerku/anime-remove-background/Attack on Titan AOT Mobile Fan Game v3.0 APK Offline An Immersive and Action-Packed Adventure.md +0 -65
  7. spaces/1phancelerku/anime-remove-background/Enjoy Stumble Guys in Your Browser - No APK Required.md +0 -130
  8. spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Engineering Interviews 4be8039581d04456b0151f2cc4b22130/Questions ede8818b3a0e447f80145905690eb3f6/FizzBuzz 70828a5e5e6846a48686f66bb9ccc8b6.md +0 -46
  9. spaces/AIFILMS/ControlNet-Video/style.css +0 -105
  10. spaces/AIFILMS/StyleGANEX/models/stylegan2/op/upfirdn2d.py +0 -61
  11. spaces/ASJMO/freegpt/g4f/Provider/Providers/helpers/phind.py +0 -69
  12. spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/types/WebSearch.ts +0 -36
  13. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspective/Perspective.d.ts +0 -2
  14. spaces/AlexWang/lama/bin/gen_mask_dataset_hydra.py +0 -124
  15. spaces/AlexWang/lama/models/ade20k/segm_lib/utils/data/dataloader.py +0 -425
  16. spaces/Ananthap4/itineraryGenerator/README.md +0 -12
  17. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_ddpm_parallel.py +0 -604
  18. spaces/Andy1621/uniformer_image_detection/mmdet/datasets/deepfashion.py +0 -10
  19. spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_480x480_40k_pascal_context_59.py +0 -10
  20. spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r18-d8_512x1024_80k_cityscapes.py +0 -9
  21. spaces/AnimalEquality/chatbot/lv_recipe_chatbot/ingredient_vision.py +0 -132
  22. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/video/optflow.py +0 -254
  23. spaces/AsakuraMizu/moe-tts/export_model.py +0 -13
  24. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/pager.py +0 -34
  25. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tools/deploy/torchscript_mask_rcnn.cpp +0 -187
  26. spaces/AyameYODAYO/xijinpingx/style.css +0 -28
  27. spaces/Aziizzz/ChestXrayClassification/app.py +0 -107
  28. spaces/BenjaminB/pyscript-demo/index.html +0 -57
  29. spaces/Benson/text-generation/Examples/Descargar Gratis Youtube Apk.md +0 -62
  30. spaces/BetterAPI/BetterChat_new/src/routes/conversation/[id]/stop-generating/+server.ts +0 -27
  31. spaces/Blessin/movie-poster-generator/README.md +0 -13
  32. spaces/CVPR/LIVE/pybind11/tests/test_operator_overloading.py +0 -145
  33. spaces/CVPR/LIVE/thrust/thrust/equal.h +0 -238
  34. spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/scan.h +0 -23
  35. spaces/CVPR/Text2Human/Text2Human/data/parsing_generation_segm_attr_dataset.py +0 -80
  36. spaces/CVPR/WALT/mmdet/core/bbox/coder/pseudo_bbox_coder.py +0 -18
  37. spaces/CVPR/WALT/mmdet/core/evaluation/__init__.py +0 -15
  38. spaces/CVPR/WALT/mmdet/models/losses/accuracy.py +0 -78
  39. spaces/CVPR/WALT/mmdet/models/roi_heads/__init__.py +0 -43
  40. spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/components/WebSocket.js +0 -134
  41. spaces/ClinBAY/Safeterm_Demo/send_email_request.py +0 -98
  42. spaces/CognitiveLabs/GPT-auto-webscraping/chains/output_format/base.py +0 -19
  43. spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/ROIAlign.h +0 -46
  44. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/etree.py +0 -478
  45. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/featureVars.py +0 -605
  46. spaces/DeepFloyd/IF/model.py +0 -313
  47. spaces/DylanWolf/h2ogpt-api/app.py +0 -12
  48. spaces/ECCV2022/bytetrack/yolox/data/data_prefetcher.py +0 -77
  49. spaces/ECCV2022/bytetrack/yolox/deepsort_tracker/track.py +0 -158
  50. spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.cpp +0 -46
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Deewaar in hindi torrent download Enjoy the legendary drama of two brothers on opposite sides of the law.md DELETED
@@ -1,16 +0,0 @@
-
- <h1>Deewaar in hindi torrent download: How to watch the classic Bollywood movie online</h1>
- <p>If you are a fan of Bollywood movies, you have probably heard of Deewaar, one of the most iconic films in Indian cinema history. Released in 1975, Deewaar is a crime drama that explores the themes of brotherhood, loyalty, corruption, and social injustice. It stars Amitabh Bachchan and Shashi Kapoor as two brothers who take different paths in life, one becoming a gangster and the other a police officer. The movie was a huge commercial and critical success, earning several awards and accolades. It also influenced many filmmakers and actors in India and abroad, such as Quentin Tarantino, Danny Boyle, Rajkumar Hirani, and Shah Rukh Khan.</p>
- <h2>Deewaar in hindi torrent download</h2><br /><p><b><b>DOWNLOAD</b> &#10042; <a href="https://byltly.com/2uKzaH">https://byltly.com/2uKzaH</a></b></p><br /><br />
- <p>But how can you watch this masterpiece online if you don't have access to a DVD or a streaming service that offers it? One option that many people resort to is downloading Deewaar in hindi torrent from various websites. However, this method is not only illegal but also risky, as it can expose you to malware, viruses, legal troubles, and poor quality videos. In this article, we will tell you why you should avoid using torrent sites to watch Deewaar online, and what are some better alternatives that are safe and legal. We will also give you some tips and tricks for finding Deewaar in hindi online easily and quickly.</p>
- <h2>What is Deewaar and why is it a must-watch movie?</h2>
- <p>Before we dive into the details of how to watch Deewaar online, let's first understand what makes this movie so special and why you should watch it if you haven't already. Here are some of the reasons why Deewaar is a must-watch movie for any Bollywood lover.</p>
- <h3>The plot and the themes of Deewaar</h3>
- <p>The story of Deewaar revolves around two brothers, Vijay (Amitabh Bachchan) and Ravi (Shashi Kapoor), who grow up in poverty after their father (Satyendra Kapoor) is framed for a crime he didn't commit by a corrupt businessman (Iftekhar). Vijay becomes bitter and disillusioned with society, and joins a gang led by Samant (Madan Puri), while Ravi becomes an honest and upright police officer. The brothers clash with each other over their conflicting ideologies and loyalties, leading to a dramatic confrontation that tests their bond.</p>
- <p>The movie explores various themes such as family, friendship, morality, justice, violence, class struggle, and urban decay. It also reflects the socio-political context of India in the 1970s, when the country was facing economic crisis, political unrest, labor strikes, and corruption scandals. The movie portrays the plight of the common man who is oppressed by the system and has to resort to crime or rebellion to survive. It also questions the role of law enforcement and its effectiveness in dealing with crime and corruption.</p>
- <h3>The cast and the crew of Deewaar</h3>
- <p>Deewaar boasts of an impressive cast and crew who delivered stellar performances and technical excellence. Amitabh Bachchan and Shashi Kapoor are brilliant as the two brothers who share a deep love but also a bitter rivalry. They showcase their acting range by portraying complex emotions such as anger, pain, guilt, pride, and remorse. Their chemistry is palpable and their dialogues are memorable. The movie also features other talented actors such as Nirupa Roy as the mother of Vijay and Ravi; Parveen Babi as Anita, Vijay's love interest; Neetu Singh as Veera, Ravi's love interest; Nirupa Roy as Sumitra Devi; Iftekhar as Deshmukh; Madan Puri as Samant; Sudhir as Jaichand; Jagdish Raj as Jaggi; Alankar Joshi as young Vijay; Raju Shrestha as young Ravi; Manmohan Krishna as DCP Narang; Yunus Parvez as Rahim Chacha; Raj Kishore as Darpan; Shetty as Shetty; Mac Mohan as Mac; Viju Khote as Viju; Mohan Sherry as Peter; Satyendra Kapoor as Anand Verma; Kamal Kapoor as Mr Agarwal; Rajpal Yadav as Munna Bhaiya; Ramesh Deo as Sub-Inspector Shinde; Murad as Police Commissioner.</p>
- <p>The movie was directed by Yash Chopra, one of the most celebrated filmmakers in Indian cinema history. He was known for his versatility and his ability to create engaging stories across different genres such as romance, drama, thriller, action, comedy, musicals etc. He was also known for his collaboration with Amitabh Bachchan in several hit movies such as Zanjeer (1973), Kabhi Kabhie (1976), Trishul (1978), Kaala Patthar (1979), Silsila (1981), Mashaal (1984), Lamhe (1991), Veer-Zaara (2004) etc. He won six National Film Awards and 11 Filmfare Awards for his work.</p>
- <p>The movie was written by Salim-Javed</p> 0a6ba089eb<br />
- <br />
- <br />
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/EADO 2022 Where to Find and Download the Best PowerPoint Slides on Skin Cancer.md DELETED
@@ -1,24 +0,0 @@
-
- <h1>How to Download PowerPoint Presentations for EADO 2022</h1>
- <p>If you are planning to attend the 19th EADO Congress in Stockholm, Sweden, on May 10-13, 2022, you might be interested in downloading some PowerPoint presentations to prepare for the event. The EADO Congress is a major international meeting that brings together experts and researchers in the field of dermato-oncology, the study and treatment of skin cancers. The congress will feature keynote lectures, symposia, workshops, oral and poster presentations, and networking opportunities.</p>
- <h2>download powerpoint crackeado 2022</h2><br /><p><b><b>Download Zip</b> &#127775; <a href="https://byltly.com/2uKwTT">https://byltly.com/2uKwTT</a></b></p><br /><br />
- <p>There are two ways to download PowerPoint presentations for EADO 2022:</p>
- <ol>
- <li>From the official website of the congress: <a href="https://eado2022.com/">https://eado2022.com/</a>. Here you can find the scientific program, the abstract submission guidelines, the registration information, and the sponsors and exhibitors. You can also access some of the previous congresses' presentations by clicking on the "Past Congresses" tab and selecting the year of your interest.</li>
- <li>From Microsoft PowerPoint: If you have a Microsoft 365 subscription, you can use PowerPoint to create your own presentations or download templates from the online library. You can also use PowerPoint on the web for free by signing in with a Microsoft account. To download PowerPoint or access it online, visit <a href="https://www.microsoft.com/en-ww/microsoft-365/powerpoint">https://www.microsoft.com/en-ww/microsoft-365/powerpoint</a>. You can search for "EADO" or "dermato-oncology" in the template gallery to find relevant designs.</li>
- </ol>
- <p>We hope this article helps you download PowerPoint presentations for EADO 2022. We look forward to seeing you at the congress!</p><p>Here are some more paragraphs for the article:</p>
- <p>Why attend EADO 2022?</p>
- <p></p>
- <p>EADO 2022 is a great opportunity to learn from the leading experts in dermato-oncology, share your research and clinical experience, and network with colleagues from around the world. You will be able to update your knowledge on the latest advances and challenges in the diagnosis, prevention, and treatment of skin cancers, including melanoma, non-melanoma skin cancer, cutaneous lymphoma, and rare tumors. You will also be able to participate in interactive sessions, workshops, and debates on topics such as immunotherapy, targeted therapy, surgery, radiotherapy, dermatopathology, dermoscopy, and more.</p>
- <p>How to prepare for EADO 2022?</p>
- <p>To make the most of your attendance at EADO 2022, we recommend that you:</p>
- <ul>
- <li>Register early to secure your place and benefit from the early bird rates. You can register online at <a href="https://eado2022.com/registration/">https://eado2022.com/registration/</a>.</li>
- <li>Submit your abstract before the deadline of January 15, 2022. You can submit your abstract online at <a href="https://eado2022.com/abstracts/">https://eado2022.com/abstracts/</a>. You can choose between oral or poster presentation formats. The best abstracts will be awarded prizes and published in the Journal of the European Academy of Dermatology and Venereology.</li>
- <li>Book your accommodation and travel arrangements in advance. You can find information on the congress venue, hotels, transportation, and visa requirements at <a href="https://eado2022.com/general-information/">https://eado2022.com/general-information/</a>.</li>
- <li>Download the EADO 2022 app to access the congress program, speakers' bios, abstracts, exhibitors' list, floor plans, and more. You can also use the app to create your personal agenda, rate sessions, ask questions, and interact with other attendees. The app will be available for download a few weeks before the congress.</li>
- </ul>
- <p>We hope you enjoy EADO 2022 and have a productive and rewarding experience!</p> ddb901b051<br />
- <br />
- <br />
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/GridinSoft Anti-Malware 4.1.30 Crack License Keys 2020 [Latest].md DELETED
@@ -1,93 +0,0 @@
-
- <h1>GridinSoft Anti-Malware 4.1.30 Crack License Keys 2020 [Latest]: A Powerful Tool to Protect Your PC from Malware</h1>
- <p>Malware is a serious threat to your computer and your privacy. It can infect your system through various ways, such as email attachments, downloads, pop-ups, fake updates, etc. Malware can damage your files, slow down your PC, steal your personal information, monitor your online activities, and even lock your system until you pay a ransom.</p>
- <p>That's why you need a reliable anti-malware solution that can detect and remove malware from your PC effectively and efficiently. One such solution is <strong>GridinSoft Anti-Malware</strong>, an impressive application that has been developed specifically for the automatic removal of viruses, bots, spyware, keyloggers, trojans, scareware, rootkits, and other malicious software.</p>
- <h2>GridinSoft Anti-Malware 4.1.30 Crack License Keys 2020 [Latest]</h2><br /><p><b><b>Download Zip</b> >>> <a href="https://byltly.com/2uKzdu">https://byltly.com/2uKzdu</a></b></p><br /><br />
- <p>In this article, we will show you how to download, install, activate, and use GridinSoft Anti-Malware with crack license keys 2020 [latest] to protect your PC from malware. We will also answer some frequently asked questions about GridinSoft Anti-Malware.</p>
- <h2>What is GridinSoft Anti-Malware?</h2>
- <p>GridinSoft Anti-Malware is an excellent anti-malware solution that has been designed to provide high-speed system scanning process without slowing down your PC. It has a user-friendly and simple interface that makes it easy to use for both beginners and experts.</p>
- <h3>Features and benefits of GridinSoft Anti-Malware</h3>
- <p>Some of the features and benefits of GridinSoft Anti-Malware are:</p>
- <ul>
- <li>It can automatically delete viruses, bots, spyware, keyloggers, trojans of using crack license keys for GridinSoft Anti-Malware, such as:</p>
- <ul>
- <li>You may violate the terms and conditions of GridinSoft Anti-Malware and face legal consequences.</li>
- <li>You may expose your PC to malware or viruses that may be hidden in the crack license keys file or the source website.</li>
- <li>You may not receive any technical support or customer service from GridinSoft Anti-Malware.</li>
- </ul>
- <p>Therefore, you should use crack license keys for GridinSoft Anti-Malware at your own risk and discretion. We do not recommend or endorse the use of crack license keys for GridinSoft Anti-Malware or any other software.</p>
- <h2>How to use GridinSoft Anti-Malware to scan and remove malware from your PC?</h2>
- <p>Now that you have activated GridinSoft Anti-Malware with crack license keys, you can use it to scan and remove malware from your PC. Here are the steps to do so:</p>
- <h3>Types of scans available in GridinSoft Anti-Malware</h3>
- <p>GridinSoft Anti-Malware offers four types of scans for your convenience and preference. They are:</p>
- <p></p>
- <ul>
- <li><strong>Standard scan:</strong> This is the default and recommended scan mode that scans your system memory, startup items, registry, and system drive for malware. It takes a few minutes to complete and provides a comprehensive overview of your system status.</li>
- <li><strong>Quick scan:</strong> This is a faster scan mode that scans only the most critical areas of your system for malware. It takes a few seconds to complete and provides a brief summary of your system status.</li>
- <li><strong>Full scan:</strong> This is a thorough scan mode that scans all the drives and folders on your PC for malware. It takes a long time to complete and provides a detailed report of your system status.</li>
- <li><strong>Removable scan:</strong> This is a special scan mode that scans only the removable devices such as USB flash drives, external hard drives, memory cards, etc. that are connected to your PC for malware. It takes a variable time to complete depending on the size and number of the devices and provides a specific report of their status.</li>
- </ul>
- <h3>How to start and customize a scan in GridinSoft Anti-Malware?</h3>
- <p>To start and customize a scan in GridinSoft Anti-Malware, you need to follow these steps:</p>
- <ol>
- <li>Open GridinSoft Anti-Malware and click on the "Scan" button at the top left corner of the main window.</li>
- <li>Select the type of scan that you want to perform from the four options: standard scan, quick scan, full scan, or removable scan.</li>
- <li>If you want to customize the scan settings, click on the "Settings" button at the bottom right corner of the scan window. You can change the options such as scan priority, heuristic rules, file types, file size, etc.</li>
- <li>Click on the "Start Scan" button to begin the scanning process.</li>
- </ol>
- <p>Wait for GridinSoft Anti-Malware to finish scanning your PC and display the results.</p>
- <h3>How to view and manage scan results in GridinSoft Anti-Malware?</h3>
- <p>To view and manage scan results in GridinSoft Anti-Malware, you need to follow these steps:</p>
- <ol>
- <li>After the scanning process is completed, GridinSoft Anti-Malware will show you a summary of the results, such as the number of scanned items, detected threats, removed threats, etc.</li>
- <li>If you want to see more details about the results, click on the "View Results" button at the bottom right corner of the summary window. You will see a list of all the detected threats with their names, locations, types, and statuses.</li>
- <li>If you want to remove all the detected threats from your PC, click on the "Fix Now" button at the bottom right corner of the results window. GridinSoft Anti-Malware will automatically delete all the threats from your PC.</li>
- <li>If you want to remove only some of the detected threats from your PC, uncheck the boxes next to the threats that you want to keep on your PC. Then, click on the "Fix Selected" button at the bottom right corner of the results window. GridinSoft Anti-Malware will delete only the selected threats from your PC.</li>
- <li>If you want to ignore some of the detected threats from your PC, check the boxes next to the threats that you want to ignore. Then, click on the "Ignore Selected" button at the bottom right corner of the results window. GridinSoft Anti-Malware will add the selected threats to the ignore list and will not scan them again in the future.</li>
- <li>If you want to restore some of the removed threats to your PC, click on the "Restore" button at the bottom left corner of the results window. You will see a list of all the removed threats with their names, locations, types, and statuses. Check the boxes next to the threats that you want to restore. Then, click on the "Restore Selected" button at the bottom right corner of the restore window. GridinSoft Anti-Malware will restore the selected threats to their original locations on your PC.</li>
- </ol>
- <p>Congratulations! You have successfully used GridinSoft Anti-Malware to scan and remove malware from your PC.</p>
- <h2>How to reset browser settings with GridinSoft Anti-Malware?</h2>
- <p>Sometimes, malware can alter your browser settings, such as changing your homepage, search engine, new tab page, extensions, etc. This can affect your browsing experience and expose you to more malware or phishing sites. To fix this problem, you can use GridinSoft Anti-Malware to reset your browser settings to default. Here are the steps to do so:</p>
- <h3>Why you need to reset browser settings with GridinSoft Anti-Malware?</h3>
- <p>Some of the reasons why you need to reset browser settings with GridinSoft Anti-Malware are:</p>
- <ul>
- <li>You can restore your browser settings to their original state and get rid of any unwanted changes made by malware.</li>
- <li>You can prevent malware from redirecting you to malicious or suspicious websites that may harm your PC or steal your data.</li>
- <li>You can improve your browser performance and speed by removing any unnecessary or harmful extensions or plugins.</li>
- <li>You can enhance your browser security and privacy by clearing any cookies, cache, history, or other data that may be used by malware or hackers.</li>
- </ul>
- <h3>How to reset browser settings with GridinSoft Anti-Malware?</h3>
- <p>To reset browser settings with GridinSoft Anti-Malware, you need to follow these steps:</p>
- <ol>
- <li>Open GridinSoft Anti-Malware and click on the "Tools" button at the top right corner of the main window.</li>
- <li>Select "Reset Browser Settings" from the drop-down menu and click on it.</li>
- <li>Select the browsers that you want to reset from the list of available browsers. You can choose one or more browsers depending on your preference.</li>
- <li>Check or uncheck the options that you want to reset for each browser. You can choose to reset homepage, search engine, new tab page, extensions, cookies, cache, history, etc.</li>
- <li>Click on the "Reset" button at the bottom right corner of the reset window and wait for GridinSoft Anti-Malware to complete the resetting process.</li>
- </ol>
- <p>Congratulations! You have successfully reset your browser settings with GridinSoft Anti-Malware. Now you can enjoy a safer and smoother browsing experience.</p>
- <h2>Conclusion</h2>
- <p>GridinSoft Anti-Malware is a powerful tool to protect your PC from malware. It can scan and remove malware from your PC effectively and efficiently. It can also reset your browser settings to default if they have been altered by malware. You can download, install, activate, and use GridinSoft Anti-Malware with crack license keys 2020 [latest] to enjoy its full features and benefits. However, you should also be aware of the disadvantages and risks of using crack license keys for GridinSoft Anti-Malware, such as violating the terms and conditions of the program, exposing your PC to malware or viruses, and not receiving any technical support or customer service. Therefore, you should use crack license keys for GridinSoft Anti-Malware at your own risk and discretion.</p>
- <p>We hope that this article has helped you understand how to use GridinSoft Anti-Malware with crack license keys 2020 [latest] to protect your PC from malware. If you have any questions or feedback, please feel free to contact us or leave a comment below. Thank you for reading!</p>
- <h2>FAQs</h2>
- <p>Here are some frequently asked questions about GridinSoft Anti-Malware:</p>
- <h3>Q: Is GridinSoft Anti-Malware safe to use?</h3>
- <p>A: Yes, GridinSoft Anti-Malware is safe to use as long as you download it from the official website and activate it with the official license keys. However, if you use crack license keys for GridinSoft Anti-Malware, you may expose your PC to malware or viruses that may be hidden in the crack license keys file or the source website.</p>
- <h3>Q: How much does GridinSoft Anti-Malware cost?</h3>
- <p>A: GridinSoft Anti-Malware offers a free trial version for 15 days, and a lifetime license for $29.95. You can also get discounts and offers if you buy multiple licenses or subscribe to their newsletter.</p>
- <h3>Q: How can I update GridinSoft Anti-Malware?</h3>
- <p>A: GridinSoft Anti-Malware can update its database automatically if you enable the option in the settings. You can also update it manually by clicking on the "Update" button at the top right corner of the main window.</p>
- <h3>Q: How can I contact GridinSoft Anti-Malware support?</h3>
- <p>A: You can contact GridinSoft Anti-Malware support via email, phone, or online chat. You can find their contact details on their official website at <a href="">https://gridinsoft.com/support/</a>.</p>
- <h3>Q: What are the system requirements for GridinSoft Anti-Malware?</h3>
- <p>A: The system requirements for GridinSoft Anti-Malware are:</p>
- <ul>
- <li>Operating system: Windows XP/Vista/7/8/10</li>
- <li>Processor: 800 MHz CPU or higher</li>
- <li>Memory: 256 MB RAM or higher</li>
- <li>Disk space: 50 MB free disk space or higher</li>
- <li>Internet connection: Required for activation and updates</li>
- </ul></p> b2dd77e56b<br />
- <br />
- <br />
 
spaces/1gistliPinn/ChatGPT4/Examples/Downloadwindowsxpprofessionalx64editionsp3sataedition !!TOP!!.md DELETED
@@ -1,6 +0,0 @@
- <h2>downloadwindowsxpprofessionalx64editionsp3sataedition</h2><br /><p><b><b>Download Zip</b> &#10038;&#10038;&#10038; <a href="https://imgfil.com/2uxYNP">https://imgfil.com/2uxYNP</a></b></p><br /><br />
-
- April 25, 2021 - Download the latest full version of Windows XP Professional SP3 ISO for free. XP Professional SP3 has all preinstalled drivers for SATA drives. It has advanced security features, including support for data encryption. In addition, you can perform a secure system restore, making XP Professional SP3 the perfect choice for laptops and PCs. Download. Microsoft Office 2007 SP3 - Download the latest full version of the free Office 2007 suite from Microsoft. Office 2007 includes all the standard features you need to work efficiently with documents, spreadsheets, and presentations. 8a78ff9644<br />
- <br />
- <br />
- <p></p>
 
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Bitcoin Pop A Modern Crypto Twist on the Classic Bubble Shooter Game!.md DELETED
@@ -1,83 +0,0 @@
-
- <h1>Download Bitcoin Pop: A Fun and Rewarding Crypto Game</h1>
- <p>Do you love playing bubble shooter games? Do you want to earn some crypto while having fun? If you answered yes to both questions, then you should download Bitcoin Pop, a free app that lets you play a timeless bubble shooter game with sweet crypto rewards.</p>
- <h2>How to play Bitcoin Pop and earn crypto rewards</h2>
- <p>Bitcoin Pop is a simple and addictive game that puts your hand-eye coordination and puzzle solving skills to the test. The goal is to aim, match, and pop like-colored crypto bubbles to collect the required number of sodas. Each level gets progressively harder, but aim and precision is the key to earn crypto.</p>
- <h2>download bitcoin pop</h2><br /><p><b><b>Download File</b> &#10031;&#10031;&#10031; <a href="https://urlin.us/2uSRXV">https://urlin.us/2uSRXV</a></b></p><br /><br />
- <p>As you play, you will earn Bling Points that can be exchanged for Bitcoin or USD via PayPal. You will need a valid Coinbase or PayPal account to cash out your rewards. You can also earn extra Bling Points by watching ads, inviting friends, or completing surveys.</p>
- <h2>How to download Bitcoin Pop on your device</h2>
- <p>Bitcoin Pop is available for both Android and iOS devices. You can download it from the Google Play Store or the App Store. The only requirement is that you register and login before playing. No tricks or hoops to jump through to receive your crypto - just download, register, play and start collecting crypto.</p>
- <h2>The pros and cons of Bitcoin Pop</h2>
- <p>Like any app, Bitcoin Pop has its advantages and disadvantages. Here are some of them:</p>
- <table>
- <tr><th>Pros</th><th>Cons</th></tr>
- <tr><td>- Fun and easy gameplay</td><td>- Low payout rate</td></tr>
- <tr><td>- Modern crypto-themed graphics</td><td>- High battery consumption</td></tr>
- <tr><td>- No international transaction fees or red tape</td><td>- Limited use of Bitcoin as a payment method</td></tr>
- <tr><td>- User anonymity and transparency</td><td>- Volatility and risk of Bitcoin</td></tr>
- <tr><td>- Independence from a central authority</td><td>- No government regulations or protection</td></tr>
- </table>
- <h2>Conclusion: Is Bitcoin Pop worth playing?</h2>
- <p>Bitcoin Pop is a great game for anyone who enjoys bubble shooter games and wants to earn some crypto in their spare time. It is not a get-rich-quick scheme, but rather a fun and rewarding way to learn more about Bitcoin and the crypto world. If you are looking for a casual and entertaining game that also gives you some exposure to cryptocurrency, then you should definitely download Bitcoin Pop and give it a try.</p>
- <h2>FAQs</h2>
- <ol>
- <li>What is Bitcoin?</li>
- <p>Bitcoin is a digital currency that operates on a decentralized network of computers. It is not controlled by any central authority or intermediary. It can be used to send and receive payments online without intermediaries or fees.</p>
- <li>How do I get a Coinbase or PayPal account?</li>
- <p>You can sign up for a Coinbase account at <a href="(^3^)">coinbase.com</a>. You will need to verify your identity and link your bank account or debit card. You can sign up for a PayPal account at <a href="(^4^)">paypal.com</a>. You will need to provide your email address and link your bank account or credit card.</p>
- <li>How do I exchange my Bling Points for Bitcoin or USD?</li>
- <p>You can exchange your Bling Points for Bitcoin or USD in the app. Tap on the "Cash Out" button and choose your preferred option. You will need to enter your Coinbase email address or PayPal email address to receive your payment.</p>
- <p>download bitcoin pop app<br />
- download bitcoin pop game<br />
- download bitcoin pop apk<br />
- download bitcoin pop for pc<br />
- download bitcoin pop android<br />
- download bitcoin pop ios<br />
- download bitcoin pop bling<br />
- download bitcoin pop earn crypto<br />
- download bitcoin pop bubble shooter<br />
- download bitcoin pop mod apk<br />
- how to download bitcoin pop<br />
- where to download bitcoin pop<br />
- download bitcoin pop - get bitcoin<br />
- download bitcoin pop - get bitcoin apk<br />
- download bitcoin pop - get bitcoin for free<br />
- download bitcoin pop - get bitcoin on pc<br />
- download bitcoin pop - get bitcoin on android<br />
- download bitcoin pop - get bitcoin on ios<br />
- download bitcoin pop - get bitcoin with bling points<br />
- download bitcoin pop - get bitcoin with paypal<br />
- best way to download bitcoin pop<br />
- easiest way to download bitcoin pop<br />
- fastest way to download bitcoin pop<br />
- safest way to download bitcoin pop<br />
- free download of bitcoin pop<br />
- latest version of bitcoin pop download<br />
- old version of bitcoin pop download<br />
- update version of bitcoin pop download<br />
- offline version of bitcoin pop download<br />
- online version of bitcoin pop download<br />
- benefits of downloading bitcoin pop<br />
- disadvantages of downloading bitcoin pop<br />
- reviews of downloading bitcoin pop<br />
- ratings of downloading bitcoin pop<br />
- tips for downloading bitcoin pop<br />
- tricks for downloading bitcoin pop<br />
- hacks for downloading bitcoin pop<br />
- cheats for downloading bitcoin pop<br />
- guides for downloading bitcoin pop<br />
- tutorials for downloading bitcoin pop<br />
- steps for downloading bitcoin pop<br />
- requirements for downloading bitcoin pop<br />
- features of downloading bitcoin pop<br />
- advantages of downloading bitcoin pop<br />
- challenges of downloading bitcoin pop<br />
- problems of downloading bitcoin pop<br />
- solutions of downloading bitcoin pop<br />
- alternatives of downloading bitcoin pop<br />
- competitors of downloading bitcoin pop</p>
- <li>How long does it take to receive my payment?</li>
- <p>It usually takes 24 hours to process your payment request. However, it may take longer depending on the network congestion or other factors.</p>
- <li>Is Bitcoin Pop safe and legit?</ <p>Bitcoin Pop is a safe and legit app that has been verified by Google Play Protect and App Store Review. It has over 1 million downloads and a 4.5-star rating on both platforms. It is also backed by Bling, a reputable company that has been featured on Forbes, CNN, and The Wall Street Journal. You can trust that Bitcoin Pop will pay you your rewards and protect your privacy.</p> 197e85843d<br />
- <br />
- <br />
 
spaces/1phancelerku/anime-remove-background/Attack on Titan AOT Mobile Fan Game v3.0 APK Offline An Immersive and Action-Packed Adventure.md DELETED
@@ -1,65 +0,0 @@
-
- <h1>Attack on Titan AOT Mobile Fan Game V3.0 APK Offline</h1>
- <p>If you are a fan of the anime and manga series Attack on Titan, you might be interested in playing a mobile game based on it. However, most of the official games are online-only and require a stable internet connection. That's why some fans have created their own fan-made games that can be played offline. One of them is Attack on Titan AOT Mobile Fan Game V3.0 APK Offline, which is a free and fun game that you can download and install on your Android device.</p>
- <h2>What is Attack on Titan AOT Mobile Fan Game V3.0 APK Offline?</h2>
- <h3>A fan-made game based on the popular anime and manga series</h3>
- <p>Attack on Titan AOT Mobile Fan Game V3.0 APK Offline is a game created by Julhiecio, a fan of the series who wanted to make a game that captures the essence of the original story. The game is set in a world where humanity lives inside walls to protect themselves from giant humanoid creatures called Titans, who devour humans for no apparent reason. The game follows the adventures of Eren Yeager, Mikasa Ackerman, Armin Arlert, and other members of the Survey Corps, who fight against the Titans using special equipment called Vertical Maneuvering Equipment (VME), which allows them to move around using grappling hooks and blades.</p>
- <h2>attack on titan aot mobile fan game v3.0 apk offline</h2><br /><p><b><b>Download Zip</b> &#9658;&#9658;&#9658;&#9658;&#9658; <a href="https://jinyurl.com/2uNMJo">https://jinyurl.com/2uNMJo</a></b></p><br /><br />
- <h3>Features of the game</h3>
- <h4>Offline multiplayer mode</h4>
- <p>One of the main features of Attack on Titan AOT Mobile Fan Game V3.0 APK Offline is that it can be played offline with up to four players using a local Wi-Fi network. This means that you don't need an internet connection to enjoy the game with your friends or family. You can choose to cooperate or compete with each other in various modes, such as survival, capture the flag, or deathmatch.</p>
- <h4>Large map with various locations</h4>
- <p>The game also features a large map that is based on the anime and manga series, with various locations that you can explore and interact with. You can visit places like Shiganshina District, Trost District, Forest of Giant Trees, Utgard Castle, and more. You can also find resources like gas, blades, food, and water that you can use to replenish your equipment and health.</p>
- <h4>Customizable characters and weapons</h4>
- <p>Another feature of Attack on Titan AOT Mobile Fan Game V3.0 APK Offline is that you can customize your character and weapons according to your preferences. You can choose from different hairstyles, outfits, accessories, and skins for your character, as well as different types of blades, guns, and bombs for your weapons. You can also unlock new items by completing missions or collecting coins.</p>
- <h4>Smooth graphics and animations</h4>
- <p>The game also boasts smooth graphics and animations that make the game look realistic and immersive. The game uses 3D models and textures that are faithful to the original series, as well as dynamic lighting and shadows that create a realistic atmosphere. The game also uses fluid animations that make the movement and combat of the characters smooth and responsive.</p>
- <h4>Easy controls and interface</h4>
- <p>The game also has easy controls and interface that make the game easy to play and navigate. The game uses touch-screen controls that are intuitive and simple to use. You can move your character using a virtual joystick, aim and shoot using buttons, and switch between weapons using icons. You can also access menus and options using a hamburger button. The game also has a clear and simple interface that shows your health, gas, blades, and coins, as well as a mini-map and a mission log.</p>
- <h2>How to download and install Attack on Titan AOT Mobile Fan Game V3.0 APK Offline?</h2>
- <h3>Download the APK file from a trusted source</h3>
- <p>To download and install Attack on Titan AOT Mobile Fan Game V3.0 APK Offline, you need to get the APK file from a trusted source. You can find the link to the official website of the game developer in the description below. Alternatively, you can search for the game on Google Play Store or other third-party websites that offer APK files. However, be careful of fake or malicious links that may harm your device or steal your data.</p>
- <h3>Enable unknown sources on your device</h3>
- <p>Before you can install the APK file, you need to enable unknown sources on your device. This is because the game is not available on the official app store and you need to allow your device to install apps from other sources. To do this, go to your device settings and look for security or privacy options. Then, find and enable the option that says unknown sources or allow installation of apps from unknown sources.</p>
- <h3>Install the APK file and launch the game</h3>
- <p>After you have enabled unknown sources, you can install the APK file by tapping on it and following the instructions. It may take a few minutes for the installation to complete. Once it is done, you can launch the game by tapping on its icon on your home screen or app drawer. You can also create a shortcut for the game on your desktop for easier access.</p>
- <p>attack on titan mobile offline gameplay fan made julhiecio<br />
- aot mobile v3.0 fan made apk download youtube<br />
- attack on titan fan game apk for android filehippo<br />
- attack on titan mobile v3.0 apk offline update fanmade<br />
- aot mobile offline vr game by slavka based on manga<br />
- attack on titan mobile apk offline latest version julhiecio<br />
- aot mobile fan game v3.0 apk download link reddit<br />
- attack on titan fan game android vr component unrestrictive<br />
- attack on titan mobile v3.0 apk file size 110 mb<br />
- aot mobile offline gameplay youtube newbzone gaming channel<br />
- attack on titan fan game apk android immersive experience<br />
- attack on titan mobile v3.0 apk offline julhiecio official website<br />
- aot mobile fan made vr game spectacular graphics and physics<br />
- attack on titan fan game android filehippo download free<br />
- attack on titan mobile v3.0 apk offline update date 2022<br />
- aot mobile offline gameplay fan made julhiecio subscribe and follow<br />
- attack on titan fan game apk android slavka manga adaptation<br />
- attack on titan mobile v3.0 apk offline latest update from julhiecio<br />
- aot mobile fan made vr game based on critically acclaimed manga<br />
- attack on titan fan game android filehippo review and rating<br />
- attack on titan mobile v3.0 apk offline how to install and play<br />
- aot mobile offline gameplay youtube video watch and comment<br />
- attack on titan fan game apk android vr headset compatible<br />
- attack on titan mobile v3.0 apk offline features and improvements<br />
- aot mobile fan made vr game download and install instructions</p>
- <h2>Tips and tricks for playing Attack on Titan AOT Mobile Fan Game V3.0 APK Offline</h2>
- <h3>Learn the basics of movement and combat</h3>
- <p>The first thing you need to do when playing Attack on Titan AOT Mobile Fan Game V3.0 APK Offline is to learn the basics of movement and combat. The game has a tutorial mode that teaches you how to use the VME, how to attack and dodge Titans, how to reload and change weapons, and how to use items. You can also practice your skills in training mode or in offline mode with bots.</p>
- <h3>Explore the map and find resources</h3>
- <p>The next thing you need to do is to explore the map and find resources that can help you survive and fight better. The map has various locations that have different advantages and disadvantages. For example, some places have more gas stations or supply depots, while others have more Titans or enemies. You can also find hidden items or secrets that can give you extra coins or bonuses.</p>
- <h3>Upgrade your equipment and skills</h3>
- <p>The third thing you need to do is to upgrade your equipment and skills as you progress in the game. You can use coins that you earn from missions or collect from the map to buy new items or upgrade existing ones. You can also use skill points that you gain from leveling up to improve your attributes or unlock new abilities. You can access the shop and the skill tree from the main menu or from checkpoints in the map.</p>
- <h3>Team up with other players or play solo</h3>
- <p>The last thing you need to do is to decide whether you want to team up with other players or play solo in offline mode. Both options have their pros and cons, depending on your preference and play style. If you team up with other players, you can cooperate and communicate with them using voice chat or text chat, as well as share resources and items. However, you may also encounter problems such as lag, disconnects, trolls, or cheaters. If you play solo, you can enjoy the game at your own pace and without any distractions or interruptions. However, you may also face more challenges and difficulties, especially against stronger Titans or enemies.</p>
- <h2>Conclusion</h2>
- <p>Attack on Titan AOT Mobile Fan Game V3.0 APK Offline is a fan-made game that lets you experience the thrill and excitement of the anime and manga series on your mobile device. The game has offline multiplayer mode, large map with various locations, customizable characters and weapons, smooth graphics and animations, easy controls and interface, and more features that make it fun and enjoyable. The game is free to download and install, but you need to follow some steps to get it safely and securely. The game also has some tips and tricks that can help you play better and have more fun.</p>
- <p>If you are looking for a game that is based on Attack on Titan and can be played offline with your friends or alone, then Attack on Titan AOT Mobile Fan Game V3.0 APK Offline is a great choice for you.</p>
- FAQs Q: Is Attack on Titan A OT Mobile Fan Game V3.0 APK Offline an official game? A: No, it is not an official game. It is a fan-made game created by Julhiecio, a fan of the series who wanted to make a game that captures the essence of the original story. Q: How can I play Attack on Titan AOT Mobile Fan Game V3.0 APK Offline with my friends? A: You can play the game with your friends using a local Wi-Fi network. You can choose to cooperate or compete with each other in various modes, such as survival, capture the flag, or deathmatch. Q: What are the requirements to play Attack on Titan AOT Mobile Fan Game V3.0 APK Offline? A: You need an Android device that has at least 2 GB of RAM and 500 MB of free storage space. You also need to enable unknown sources on your device to install the APK file. Q: Where can I get more information about Attack on Titan AOT Mobile Fan Game V3.0 APK Offline? A: You can get more information about the game from the official website of the game developer, which is linked in the description below. You can also follow the game developer on social media platforms, such as Facebook, Twitter, Instagram, and YouTube. Q: How can I support the game developer of Attack on Titan AOT Mobile Fan Game V3.0 APK Offline? A: You can support the game developer by giving feedback, suggestions, or bug reports on the official website or social media platforms. You can also donate to the game developer via PayPal or Patreon.</p> 401be4b1e0<br />
- <br />
- <br />
 
spaces/1phancelerku/anime-remove-background/Enjoy Stumble Guys in Your Browser - No APK Required.md DELETED
@@ -1,130 +0,0 @@
- <br />
- <h1>How to Download Stumble Guys Without APK</h1>
- <p>If you are looking for a fun and addictive game to play with your friends or strangers online, you might want to check out <strong>Stumble Guys</strong>. It is a massive multiplayer party knockout game that will make you laugh, scream, and stumble your way to victory. But how can you download Stumble Guys without APK? In this article, we will explain what Stumble Guys is, what an APK file is, and how you can download Stumble Guys without APK on your device.</p>
- <h2>What is Stumble Guys?</h2>
- <h3>A fun and chaotic multiplayer party game</h3>
- <p>Stumble Guys is a game that was inspired by popular TV shows like Wipeout and Takeshi's Castle. The game involves racing through obstacle courses against up to 32 players online. You have to run, jump, dash, slide, and dodge your way to the finish line while avoiding being eliminated by other players or the environment. The game features 17 unique obstacle courses that are randomly selected each round, so you never know what to expect. The game also has colorful, whacky graphics and hilarious sound effects that add to the fun.</p>
- <h2>download stumble guys no apk</h2><br /><p><b><b>DOWNLOAD</b> &#8230;&#8230;&#8230; <a href="https://jinyurl.com/2uNRxb">https://jinyurl.com/2uNRxb</a></b></p><br /><br />
- <h3>Available on different platforms and devices</h3>
- <p>Stumble Guys was originally released as a mobile game for Android devices in August 2020. Since then, it has gained millions of downloads and positive reviews from players. In October 2021, the game was also released on Steam for Windows PC users. The game supports cross-play between Android and PC users, so you can play with anyone regardless of their device. The game also has a party mode that allows you to invite your friends and create private matches. You can also customize your Stumble Guy with different outfits and emotes that you can unlock by playing the game or purchasing them from the store.</p>
- <h2>What is an APK file?</h2>
- <h3>A package file format for Android apps</h3>
- <p>An APK file stands for Android Package Kit. It is a file format that Android uses to distribute and install apps. An APK file contains all the code, resources, assets, certificates, and manifest file that an app needs to run properly on an Android device. An APK file can be downloaded from various sources, such as Google Play Store, third-party websites, or directly from the app developer.</p>
- <h3>The pros and cons of using APK files</h3>
- <p>There are some advantages and disadvantages of using APK files to install apps on your Android device. Some of the pros are:</p>
- <ul>
- <li>You can access apps that are not available in your region or on Google Play Store.</li>
- <li>You can install older versions of apps that may have features or compatibility that you prefer.</li>
- <li>You can update apps faster than waiting for the official update from Google Play Store.</li>
- </ul>
- <p>Some of the cons are:</p>
- <ul>
- <li>You may expose your device to malware or viruses that may harm your data or system.</li>
- <li>You may violate the terms of service or privacy policy of the app developer or Google Play Store.</li>
- <li>You may encounter compatibility or performance issues with your device or other apps.</li>
- </ul>
- <h2>How to download Stumble Guys without APK</h2>
- <h3>Download from Google Play Store or Steam</h3>
- <p>The easiest and safest way to download Stumble Guys without APK is to get it from the official sources, such as Google Play Store or Steam. Here are the steps to do so:</p>
- <p>download stumble guys online for free<br />
- download stumble guys multiplayer royale game<br />
- download stumble guys without apk file<br />
- download stumble guys on pc and mobile<br />
- download stumble guys action platformer game<br />
- download stumble guys unblocked games for school<br />
- download stumble guys from google play store<br />
- download stumble guys xapk version<br />
- download stumble guys mod apk with unlimited gems<br />
- download stumble guys latest update 2023<br />
- download stumble guys for android and ios devices<br />
- download stumble guys and join the endless running fun<br />
- download stumble guys and play with your friends online<br />
- download stumble guys and customize your character<br />
- download stumble guys and dodge oncoming obstacles<br />
- download stumble guys and win the trophy<br />
- download stumble guys and experience the comical physics<br />
- download stumble guys and enjoy the colorful design<br />
- download stumble guys and challenge yourself in different levels<br />
- download stumble guys and beat all your rivals<br />
- download stumble guys and become the champion<br />
- download stumble guys and try the new features<br />
- download stumble guys and explore more games on now.gg<br />
- download stumble guys and join the tournaments<br />
- download stumble guys and earn rewards from the stumble pass<br />
- download stumble guys and share your hilarious fails<br />
- download stumble guys and rate the game on google play<br />
- download stumble guys and watch video clips of gameplay<br />
- download stumble guys and learn tips and tricks from other players<br />
- download stumble guys and discover new maps and modes<br />
- download stumble guys and have fun with the stylized graphics<br />
- download stumble guys and run, dash, and slide past opponents<br />
- download stumble guys and avoid getting wiped out<br />
- download stumble guys and participate in the massive multiplayer party knockout game<br />
- download stumble guys and support the developers by purchasing in-app items<br />
- download stumble guys and check out the data safety and privacy policy of the app<br />
- download stumble guys and read the reviews from other users<br />
- download stumble guys and follow the official social media accounts of the game<br />
- download stumble guys and contact the customer support if you have any issues or feedbacks<br />
- download stumble guys and join the community of fans of the game</p>
- <table>
- <tr>
- <th>Platform</th>
- <th>Steps</th>
- </tr>
- <tr>
- <td>Android</td>
- <td>
- <ol>
- <li>Open Google Play Store on your device.</li>
- <li>Search for Stumble Guys or use this link: <a href="">Stumble Guys - Apps on Google Play</a>.</li>
- <li>Tap on Install and wait for the download to finish.</li>
- <li>Launch the game and enjoy.</li>
- </ol>
- </td>
- </tr>
- <tr>
- <td>PC</td>
- <td>
- <ol>
- <li>Open Steam on your PC or download it from <a href="">Steam, The Ultimate Online Game Platform</a>.</li>
- <li>Search for Stumble Guys or use this link: <a href="">Stumble Guys on Steam</a>.</li>
- <li>Click on Add to Cart and purchase the game.</li>
- <li>Download and install the game from your Steam library.</li>
- <li>Launch the game and enjoy.</li>
- </ol>
- </td>
- </tr>
- </table>
- <h3>Use an Android emulator on PC or Mac</h3>
- <p>If you don't have an Android device or a PC that can run Steam, you can still play Stumble Guys without APK by using an Android emulator. An Android emulator is a software that simulates an Android device on your PC or Mac. You can use it to run Android apps and games on your computer. There are many Android emulators available online, such as BlueStacks, NoxPlayer, LDPlayer, etc. Here are the general steps to use an Android emulator to play Stumble Guys:</p>
- <ol>
- <li>Download and install an Android emulator of your choice from its official website.</li>
- <li>Launch the emulator and sign in with your Google account.</li>
- <li>Open Google Play Store on the emulator and search for Stumble Guys or use this link: <a href="">Stumble Guys - Apps on Google Play</a>.</li>
- <li>Install and launch the game on the emulator.</li>
- <li>Enjoy playing Stumble Guys on your PC or Mac.</li>
- </ol>
- <h2>Conclusion</h2>
- <h3>Summarize the main points of the article</h3>
- <p>In conclusion, Stumble Guys is a fun and chaotic multiplayer party game that you can play with up to 32 players online. You can download Stumble Guys without APK by getting it from Google Play Store or Steam, or by using an Android emulator on your PC or Mac. By doing so, you can avoid the risks of using APK files and enjoy the game safely and smoothly.</p>
- <h3>Provide a call to action for the readers</h3>
- <p>If you are ready to join the fun and stumble your way to victory, download Stumble Guys today and invite your friends to play with you. You will have a blast competing with other players in hilarious obstacle courses. Don't forget to customize your Stumble Guy with cool outfits and emotes. Have fun and good luck!</p>
- <h2>FAQs</h2>
- <h3>Is Stumble Guys free to play?</h3>
- <p>Yes, Stumble Guys is free to play on Android devices. However, you can purchase in-game items such as outfits, emotes, coins, and gems with real money. On PC, you have to buy the game from Steam for $4.99.</p>
- <h3>Can I play Stumble Guys with my friends?</h3>
- <p>Yes, you can play Stumble Guys with your friends by using the party mode. You can invite up to 32 friends to join your private match. You can also chat with them using voice or text messages.</p>
- <h3>How many players can join a Stumble Guys match?</h3>
- <p>A Stumble Guys match can have up to 32 players online. The match consists of multiple rounds of obstacle courses that eliminate players until one winner remains.</p>
- <h3>What are the system requirements for Stumble Guys?</h3>
- <p>The minimum system requirements for Stumble Guys are:</p>
- <table><tr><th>Platform</th><th>Requirements</th></tr><tr><td>Android</td><td><ul><li>Android 5.0 or higher</li><li>2 GB of RAM or more</li><li>100 MB of free storage space or more</li></ul></td></tr><tr><td>PC</td><td><ul><li>Windows 7 or higher (64-bit)</li><li>Dual core CPU 2.4 GHz or faster</li><li>NVIDIA GeForce 8600/9600GT or equivalent GPU</li <li>4 GB of RAM or more</li>
- <li>1 GB of free storage space or more</li>
- </ul>
- </td>
- </tr>
- </table>
- <h3>How can I customize my Stumble Guy?</h3>
- <p>You can customize your Stumble Guy by changing its outfit and emote. You can unlock new outfits and emotes by playing the game, completing missions, or buying them from the store. You can also mix and match different parts of the outfits to create your own unique look.</p> 401be4b1e0<br />
- <br />
- <br />
 
spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Engineering Interviews 4be8039581d04456b0151f2cc4b22130/Questions ede8818b3a0e447f80145905690eb3f6/FizzBuzz 70828a5e5e6846a48686f66bb9ccc8b6.md DELETED
@@ -1,46 +0,0 @@
1
- # FizzBuzz
2
-
3
- Difficulty: Easy
4
- Skills: Algorithms, Front end
5
-
6
- <aside>
7
- 💡 Create a new question in this database and choose `Interview Question` from the list of templates to automatically generate the format below.
8
-
9
- </aside>
10
-
11
- # Description
12
-
13
- Write a description for the interview question here.
14
-
15
- # Sample Inputs
16
-
17
- Give some valid inputs the candidate can expect to test their solution with.
18
-
19
- - ...
20
- - ...
21
-
22
- # Expected Outputs
23
-
24
- For each sample input above, list the expected output.
25
-
26
- - ...
27
- - ...
28
-
29
- # Solutions
30
-
31
- Provide possible solutions in common languages to this problem.
32
-
33
- ### Javascript
34
-
35
- ```jsx
36
- function solution() {
37
-
38
- }
39
- ```
40
-
41
- ### Python
42
-
43
- ```python
44
- def solution():
45
- pass
46
- ```
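The Description, Sample Inputs, Expected Outputs, and solution sections above are blank template placeholders. For reference only, a minimal FizzBuzz implementation consistent with the empty `solution()` stubs might look like the Python sketch below; the parameter `n`, the list return type, and the exact output strings are assumptions, since the template leaves them unspecified.

```python
def solution(n=100):
    """Return the FizzBuzz sequence for 1..n as a list of strings (assumed format)."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:          # divisible by both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out


# Example: solution(5) == ["1", "2", "Fizz", "4", "Buzz"]
```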
 
spaces/AIFILMS/ControlNet-Video/style.css DELETED
@@ -1,105 +0,0 @@
1
- #col-container {max-width: 820px; margin-left: auto; margin-right: auto;}
2
- #duplicate-container{
3
- display: flex;
4
- justify-content: space-between;
5
- align-items: center;
6
- line-height: 1em;
7
- flex-direction: row-reverse;
8
- font-size:1em;
9
- }
10
- a, a:hover, a:visited {
11
- text-decoration-line: underline;
12
- font-weight: 600;
13
- color: #1f2937 !important;
14
- }
15
-
16
- .dark a, .dark a:hover, .dark a:visited {
17
- color: #f3f4f6 !important;
18
- }
19
-
20
- .label-wrap {
21
- margin-bottom: 12px;
22
- }
23
-
24
- .footer {
25
- margin-bottom: 45px;
26
- margin-top: 10px;
27
- text-align: center;
28
- border-bottom: 1px solid #e5e5e5;
29
- }
30
-
31
- .footer>p {
32
- font-size: .8rem!important;
33
- display: inline-block;
34
- padding: 0 10px;
35
- transform: translateY(26px);
36
- background: white;
37
- }
38
- .dark .footer {
39
- border-color: #303030;
40
- }
41
- .dark .footer>p {
42
- background: #0b0f19;
43
- }
44
-
45
- div#may-like-container > p {
46
- font-size: .8em;
47
- margin-bottom: 4px;
48
- }
49
-
50
- .animate-spin {
51
- animation: spin 1s linear infinite;
52
- }
53
-
54
- @keyframes spin {
55
- from {
56
- transform: rotate(0deg);
57
- }
58
- to {
59
- transform: rotate(360deg);
60
- }
61
- }
62
-
63
- #share-btn-container {
64
- display: flex;
65
- padding-left: 0.5rem !important;
66
- padding-right: 0.5rem !important;
67
- background-color: #000000;
68
- justify-content: center;
69
- align-items: center;
70
- border-radius: 9999px !important;
71
- max-width: 13rem;
72
- }
73
-
74
- #share-btn-container:hover {
75
- background-color: #060606;
76
- }
77
-
78
- #share-btn {
79
- all: initial;
80
- color: #ffffff;
81
- font-weight: 600;
82
- cursor:pointer;
83
- font-family: 'IBM Plex Sans', sans-serif;
84
- margin-left: 0.5rem !important;
85
- padding-top: 0.5rem !important;
86
- padding-bottom: 0.5rem !important;
87
- right:0;
88
- }
89
-
90
- #share-btn * {
91
- all: unset;
92
- }
93
-
94
- #share-btn-container div:nth-child(-n+2){
95
- width: auto !important;
96
- min-height: 0px !important;
97
- }
98
-
99
- #share-btn-container .wrap {
100
- display: none !important;
101
- }
102
-
103
- #share-btn-container.hidden {
104
- display: none!important;
105
- }
 
spaces/AIFILMS/StyleGANEX/models/stylegan2/op/upfirdn2d.py DELETED
@@ -1,61 +0,0 @@
1
- from collections import abc
2
-
3
- import torch
4
- from torch.nn import functional as F
5
-
6
-
7
- def upfirdn2d(inputs, kernel, up=1, down=1, pad=(0, 0)):
8
- if not isinstance(up, abc.Iterable):
9
- up = (up, up)
10
-
11
- if not isinstance(down, abc.Iterable):
12
- down = (down, down)
13
-
14
- if len(pad) == 2:
15
- pad = (pad[0], pad[1], pad[0], pad[1])
16
-
17
- return upfirdn2d_native(inputs, kernel, *up, *down, *pad)
18
-
19
-
20
- def upfirdn2d_native(
21
- inputs, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1
22
- ):
23
- _, channel, in_h, in_w = inputs.shape
24
- inputs = inputs.reshape(-1, in_h, in_w, 1)
25
-
26
- _, in_h, in_w, minor = inputs.shape
27
- kernel_h, kernel_w = kernel.shape
28
-
29
- out = inputs.view(-1, in_h, 1, in_w, 1, minor)
30
- out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1])
31
- out = out.view(-1, in_h * up_y, in_w * up_x, minor)
32
-
33
- out = F.pad(
34
- out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)]
35
- )
36
- out = out[
37
- :,
38
- max(-pad_y0, 0): out.shape[1] - max(-pad_y1, 0),
39
- max(-pad_x0, 0): out.shape[2] - max(-pad_x1, 0),
40
- :,
41
- ]
42
-
43
- out = out.permute(0, 3, 1, 2)
44
- out = out.reshape(
45
- [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]
46
- )
47
- w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w)
48
- out = F.conv2d(out, w)
49
- out = out.reshape(
50
- -1,
51
- minor,
52
- in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1,
53
- in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1,
54
- )
55
- out = out.permute(0, 2, 3, 1)
56
- out = out[:, ::down_y, ::down_x, :]
57
-
58
- out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h + down_y) // down_y
59
- out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w + down_x) // down_x
60
-
61
- return out.view(-1, channel, out_h, out_w)
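A quick sketch of how the native `upfirdn2d` defined above can be called; the input size, kernel, and padding values here are illustrative, not taken from the original repository. It upsamples by 2, applies a normalized 4-tap box FIR kernel, and the output size follows the formulas at the end of `upfirdn2d_native`, i.e. `(in*up + pad0 + pad1 - kernel + down) // down` per dimension.

```python
# Illustrative usage of upfirdn2d (assumes the function defined above is in scope).
import torch

x = torch.randn(1, 3, 32, 32)          # (batch, channels, height, width)
kernel = torch.ones(4, 4)
kernel = kernel / kernel.sum()         # normalize the FIR kernel

# Upsample by 2 in both dimensions, no downsampling, asymmetric padding (1, 2).
y = upfirdn2d(x, kernel, up=2, down=1, pad=(1, 2))

print(y.shape)  # torch.Size([1, 3, 64, 64]): (32*2 + 1 + 2 - 4 + 1) // 1 = 64
```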
 
spaces/ASJMO/freegpt/g4f/Provider/Providers/helpers/phind.py DELETED
@@ -1,69 +0,0 @@
1
- import sys
2
- import json
3
- import datetime
4
- import urllib.parse
5
-
6
- from curl_cffi import requests
7
-
8
- config = json.loads(sys.argv[1])
9
- prompt = config['messages'][-1]['content']
10
-
11
- skill = 'expert' if config['model'] == 'gpt-4' else 'intermediate'
12
-
13
- json_data = json.dumps({
14
- 'question': prompt,
15
- 'options': {
16
- 'skill': skill,
17
- 'date': datetime.datetime.now().strftime('%d/%m/%Y'),
18
- 'language': 'en',
19
- 'detailed': True,
20
- 'creative': True,
21
- 'customLinks': []}}, separators=(',', ':'))
22
-
23
- headers = {
24
- 'Content-Type': 'application/json',
25
- 'Pragma': 'no-cache',
26
- 'Accept': '*/*',
27
- 'Sec-Fetch-Site': 'same-origin',
28
- 'Accept-Language': 'en-GB,en;q=0.9',
29
- 'Cache-Control': 'no-cache',
30
- 'Sec-Fetch-Mode': 'cors',
31
- 'Content-Length': str(len(json_data)),
32
- 'Origin': 'https://www.phind.com',
33
- 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.4 Safari/605.1.15',
34
- 'Referer': f'https://www.phind.com/search?q={urllib.parse.quote(prompt)}&source=searchbox',
35
- 'Connection': 'keep-alive',
36
- 'Host': 'www.phind.com',
37
- 'Sec-Fetch-Dest': 'empty'
38
- }
39
-
40
-
41
- def output(chunk):
42
- try:
43
- if b'PHIND_METADATA' in chunk:
44
- return
45
-
46
- if chunk == b'data: \r\ndata: \r\ndata: \r\n\r\n':
47
- chunk = b'data: \n\r\n\r\n'
48
-
49
- chunk = chunk.decode()
50
-
51
- chunk = chunk.replace('data: \r\n\r\ndata: ', 'data: \n')
52
- chunk = chunk.replace('\r\ndata: \r\ndata: \r\n\r\n', '\n\r\n\r\n')
53
- chunk = chunk.replace('data: ', '').replace('\r\n\r\n', '')
54
-
55
- print(chunk, flush=True, end = '')
56
-
57
- except json.decoder.JSONDecodeError:
58
- pass
59
-
60
- while True:
61
- try:
62
- response = requests.post('https://www.phind.com/api/infer/answer',
63
- headers=headers, data=json_data, content_callback=output, timeout=999999, impersonate='safari15_5')
64
-
65
- exit(0)
66
-
67
- except Exception as e:
68
- print('an error occured, retrying... |', e, flush=True)
69
- continue
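The helper expects a single command-line argument: a JSON-encoded config whose `model` field selects the "expert" vs. "intermediate" skill and whose last `messages` entry supplies the prompt. A hypothetical way to drive it from Python is sketched below; the `phind.py` path is illustrative, and `curl_cffi` must be installed for the script itself to run.

```python
# Hypothetical invocation of the helper above; it streams the answer to stdout.
import json
import subprocess

config = {
    "model": "gpt-4",  # anything other than "gpt-4" maps to the "intermediate" skill
    "messages": [
        {"role": "user", "content": "Explain what a Python generator is."},
    ],
}

subprocess.run(["python", "phind.py", json.dumps(config)], check=True)
```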
 
spaces/AchyuthGamer/OpenGPT-Chat-UI/src/lib/types/WebSearch.ts DELETED
@@ -1,36 +0,0 @@
1
- import type { Conversation } from "./Conversation";
2
- import type { Timestamps } from "./Timestamps";
3
-
4
- export interface WebSearch extends Timestamps {
5
- prompt: string;
6
-
7
- searchQuery: string;
8
- results: string[];
9
- knowledgeGraph: string;
10
- answerBox: string;
11
- summary: string;
12
-
13
- messages: WebSearchMessage[];
14
- }
15
-
16
- export type WebSearchMessageUpdate = {
17
- type: "update";
18
- message: string;
19
- args?: string[];
20
- };
21
-
22
- export type WebSearchMessageError = {
23
- type: "error";
24
- message: string;
25
- args?: string[];
26
- };
27
-
28
- export type WebSearchMessageResult = {
29
- type: "result";
30
- id: string;
31
- };
32
-
33
- export type WebSearchMessage =
34
- | WebSearchMessageUpdate
35
- | WebSearchMessageResult
36
- | WebSearchMessageError;
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/perspective/Perspective.d.ts DELETED
@@ -1,2 +0,0 @@
1
- import { ContainerPerspective } from '../../../plugins/perspectiveimage';
2
- export default ContainerPerspective;
 
spaces/AlexWang/lama/bin/gen_mask_dataset_hydra.py DELETED
@@ -1,124 +0,0 @@
1
- #!/usr/bin/env python3
2
-
3
- import glob
4
- import os
5
- import shutil
6
- import traceback
7
- import hydra
8
- from omegaconf import OmegaConf
9
-
10
- import PIL.Image as Image
11
- import numpy as np
12
- from joblib import Parallel, delayed
13
-
14
- from saicinpainting.evaluation.masks.mask import SegmentationMask, propose_random_square_crop
15
- from saicinpainting.evaluation.utils import load_yaml, SmallMode
16
- from saicinpainting.training.data.masks import MixedMaskGenerator
17
-
18
-
19
- class MakeManyMasksWrapper:
20
- def __init__(self, impl, variants_n=2):
21
- self.impl = impl
22
- self.variants_n = variants_n
23
-
24
- def get_masks(self, img):
25
- img = np.transpose(np.array(img), (2, 0, 1))
26
- return [self.impl(img)[0] for _ in range(self.variants_n)]
27
-
28
-
29
- def process_images(src_images, indir, outdir, config):
30
- if config.generator_kind == 'segmentation':
31
- mask_generator = SegmentationMask(**config.mask_generator_kwargs)
32
- elif config.generator_kind == 'random':
33
- mask_generator_kwargs = OmegaConf.to_container(config.mask_generator_kwargs, resolve=True)
34
- variants_n = mask_generator_kwargs.pop('variants_n', 2)
35
- mask_generator = MakeManyMasksWrapper(MixedMaskGenerator(**mask_generator_kwargs),
36
- variants_n=variants_n)
37
- else:
38
- raise ValueError(f'Unexpected generator kind: {config.generator_kind}')
39
-
40
- max_tamper_area = config.get('max_tamper_area', 1)
41
-
42
- for infile in src_images:
43
- try:
44
- file_relpath = infile[len(indir):]
45
- img_outpath = os.path.join(outdir, file_relpath)
46
- os.makedirs(os.path.dirname(img_outpath), exist_ok=True)
47
-
48
- image = Image.open(infile).convert('RGB')
49
-
50
- # scale input image to output resolution and filter smaller images
51
- if min(image.size) < config.cropping.out_min_size:
52
- handle_small_mode = SmallMode(config.cropping.handle_small_mode)
53
- if handle_small_mode == SmallMode.DROP:
54
- continue
55
- elif handle_small_mode == SmallMode.UPSCALE:
56
- factor = config.cropping.out_min_size / min(image.size)
57
- out_size = (np.array(image.size) * factor).round().astype('uint32')
58
- image = image.resize(out_size, resample=Image.BICUBIC)
59
- else:
60
- factor = config.cropping.out_min_size / min(image.size)
61
- out_size = (np.array(image.size) * factor).round().astype('uint32')
62
- image = image.resize(out_size, resample=Image.BICUBIC)
63
-
64
- # generate and select masks
65
- src_masks = mask_generator.get_masks(image)
66
-
67
- filtered_image_mask_pairs = []
68
- for cur_mask in src_masks:
69
- if config.cropping.out_square_crop:
70
- (crop_left,
71
- crop_top,
72
- crop_right,
73
- crop_bottom) = propose_random_square_crop(cur_mask,
74
- min_overlap=config.cropping.crop_min_overlap)
75
- cur_mask = cur_mask[crop_top:crop_bottom, crop_left:crop_right]
76
- cur_image = image.copy().crop((crop_left, crop_top, crop_right, crop_bottom))
77
- else:
78
- cur_image = image
79
-
80
- if len(np.unique(cur_mask)) == 0 or cur_mask.mean() > max_tamper_area:
81
- continue
82
-
83
- filtered_image_mask_pairs.append((cur_image, cur_mask))
84
-
85
- mask_indices = np.random.choice(len(filtered_image_mask_pairs),
86
- size=min(len(filtered_image_mask_pairs), config.max_masks_per_image),
87
- replace=False)
88
-
89
- # crop masks; save masks together with input image
90
- mask_basename = os.path.join(outdir, os.path.splitext(file_relpath)[0])
91
- for i, idx in enumerate(mask_indices):
92
- cur_image, cur_mask = filtered_image_mask_pairs[idx]
93
- cur_basename = mask_basename + f'_crop{i:03d}'
94
- Image.fromarray(np.clip(cur_mask * 255, 0, 255).astype('uint8'),
95
- mode='L').save(cur_basename + f'_mask{i:03d}.png')
96
- cur_image.save(cur_basename + '.png')
97
- except KeyboardInterrupt:
98
- return
99
- except Exception as ex:
100
- print(f'Could not make masks for {infile} due to {ex}:\n{traceback.format_exc()}')
101
-
102
-
103
- @hydra.main(config_path='../configs/data_gen/whydra', config_name='random_medium_256.yaml')
104
- def main(config: OmegaConf):
105
- if not config.indir.endswith('/'):
106
- config.indir += '/'
107
-
108
- os.makedirs(config.outdir, exist_ok=True)
109
-
110
- in_files = list(glob.glob(os.path.join(config.indir, '**', f'*.{config.location.extension}'),
111
- recursive=True))
112
- if config.n_jobs == 0:
113
- process_images(in_files, config.indir, config.outdir, config)
114
- else:
115
- in_files_n = len(in_files)
116
- chunk_size = in_files_n // config.n_jobs + (1 if in_files_n % config.n_jobs > 0 else 0)
117
- Parallel(n_jobs=config.n_jobs)(
118
- delayed(process_images)(in_files[start:start+chunk_size], config.indir, config.outdir, config)
119
- for start in range(0, len(in_files), chunk_size)
120
- )
121
-
122
-
123
- if __name__ == '__main__':
124
- main()
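The script is a Hydra entry point, so in normal use it is configured by `configs/data_gen/whydra/random_medium_256.yaml`. The sketch below only illustrates which keys the code above reads; the concrete values (and any extra kwargs accepted by `MixedMaskGenerator`) are assumptions, not taken from the real config file.

```python
# Illustrative config mirroring the fields accessed in main() and process_images().
from omegaconf import OmegaConf

config = OmegaConf.create({
    "indir": "/data/images",                     # main() appends a trailing slash if missing
    "outdir": "/data/images_with_masks",
    "n_jobs": 0,                                  # 0 = process files in the current process
    "location": {"extension": "jpg"},             # suffix used in the recursive glob
    "generator_kind": "random",                   # or "segmentation"
    "mask_generator_kwargs": {"variants_n": 2},   # variants_n is popped before MixedMaskGenerator(**kwargs)
    "max_tamper_area": 0.5,
    "max_masks_per_image": 3,
    "cropping": {
        "out_min_size": 256,
        "handle_small_mode": "upscale",           # must match the SmallMode enum (assumed value)
        "out_square_crop": True,
        "crop_min_overlap": 0.5,
    },
})
```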
 
spaces/AlexWang/lama/models/ade20k/segm_lib/utils/data/dataloader.py DELETED
@@ -1,425 +0,0 @@
1
- import torch
2
- import torch.multiprocessing as multiprocessing
3
- from torch._C import _set_worker_signal_handlers, \
4
- _remove_worker_pids, _error_if_any_worker_fails
5
- try:
6
- from torch._C import _set_worker_pids
7
- except:
8
- from torch._C import _update_worker_pids as _set_worker_pids
9
- from .sampler import SequentialSampler, RandomSampler, BatchSampler
10
- import signal
11
- import collections
12
- import re
13
- import sys
14
- import threading
15
- import traceback
16
- from torch._six import string_classes, int_classes
17
- import numpy as np
18
-
19
- if sys.version_info[0] == 2:
20
- import Queue as queue
21
- else:
22
- import queue
23
-
24
-
25
- class ExceptionWrapper(object):
26
- r"Wraps an exception plus traceback to communicate across threads"
27
-
28
- def __init__(self, exc_info):
29
- self.exc_type = exc_info[0]
30
- self.exc_msg = "".join(traceback.format_exception(*exc_info))
31
-
32
-
33
- _use_shared_memory = False
34
- """Whether to use shared memory in default_collate"""
35
-
36
-
37
- def _worker_loop(dataset, index_queue, data_queue, collate_fn, seed, init_fn, worker_id):
38
- global _use_shared_memory
39
- _use_shared_memory = True
40
-
41
- # Intialize C side signal handlers for SIGBUS and SIGSEGV. Python signal
42
- # module's handlers are executed after Python returns from C low-level
43
- # handlers, likely when the same fatal signal happened again already.
44
- # https://docs.python.org/3/library/signal.html Sec. 18.8.1.1
45
- _set_worker_signal_handlers()
46
-
47
- torch.set_num_threads(1)
48
- torch.manual_seed(seed)
49
- np.random.seed(seed)
50
-
51
- if init_fn is not None:
52
- init_fn(worker_id)
53
-
54
- while True:
55
- r = index_queue.get()
56
- if r is None:
57
- break
58
- idx, batch_indices = r
59
- try:
60
- samples = collate_fn([dataset[i] for i in batch_indices])
61
- except Exception:
62
- data_queue.put((idx, ExceptionWrapper(sys.exc_info())))
63
- else:
64
- data_queue.put((idx, samples))
65
-
66
-
67
- def _worker_manager_loop(in_queue, out_queue, done_event, pin_memory, device_id):
68
- if pin_memory:
69
- torch.cuda.set_device(device_id)
70
-
71
- while True:
72
- try:
73
- r = in_queue.get()
74
- except Exception:
75
- if done_event.is_set():
76
- return
77
- raise
78
- if r is None:
79
- break
80
- if isinstance(r[1], ExceptionWrapper):
81
- out_queue.put(r)
82
- continue
83
- idx, batch = r
84
- try:
85
- if pin_memory:
86
- batch = pin_memory_batch(batch)
87
- except Exception:
88
- out_queue.put((idx, ExceptionWrapper(sys.exc_info())))
89
- else:
90
- out_queue.put((idx, batch))
91
-
92
- numpy_type_map = {
93
- 'float64': torch.DoubleTensor,
94
- 'float32': torch.FloatTensor,
95
- 'float16': torch.HalfTensor,
96
- 'int64': torch.LongTensor,
97
- 'int32': torch.IntTensor,
98
- 'int16': torch.ShortTensor,
99
- 'int8': torch.CharTensor,
100
- 'uint8': torch.ByteTensor,
101
- }
102
-
103
-
104
- def default_collate(batch):
105
- "Puts each data field into a tensor with outer dimension batch size"
106
-
107
- error_msg = "batch must contain tensors, numbers, dicts or lists; found {}"
108
- elem_type = type(batch[0])
109
- if torch.is_tensor(batch[0]):
110
- out = None
111
- if _use_shared_memory:
112
- # If we're in a background process, concatenate directly into a
113
- # shared memory tensor to avoid an extra copy
114
- numel = sum([x.numel() for x in batch])
115
- storage = batch[0].storage()._new_shared(numel)
116
- out = batch[0].new(storage)
117
- return torch.stack(batch, 0, out=out)
118
- elif elem_type.__module__ == 'numpy' and elem_type.__name__ != 'str_' \
119
- and elem_type.__name__ != 'string_':
120
- elem = batch[0]
121
- if elem_type.__name__ == 'ndarray':
122
- # array of string classes and object
123
- if re.search('[SaUO]', elem.dtype.str) is not None:
124
- raise TypeError(error_msg.format(elem.dtype))
125
-
126
- return torch.stack([torch.from_numpy(b) for b in batch], 0)
127
- if elem.shape == (): # scalars
128
- py_type = float if elem.dtype.name.startswith('float') else int
129
- return numpy_type_map[elem.dtype.name](list(map(py_type, batch)))
130
- elif isinstance(batch[0], int_classes):
131
- return torch.LongTensor(batch)
132
- elif isinstance(batch[0], float):
133
- return torch.DoubleTensor(batch)
134
- elif isinstance(batch[0], string_classes):
135
- return batch
136
- elif isinstance(batch[0], collections.Mapping):
137
- return {key: default_collate([d[key] for d in batch]) for key in batch[0]}
138
- elif isinstance(batch[0], collections.Sequence):
139
- transposed = zip(*batch)
140
- return [default_collate(samples) for samples in transposed]
141
-
142
- raise TypeError((error_msg.format(type(batch[0]))))
143
-
144
-
145
- def pin_memory_batch(batch):
146
- if torch.is_tensor(batch):
147
- return batch.pin_memory()
148
- elif isinstance(batch, string_classes):
149
- return batch
150
- elif isinstance(batch, collections.Mapping):
151
- return {k: pin_memory_batch(sample) for k, sample in batch.items()}
152
- elif isinstance(batch, collections.Sequence):
153
- return [pin_memory_batch(sample) for sample in batch]
154
- else:
155
- return batch
156
-
157
-
158
- _SIGCHLD_handler_set = False
159
- """Whether SIGCHLD handler is set for DataLoader worker failures. Only one
160
- handler needs to be set for all DataLoaders in a process."""
161
-
162
-
163
- def _set_SIGCHLD_handler():
164
- # Windows doesn't support SIGCHLD handler
165
- if sys.platform == 'win32':
166
- return
167
- # can't set signal in child threads
168
- if not isinstance(threading.current_thread(), threading._MainThread):
169
- return
170
- global _SIGCHLD_handler_set
171
- if _SIGCHLD_handler_set:
172
- return
173
- previous_handler = signal.getsignal(signal.SIGCHLD)
174
- if not callable(previous_handler):
175
- previous_handler = None
176
-
177
- def handler(signum, frame):
178
- # This following call uses `waitid` with WNOHANG from C side. Therefore,
179
- # Python can still get and update the process status successfully.
180
- _error_if_any_worker_fails()
181
- if previous_handler is not None:
182
- previous_handler(signum, frame)
183
-
184
- signal.signal(signal.SIGCHLD, handler)
185
- _SIGCHLD_handler_set = True
186
-
187
-
188
- class DataLoaderIter(object):
189
- "Iterates once over the DataLoader's dataset, as specified by the sampler"
190
-
191
- def __init__(self, loader):
192
- self.dataset = loader.dataset
193
- self.collate_fn = loader.collate_fn
194
- self.batch_sampler = loader.batch_sampler
195
- self.num_workers = loader.num_workers
196
- self.pin_memory = loader.pin_memory and torch.cuda.is_available()
197
- self.timeout = loader.timeout
198
- self.done_event = threading.Event()
199
-
200
- self.sample_iter = iter(self.batch_sampler)
201
-
202
- if self.num_workers > 0:
203
- self.worker_init_fn = loader.worker_init_fn
204
- self.index_queue = multiprocessing.SimpleQueue()
205
- self.worker_result_queue = multiprocessing.SimpleQueue()
206
- self.batches_outstanding = 0
207
- self.worker_pids_set = False
208
- self.shutdown = False
209
- self.send_idx = 0
210
- self.rcvd_idx = 0
211
- self.reorder_dict = {}
212
-
213
- base_seed = torch.LongTensor(1).random_(0, 2**31-1)[0]
214
- self.workers = [
215
- multiprocessing.Process(
216
- target=_worker_loop,
217
- args=(self.dataset, self.index_queue, self.worker_result_queue, self.collate_fn,
218
- base_seed + i, self.worker_init_fn, i))
219
- for i in range(self.num_workers)]
220
-
221
- if self.pin_memory or self.timeout > 0:
222
- self.data_queue = queue.Queue()
223
- if self.pin_memory:
224
- maybe_device_id = torch.cuda.current_device()
225
- else:
226
- # do not initialize cuda context if not necessary
227
- maybe_device_id = None
228
- self.worker_manager_thread = threading.Thread(
229
- target=_worker_manager_loop,
230
- args=(self.worker_result_queue, self.data_queue, self.done_event, self.pin_memory,
231
- maybe_device_id))
232
- self.worker_manager_thread.daemon = True
233
- self.worker_manager_thread.start()
234
- else:
235
- self.data_queue = self.worker_result_queue
236
-
237
- for w in self.workers:
238
- w.daemon = True # ensure that the worker exits on process exit
239
- w.start()
240
-
241
- _set_worker_pids(id(self), tuple(w.pid for w in self.workers))
242
- _set_SIGCHLD_handler()
243
- self.worker_pids_set = True
244
-
245
- # prime the prefetch loop
246
- for _ in range(2 * self.num_workers):
247
- self._put_indices()
248
-
249
- def __len__(self):
250
- return len(self.batch_sampler)
251
-
252
- def _get_batch(self):
253
- if self.timeout > 0:
254
- try:
255
- return self.data_queue.get(timeout=self.timeout)
256
- except queue.Empty:
257
- raise RuntimeError('DataLoader timed out after {} seconds'.format(self.timeout))
258
- else:
259
- return self.data_queue.get()
260
-
261
- def __next__(self):
262
- if self.num_workers == 0: # same-process loading
263
- indices = next(self.sample_iter) # may raise StopIteration
264
- batch = self.collate_fn([self.dataset[i] for i in indices])
265
- if self.pin_memory:
266
- batch = pin_memory_batch(batch)
267
- return batch
268
-
269
- # check if the next sample has already been generated
270
- if self.rcvd_idx in self.reorder_dict:
271
- batch = self.reorder_dict.pop(self.rcvd_idx)
272
- return self._process_next_batch(batch)
273
-
274
- if self.batches_outstanding == 0:
275
- self._shutdown_workers()
276
- raise StopIteration
277
-
278
- while True:
279
- assert (not self.shutdown and self.batches_outstanding > 0)
280
- idx, batch = self._get_batch()
281
- self.batches_outstanding -= 1
282
- if idx != self.rcvd_idx:
283
- # store out-of-order samples
284
- self.reorder_dict[idx] = batch
285
- continue
286
- return self._process_next_batch(batch)
287
-
288
- next = __next__ # Python 2 compatibility
289
-
290
- def __iter__(self):
291
- return self
292
-
293
- def _put_indices(self):
294
- assert self.batches_outstanding < 2 * self.num_workers
295
- indices = next(self.sample_iter, None)
296
- if indices is None:
297
- return
298
- self.index_queue.put((self.send_idx, indices))
299
- self.batches_outstanding += 1
300
- self.send_idx += 1
301
-
302
- def _process_next_batch(self, batch):
303
- self.rcvd_idx += 1
304
- self._put_indices()
305
- if isinstance(batch, ExceptionWrapper):
306
- raise batch.exc_type(batch.exc_msg)
307
- return batch
308
-
309
- def __getstate__(self):
310
- # TODO: add limited pickling support for sharing an iterator
311
- # across multiple threads for HOGWILD.
312
- # Probably the best way to do this is by moving the sample pushing
313
- # to a separate thread and then just sharing the data queue
314
- # but signalling the end is tricky without a non-blocking API
315
- raise NotImplementedError("DataLoaderIterator cannot be pickled")
316
-
317
- def _shutdown_workers(self):
318
- try:
319
- if not self.shutdown:
320
- self.shutdown = True
321
- self.done_event.set()
322
- # if worker_manager_thread is waiting to put
323
- while not self.data_queue.empty():
324
- self.data_queue.get()
325
- for _ in self.workers:
326
- self.index_queue.put(None)
327
- # done_event should be sufficient to exit worker_manager_thread,
328
- # but be safe here and put another None
329
- self.worker_result_queue.put(None)
330
- finally:
331
- # removes pids no matter what
332
- if self.worker_pids_set:
333
- _remove_worker_pids(id(self))
334
- self.worker_pids_set = False
335
-
336
- def __del__(self):
337
- if self.num_workers > 0:
338
- self._shutdown_workers()
339
-
340
-
341
- class DataLoader(object):
342
- """
343
- Data loader. Combines a dataset and a sampler, and provides
344
- single- or multi-process iterators over the dataset.
345
-
346
- Arguments:
347
- dataset (Dataset): dataset from which to load the data.
348
- batch_size (int, optional): how many samples per batch to load
349
- (default: 1).
350
- shuffle (bool, optional): set to ``True`` to have the data reshuffled
351
- at every epoch (default: False).
352
- sampler (Sampler, optional): defines the strategy to draw samples from
353
- the dataset. If specified, ``shuffle`` must be False.
354
- batch_sampler (Sampler, optional): like sampler, but returns a batch of
355
- indices at a time. Mutually exclusive with batch_size, shuffle,
356
- sampler, and drop_last.
357
- num_workers (int, optional): how many subprocesses to use for data
358
- loading. 0 means that the data will be loaded in the main process.
359
- (default: 0)
360
- collate_fn (callable, optional): merges a list of samples to form a mini-batch.
361
- pin_memory (bool, optional): If ``True``, the data loader will copy tensors
362
- into CUDA pinned memory before returning them.
363
- drop_last (bool, optional): set to ``True`` to drop the last incomplete batch,
364
- if the dataset size is not divisible by the batch size. If ``False`` and
365
- the size of dataset is not divisible by the batch size, then the last batch
366
- will be smaller. (default: False)
367
- timeout (numeric, optional): if positive, the timeout value for collecting a batch
368
- from workers. Should always be non-negative. (default: 0)
369
- worker_init_fn (callable, optional): If not None, this will be called on each
370
- worker subprocess with the worker id (an int in ``[0, num_workers - 1]``) as
371
- input, after seeding and before data loading. (default: None)
372
-
373
- .. note:: By default, each worker will have its PyTorch seed set to
374
- ``base_seed + worker_id``, where ``base_seed`` is a long generated
375
- by main process using its RNG. You may use ``torch.initial_seed()`` to access
376
- this value in :attr:`worker_init_fn`, which can be used to set other seeds
377
- (e.g. NumPy) before data loading.
378
-
379
- .. warning:: If ``spawn`` start method is used, :attr:`worker_init_fn` cannot be an
380
- unpicklable object, e.g., a lambda function.
381
- """
382
-
383
- def __init__(self, dataset, batch_size=1, shuffle=False, sampler=None, batch_sampler=None,
384
- num_workers=0, collate_fn=default_collate, pin_memory=False, drop_last=False,
385
- timeout=0, worker_init_fn=None):
386
- self.dataset = dataset
387
- self.batch_size = batch_size
388
- self.num_workers = num_workers
389
- self.collate_fn = collate_fn
390
- self.pin_memory = pin_memory
391
- self.drop_last = drop_last
392
- self.timeout = timeout
393
- self.worker_init_fn = worker_init_fn
394
-
395
- if timeout < 0:
396
- raise ValueError('timeout option should be non-negative')
397
-
398
- if batch_sampler is not None:
399
- if batch_size > 1 or shuffle or sampler is not None or drop_last:
400
- raise ValueError('batch_sampler is mutually exclusive with '
401
- 'batch_size, shuffle, sampler, and drop_last')
402
-
403
- if sampler is not None and shuffle:
404
- raise ValueError('sampler is mutually exclusive with shuffle')
405
-
406
- if self.num_workers < 0:
407
- raise ValueError('num_workers cannot be negative; '
408
- 'use num_workers=0 to disable multiprocessing.')
409
-
410
- if batch_sampler is None:
411
- if sampler is None:
412
- if shuffle:
413
- sampler = RandomSampler(dataset)
414
- else:
415
- sampler = SequentialSampler(dataset)
416
- batch_sampler = BatchSampler(sampler, batch_size, drop_last)
417
-
418
- self.sampler = sampler
419
- self.batch_sampler = batch_sampler
420
-
421
- def __iter__(self):
422
- return DataLoaderIter(self)
423
-
424
- def __len__(self):
425
- return len(self.batch_sampler)
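This is a vendored copy of an old PyTorch `DataLoader` (note the `torch._six` import, which only exists in older PyTorch releases). A minimal usage sketch, assuming the module above is importable and a compatible PyTorch version is installed:

```python
# Toy dataset run through the vendored DataLoader above (single-process path).
import torch
from torch.utils.data import Dataset


class ToyDataset(Dataset):
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return torch.full((2,), float(idx)), idx


loader = DataLoader(ToyDataset(), batch_size=4, shuffle=True, num_workers=0)
for features, labels in loader:
    # default_collate stacks the tensors and wraps the integer labels in a LongTensor.
    print(features.shape, labels)  # torch.Size([4, 2]) tensor([...])
```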
 
spaces/Ananthap4/itineraryGenerator/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: ItineraryGenerator
3
- emoji: 🌍
4
- colorFrom: indigo
5
- colorTo: green
6
- sdk: gradio
7
- sdk_version: 3.27.0
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_ddpm_parallel.py DELETED
@@ -1,604 +0,0 @@
1
- # Copyright 2023 ParaDiGMS authors and The HuggingFace Team. All rights reserved.
2
- #
3
- # Licensed under the Apache License, Version 2.0 (the "License");
4
- # you may not use this file except in compliance with the License.
5
- # You may obtain a copy of the License at
6
- #
7
- # http://www.apache.org/licenses/LICENSE-2.0
8
- #
9
- # Unless required by applicable law or agreed to in writing, software
10
- # distributed under the License is distributed on an "AS IS" BASIS,
11
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
- # See the License for the specific language governing permissions and
13
- # limitations under the License.
14
-
15
- # DISCLAIMER: This file is strongly influenced by https://github.com/ermongroup/ddim
16
-
17
- import math
18
- from dataclasses import dataclass
19
- from typing import List, Optional, Tuple, Union
20
-
21
- import numpy as np
22
- import torch
23
-
24
- from ..configuration_utils import ConfigMixin, register_to_config
25
- from ..utils import BaseOutput, randn_tensor
26
- from .scheduling_utils import KarrasDiffusionSchedulers, SchedulerMixin
27
-
28
-
29
- @dataclass
30
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput
31
- class DDPMParallelSchedulerOutput(BaseOutput):
32
- """
33
- Output class for the scheduler's step function output.
34
-
35
- Args:
36
- prev_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
37
- Computed sample (x_{t-1}) of previous timestep. `prev_sample` should be used as next model input in the
38
- denoising loop.
39
- pred_original_sample (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)` for images):
40
- The predicted denoised sample (x_{0}) based on the model output from the current timestep.
41
- `pred_original_sample` can be used to preview progress or for guidance.
42
- """
43
-
44
- prev_sample: torch.FloatTensor
45
- pred_original_sample: Optional[torch.FloatTensor] = None
46
-
47
-
48
- # Copied from diffusers.schedulers.scheduling_ddpm.betas_for_alpha_bar
49
- def betas_for_alpha_bar(
50
- num_diffusion_timesteps,
51
- max_beta=0.999,
52
- alpha_transform_type="cosine",
53
- ):
54
- """
55
- Create a beta schedule that discretizes the given alpha_t_bar function, which defines the cumulative product of
56
- (1-beta) over time from t = [0,1].
57
-
58
- Contains a function alpha_bar that takes an argument t and transforms it to the cumulative product of (1-beta) up
59
- to that part of the diffusion process.
60
-
61
-
62
- Args:
63
- num_diffusion_timesteps (`int`): the number of betas to produce.
64
- max_beta (`float`): the maximum beta to use; use values lower than 1 to
65
- prevent singularities.
66
- alpha_transform_type (`str`, *optional*, default to `cosine`): the type of noise schedule for alpha_bar.
67
- Choose from `cosine` or `exp`
68
-
69
- Returns:
70
- betas (`np.ndarray`): the betas used by the scheduler to step the model outputs
71
- """
72
- if alpha_transform_type == "cosine":
73
-
74
- def alpha_bar_fn(t):
75
- return math.cos((t + 0.008) / 1.008 * math.pi / 2) ** 2
76
-
77
- elif alpha_transform_type == "exp":
78
-
79
- def alpha_bar_fn(t):
80
- return math.exp(t * -12.0)
81
-
82
- else:
83
- raise ValueError(f"Unsupported alpha_tranform_type: {alpha_transform_type}")
84
-
85
- betas = []
86
- for i in range(num_diffusion_timesteps):
87
- t1 = i / num_diffusion_timesteps
88
- t2 = (i + 1) / num_diffusion_timesteps
89
- betas.append(min(1 - alpha_bar_fn(t2) / alpha_bar_fn(t1), max_beta))
90
- return torch.tensor(betas, dtype=torch.float32)
91
-
92
-
93
- class DDPMParallelScheduler(SchedulerMixin, ConfigMixin):
94
- """
95
- Denoising diffusion probabilistic models (DDPMs) explores the connections between denoising score matching and
96
- Langevin dynamics sampling.
97
-
98
- [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
99
- function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
100
- [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
101
- [`~SchedulerMixin.from_pretrained`] functions.
102
-
103
- For more details, see the original paper: https://arxiv.org/abs/2006.11239
104
-
105
- Args:
106
- num_train_timesteps (`int`): number of diffusion steps used to train the model.
107
- beta_start (`float`): the starting `beta` value of inference.
108
- beta_end (`float`): the final `beta` value.
109
- beta_schedule (`str`):
110
- the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
111
- `linear`, `scaled_linear`, `squaredcos_cap_v2` or `sigmoid`.
112
- trained_betas (`np.ndarray`, optional):
113
- option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
114
- variance_type (`str`):
115
- options to clip the variance used when adding noise to the denoised sample. Choose from `fixed_small`,
116
- `fixed_small_log`, `fixed_large`, `fixed_large_log`, `learned` or `learned_range`.
117
- clip_sample (`bool`, default `True`):
118
- option to clip predicted sample for numerical stability.
119
- clip_sample_range (`float`, default `1.0`):
120
- the maximum magnitude for sample clipping. Valid only when `clip_sample=True`.
121
- prediction_type (`str`, default `epsilon`, optional):
122
- prediction type of the scheduler function, one of `epsilon` (predicting the noise of the diffusion
123
- process), `sample` (directly predicting the noisy sample`) or `v_prediction` (see section 2.4
124
- https://imagen.research.google/video/paper.pdf)
125
- thresholding (`bool`, default `False`):
126
- whether to use the "dynamic thresholding" method (introduced by Imagen, https://arxiv.org/abs/2205.11487).
127
- Note that the thresholding method is unsuitable for latent-space diffusion models (such as
128
- stable-diffusion).
129
- dynamic_thresholding_ratio (`float`, default `0.995`):
130
- the ratio for the dynamic thresholding method. Default is `0.995`, the same as Imagen
131
- (https://arxiv.org/abs/2205.11487). Valid only when `thresholding=True`.
132
- sample_max_value (`float`, default `1.0`):
133
- the threshold value for dynamic thresholding. Valid only when `thresholding=True`.
134
- timestep_spacing (`str`, default `"leading"`):
135
- The way the timesteps should be scaled. Refer to Table 2. of [Common Diffusion Noise Schedules and Sample
136
- Steps are Flawed](https://arxiv.org/abs/2305.08891) for more information.
137
- steps_offset (`int`, default `0`):
138
- an offset added to the inference steps. You can use a combination of `offset=1` and
139
- `set_alpha_to_one=False`, to make the last step use step 0 for the previous alpha product, as done in
140
- stable diffusion.
141
- """
142
-
143
- _compatibles = [e.name for e in KarrasDiffusionSchedulers]
144
- order = 1
145
- _is_ode_scheduler = False
146
-
147
- @register_to_config
148
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.__init__
149
- def __init__(
150
- self,
151
- num_train_timesteps: int = 1000,
152
- beta_start: float = 0.0001,
153
- beta_end: float = 0.02,
154
- beta_schedule: str = "linear",
155
- trained_betas: Optional[Union[np.ndarray, List[float]]] = None,
156
- variance_type: str = "fixed_small",
157
- clip_sample: bool = True,
158
- prediction_type: str = "epsilon",
159
- thresholding: bool = False,
160
- dynamic_thresholding_ratio: float = 0.995,
161
- clip_sample_range: float = 1.0,
162
- sample_max_value: float = 1.0,
163
- timestep_spacing: str = "leading",
164
- steps_offset: int = 0,
165
- ):
166
- if trained_betas is not None:
167
- self.betas = torch.tensor(trained_betas, dtype=torch.float32)
168
- elif beta_schedule == "linear":
169
- self.betas = torch.linspace(beta_start, beta_end, num_train_timesteps, dtype=torch.float32)
170
- elif beta_schedule == "scaled_linear":
171
- # this schedule is very specific to the latent diffusion model.
172
- self.betas = (
173
- torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2
174
- )
175
- elif beta_schedule == "squaredcos_cap_v2":
176
- # Glide cosine schedule
177
- self.betas = betas_for_alpha_bar(num_train_timesteps)
178
- elif beta_schedule == "sigmoid":
179
- # GeoDiff sigmoid schedule
180
- betas = torch.linspace(-6, 6, num_train_timesteps)
181
- self.betas = torch.sigmoid(betas) * (beta_end - beta_start) + beta_start
182
- else:
183
- raise NotImplementedError(f"{beta_schedule} does is not implemented for {self.__class__}")
184
-
185
- self.alphas = 1.0 - self.betas
186
- self.alphas_cumprod = torch.cumprod(self.alphas, dim=0)
187
- self.one = torch.tensor(1.0)
188
-
189
- # standard deviation of the initial noise distribution
190
- self.init_noise_sigma = 1.0
191
-
192
- # setable values
193
- self.custom_timesteps = False
194
- self.num_inference_steps = None
195
- self.timesteps = torch.from_numpy(np.arange(0, num_train_timesteps)[::-1].copy())
196
-
197
- self.variance_type = variance_type
198
-
199
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.scale_model_input
200
- def scale_model_input(self, sample: torch.FloatTensor, timestep: Optional[int] = None) -> torch.FloatTensor:
201
- """
202
- Ensures interchangeability with schedulers that need to scale the denoising model input depending on the
203
- current timestep.
204
-
205
- Args:
206
- sample (`torch.FloatTensor`): input sample
207
- timestep (`int`, optional): current timestep
208
-
209
- Returns:
210
- `torch.FloatTensor`: scaled input sample
211
- """
212
- return sample
213
-
214
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.set_timesteps
215
- def set_timesteps(
216
- self,
217
- num_inference_steps: Optional[int] = None,
218
- device: Union[str, torch.device] = None,
219
- timesteps: Optional[List[int]] = None,
220
- ):
221
- """
222
- Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference.
223
-
224
- Args:
225
- num_inference_steps (`Optional[int]`):
226
- the number of diffusion steps used when generating samples with a pre-trained model. If passed, then
227
- `timesteps` must be `None`.
228
- device (`str` or `torch.device`, optional):
229
- the device to which the timesteps are moved to.
230
- custom_timesteps (`List[int]`, optional):
231
- custom timesteps used to support arbitrary spacing between timesteps. If `None`, then the default
232
- timestep spacing strategy of equal spacing between timesteps is used. If passed, `num_inference_steps`
233
- must be `None`.
234
-
235
- """
236
- if num_inference_steps is not None and timesteps is not None:
237
- raise ValueError("Can only pass one of `num_inference_steps` or `custom_timesteps`.")
238
-
239
- if timesteps is not None:
240
- for i in range(1, len(timesteps)):
241
- if timesteps[i] >= timesteps[i - 1]:
242
- raise ValueError("`custom_timesteps` must be in descending order.")
243
-
244
- if timesteps[0] >= self.config.num_train_timesteps:
245
- raise ValueError(
246
- f"`timesteps` must start before `self.config.train_timesteps`:"
247
- f" {self.config.num_train_timesteps}."
248
- )
249
-
250
- timesteps = np.array(timesteps, dtype=np.int64)
251
- self.custom_timesteps = True
252
- else:
253
- if num_inference_steps > self.config.num_train_timesteps:
254
- raise ValueError(
255
- f"`num_inference_steps`: {num_inference_steps} cannot be larger than `self.config.train_timesteps`:"
256
- f" {self.config.num_train_timesteps} as the unet model trained with this scheduler can only handle"
257
- f" maximal {self.config.num_train_timesteps} timesteps."
258
- )
259
-
260
- self.num_inference_steps = num_inference_steps
261
- self.custom_timesteps = False
262
-
263
- # "linspace", "leading", "trailing" corresponds to annotation of Table 2. of https://arxiv.org/abs/2305.08891
264
- if self.config.timestep_spacing == "linspace":
265
- timesteps = (
266
- np.linspace(0, self.config.num_train_timesteps - 1, num_inference_steps)
267
- .round()[::-1]
268
- .copy()
269
- .astype(np.int64)
270
- )
271
- elif self.config.timestep_spacing == "leading":
272
- step_ratio = self.config.num_train_timesteps // self.num_inference_steps
273
- # creates integer timesteps by multiplying by ratio
274
- # casting to int to avoid issues when num_inference_step is power of 3
275
- timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].copy().astype(np.int64)
276
- timesteps += self.config.steps_offset
277
- elif self.config.timestep_spacing == "trailing":
278
- step_ratio = self.config.num_train_timesteps / self.num_inference_steps
279
- # creates integer timesteps by multiplying by ratio
280
- # casting to int to avoid issues when num_inference_step is power of 3
281
- timesteps = np.round(np.arange(self.config.num_train_timesteps, 0, -step_ratio)).astype(np.int64)
282
- timesteps -= 1
283
- else:
284
- raise ValueError(
285
- f"{self.config.timestep_spacing} is not supported. Please make sure to choose one of 'linspace', 'leading' or 'trailing'."
286
- )
287
-
288
- self.timesteps = torch.from_numpy(timesteps).to(device)
289
-
290
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler._get_variance
291
- def _get_variance(self, t, predicted_variance=None, variance_type=None):
292
- prev_t = self.previous_timestep(t)
293
-
294
- alpha_prod_t = self.alphas_cumprod[t]
295
- alpha_prod_t_prev = self.alphas_cumprod[prev_t] if prev_t >= 0 else self.one
296
- current_beta_t = 1 - alpha_prod_t / alpha_prod_t_prev
297
-
298
- # For t > 0, compute predicted variance βt (see formula (6) and (7) from https://arxiv.org/pdf/2006.11239.pdf)
299
- # and sample from it to get previous sample
300
- # x_{t-1} ~ N(pred_prev_sample, variance) == add variance to pred_sample
301
- variance = (1 - alpha_prod_t_prev) / (1 - alpha_prod_t) * current_beta_t
302
-
303
- # we always take the log of variance, so clamp it to ensure it's not 0
304
- variance = torch.clamp(variance, min=1e-20)
305
-
306
- if variance_type is None:
307
- variance_type = self.config.variance_type
308
-
309
- # hacks - were probably added for training stability
310
- if variance_type == "fixed_small":
311
- variance = variance
312
- # for rl-diffuser https://arxiv.org/abs/2205.09991
313
- elif variance_type == "fixed_small_log":
314
- variance = torch.log(variance)
315
- variance = torch.exp(0.5 * variance)
316
- elif variance_type == "fixed_large":
317
- variance = current_beta_t
318
- elif variance_type == "fixed_large_log":
319
- # Glide max_log
320
- variance = torch.log(current_beta_t)
321
- elif variance_type == "learned":
322
- return predicted_variance
323
- elif variance_type == "learned_range":
324
- min_log = torch.log(variance)
325
- max_log = torch.log(current_beta_t)
326
- frac = (predicted_variance + 1) / 2
327
- variance = frac * max_log + (1 - frac) * min_log
328
-
329
- return variance
330
-
331
- # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler._threshold_sample
332
- def _threshold_sample(self, sample: torch.FloatTensor) -> torch.FloatTensor:
333
- """
334
- "Dynamic thresholding: At each sampling step we set s to a certain percentile absolute pixel value in xt0 (the
335
- prediction of x_0 at timestep t), and if s > 1, then we threshold xt0 to the range [-s, s] and then divide by
336
- s. Dynamic thresholding pushes saturated pixels (those near -1 and 1) inwards, thereby actively preventing
337
- pixels from saturation at each step. We find that dynamic thresholding results in significantly better
338
- photorealism as well as better image-text alignment, especially when using very large guidance weights."
339
-
340
- https://arxiv.org/abs/2205.11487
341
- """
342
- dtype = sample.dtype
343
- batch_size, channels, height, width = sample.shape
344
-
345
- if dtype not in (torch.float32, torch.float64):
346
- sample = sample.float() # upcast for quantile calculation, and clamp not implemented for cpu half
347
-
348
- # Flatten sample for doing quantile calculation along each image
349
- sample = sample.reshape(batch_size, channels * height * width)
350
-
351
- abs_sample = sample.abs() # "a certain percentile absolute pixel value"
352
-
353
- s = torch.quantile(abs_sample, self.config.dynamic_thresholding_ratio, dim=1)
354
- s = torch.clamp(
355
- s, min=1, max=self.config.sample_max_value
356
- ) # When clamped to min=1, equivalent to standard clipping to [-1, 1]
357
-
358
- s = s.unsqueeze(1) # (batch_size, 1) because clamp will broadcast along dim=0
359
- sample = torch.clamp(sample, -s, s) / s # "we threshold xt0 to the range [-s, s] and then divide by s"
360
-
361
- sample = sample.reshape(batch_size, channels, height, width)
362
- sample = sample.to(dtype)
363
-
364
- return sample
365
-
366
- def step(
367
- self,
368
- model_output: torch.FloatTensor,
369
- timestep: int,
370
- sample: torch.FloatTensor,
371
- generator=None,
372
- return_dict: bool = True,
373
- ) -> Union[DDPMParallelSchedulerOutput, Tuple]:
374
- """
375
- Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
376
- process from the learned model outputs (most often the predicted noise).
377
-
378
- Args:
379
- model_output (`torch.FloatTensor`): direct output from learned diffusion model.
380
- timestep (`int`): current discrete timestep in the diffusion chain.
381
- sample (`torch.FloatTensor`):
382
- current instance of sample being created by diffusion process.
383
- generator: random number generator.
384
- return_dict (`bool`): option for returning tuple rather than DDPMParallelSchedulerOutput class
385
-
386
- Returns:
387
- [`~schedulers.scheduling_utils.DDPMParallelSchedulerOutput`] or `tuple`:
388
- [`~schedulers.scheduling_utils.DDPMParallelSchedulerOutput`] if `return_dict` is True, otherwise a `tuple`.
389
- When returning a tuple, the first element is the sample tensor.
390
-
391
- """
392
- t = timestep
393
-
394
- prev_t = self.previous_timestep(t)
395
-
396
- if model_output.shape[1] == sample.shape[1] * 2 and self.variance_type in ["learned", "learned_range"]:
397
- model_output, predicted_variance = torch.split(model_output, sample.shape[1], dim=1)
398
- else:
399
- predicted_variance = None
400
-
401
- # 1. compute alphas, betas
402
- alpha_prod_t = self.alphas_cumprod[t]
403
- alpha_prod_t_prev = self.alphas_cumprod[prev_t] if prev_t >= 0 else self.one
404
- beta_prod_t = 1 - alpha_prod_t
405
- beta_prod_t_prev = 1 - alpha_prod_t_prev
406
- current_alpha_t = alpha_prod_t / alpha_prod_t_prev
407
- current_beta_t = 1 - current_alpha_t
408
-
409
- # 2. compute predicted original sample from predicted noise also called
410
- # "predicted x_0" of formula (15) from https://arxiv.org/pdf/2006.11239.pdf
411
- if self.config.prediction_type == "epsilon":
412
- pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5)
413
- elif self.config.prediction_type == "sample":
414
- pred_original_sample = model_output
415
- elif self.config.prediction_type == "v_prediction":
416
- pred_original_sample = (alpha_prod_t**0.5) * sample - (beta_prod_t**0.5) * model_output
417
- else:
418
- raise ValueError(
419
- f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample` or"
420
- " `v_prediction` for the DDPMScheduler."
421
- )
422
-
423
- # 3. Clip or threshold "predicted x_0"
424
- if self.config.thresholding:
425
- pred_original_sample = self._threshold_sample(pred_original_sample)
426
- elif self.config.clip_sample:
427
- pred_original_sample = pred_original_sample.clamp(
428
- -self.config.clip_sample_range, self.config.clip_sample_range
429
- )
430
-
431
- # 4. Compute coefficients for pred_original_sample x_0 and current sample x_t
432
- # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
433
- pred_original_sample_coeff = (alpha_prod_t_prev ** (0.5) * current_beta_t) / beta_prod_t
434
- current_sample_coeff = current_alpha_t ** (0.5) * beta_prod_t_prev / beta_prod_t
435
-
436
- # 5. Compute predicted previous sample µ_t
437
- # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
438
- pred_prev_sample = pred_original_sample_coeff * pred_original_sample + current_sample_coeff * sample
439
-
440
- # 6. Add noise
441
- variance = 0
442
- if t > 0:
443
- device = model_output.device
444
- variance_noise = randn_tensor(
445
- model_output.shape, generator=generator, device=device, dtype=model_output.dtype
446
- )
447
- if self.variance_type == "fixed_small_log":
448
- variance = self._get_variance(t, predicted_variance=predicted_variance) * variance_noise
449
- elif self.variance_type == "learned_range":
450
-                 variance = self._get_variance(t, predicted_variance=predicted_variance)
-                 variance = torch.exp(0.5 * variance) * variance_noise
-             else:
-                 variance = (self._get_variance(t, predicted_variance=predicted_variance) ** 0.5) * variance_noise
-
-         pred_prev_sample = pred_prev_sample + variance
-
-         if not return_dict:
-             return (pred_prev_sample,)
-
-         return DDPMParallelSchedulerOutput(prev_sample=pred_prev_sample, pred_original_sample=pred_original_sample)
-
-     def batch_step_no_noise(
-         self,
-         model_output: torch.FloatTensor,
-         timesteps: List[int],
-         sample: torch.FloatTensor,
-     ) -> torch.FloatTensor:
-         """
-         Batched version of the `step` function, to be able to reverse the SDE for multiple samples/timesteps at once.
-         Also, does not add any noise to the predicted sample, which is necessary for parallel sampling where the noise
-         is pre-sampled by the pipeline.
-
-         Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
-         process from the learned model outputs (most often the predicted noise).
-
-         Args:
-             model_output (`torch.FloatTensor`): direct output from learned diffusion model.
-             timesteps (`List[int]`):
-                 current discrete timesteps in the diffusion chain. This is now a list of integers.
-             sample (`torch.FloatTensor`):
-                 current instance of sample being created by diffusion process.
-
-         Returns:
-             `torch.FloatTensor`: sample tensor at previous timestep.
-         """
-         t = timesteps
-         num_inference_steps = self.num_inference_steps if self.num_inference_steps else self.config.num_train_timesteps
-         prev_t = t - self.config.num_train_timesteps // num_inference_steps
-
-         t = t.view(-1, *([1] * (model_output.ndim - 1)))
-         prev_t = prev_t.view(-1, *([1] * (model_output.ndim - 1)))
-
-         if model_output.shape[1] == sample.shape[1] * 2 and self.variance_type in ["learned", "learned_range"]:
-             model_output, predicted_variance = torch.split(model_output, sample.shape[1], dim=1)
-         else:
-             pass
-
-         # 1. compute alphas, betas
-         self.alphas_cumprod = self.alphas_cumprod.to(model_output.device)
-         alpha_prod_t = self.alphas_cumprod[t]
-         alpha_prod_t_prev = self.alphas_cumprod[torch.clip(prev_t, min=0)]
-         alpha_prod_t_prev[prev_t < 0] = torch.tensor(1.0)
-
-         beta_prod_t = 1 - alpha_prod_t
-         beta_prod_t_prev = 1 - alpha_prod_t_prev
-         current_alpha_t = alpha_prod_t / alpha_prod_t_prev
-         current_beta_t = 1 - current_alpha_t
-
-         # 2. compute predicted original sample from predicted noise also called
-         # "predicted x_0" of formula (15) from https://arxiv.org/pdf/2006.11239.pdf
-         if self.config.prediction_type == "epsilon":
-             pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5)
-         elif self.config.prediction_type == "sample":
-             pred_original_sample = model_output
-         elif self.config.prediction_type == "v_prediction":
-             pred_original_sample = (alpha_prod_t**0.5) * sample - (beta_prod_t**0.5) * model_output
-         else:
-             raise ValueError(
-                 f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample` or"
-                 " `v_prediction` for the DDPMParallelScheduler."
-             )
-
-         # 3. Clip or threshold "predicted x_0"
-         if self.config.thresholding:
-             pred_original_sample = self._threshold_sample(pred_original_sample)
-         elif self.config.clip_sample:
-             pred_original_sample = pred_original_sample.clamp(
-                 -self.config.clip_sample_range, self.config.clip_sample_range
-             )
-
-         # 4. Compute coefficients for pred_original_sample x_0 and current sample x_t
-         # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
-         pred_original_sample_coeff = (alpha_prod_t_prev ** (0.5) * current_beta_t) / beta_prod_t
-         current_sample_coeff = current_alpha_t ** (0.5) * beta_prod_t_prev / beta_prod_t
-
-         # 5. Compute predicted previous sample µ_t
-         # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
-         pred_prev_sample = pred_original_sample_coeff * pred_original_sample + current_sample_coeff * sample
-
-         return pred_prev_sample
-
-     # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.add_noise
-     def add_noise(
-         self,
-         original_samples: torch.FloatTensor,
-         noise: torch.FloatTensor,
-         timesteps: torch.IntTensor,
-     ) -> torch.FloatTensor:
-         # Make sure alphas_cumprod and timestep have same device and dtype as original_samples
-         alphas_cumprod = self.alphas_cumprod.to(device=original_samples.device, dtype=original_samples.dtype)
-         timesteps = timesteps.to(original_samples.device)
-
-         sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5
-         sqrt_alpha_prod = sqrt_alpha_prod.flatten()
-         while len(sqrt_alpha_prod.shape) < len(original_samples.shape):
-             sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1)
-
-         sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5
-         sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten()
-         while len(sqrt_one_minus_alpha_prod.shape) < len(original_samples.shape):
-             sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1)
-
-         noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise
-         return noisy_samples
-
-     # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.get_velocity
-     def get_velocity(
-         self, sample: torch.FloatTensor, noise: torch.FloatTensor, timesteps: torch.IntTensor
-     ) -> torch.FloatTensor:
-         # Make sure alphas_cumprod and timestep have same device and dtype as sample
-         alphas_cumprod = self.alphas_cumprod.to(device=sample.device, dtype=sample.dtype)
-         timesteps = timesteps.to(sample.device)
-
-         sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5
-         sqrt_alpha_prod = sqrt_alpha_prod.flatten()
-         while len(sqrt_alpha_prod.shape) < len(sample.shape):
-             sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1)
-
-         sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5
-         sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten()
-         while len(sqrt_one_minus_alpha_prod.shape) < len(sample.shape):
-             sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1)
-
-         velocity = sqrt_alpha_prod * noise - sqrt_one_minus_alpha_prod * sample
-         return velocity
-
-     def __len__(self):
-         return self.config.num_train_timesteps
-
-     # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.previous_timestep
-     def previous_timestep(self, timestep):
-         if self.custom_timesteps:
-             index = (self.timesteps == timestep).nonzero(as_tuple=True)[0][0]
-             if index == self.timesteps.shape[0] - 1:
-                 prev_t = torch.tensor(-1)
-             else:
-                 prev_t = self.timesteps[index + 1]
-         else:
-             num_inference_steps = (
-                 self.num_inference_steps if self.num_inference_steps else self.config.num_train_timesteps
-             )
-             prev_t = timestep - self.config.num_train_timesteps // num_inference_steps
-
-         return prev_t
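
For context on how the deleted scheduler was meant to be driven, here is a minimal, hedged sketch of the training-side use of the `add_noise` and `previous_timestep` methods shown above. It assumes `DDPMParallelScheduler` is importable from `diffusers` (the package this file belongs to); the batch shapes are placeholders, not values from the deleted code.

```python
# Hypothetical usage sketch for the scheduler deleted above (not part of the diff).
import torch
from diffusers import DDPMParallelScheduler  # assumes the class ships with diffusers

scheduler = DDPMParallelScheduler(num_train_timesteps=1000)

clean_images = torch.randn(4, 3, 64, 64)  # placeholder batch of x_0
noise = torch.randn_like(clean_images)
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (4,))

# Forward diffusion: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps,
# which is exactly what `add_noise` above computes.
noisy_images = scheduler.add_noise(clean_images, noise, timesteps)

# `previous_timestep` returns the index the reverse step would move to.
print(scheduler.previous_timestep(timesteps[0]))
```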
spaces/Andy1621/uniformer_image_detection/mmdet/datasets/deepfashion.py DELETED
@@ -1,10 +0,0 @@
- from .builder import DATASETS
- from .coco import CocoDataset
-
-
- @DATASETS.register_module()
- class DeepFashionDataset(CocoDataset):
-
-     CLASSES = ('top', 'skirt', 'leggings', 'dress', 'outer', 'pants', 'bag',
-                'neckwear', 'headwear', 'eyeglass', 'belt', 'footwear', 'hair',
-                'skin', 'face')
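
Since the deleted module only registers a dataset class, a hedged sketch of how such a registration is typically consumed in an MMDetection config may help; the annotation and image paths below are illustrative placeholders, not values taken from this repository.

```python
# Hypothetical MMDetection config fragment referring to the dataset registered above.
data = dict(
    train=dict(
        type='DeepFashionDataset',  # name registered via @DATASETS.register_module()
        ann_file='data/DeepFashion/annotations/train.json',  # placeholder path
        img_prefix='data/DeepFashion/Img/',                  # placeholder path
    )
)
```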
spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r50-d8_480x480_40k_pascal_context_59.py DELETED
@@ -1,10 +0,0 @@
- _base_ = [
-     '../_base_/models/deeplabv3plus_r50-d8.py',
-     '../_base_/datasets/pascal_context_59.py', '../_base_/default_runtime.py',
-     '../_base_/schedules/schedule_40k.py'
- ]
- model = dict(
-     decode_head=dict(num_classes=59),
-     auxiliary_head=dict(num_classes=59),
-     test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320)))
- optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001)
spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r18-d8_512x1024_80k_cityscapes.py DELETED
@@ -1,9 +0,0 @@
- _base_ = './pspnet_r50-d8_512x1024_80k_cityscapes.py'
- model = dict(
-     pretrained='open-mmlab://resnet18_v1c',
-     backbone=dict(depth=18),
-     decode_head=dict(
-         in_channels=512,
-         channels=128,
-     ),
-     auxiliary_head=dict(in_channels=256, channels=64))
spaces/AnimalEquality/chatbot/lv_recipe_chatbot/ingredient_vision.py DELETED
@@ -1,132 +0,0 @@
1
- # AUTOGENERATED! DO NOT EDIT! File to edit: ../nbs/03_ingredient_vision.ipynb.
2
-
3
- # %% auto 0
4
- __all__ = ['SAMPLE_IMG_DIR', 'format_image', 'BlipImageCaptioning', 'BlipVQA', 'VeganIngredientFinder']
5
-
6
- # %% ../nbs/03_ingredient_vision.ipynb 3
7
- import imghdr
8
- import os
9
- import time
10
- from pathlib import Path
11
-
12
- import numpy as np
13
- import torch
14
- from PIL import Image
15
- from transformers import (
16
- BlipForConditionalGeneration,
17
- BlipForQuestionAnswering,
18
- BlipProcessor,
19
- pipeline,
20
- )
21
-
22
- import constants
23
-
24
- # %% ../nbs/03_ingredient_vision.ipynb 7
25
- # fmt: off
26
- def format_image(
27
- image: str # Image file path
28
- ):
29
- # fmt: on
30
- img = Image.open(image)
31
- width, height = img.size
32
- ratio = min(512 / width, 512 / height)
33
- width_new, height_new = (round(width * ratio), round(height * ratio))
34
- width_new = int(np.round(width_new / 64.0)) * 64
35
- height_new = int(np.round(height_new / 64.0)) * 64
36
- img = img.resize((width_new, height_new))
37
- img = img.convert("RGB")
38
- return img
39
-
40
- # %% ../nbs/03_ingredient_vision.ipynb 8
41
- class BlipImageCaptioning:
42
- """
43
- Useful when you want to know what is inside the photo.
44
- """
45
-
46
- # fmt: off
47
- def __init__(self,
48
- device: str
49
- ): # pytorch hardware identifier to run model on options: "cpu, cuda_0, cuda_1 ..., cuda_n"
50
- # fmt: on
51
- self.device = device
52
- self.torch_dtype = torch.float16 if "cuda" in device else torch.float32
53
- self.processor = BlipProcessor.from_pretrained(
54
- "Salesforce/blip-image-captioning-base"
55
- )
56
- self.model = BlipForConditionalGeneration.from_pretrained(
57
- "Salesforce/blip-image-captioning-base", torch_dtype=self.torch_dtype
58
- ).to(self.device)
59
-
60
- def inference(self,
61
- image: Image
62
- ) -> str: # Caption for the image
63
- inputs = self.processor(image, return_tensors="pt").to(
64
- self.device, self.torch_dtype
65
- )
66
- out = self.model.generate(**inputs, max_new_tokens=50)
67
- captions = self.processor.decode(out[0], skip_special_tokens=True)
68
- return captions
69
-
70
- # %% ../nbs/03_ingredient_vision.ipynb 10
71
- class BlipVQA:
72
- # fmt: off
73
- """
74
- BLIP Visual Question Answering
75
- Useful when you need an answer for a question based on an image.
76
- Examples:
77
- what is the background color of this image, how many cats are in this figure, what is in this figure?
78
- """
79
- # fmt: on
80
- def __init__(self, device: str):
81
- self.torch_dtype = torch.float16 if "cuda" in device else torch.float32
82
- self.device = device
83
- self.processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
84
- self.model = BlipForQuestionAnswering.from_pretrained(
85
- "Salesforce/blip-vqa-base", torch_dtype=self.torch_dtype
86
- ).to(self.device)
87
-
88
- # fmt: off
89
- def inference(self,
90
- image: Image,
91
- question: str
92
- ) -> str: # Answer to the query on the image
93
- # fmt: on
94
- image = image.convert("RGB")
95
- inputs = self.processor(image, question, return_tensors="pt").to(
96
- self.device, self.torch_dtype
97
- )
98
- out = self.model.generate(**inputs, max_new_tokens=100)
99
- answer = self.processor.decode(out[0], skip_special_tokens=True)
100
- return answer
101
-
102
- # %% ../nbs/03_ingredient_vision.ipynb 12
103
- SAMPLE_IMG_DIR = Path(f"{constants.ROOT_DIR}/assets/images/vegan_ingredients")
104
-
105
- # %% ../nbs/03_ingredient_vision.ipynb 19
106
- class VeganIngredientFinder:
107
- def __init__(self):
108
- self.vqa = BlipVQA("cpu")
109
-
110
- # fmt: off
111
- def list_ingredients(self,
112
- img: str # Image file path
113
- ) -> str:
114
- #fmt: on
115
- img = format_image(img)
116
- answer = self.vqa.inference(
117
- img, f"What are three of the vegetables seen in the image if any?"
118
- )
119
- answer += "\n" + self.vqa.inference(
120
- img, f"What are three of the fruits seen in the image if any?"
121
- )
122
- answer += "\n" + self.vqa.inference(
123
- img, f"What grains and starches are in the image if any?"
124
- )
125
- if (
126
- "yes"
127
- in self.vqa.inference(
128
- img, f"Is there plant-based milk in the image?"
129
- ).lower()
130
- ):
131
- answer += "\n" + "plant-based milk"
132
- return answer
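
A brief, hedged sketch of how the classes above appear intended to be used. It assumes the module is importable as `lv_recipe_chatbot.ingredient_vision` (matching the file path and the nbdev `__all__` list) and that the BLIP weights download succeeds; the image path is a placeholder.

```python
# Hypothetical usage of the vision helpers deleted above.
from lv_recipe_chatbot.ingredient_vision import (
    BlipImageCaptioning,
    VeganIngredientFinder,
    format_image,
)

img = format_image("fridge_photo.jpg")            # placeholder path; resized to multiples of 64
print(BlipImageCaptioning("cpu").inference(img))  # free-form caption of the photo

finder = VeganIngredientFinder()                  # runs BLIP VQA on CPU
print(finder.list_ingredients("fridge_photo.jpg"))
```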
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/video/optflow.py DELETED
@@ -1,254 +0,0 @@
1
- # Copyright (c) OpenMMLab. All rights reserved.
2
- import warnings
3
-
4
- import cv2
5
- import numpy as np
6
-
7
- from annotator.uniformer.mmcv.arraymisc import dequantize, quantize
8
- from annotator.uniformer.mmcv.image import imread, imwrite
9
- from annotator.uniformer.mmcv.utils import is_str
10
-
11
-
12
- def flowread(flow_or_path, quantize=False, concat_axis=0, *args, **kwargs):
13
- """Read an optical flow map.
14
-
15
- Args:
16
- flow_or_path (ndarray or str): A flow map or filepath.
17
- quantize (bool): whether to read quantized pair, if set to True,
18
- remaining args will be passed to :func:`dequantize_flow`.
19
- concat_axis (int): The axis that dx and dy are concatenated,
20
- can be either 0 or 1. Ignored if quantize is False.
21
-
22
- Returns:
23
- ndarray: Optical flow represented as a (h, w, 2) numpy array
24
- """
25
- if isinstance(flow_or_path, np.ndarray):
26
- if (flow_or_path.ndim != 3) or (flow_or_path.shape[-1] != 2):
27
- raise ValueError(f'Invalid flow with shape {flow_or_path.shape}')
28
- return flow_or_path
29
- elif not is_str(flow_or_path):
30
- raise TypeError(f'"flow_or_path" must be a filename or numpy array, '
31
- f'not {type(flow_or_path)}')
32
-
33
- if not quantize:
34
- with open(flow_or_path, 'rb') as f:
35
- try:
36
- header = f.read(4).decode('utf-8')
37
- except Exception:
38
- raise IOError(f'Invalid flow file: {flow_or_path}')
39
- else:
40
- if header != 'PIEH':
41
- raise IOError(f'Invalid flow file: {flow_or_path}, '
42
- 'header does not contain PIEH')
43
-
44
- w = np.fromfile(f, np.int32, 1).squeeze()
45
- h = np.fromfile(f, np.int32, 1).squeeze()
46
- flow = np.fromfile(f, np.float32, w * h * 2).reshape((h, w, 2))
47
- else:
48
- assert concat_axis in [0, 1]
49
- cat_flow = imread(flow_or_path, flag='unchanged')
50
- if cat_flow.ndim != 2:
51
- raise IOError(
52
- f'{flow_or_path} is not a valid quantized flow file, '
53
- f'its dimension is {cat_flow.ndim}.')
54
- assert cat_flow.shape[concat_axis] % 2 == 0
55
- dx, dy = np.split(cat_flow, 2, axis=concat_axis)
56
- flow = dequantize_flow(dx, dy, *args, **kwargs)
57
-
58
- return flow.astype(np.float32)
59
-
60
-
61
- def flowwrite(flow, filename, quantize=False, concat_axis=0, *args, **kwargs):
62
- """Write optical flow to file.
63
-
64
- If the flow is not quantized, it will be saved as a .flo file losslessly,
65
- otherwise a jpeg image which is lossy but of much smaller size. (dx and dy
66
- will be concatenated horizontally into a single image if quantize is True.)
67
-
68
- Args:
69
- flow (ndarray): (h, w, 2) array of optical flow.
70
- filename (str): Output filepath.
71
- quantize (bool): Whether to quantize the flow and save it to 2 jpeg
72
- images. If set to True, remaining args will be passed to
73
- :func:`quantize_flow`.
74
- concat_axis (int): The axis that dx and dy are concatenated,
75
- can be either 0 or 1. Ignored if quantize is False.
76
- """
77
- if not quantize:
78
- with open(filename, 'wb') as f:
79
- f.write('PIEH'.encode('utf-8'))
80
- np.array([flow.shape[1], flow.shape[0]], dtype=np.int32).tofile(f)
81
- flow = flow.astype(np.float32)
82
- flow.tofile(f)
83
- f.flush()
84
- else:
85
- assert concat_axis in [0, 1]
86
- dx, dy = quantize_flow(flow, *args, **kwargs)
87
- dxdy = np.concatenate((dx, dy), axis=concat_axis)
88
- imwrite(dxdy, filename)
89
-
90
-
91
- def quantize_flow(flow, max_val=0.02, norm=True):
92
- """Quantize flow to [0, 255].
93
-
94
- After this step, the size of flow will be much smaller, and can be
95
- dumped as jpeg images.
96
-
97
- Args:
98
- flow (ndarray): (h, w, 2) array of optical flow.
99
- max_val (float): Maximum value of flow, values beyond
100
- [-max_val, max_val] will be truncated.
101
- norm (bool): Whether to divide flow values by image width/height.
102
-
103
- Returns:
104
- tuple[ndarray]: Quantized dx and dy.
105
- """
106
- h, w, _ = flow.shape
107
- dx = flow[..., 0]
108
- dy = flow[..., 1]
109
- if norm:
110
- dx = dx / w # avoid inplace operations
111
- dy = dy / h
112
- # use 255 levels instead of 256 to make sure 0 is 0 after dequantization.
113
- flow_comps = [
114
- quantize(d, -max_val, max_val, 255, np.uint8) for d in [dx, dy]
115
- ]
116
- return tuple(flow_comps)
117
-
118
-
119
- def dequantize_flow(dx, dy, max_val=0.02, denorm=True):
120
- """Recover from quantized flow.
121
-
122
- Args:
123
- dx (ndarray): Quantized dx.
124
- dy (ndarray): Quantized dy.
125
- max_val (float): Maximum value used when quantizing.
126
- denorm (bool): Whether to multiply flow values with width/height.
127
-
128
- Returns:
129
- ndarray: Dequantized flow.
130
- """
131
- assert dx.shape == dy.shape
132
- assert dx.ndim == 2 or (dx.ndim == 3 and dx.shape[-1] == 1)
133
-
134
- dx, dy = [dequantize(d, -max_val, max_val, 255) for d in [dx, dy]]
135
-
136
- if denorm:
137
- dx *= dx.shape[1]
138
- dy *= dx.shape[0]
139
- flow = np.dstack((dx, dy))
140
- return flow
141
-
142
-
143
- def flow_warp(img, flow, filling_value=0, interpolate_mode='nearest'):
144
- """Use flow to warp img.
145
-
146
- Args:
147
- img (ndarray, float or uint8): Image to be warped.
148
- flow (ndarray, float): Optical Flow.
149
- filling_value (int): The missing pixels will be set with filling_value.
150
- interpolate_mode (str): bilinear -> Bilinear Interpolation;
151
- nearest -> Nearest Neighbor.
152
-
153
- Returns:
154
- ndarray: Warped image with the same shape of img
155
- """
156
- warnings.warn('This function is just for prototyping and cannot '
157
- 'guarantee the computational efficiency.')
158
- assert flow.ndim == 3, 'Flow must be in 3D arrays.'
159
- height = flow.shape[0]
160
- width = flow.shape[1]
161
- channels = img.shape[2]
162
-
163
- output = np.ones(
164
- (height, width, channels), dtype=img.dtype) * filling_value
165
-
166
- grid = np.indices((height, width)).swapaxes(0, 1).swapaxes(1, 2)
167
- dx = grid[:, :, 0] + flow[:, :, 1]
168
- dy = grid[:, :, 1] + flow[:, :, 0]
169
- sx = np.floor(dx).astype(int)
170
- sy = np.floor(dy).astype(int)
171
- valid = (sx >= 0) & (sx < height - 1) & (sy >= 0) & (sy < width - 1)
172
-
173
- if interpolate_mode == 'nearest':
174
- output[valid, :] = img[dx[valid].round().astype(int),
175
- dy[valid].round().astype(int), :]
176
- elif interpolate_mode == 'bilinear':
177
- # dirty walkround for integer positions
178
- eps_ = 1e-6
179
- dx, dy = dx + eps_, dy + eps_
180
- left_top_ = img[np.floor(dx[valid]).astype(int),
181
- np.floor(dy[valid]).astype(int), :] * (
182
- np.ceil(dx[valid]) - dx[valid])[:, None] * (
183
- np.ceil(dy[valid]) - dy[valid])[:, None]
184
- left_down_ = img[np.ceil(dx[valid]).astype(int),
185
- np.floor(dy[valid]).astype(int), :] * (
186
- dx[valid] - np.floor(dx[valid]))[:, None] * (
187
- np.ceil(dy[valid]) - dy[valid])[:, None]
188
- right_top_ = img[np.floor(dx[valid]).astype(int),
189
- np.ceil(dy[valid]).astype(int), :] * (
190
- np.ceil(dx[valid]) - dx[valid])[:, None] * (
191
- dy[valid] - np.floor(dy[valid]))[:, None]
192
- right_down_ = img[np.ceil(dx[valid]).astype(int),
193
- np.ceil(dy[valid]).astype(int), :] * (
194
- dx[valid] - np.floor(dx[valid]))[:, None] * (
195
- dy[valid] - np.floor(dy[valid]))[:, None]
196
- output[valid, :] = left_top_ + left_down_ + right_top_ + right_down_
197
- else:
198
- raise NotImplementedError(
199
- 'We only support interpolation modes of nearest and bilinear, '
200
- f'but got {interpolate_mode}.')
201
- return output.astype(img.dtype)
202
-
203
-
204
- def flow_from_bytes(content):
205
- """Read dense optical flow from bytes.
206
-
207
- .. note::
208
- This load optical flow function works for FlyingChairs, FlyingThings3D,
209
- Sintel, FlyingChairsOcc datasets, but cannot load the data from
210
- ChairsSDHom.
211
-
212
- Args:
213
- content (bytes): Optical flow bytes got from files or other streams.
214
-
215
- Returns:
216
- ndarray: Loaded optical flow with the shape (H, W, 2).
217
- """
218
-
219
- # header in first 4 bytes
220
- header = content[:4]
221
- if header.decode('utf-8') != 'PIEH':
222
- raise Exception('Flow file header does not contain PIEH')
223
- # width in second 4 bytes
224
- width = np.frombuffer(content[4:], np.int32, 1).squeeze()
225
- # height in third 4 bytes
226
- height = np.frombuffer(content[8:], np.int32, 1).squeeze()
227
- # after first 12 bytes, all bytes are flow
228
- flow = np.frombuffer(content[12:], np.float32, width * height * 2).reshape(
229
- (height, width, 2))
230
-
231
- return flow
232
-
233
-
234
- def sparse_flow_from_bytes(content):
235
- """Read the optical flow in KITTI datasets from bytes.
236
-
237
- This function is modified from RAFT load the `KITTI datasets
238
- <https://github.com/princeton-vl/RAFT/blob/224320502d66c356d88e6c712f38129e60661e80/core/utils/frame_utils.py#L102>`_.
239
-
240
- Args:
241
- content (bytes): Optical flow bytes got from files or other streams.
242
-
243
- Returns:
244
- Tuple(ndarray, ndarray): Loaded optical flow with the shape (H, W, 2)
245
- and flow valid mask with the shape (H, W).
246
- """ # nopa
247
-
248
- content = np.frombuffer(content, np.uint8)
249
- flow = cv2.imdecode(content, cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR)
250
- flow = flow[:, :, ::-1].astype(np.float32)
251
- # flow shape (H, W, 2) valid shape (H, W)
252
- flow, valid = flow[:, :, :2], flow[:, :, 2]
253
- flow = (flow - 2**15) / 64.0
254
- return flow, valid
 
spaces/AsakuraMizu/moe-tts/export_model.py DELETED
@@ -1,13 +0,0 @@
- import torch
-
- if __name__ == '__main__':
-     model_path = "saved_model/18/model.pth"
-     output_path = "saved_model/18/model1.pth"
-     checkpoint_dict = torch.load(model_path, map_location='cpu')
-     checkpoint_dict_new = {}
-     for k, v in checkpoint_dict.items():
-         if k == "optimizer":
-             print("remove optimizer")
-             continue
-         checkpoint_dict_new[k] = v
-     torch.save(checkpoint_dict_new, output_path)
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/pager.py DELETED
@@ -1,34 +0,0 @@
- from abc import ABC, abstractmethod
- from typing import Any
-
-
- class Pager(ABC):
-     """Base class for a pager."""
-
-     @abstractmethod
-     def show(self, content: str) -> None:
-         """Show content in pager.
-
-         Args:
-             content (str): Content to be displayed.
-         """
-
-
- class SystemPager(Pager):
-     """Uses the pager installed on the system."""
-
-     def _pager(self, content: str) -> Any:  # pragma: no cover
-         return __import__("pydoc").pager(content)
-
-     def show(self, content: str) -> None:
-         """Use the same pager used by pydoc."""
-         self._pager(content)
-
-
- if __name__ == "__main__":  # pragma: no cover
-     from .__main__ import make_test_card
-     from .console import Console
-
-     console = Console()
-     with console.pager(styles=True):
-         console.print(make_test_card())
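
A hedged one-liner showing how the `SystemPager` above would be used directly. The import below goes through the upstream `rich` package, which the vendored copy in this file mirrors; output is routed through `pydoc.pager`, so behaviour depends on the host's `PAGER` environment variable.

```python
# Hypothetical direct use of the SystemPager defined above.
from rich.pager import SystemPager  # upstream equivalent of the vendored module

SystemPager().show("Hello from the system pager!\n" * 50)
```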
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tools/deploy/torchscript_mask_rcnn.cpp DELETED
@@ -1,187 +0,0 @@
1
- // Copyright (c) Facebook, Inc. and its affiliates.
2
- // @lint-ignore-every CLANGTIDY
3
- // This is an example code that demonstrates how to run inference
4
- // with a torchscript format Mask R-CNN model exported by ./export_model.py
5
- // using export method=tracing, caffe2_tracing & scripting.
6
-
7
- #include <opencv2/opencv.hpp>
8
- #include <iostream>
9
- #include <string>
10
-
11
- #include <c10/cuda/CUDAStream.h>
12
- #include <torch/csrc/autograd/grad_mode.h>
13
- #include <torch/csrc/jit/runtime/graph_executor.h>
14
- #include <torch/script.h>
15
-
16
- // only needed for export_method=tracing
17
- #include <torchvision/vision.h> // @oss-only
18
- // @fb-only: #include <torchvision/csrc/vision.h>
19
-
20
- using namespace std;
21
-
22
- c10::IValue get_caffe2_tracing_inputs(cv::Mat& img, c10::Device device) {
23
- const int height = img.rows;
24
- const int width = img.cols;
25
- // FPN models require divisibility of 32.
26
- // Tracing mode does padding inside the graph, but caffe2_tracing does not.
27
- assert(height % 32 == 0 && width % 32 == 0);
28
- const int channels = 3;
29
-
30
- auto input =
31
- torch::from_blob(img.data, {1, height, width, channels}, torch::kUInt8);
32
- // NHWC to NCHW
33
- input = input.to(device, torch::kFloat).permute({0, 3, 1, 2}).contiguous();
34
-
35
- std::array<float, 3> im_info_data{height * 1.0f, width * 1.0f, 1.0f};
36
- auto im_info =
37
- torch::from_blob(im_info_data.data(), {1, 3}).clone().to(device);
38
- return std::make_tuple(input, im_info);
39
- }
40
-
41
- c10::IValue get_tracing_inputs(cv::Mat& img, c10::Device device) {
42
- const int height = img.rows;
43
- const int width = img.cols;
44
- const int channels = 3;
45
-
46
- auto input =
47
- torch::from_blob(img.data, {height, width, channels}, torch::kUInt8);
48
- // HWC to CHW
49
- input = input.to(device, torch::kFloat).permute({2, 0, 1}).contiguous();
50
- return input;
51
- }
52
-
53
- // create a Tuple[Dict[str, Tensor]] which is the input type of scripted model
54
- c10::IValue get_scripting_inputs(cv::Mat& img, c10::Device device) {
55
- const int height = img.rows;
56
- const int width = img.cols;
57
- const int channels = 3;
58
-
59
- auto img_tensor =
60
- torch::from_blob(img.data, {height, width, channels}, torch::kUInt8);
61
- // HWC to CHW
62
- img_tensor =
63
- img_tensor.to(device, torch::kFloat).permute({2, 0, 1}).contiguous();
64
- auto dic = c10::Dict<std::string, torch::Tensor>();
65
- dic.insert("image", img_tensor);
66
- return std::make_tuple(dic);
67
- }
68
-
69
- c10::IValue
70
- get_inputs(std::string export_method, cv::Mat& img, c10::Device device) {
71
- // Given an image, create inputs in the format required by the model.
72
- if (export_method == "tracing")
73
- return get_tracing_inputs(img, device);
74
- if (export_method == "caffe2_tracing")
75
- return get_caffe2_tracing_inputs(img, device);
76
- if (export_method == "scripting")
77
- return get_scripting_inputs(img, device);
78
- abort();
79
- }
80
-
81
- struct MaskRCNNOutputs {
82
- at::Tensor pred_boxes, pred_classes, pred_masks, scores;
83
- int num_instances() const {
84
- return pred_boxes.sizes()[0];
85
- }
86
- };
87
-
88
- MaskRCNNOutputs get_outputs(std::string export_method, c10::IValue outputs) {
89
- // Given outputs of the model, extract tensors from it to turn into a
90
- // common MaskRCNNOutputs format.
91
- if (export_method == "tracing") {
92
- auto out_tuple = outputs.toTuple()->elements();
93
- // They are ordered alphabetically by their field name in Instances
94
- return MaskRCNNOutputs{
95
- out_tuple[0].toTensor(),
96
- out_tuple[1].toTensor(),
97
- out_tuple[2].toTensor(),
98
- out_tuple[3].toTensor()};
99
- }
100
- if (export_method == "caffe2_tracing") {
101
- auto out_tuple = outputs.toTuple()->elements();
102
- // A legacy order used by caffe2 models
103
- return MaskRCNNOutputs{
104
- out_tuple[0].toTensor(),
105
- out_tuple[2].toTensor(),
106
- out_tuple[3].toTensor(),
107
- out_tuple[1].toTensor()};
108
- }
109
- if (export_method == "scripting") {
110
- // With the ScriptableAdapter defined in export_model.py, the output is
111
- // List[Dict[str, Any]].
112
- auto out_dict = outputs.toList().get(0).toGenericDict();
113
- return MaskRCNNOutputs{
114
- out_dict.at("pred_boxes").toTensor(),
115
- out_dict.at("pred_classes").toTensor(),
116
- out_dict.at("pred_masks").toTensor(),
117
- out_dict.at("scores").toTensor()};
118
- }
119
- abort();
120
- }
121
-
122
- int main(int argc, const char* argv[]) {
123
- if (argc != 4) {
124
- cerr << R"xx(
125
- Usage:
126
- ./torchscript_mask_rcnn model.ts input.jpg EXPORT_METHOD
127
-
128
- EXPORT_METHOD can be "tracing", "caffe2_tracing" or "scripting".
129
- )xx";
130
- return 1;
131
- }
132
- std::string image_file = argv[2];
133
- std::string export_method = argv[3];
134
- assert(
135
- export_method == "caffe2_tracing" || export_method == "tracing" ||
136
- export_method == "scripting");
137
-
138
- torch::jit::getBailoutDepth() = 1;
139
- torch::autograd::AutoGradMode guard(false);
140
- auto module = torch::jit::load(argv[1]);
141
-
142
- assert(module.buffers().size() > 0);
143
- // Assume that the entire model is on the same device.
144
- // We just put input to this device.
145
- auto device = (*begin(module.buffers())).device();
146
-
147
- cv::Mat input_img = cv::imread(image_file, cv::IMREAD_COLOR);
148
- auto inputs = get_inputs(export_method, input_img, device);
149
-
150
- // Run the network
151
- auto output = module.forward({inputs});
152
- if (device.is_cuda())
153
- c10::cuda::getCurrentCUDAStream().synchronize();
154
-
155
- // run 3 more times to benchmark
156
- int N_benchmark = 3, N_warmup = 1;
157
- auto start_time = chrono::high_resolution_clock::now();
158
- for (int i = 0; i < N_benchmark + N_warmup; ++i) {
159
- if (i == N_warmup)
160
- start_time = chrono::high_resolution_clock::now();
161
- output = module.forward({inputs});
162
- if (device.is_cuda())
163
- c10::cuda::getCurrentCUDAStream().synchronize();
164
- }
165
- auto end_time = chrono::high_resolution_clock::now();
166
- auto ms = chrono::duration_cast<chrono::microseconds>(end_time - start_time)
167
- .count();
168
- cout << "Latency (should vary with different inputs): "
169
- << ms * 1.0 / 1e6 / N_benchmark << " seconds" << endl;
170
-
171
- // Parse Mask R-CNN outputs
172
- auto rcnn_outputs = get_outputs(export_method, output);
173
- cout << "Number of detected objects: " << rcnn_outputs.num_instances()
174
- << endl;
175
-
176
- cout << "pred_boxes: " << rcnn_outputs.pred_boxes.toString() << " "
177
- << rcnn_outputs.pred_boxes.sizes() << endl;
178
- cout << "scores: " << rcnn_outputs.scores.toString() << " "
179
- << rcnn_outputs.scores.sizes() << endl;
180
- cout << "pred_classes: " << rcnn_outputs.pred_classes.toString() << " "
181
- << rcnn_outputs.pred_classes.sizes() << endl;
182
- cout << "pred_masks: " << rcnn_outputs.pred_masks.toString() << " "
183
- << rcnn_outputs.pred_masks.sizes() << endl;
184
-
185
- cout << rcnn_outputs.pred_boxes << endl;
186
- return 0;
187
- }
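
The C++ harness above expects a TorchScript file produced by detectron2's `export_model.py`. As a hedged, Python-side counterpart (the file name and input size are placeholders, and this assumes a model exported with `export_method=tracing`, which takes a single CHW float tensor exactly as built by `get_tracing_inputs` above):

```python
# Hypothetical Python equivalent of loading and timing the traced model above.
import time
import torch

module = torch.jit.load("model.ts", map_location="cpu")  # placeholder path
module.eval()

# CHW float tensor, mirroring get_tracing_inputs() in the C++ code.
img = torch.randint(0, 256, (3, 480, 640), dtype=torch.uint8).float()

with torch.no_grad():
    start = time.perf_counter()
    outputs = module(img)
    print(f"latency: {time.perf_counter() - start:.3f} s")

# Per the C++ comments, tracing outputs are ordered alphabetically by Instances
# field name: (pred_boxes, pred_classes, pred_masks, scores).
print([o.shape for o in outputs])
```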
spaces/AyameYODAYO/xijinpingx/style.css DELETED
@@ -1,28 +0,0 @@
- body {
-   padding: 2rem;
-   font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
- }
-
- h1 {
-   font-size: 16px;
-   margin-top: 0;
- }
-
- p {
-   color: rgb(107, 114, 128);
-   font-size: 15px;
-   margin-bottom: 10px;
-   margin-top: 5px;
- }
-
- .card {
-   max-width: 620px;
-   margin: 0 auto;
-   padding: 16px;
-   border: 1px solid lightgray;
-   border-radius: 16px;
- }
-
- .card p:last-child {
-   margin-bottom: 0;
- }
spaces/Aziizzz/ChestXrayClassification/app.py DELETED
@@ -1,107 +0,0 @@
1
- ### 1. Imports and class names setup ###
2
- import gradio as gr
3
- import os
4
- import torch
5
-
6
- from timeit import default_timer as timer
7
- from typing import Tuple, Dict
8
- import torchvision
9
-
10
- from torch import nn
11
-
12
-
13
- def create_effnetb2_model(num_classes: int = 1,
14
- seed: int = 42):
15
- """Creates an EfficientNetB2 feature extractor model and transforms.
16
-
17
- Args:
18
- num_classes (int, optional): number of classes in the classifier head.
19
- Defaults to 3.
20
- seed (int, optional): random seed value. Defaults to 42.
21
-
22
- Returns:
23
- model (torch.nn.Module): EffNetB2 feature extractor model.
24
- transforms (torchvision.transforms): EffNetB2 image transforms.
25
- """
26
- # Create EffNetB2 pretrained weights, transforms and model
27
- weights = torchvision.models.AlexNet_Weights.DEFAULT
28
- transforms = weights.transforms()
29
- model = torchvision.models.alexnet(weights=weights)
30
-
31
- # Freeze all layers in base model
32
- for param in model.parameters():
33
- param.requires_grad = False
34
-
35
- # Change classifier head with random seed for reproducibility
36
- torch.manual_seed(seed)
37
- model.classifier = nn.Sequential(
38
- nn.Dropout(p=0.2,),
39
- nn.Linear(in_features=9216, out_features=1),
40
- )
41
-
42
- return model, transforms
43
-
44
-
45
- # Setup class names
46
- class_names = ["Normal", "Pneumonia"]
47
-
48
- ### 2. Model and transforms preparation ###
49
-
50
- # Create EffNetB2 model
51
- effnetb2, effnetb2_transforms = create_effnetb2_model(
52
- num_classes=1, # len(class_names) would also work
53
- )
54
-
55
- # Load saved weights
56
- effnetb2.load_state_dict(
57
- torch.load(
58
- f="alexnet_pretrained.pth",
59
- map_location=torch.device("cpu"), # load to CPU
60
- )
61
- )
62
-
63
-
64
- def predict(img) -> Tuple[Dict, float]:
65
- """Transforms and performs a prediction on img and returns prediction and time taken.
66
- """
67
- # Start the timer
68
- start_time = timer()
69
-
70
- # Transform the target image and add a batch dimension
71
- img = effnetb2_transforms(img).unsqueeze(0)
72
-
73
- # Put model into evaluation mode and turn on inference mode
74
- effnetb2.eval()
75
- with torch.inference_mode():
76
- # Pass the transformed image through the model and turn the prediction logits into prediction probabilities
77
- pred_probs = torch.sigmoid(effnetb2(img)).squeeze()
78
-
79
- # Create a prediction label and prediction probability dictionary for each prediction class (this is the required format for Gradio's output parameter)
80
- pred_labels_and_probs = {
81
- 'Normal': 1-pred_probs.item(), 'Pneumonia': pred_probs.item()}
82
-
83
- # Calculate the prediction time
84
- pred_time = round(timer() - start_time, 5)
85
-
86
- # Return the prediction dictionary and prediction time
87
- return pred_labels_and_probs, pred_time
88
-
89
-
90
- example_list = [[f"examples/example{i+1}.jpg"] for i in range(3)]
91
- # Create title, description and article strings
92
- title = "ChestXray Classification"
93
- description = "An Alexnet computer vision model to classify images of Xray Chest images as Normal or Pneumonia."
94
- article = "Created at (https://github.com/azizche/chest_xray_Classification)."
95
-
96
- # Create the Gradio demo
97
- demo = gr.Interface(fn=predict, # mapping function from input to output
98
- inputs=gr.Image(type="pil"), # what are the inputs?
99
- outputs=[gr.Label(num_top_classes=2, label="Predictions"), # what are the outputs?
100
- gr.Number(label="Prediction time (s)")], # our fn has two outputs, therefore we have two outputs
101
- examples=example_list,
102
- title=title,
103
- description=description,
104
- article=article)
105
-
106
- # Launch the demo!
107
- demo.launch()
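
A hedged sketch of exercising the `predict` function above outside Gradio; the example path follows the `example_list` pattern in the script but may not exist locally.

```python
# Hypothetical direct call to the predict() defined above.
from PIL import Image

labels, seconds = predict(Image.open("examples/example1.jpg"))  # placeholder image
print(labels)   # e.g. {'Normal': 0.8, 'Pneumonia': 0.2}
print(seconds)  # wall-clock inference time in seconds
```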
spaces/BenjaminB/pyscript-demo/index.html DELETED
@@ -1,57 +0,0 @@
- <!DOCTYPE html>
- <html lang="en">
- <head>
-   <meta charset="utf-8" />
-   <title>PyScript Test</title>
-   <link rel="stylesheet" href="https://pyscript.net/alpha/pyscript.css" />
-   <script defer src="https://pyscript.net/alpha/pyscript.js"></script>
-   <py-env>
-     - scikit-learn
-     - tabulate
-   </py-env>
-
-   <!-- from https://stackoverflow.com/a/62032824 -->
-   <link rel="stylesheet"
-         href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.6.0/styles/default.min.css">
-   <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/11.6.0/highlight.min.js"
-           integrity="sha512-gU7kztaQEl7SHJyraPfZLQCNnrKdaQi5ndOyt4L4UPL/FHDd/uB9Je6KDARIqwnNNE27hnqoWLBq+Kpe4iHfeQ=="
-           crossorigin="anonymous"
-           referrerpolicy="no-referrer"></script>
-   <script>hljs.initHighlightingOnLoad();</script>
-
- </head>
- <body>
-   <p>Define your own sklearn classifier and evaluate it on the toy dataset. An example is shown below:</p>
-   <pre>
-     <code class="python">from sklearn.linear_model import LogisticRegression
- clf = LogisticRegression(random_state=0)
- evaluate(clf)</code>
-   </pre>
-   Try to achieve a test accuracy of 0.85 or better! Get some inspiration for possible classifiers <a href="https://scikit-learn.org/stable/supervised_learning.html" title="List of sklearn estimators">here</a>.
-   <br><br>
-   Enter your code below, then press Shift+Enter:
-   <py-script>
- from statistics import mean
- from sklearn.datasets import make_classification
- from sklearn.model_selection import cross_validate
- import tabulate
-
- X, y = make_classification(n_samples=1000, n_informative=10, random_state=0)
-
- def evaluate(clf):
-     cv_result = cross_validate(clf, X, y, scoring='accuracy', cv=5)
-     time_fit = sum(cv_result['fit_time'])
-     time_score = sum(cv_result['score_time'])
-
-     print(f"Mean test accuracy: {mean(cv_result['test_score']):.3f}")
-     print(f"Total training time: {time_fit:.1f} seconds")
-     print(f"Total time for scoring: {time_score:.1f} seconds")
-
-     show_result = {'split': [1, 2, 3, 4, 5], 'accuracy': cv_result['test_score']}
-     print("Accuracy for each cross validation split:")
-     return tabulate.tabulate(show_result, tablefmt='html', headers='keys', floatfmt='.3')
-   </py-script>
-
-   <py-repl auto-generate="true"></py-repl>
- </body>
- </html>
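
The page above asks the visitor to paste a classifier into the `<py-repl>`; a hedged example answer is below (any scikit-learn estimator works, the particular choice and hyperparameters here are illustrative).

```python
# One possible snippet to enter in the py-repl on the page above.
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=200, random_state=0)
evaluate(clf)  # `evaluate` is defined in the page's <py-script> block
```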
spaces/Benson/text-generation/Examples/Descargar Gratis Youtube Apk.md DELETED
@@ -1,62 +0,0 @@
1
- <br />
2
- <h1>Cómo descargar vídeos de YouTube con Yandex APK</h1>
3
- <p>YouTube es una de las plataformas para compartir videos más populares del mundo, donde puedes ver millones de videos gratis. Sin embargo, a veces es posible que desee descargar videos de YouTube a su dispositivo Android para su visualización sin conexión, especialmente cuando tiene una conexión a Internet limitada o inestable. En este artículo, le mostraremos cómo descargar videos de YouTube con Yandex APK, una aplicación de navegador potente y versátil que puede ayudarlo a guardar sus videos favoritos de forma fácil y rápida. </p>
4
- <h2>descargar gratis youtube apk</h2><br /><p><b><b>Download File</b> &#10037;&#10037;&#10037; <a href="https://bltlly.com/2v6Mza">https://bltlly.com/2v6Mza</a></b></p><br /><br />
5
- <h2>¿Qué es Yandex APK? </h2>
6
- <p>Yandex APK es una aplicación para Android que le permite acceder al navegador Yandex, un navegador web rápido y seguro desarrollado por Yandex, una empresa de internet rusa. Yandex Browser tiene muchas características que lo hacen destacar de otros navegadores, como:</p>
7
- <h3>Características de Yandex APK</h3>
8
- <ul>
9
- <li>Modo de protección: Esta función le protege de sitios web maliciosos, phishing y malware mediante el bloqueo de anuncios y rastreadores no deseados. </li>
10
- <li>Modo Turbo: Esta función acelera su experiencia de navegación comprimiendo páginas web y guardando sus datos móviles. </li>
11
- <li>Modo Zen: Esta función personaliza sus recomendaciones de contenido basadas en sus intereses y preferencias. </li>
12
- <li>SmartBox: Esta función le permite buscar en la web y acceder a sus aplicaciones favoritas desde la barra de direcciones. </li>
13
- <li>Gestor de descargas: Esta función le permite gestionar sus descargas de forma fácil y eficiente. </li>
14
- </ul>
15
- <h3>Cómo instalar Yandex APK en su dispositivo Android</h3>
16
- <p>Para instalar Yandex APK en su dispositivo Android, es necesario seguir estos pasos:</p>
17
- <ol>
18
- <li>Descargar el archivo Yandex APK de una fuente de confianza, como [APKCombo]( 2 ) o [JalanTikus]( 1 ). </li>
19
- <li>Abra la aplicación Administrador de archivos en su dispositivo Android y busque el archivo APK descargado. </li>
20
- <li>Toque en el archivo APK y permitir la instalación de aplicaciones desconocidas desde su configuración. </li>
21
- <li>Siga las instrucciones en la pantalla para completar el proceso de instalación. </li>
22
-
23
- </ol>
24
- <h2>Cómo descargar vídeos de YouTube con Yandex APK</h2>
25
- <p>Una vez que haya instalado Yandex APK en su dispositivo Android, puede comenzar a descargar videos de YouTube con él. Estos son los pasos que debes seguir:</p>
26
- <h3>Paso 1: Abra el navegador Yandex en su dispositivo Android</h3>
27
- <p>Abra la aplicación Yandex Browser en su dispositivo Android y asegúrese de que tiene una conexión a Internet estable. </p>
28
- <h3>Paso 2: Ir a YouTube y encontrar el video que desea descargar</h3>
29
- <p>En la barra de direcciones, escribe youtube.com y pulsa enter. Serás redirigido al sitio web de YouTube. También puede utilizar la función SmartBox para buscar vídeos de YouTube directamente desde la barra de direcciones. Encuentre el video que desea descargar y toque en él para reproducirlo. </p>
30
- <p></p>
31
- <h3>Paso 3: Toque en el icono de descarga en la parte inferior del reproductor de vídeo</h3>
32
- <p>Tan pronto como comience a reproducir un video de YouTube, verá un icono de descarga en la parte inferior del reproductor de video. Toque en el icono de descarga y verá una ventana emergente con diferentes opciones. </p>
33
- <h3>Paso 4: Elija el formato y la calidad del video</h3>
34
- <p>En la ventana emergente, puede elegir el formato y la calidad del video que desea descargar. Puede elegir entre formatos MP4, 3GP, WEBM y M4A, y de calidad 144p a 1080p. También puede ver el tamaño del archivo y el tiempo estimado de descarga para cada opción. Elija la opción que se adapte a sus necesidades y toque en el botón de descarga. </p>
35
- <h3>Paso 5: Espere a que la descarga termine y disfrute de su video sin conexión</h3>
36
-
37
- <h2>Beneficios de descargar vídeos de YouTube con Yandex APK</h2>
38
- <p>Descargar vídeos de YouTube con Yandex APK tiene muchos beneficios, tales como:</p>
39
- <h3>Guardar datos móviles y espacio de almacenamiento</h3>
40
- <p>Al descargar videos de YouTube con Yandex APK, puede guardar sus datos móviles y espacio de almacenamiento. Puede utilizar la función de modo Turbo para comprimir páginas web y reducir el consumo de datos. También puede elegir el formato y la calidad del vídeo que se adapte a la capacidad de su dispositivo. Puedes eliminar o mover tus videos descargados cuando quieras. </p>
41
- <h3>Ver vídeos en cualquier momento y en cualquier lugar sin conexión a Internet</h3>
42
- <p>Al descargar videos de YouTube con Yandex APK, puede ver videos en cualquier momento y en cualquier lugar sin conexión a Internet. No tiene que preocuparse por el almacenamiento en búfer, la carga o las interrupciones. Puedes ver tus videos favoritos sin conexión en la pantalla de tu dispositivo o en una pantalla más grande con un Chromecast o un televisor inteligente.</p>
43
- <h3>Compartir vídeos con tus amigos y familiares fácilmente</h3>
44
- <p>Al descargar videos de YouTube con Yandex APK, puede compartir videos con sus amigos y familiares fácilmente. Puede enviar sus vídeos descargados a través de Bluetooth, Wi-Fi Direct u otras aplicaciones. También puede subirlos a servicios en la nube o plataformas de redes sociales. Puedes compartir tus vídeos con quien quieras sin problemas. </p>
45
- <h2>Conclusión</h2>
46
- <p>En conclusión, Yandex APK es una gran aplicación que le permite descargar vídeos de YouTube con facilidad y comodidad. Tiene muchas características que lo convierten en una aplicación de navegador potente y versátil que puede mejorar su experiencia de navegación. Es rápido, seguro y personalizado. También es fácil de instalar y usar. Si quieres descargar vídeos de YouTube con Yandex APK, solo tienes que seguir los pasos que te hemos mostrado en este artículo y disfrutar de tus vídeos sin conexión. </p>
47
- <h2>Preguntas frecuentes</h2>
48
- <ul>
49
- <li><b>Q: ¿Es Yandex APK seguro de usar? </b></li>
50
-
51
- <li><b>Q: ¿Yandex APK es libre de usar? </b></li>
52
- <li>A: Sí, Yandex APK es de uso gratuito. Usted no tiene que pagar nada para descargar o usarlo. Sin embargo, puede ver algunos anuncios o contenido patrocinado en la aplicación, que ayudan a apoyar su desarrollo y mantenimiento. </li>
53
- <li><b>Q: ¿Puedo descargar vídeos de YouTube con Yandex APK en otros dispositivos? </b></li>
54
- <li>A: Sí, puede descargar vídeos de YouTube con Yandex APK en otros dispositivos además de los dispositivos Android. También puede usarlo en dispositivos Windows, Mac, Linux, iOS y Smart TV. Solo necesitas descargar la versión apropiada de Yandex Browser para tu dispositivo desde su sitio web oficial o tienda de aplicaciones. </li>
55
- <li><b>Q: ¿Puedo descargar vídeos de YouTube con Yandex APK en otros idiomas? </b></li>
56
- <li>A: Sí, puede descargar vídeos de YouTube con Yandex APK en otros idiomas además de Inglés. Puede cambiar el idioma de la aplicación desde el menú de configuración. También puede cambiar el idioma de YouTube desde el menú de configuración. </li>
57
- <li><b>Q: ¿Puedo descargar vídeos de YouTube con Yandex APK en alta resolución? </b></li>
58
- <li>A: Sí, puede descargar vídeos de YouTube con Yandex APK en alta resolución hasta 1080p de calidad. Sin embargo, esto puede depender de la disponibilidad de la fuente de vídeo y del rendimiento y el espacio de almacenamiento del dispositivo. También puede utilizar la función de modo Turbo para reducir el tamaño del archivo y el tiempo de descarga de los vídeos de alta resolución.</li>
59
- </ul>
60
- <p>Espero que haya encontrado este artículo útil e informativo. Si tiene alguna pregunta o comentario, no dude en dejar un comentario a continuación. ¡Gracias por leer y feliz descarga! </p> 64aa2da5cf<br />
61
- <br />
62
- <br />
spaces/BetterAPI/BetterChat_new/src/routes/conversation/[id]/stop-generating/+server.ts DELETED
@@ -1,27 +0,0 @@
- import { collections } from "$lib/server/database";
- import { error } from "@sveltejs/kit";
- import { ObjectId } from "mongodb";
-
- /**
-  * Ideally, we'd be able to detect the client-side abort, see https://github.com/huggingface/chat-ui/pull/88#issuecomment-1523173850
-  */
- export async function POST({ params, locals }) {
-     const conversationId = new ObjectId(params.id);
-
-     const conversation = await collections.conversations.findOne({
-         _id: conversationId,
-         sessionId: locals.sessionId,
-     });
-
-     if (!conversation) {
-         throw error(404, "Conversation not found");
-     }
-
-     await collections.abortedGenerations.updateOne(
-         { conversationId },
-         { $set: { updatedAt: new Date() }, $setOnInsert: { createdAt: new Date() } },
-         { upsert: true }
-     );
-
-     return new Response();
- }
spaces/Blessin/movie-poster-generator/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: Movie Poster Generator
- emoji: 🐨
- colorFrom: gray
- colorTo: gray
- sdk: gradio
- sdk_version: 3.50.2
- app_file: app.py
- pinned: false
- license: mit
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/CVPR/LIVE/pybind11/tests/test_operator_overloading.py DELETED
@@ -1,145 +0,0 @@
1
- # -*- coding: utf-8 -*-
2
- import pytest
3
- from pybind11_tests import operators as m
4
- from pybind11_tests import ConstructorStats
5
-
6
-
7
- def test_operator_overloading():
8
- v1 = m.Vector2(1, 2)
9
- v2 = m.Vector(3, -1)
10
- v3 = m.Vector2(1, 2) # Same value as v1, but different instance.
11
- assert v1 is not v3
12
-
13
- assert str(v1) == "[1.000000, 2.000000]"
14
- assert str(v2) == "[3.000000, -1.000000]"
15
-
16
- assert str(-v2) == "[-3.000000, 1.000000]"
17
-
18
- assert str(v1 + v2) == "[4.000000, 1.000000]"
19
- assert str(v1 - v2) == "[-2.000000, 3.000000]"
20
- assert str(v1 - 8) == "[-7.000000, -6.000000]"
21
- assert str(v1 + 8) == "[9.000000, 10.000000]"
22
- assert str(v1 * 8) == "[8.000000, 16.000000]"
23
- assert str(v1 / 8) == "[0.125000, 0.250000]"
24
- assert str(8 - v1) == "[7.000000, 6.000000]"
25
- assert str(8 + v1) == "[9.000000, 10.000000]"
26
- assert str(8 * v1) == "[8.000000, 16.000000]"
27
- assert str(8 / v1) == "[8.000000, 4.000000]"
28
- assert str(v1 * v2) == "[3.000000, -2.000000]"
29
- assert str(v2 / v1) == "[3.000000, -0.500000]"
30
-
31
- assert v1 == v3
32
- assert v1 != v2
33
- assert hash(v1) == 4
34
- # TODO(eric.cousineau): Make this work.
35
- # assert abs(v1) == "abs(Vector2)"
36
-
37
- v1 += 2 * v2
38
- assert str(v1) == "[7.000000, 0.000000]"
39
- v1 -= v2
40
- assert str(v1) == "[4.000000, 1.000000]"
41
- v1 *= 2
42
- assert str(v1) == "[8.000000, 2.000000]"
43
- v1 /= 16
44
- assert str(v1) == "[0.500000, 0.125000]"
45
- v1 *= v2
46
- assert str(v1) == "[1.500000, -0.125000]"
47
- v2 /= v1
48
- assert str(v2) == "[2.000000, 8.000000]"
49
-
50
- cstats = ConstructorStats.get(m.Vector2)
51
- assert cstats.alive() == 3
52
- del v1
53
- assert cstats.alive() == 2
54
- del v2
55
- assert cstats.alive() == 1
56
- del v3
57
- assert cstats.alive() == 0
58
- assert cstats.values() == [
59
- '[1.000000, 2.000000]',
60
- '[3.000000, -1.000000]',
61
- '[1.000000, 2.000000]',
62
- '[-3.000000, 1.000000]',
63
- '[4.000000, 1.000000]',
64
- '[-2.000000, 3.000000]',
65
- '[-7.000000, -6.000000]',
66
- '[9.000000, 10.000000]',
67
- '[8.000000, 16.000000]',
68
- '[0.125000, 0.250000]',
69
- '[7.000000, 6.000000]',
70
- '[9.000000, 10.000000]',
71
- '[8.000000, 16.000000]',
72
- '[8.000000, 4.000000]',
73
- '[3.000000, -2.000000]',
74
- '[3.000000, -0.500000]',
75
- '[6.000000, -2.000000]',
76
- ]
77
- assert cstats.default_constructions == 0
78
- assert cstats.copy_constructions == 0
79
- assert cstats.move_constructions >= 10
80
- assert cstats.copy_assignments == 0
81
- assert cstats.move_assignments == 0
82
-
83
-
84
- def test_operators_notimplemented():
85
- """#393: need to return NotSupported to ensure correct arithmetic operator behavior"""
86
-
87
- c1, c2 = m.C1(), m.C2()
88
- assert c1 + c1 == 11
89
- assert c2 + c2 == 22
90
- assert c2 + c1 == 21
91
- assert c1 + c2 == 12
92
-
93
-
94
- def test_nested():
95
- """#328: first member in a class can't be used in operators"""
96
-
97
- a = m.NestA()
98
- b = m.NestB()
99
- c = m.NestC()
100
-
101
- a += 10
102
- assert m.get_NestA(a) == 13
103
- b.a += 100
104
- assert m.get_NestA(b.a) == 103
105
- c.b.a += 1000
106
- assert m.get_NestA(c.b.a) == 1003
107
- b -= 1
108
- assert m.get_NestB(b) == 3
109
- c.b -= 3
110
- assert m.get_NestB(c.b) == 1
111
- c *= 7
112
- assert m.get_NestC(c) == 35
113
-
114
- abase = a.as_base()
115
- assert abase.value == -2
116
- a.as_base().value += 44
117
- assert abase.value == 42
118
- assert c.b.a.as_base().value == -2
119
- c.b.a.as_base().value += 44
120
- assert c.b.a.as_base().value == 42
121
-
122
- del c
123
- pytest.gc_collect()
124
- del a # Shouldn't delete while abase is still alive
125
- pytest.gc_collect()
126
-
127
- assert abase.value == 42
128
- del abase, b
129
- pytest.gc_collect()
130
-
131
-
132
- def test_overriding_eq_reset_hash():
133
-
134
- assert m.Comparable(15) is not m.Comparable(15)
135
- assert m.Comparable(15) == m.Comparable(15)
136
-
137
- with pytest.raises(TypeError):
138
- hash(m.Comparable(15)) # TypeError: unhashable type: 'm.Comparable'
139
-
140
- for hashable in (m.Hashable, m.Hashable2):
141
- assert hashable(15) is not hashable(15)
142
- assert hashable(15) == hashable(15)
143
-
144
- assert hash(hashable(15)) == 15
145
- assert hash(hashable(15)) == hash(hashable(15))
spaces/CVPR/LIVE/thrust/thrust/equal.h DELETED
@@ -1,238 +0,0 @@
1
- /*
2
- * Copyright 2008-2013 NVIDIA Corporation
3
- *
4
- * Licensed under the Apache License, Version 2.0 (the "License");
5
- * you may not use this file except in compliance with the License.
6
- * You may obtain a copy of the License at
7
- *
8
- * http://www.apache.org/licenses/LICENSE-2.0
9
- *
10
- * Unless required by applicable law or agreed to in writing, software
11
- * distributed under the License is distributed on an "AS IS" BASIS,
12
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- * See the License for the specific language governing permissions and
14
- * limitations under the License.
15
- */
16
-
17
-
18
- /*! \file equal.h
19
- * \brief Equality between ranges
20
- */
21
-
22
- #pragma once
23
-
24
- #include <thrust/detail/config.h>
25
- #include <thrust/detail/execution_policy.h>
26
-
27
- namespace thrust
28
- {
29
-
30
-
31
- /*! \addtogroup reductions
32
- * \{
33
- * \addtogroup comparisons
34
- * \ingroup reductions
35
- * \{
36
- */
37
-
38
-
39
- /*! \p equal returns \c true if the two ranges <tt>[first1, last1)</tt>
40
- * and <tt>[first2, first2 + (last1 - first1))</tt> are identical when
41
- * compared element-by-element, and otherwise returns \c false.
42
- *
43
- * This version of \p equal returns \c true if and only if for every
44
- * iterator \c i in <tt>[first1, last1)</tt>, <tt>*i == *(first2 + (i - first1))</tt>.
45
- *
46
- * The algorithm's execution is parallelized as determined by \p exec.
47
- *
48
- * \param exec The execution policy to use for parallelization.
49
- * \param first1 The beginning of the first sequence.
50
- * \param last1 The end of the first sequence.
51
- * \param first2 The beginning of the second sequence.
52
- * \return \c true, if the sequences are equal; \c false, otherwise.
53
- *
54
- * \tparam DerivedPolicy The name of the derived execution policy.
55
- * \tparam InputIterator1 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>,
56
- * and \p InputIterator1's \c value_type is a model of <a href="http://www.sgi.com/tech/stl/EqualityComparable.html">Equality Comparable</a>,
57
- * and \p InputIterator1's \c value_type can be compared for equality with \c InputIterator2's \c value_type.
58
- * \tparam InputIterator2 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>,
59
- * and \p InputIterator2's \c value_type is a model of <a href="http://www.sgi.com/tech/stl/EqualityComparable.html">Equality Comparable</a>,
60
- * and \p InputIterator2's \c value_type can be compared for equality with \c InputIterator1's \c value_type.
61
- *
62
- * The following code snippet demonstrates how to use \p equal to test
63
- * two ranges for equality using the \p thrust::host execution policy:
64
- *
65
- * \code
66
- * #include <thrust/equal.h>
67
- * #include <thrust/execution_policy.h>
68
- * ...
69
- * int A1[7] = {3, 1, 4, 1, 5, 9, 3};
70
- * int A2[7] = {3, 1, 4, 2, 8, 5, 7};
71
- * ...
72
- * bool result = thrust::equal(thrust::host, A1, A1 + 7, A2);
73
- *
74
- * // result == false
75
- * \endcode
76
- *
77
- * \see http://www.sgi.com/tech/stl/equal.html
78
- */
79
- template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2>
80
- __host__ __device__
81
- bool equal(const thrust::detail::execution_policy_base<DerivedPolicy> &exec, InputIterator1 first1, InputIterator1 last1, InputIterator2 first2);
82
-
83
-
84
- /*! \p equal returns \c true if the two ranges <tt>[first1, last1)</tt>
85
- * and <tt>[first2, first2 + (last1 - first1))</tt> are identical when
86
- * compared element-by-element, and otherwise returns \c false.
87
- *
88
- * This version of \p equal returns \c true if and only if for every
89
- * iterator \c i in <tt>[first1, last1)</tt>, <tt>*i == *(first2 + (i - first1))</tt>.
90
- *
91
- * \param first1 The beginning of the first sequence.
92
- * \param last1 The end of the first sequence.
93
- * \param first2 The beginning of the second sequence.
94
- * \return \c true, if the sequences are equal; \c false, otherwise.
95
- *
96
- * \tparam InputIterator1 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>,
97
- * and \p InputIterator1's \c value_type is a model of <a href="http://www.sgi.com/tech/stl/EqualityComparable.html">Equality Comparable</a>,
98
- * and \p InputIterator1's \c value_type can be compared for equality with \c InputIterator2's \c value_type.
99
- * \tparam InputIterator2 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>,
100
- * and \p InputIterator2's \c value_type is a model of <a href="http://www.sgi.com/tech/stl/EqualityComparable.html">Equality Comparable</a>,
101
- * and \p InputIterator2's \c value_type can be compared for equality with \c InputIterator1's \c value_type.
102
- *
103
- * The following code snippet demonstrates how to use \p equal to test
104
- * two ranges for equality.
105
- *
106
- * \code
107
- * #include <thrust/equal.h>
108
- * ...
109
- * int A1[7] = {3, 1, 4, 1, 5, 9, 3};
110
- * int A2[7] = {3, 1, 4, 2, 8, 5, 7};
111
- * ...
112
- * bool result = thrust::equal(A1, A1 + 7, A2);
113
- *
114
- * // result == false
115
- * \endcode
116
- *
117
- * \see http://www.sgi.com/tech/stl/equal.html
118
- */
119
- template <typename InputIterator1, typename InputIterator2>
120
- bool equal(InputIterator1 first1, InputIterator1 last1,
121
- InputIterator2 first2);
122
-
123
-
124
- /*! \p equal returns \c true if the two ranges <tt>[first1, last1)</tt>
125
- * and <tt>[first2, first2 + (last1 - first1))</tt> are identical when
126
- * compared element-by-element, and otherwise returns \c false.
127
- *
128
- * This version of \p equal returns \c true if and only if for every
129
- * iterator \c i in <tt>[first1, last1)</tt>,
130
- * <tt>binary_pred(*i, *(first2 + (i - first1)))</tt> is \c true.
131
- *
132
- * The algorithm's execution is parallelized as determined by \p exec.
133
- *
134
- * \param exec The execution policy to use for parallelization.
135
- * \param first1 The beginning of the first sequence.
136
- * \param last1 The end of the first sequence.
137
- * \param first2 The beginning of the second sequence.
138
- * \param binary_pred Binary predicate used to test element equality.
139
- * \return \c true, if the sequences are equal; \c false, otherwise.
140
- *
141
- * \tparam DerivedPolicy The name of the derived execution policy.
142
- * \tparam InputIterator1 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>,
143
- * and \p InputIterator1's \c value_type is convertible to \p BinaryPredicate's \c first_argument_type.
144
- * \tparam InputIterator2 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>,
145
- * and \p InputIterator2's \c value_type is convertible to \p BinaryPredicate's \c second_argument_type.
146
- * \tparam BinaryPredicate is a model of <a href="http://www.sgi.com/tech/stl/BinaryPredicate.html">Binary Predicate</a>.
147
- *
148
- * The following code snippet demonstrates how to use \p equal to compare the
149
- * elements in two ranges modulo 2 using the \p thrust::host execution policy.
150
- *
151
- * \code
152
- * #include <thrust/equal.h>
153
- * #include <thrust/execution_policy.h>
154
- * ...
155
- *
156
- * struct compare_modulo_two
157
- * {
158
- * __host__ __device__
159
- * bool operator()(int x, int y) const
160
- * {
161
- * return (x % 2) == (y % 2);
162
- * }
163
- * };
164
- * ...
165
- * int x[6] = {0, 2, 4, 6, 8, 10};
166
- * int y[6] = {1, 3, 5, 7, 9, 11};
167
- *
168
- * bool result = thrust::equal(x, x + 6, y, compare_modulo_two());
169
- *
170
- * // result is false
171
- * \endcode
172
- *
173
- * \see http://www.sgi.com/tech/stl/equal.html
174
- */
175
- template<typename DerivedPolicy, typename InputIterator1, typename InputIterator2, typename BinaryPredicate>
176
- __host__ __device__
177
- bool equal(const thrust::detail::execution_policy_base<DerivedPolicy> &exec, InputIterator1 first1, InputIterator1 last1, InputIterator2 first2, BinaryPredicate binary_pred);
178
-
179
-
180
- /*! \p equal returns \c true if the two ranges <tt>[first1, last1)</tt>
181
- * and <tt>[first2, first2 + (last1 - first1))</tt> are identical when
182
- * compared element-by-element, and otherwise returns \c false.
183
- *
184
- * This version of \p equal returns \c true if and only if for every
185
- * iterator \c i in <tt>[first1, last1)</tt>,
186
- * <tt>binary_pred(*i, *(first2 + (i - first1)))</tt> is \c true.
187
- *
188
- * \param first1 The beginning of the first sequence.
189
- * \param last1 The end of the first sequence.
190
- * \param first2 The beginning of the second sequence.
191
- * \param binary_pred Binary predicate used to test element equality.
192
- * \return \c true, if the sequences are equal; \c false, otherwise.
193
- *
194
- * \tparam InputIterator1 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>,
195
- * and \p InputIterator1's \c value_type is convertible to \p BinaryPredicate's \c first_argument_type.
196
- * \tparam InputIterator2 is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html">Input Iterator</a>,
197
- * and \p InputIterator2's \c value_type is convertible to \p BinaryPredicate's \c second_argument_type.
198
- * \tparam BinaryPredicate is a model of <a href="http://www.sgi.com/tech/stl/BinaryPredicate.html">Binary Predicate</a>.
199
- *
200
- * The following code snippet demonstrates how to use \p equal to compare the
201
- * elements in two ranges modulo 2.
202
- *
203
- * \code
204
- * #include <thrust/equal.h>
205
- *
206
- * struct compare_modulo_two
207
- * {
208
- * __host__ __device__
209
- * bool operator()(int x, int y) const
210
- * {
211
- * return (x % 2) == (y % 2);
212
- * }
213
- * };
214
- * ...
215
- * int x[6] = {0, 2, 4, 6, 8, 10};
216
- * int y[6] = {1, 3, 5, 7, 9, 11};
217
- *
218
- * bool result = thrust::equal(x, x + 5, y, compare_modulo_two());
219
- *
220
- * // result is true
221
- * \endcode
222
- *
223
- * \see http://www.sgi.com/tech/stl/equal.html
224
- */
225
- template <typename InputIterator1, typename InputIterator2,
226
- typename BinaryPredicate>
227
- bool equal(InputIterator1 first1, InputIterator1 last1,
228
- InputIterator2 first2, BinaryPredicate binary_pred);
229
-
230
-
231
- /*! \} // end comparisons
232
- * \} // end reductions
233
- */
234
-
235
- } // end namespace thrust
236
-
237
- #include <thrust/detail/equal.inl>
238
-
 
spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/scan.h DELETED
@@ -1,23 +0,0 @@
- /*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
- #pragma once
-
- #include <thrust/detail/config.h>
-
- // this system inherits scan
- #include <thrust/system/detail/sequential/scan.h>
-
 
spaces/CVPR/Text2Human/Text2Human/data/parsing_generation_segm_attr_dataset.py DELETED
@@ -1,80 +0,0 @@
- import os
- import os.path
-
- import numpy as np
- import torch
- import torch.utils.data as data
- from PIL import Image
-
-
- class ParsingGenerationDeepFashionAttrSegmDataset(data.Dataset):
-
-     def __init__(self, segm_dir, pose_dir, ann_file, downsample_factor=2):
-         self._densepose_path = pose_dir
-         self._segm_path = segm_dir
-         self._image_fnames = []
-         self.attrs = []
-
-         self.downsample_factor = downsample_factor
-
-         # training, ground-truth available
-         assert os.path.exists(ann_file)
-         for row in open(os.path.join(ann_file), 'r'):
-             annotations = row.split()
-             self._image_fnames.append(annotations[0])
-             self.attrs.append([int(i) for i in annotations[1:]])
-
-     def _open_file(self, path_prefix, fname):
-         return open(os.path.join(path_prefix, fname), 'rb')
-
-     def _load_densepose(self, raw_idx):
-         fname = self._image_fnames[raw_idx]
-         fname = f'{fname[:-4]}_densepose.png'
-         with self._open_file(self._densepose_path, fname) as f:
-             densepose = Image.open(f)
-             if self.downsample_factor != 1:
-                 width, height = densepose.size
-                 width = width // self.downsample_factor
-                 height = height // self.downsample_factor
-                 densepose = densepose.resize(
-                     size=(width, height), resample=Image.NEAREST)
-             # channel-wise IUV order, [3, H, W]
-             densepose = np.array(densepose)[:, :, 2:].transpose(2, 0, 1)
-         return densepose.astype(np.float32)
-
-     def _load_segm(self, raw_idx):
-         fname = self._image_fnames[raw_idx]
-         fname = f'{fname[:-4]}_segm.png'
-         with self._open_file(self._segm_path, fname) as f:
-             segm = Image.open(f)
-             if self.downsample_factor != 1:
-                 width, height = segm.size
-                 width = width // self.downsample_factor
-                 height = height // self.downsample_factor
-                 segm = segm.resize(
-                     size=(width, height), resample=Image.NEAREST)
-             segm = np.array(segm)
-         return segm.astype(np.float32)
-
-     def __getitem__(self, index):
-         pose = self._load_densepose(index)
-         segm = self._load_segm(index)
-         attr = self.attrs[index]
-
-         pose = torch.from_numpy(pose)
-         segm = torch.LongTensor(segm)
-         attr = torch.LongTensor(attr)
-
-         pose = pose / 12. - 1
-
-         return_dict = {
-             'densepose': pose,
-             'segm': segm,
-             'attr': attr,
-             'img_name': self._image_fnames[index]
-         }
-
-         return return_dict
-
-     def __len__(self):
-         return len(self._image_fnames)
 
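A minimal usage sketch of the deleted dataset class above; the directory layout, annotation file name, and loader settings are assumptions for illustration, not values from the original Space:

# Hypothetical driver for ParsingGenerationDeepFashionAttrSegmDataset; all paths are placeholders.
from torch.utils.data import DataLoader

dataset = ParsingGenerationDeepFashionAttrSegmDataset(
    segm_dir='datasets/segm',               # contains <name>_segm.png files
    pose_dir='datasets/densepose',          # contains <name>_densepose.png files
    ann_file='datasets/train_attrs.txt',    # each row: "<image name> <attr_0> <attr_1> ..."
    downsample_factor=2)

loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=2)
batch = next(iter(loader))
# batch['densepose']: float tensor [4, 1, H, W], roughly scaled to [-1, 1]
# batch['segm']:      long tensor  [4, H, W] of per-pixel parsing labels
# batch['attr']:      long tensor  [4, num_attrs]
print(batch['img_name'])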
spaces/CVPR/WALT/mmdet/core/bbox/coder/pseudo_bbox_coder.py DELETED
@@ -1,18 +0,0 @@
- from ..builder import BBOX_CODERS
- from .base_bbox_coder import BaseBBoxCoder
-
-
- @BBOX_CODERS.register_module()
- class PseudoBBoxCoder(BaseBBoxCoder):
-     """Pseudo bounding box coder."""
-
-     def __init__(self, **kwargs):
-         super(BaseBBoxCoder, self).__init__(**kwargs)
-
-     def encode(self, bboxes, gt_bboxes):
-         """torch.Tensor: return the given ``bboxes``"""
-         return gt_bboxes
-
-     def decode(self, bboxes, pred_bboxes):
-         """torch.Tensor: return the given ``pred_bboxes``"""
-         return pred_bboxes
 
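The coder above is a pure pass-through: boxes go in and come back unchanged. A tiny sketch of that behaviour, with made-up tensors:

import torch

coder = PseudoBBoxCoder()
gt = torch.tensor([[10., 10., 50., 80.]])
pred = torch.tensor([[12., 9., 48., 82.]])
assert torch.equal(coder.encode(bboxes=None, gt_bboxes=gt), gt)        # encode returns gt_bboxes unchanged
assert torch.equal(coder.decode(bboxes=None, pred_bboxes=pred), pred)  # decode returns pred_bboxes unchanged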
spaces/CVPR/WALT/mmdet/core/evaluation/__init__.py DELETED
@@ -1,15 +0,0 @@
- from .class_names import (cityscapes_classes, coco_classes, dataset_aliases,
-                            get_classes, imagenet_det_classes,
-                            imagenet_vid_classes, voc_classes)
- from .eval_hooks import DistEvalHook, EvalHook
- from .mean_ap import average_precision, eval_map, print_map_summary
- from .recall import (eval_recalls, plot_iou_recall, plot_num_recall,
-                       print_recall_summary)
-
- __all__ = [
-     'voc_classes', 'imagenet_det_classes', 'imagenet_vid_classes',
-     'coco_classes', 'cityscapes_classes', 'dataset_aliases', 'get_classes',
-     'DistEvalHook', 'EvalHook', 'average_precision', 'eval_map',
-     'print_map_summary', 'eval_recalls', 'print_recall_summary',
-     'plot_num_recall', 'plot_iou_recall'
- ]
 
spaces/CVPR/WALT/mmdet/models/losses/accuracy.py DELETED
@@ -1,78 +0,0 @@
- import mmcv
- import torch.nn as nn
-
-
- @mmcv.jit(coderize=True)
- def accuracy(pred, target, topk=1, thresh=None):
-     """Calculate accuracy according to the prediction and target.
-
-     Args:
-         pred (torch.Tensor): The model prediction, shape (N, num_class)
-         target (torch.Tensor): The target of each prediction, shape (N, )
-         topk (int | tuple[int], optional): If the predictions in ``topk``
-             matches the target, the predictions will be regarded as
-             correct ones. Defaults to 1.
-         thresh (float, optional): If not None, predictions with scores under
-             this threshold are considered incorrect. Default to None.
-
-     Returns:
-         float | tuple[float]: If the input ``topk`` is a single integer,
-             the function will return a single float as accuracy. If
-             ``topk`` is a tuple containing multiple integers, the
-             function will return a tuple containing accuracies of
-             each ``topk`` number.
-     """
-     assert isinstance(topk, (int, tuple))
-     if isinstance(topk, int):
-         topk = (topk, )
-         return_single = True
-     else:
-         return_single = False
-
-     maxk = max(topk)
-     if pred.size(0) == 0:
-         accu = [pred.new_tensor(0.) for i in range(len(topk))]
-         return accu[0] if return_single else accu
-     assert pred.ndim == 2 and target.ndim == 1
-     assert pred.size(0) == target.size(0)
-     assert maxk <= pred.size(1), \
-         f'maxk {maxk} exceeds pred dimension {pred.size(1)}'
-     pred_value, pred_label = pred.topk(maxk, dim=1)
-     pred_label = pred_label.t()  # transpose to shape (maxk, N)
-     correct = pred_label.eq(target.view(1, -1).expand_as(pred_label))
-     if thresh is not None:
-         # Only prediction values larger than thresh are counted as correct
-         correct = correct & (pred_value > thresh).t()
-     res = []
-     for k in topk:
-         correct_k = correct[:k].reshape(-1).float().sum(0, keepdim=True)
-         res.append(correct_k.mul_(100.0 / pred.size(0)))
-     return res[0] if return_single else res
-
-
- class Accuracy(nn.Module):
-
-     def __init__(self, topk=(1, ), thresh=None):
-         """Module to calculate the accuracy.
-
-         Args:
-             topk (tuple, optional): The criterion used to calculate the
-                 accuracy. Defaults to (1,).
-             thresh (float, optional): If not None, predictions with scores
-                 under this threshold are considered incorrect. Default to None.
-         """
-         super().__init__()
-         self.topk = topk
-         self.thresh = thresh
-
-     def forward(self, pred, target):
-         """Forward function to calculate accuracy.
-
-         Args:
-             pred (torch.Tensor): Prediction of models.
-             target (torch.Tensor): Target for each prediction.
-
-         Returns:
-             tuple[float]: The accuracies under different topk criterions.
-         """
-         return accuracy(pred, target, self.topk, self.thresh)
 
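A small, self-contained sketch of the accuracy helper defined above; the logits and labels are invented, and running it assumes an environment with mmcv installed (the function is wrapped in @mmcv.jit):

import torch

# Three samples, four classes.
pred = torch.tensor([[0.1, 0.7, 0.1, 0.1],
                     [0.8, 0.1, 0.05, 0.05],
                     [0.3, 0.2, 0.4, 0.1]])
target = torch.tensor([1, 0, 0])

top1 = accuracy(pred, target, topk=1)          # tensor([66.6667]) -- percentages, not fractions
top1_2 = accuracy(pred, target, topk=(1, 2))   # [tensor([66.6667]), tensor([100.])]
print(top1, top1_2)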
spaces/CVPR/WALT/mmdet/models/roi_heads/__init__.py DELETED
@@ -1,43 +0,0 @@
- '''
- from .base_roi_head import BaseRoIHead
- from .bbox_heads import (BBoxHead, ConvFCBBoxHead, DoubleConvFCBBoxHead,
-                          SCNetBBoxHead, Shared2FCBBoxHead,
-                          Shared4Conv1FCBBoxHead)
- from .cascade_roi_head import CascadeRoIHead
- from .double_roi_head import DoubleHeadRoIHead
- from .dynamic_roi_head import DynamicRoIHead
- from .grid_roi_head import GridRoIHead
- from .htc_roi_head import HybridTaskCascadeRoIHead
- from .mask_heads import (CoarseMaskHead, FCNMaskHead, FeatureRelayHead,
-                          FusedSemanticHead, GlobalContextHead, GridHead,
-                          HTCMaskHead, MaskIoUHead, MaskPointHead,
-                          SCNetMaskHead, SCNetSemanticHead)
- from .mask_scoring_roi_head import MaskScoringRoIHead
- from .pisa_roi_head import PISARoIHead
- from .point_rend_roi_head import PointRendRoIHead
- from .roi_extractors import SingleRoIExtractor
- from .scnet_roi_head import SCNetRoIHead
- from .shared_heads import ResLayer
- from .sparse_roi_head import SparseRoIHead
- from .standard_roi_head import StandardRoIHead
- from .trident_roi_head import TridentRoIHead
-
- __all__ = [
-     'BaseRoIHead', 'CascadeRoIHead', 'DoubleHeadRoIHead', 'MaskScoringRoIHead',
-     'HybridTaskCascadeRoIHead', 'GridRoIHead', 'ResLayer', 'BBoxHead',
-     'ConvFCBBoxHead', 'Shared2FCBBoxHead', 'StandardRoIHead',
-     'Shared4Conv1FCBBoxHead', 'DoubleConvFCBBoxHead', 'FCNMaskHead',
-     'HTCMaskHead', 'FusedSemanticHead', 'GridHead', 'MaskIoUHead',
-     'SingleRoIExtractor', 'PISARoIHead', 'PointRendRoIHead', 'MaskPointHead',
-     'CoarseMaskHead', 'DynamicRoIHead', 'SparseRoIHead', 'TridentRoIHead',
-     'SCNetRoIHead', 'SCNetMaskHead', 'SCNetSemanticHead', 'SCNetBBoxHead',
-     'FeatureRelayHead', 'GlobalContextHead'
- ]
- '''
- from .bbox_heads import (BBoxHead, ConvFCBBoxHead, DoubleConvFCBBoxHead,
-                          SCNetBBoxHead, Shared2FCBBoxHead,
-                          Shared4Conv1FCBBoxHead)
- from .standard_roi_head import StandardRoIHead
- from .roi_extractors import SingleRoIExtractor
- from .mask_heads import FCNMaskHead
- __all__ = ['BBoxHead', 'StandardRoIHead', 'SingleRoIExtractor', 'Shared2FCBBoxHead', 'FCNMaskHead']
 
spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/components/WebSocket.js DELETED
@@ -1,134 +0,0 @@
1
- import Client from "./Client.js";
2
- import { Config, Version } from './index.js'
3
- import { sleep } from '../model/index.js'
4
- import { redAdapter } from '../model/red/index.js'
5
- // import { satoriAdapter } from '../model/satori/index.js'
6
-
7
- let sendSocketList = []
8
- let allSocketList = []
9
-
10
- async function createWebSocket(data) {
11
- if (typeof data.close != 'undefined' && typeof data.closed == 'undefined') {
12
- data.closed = data.close
13
- delete data.close
14
- }
15
- const client = new Client(data)
16
- setAllSocketList(client)
17
- if (data.address == 'ws_address') return
18
- if (data.closed) return
19
- sendSocketList = sendSocketList.filter(i => i.name != data.name)
20
- switch (Number(data.type)) {
21
- case 1:
22
- if (!await checkVersion(data)) return
23
- client.createWs()
24
- sendSocketList.push(client)
25
- break;
26
- case 2:
27
- if (!await checkVersion(data)) return
28
- client.createServer()
29
- sendSocketList.push(client)
30
- break
31
- case 3:
32
- client.createGSUidWs()
33
- sendSocketList.push(client)
34
- break
35
- case 4:
36
- if (Version.isTrss) return
37
- // client.createQQNT()
38
- redAdapter.connect(client)
39
- break
40
- case 5:
41
- if (!await checkVersion(data)) return
42
- client.createHttp()
43
- break
44
- case 6:
45
- if (!await checkVersion(data)) return
46
- client.createHttpPost()
47
- sendSocketList.push(client)
48
- break
49
- default:
50
- return;
51
- }
52
- }
53
-
54
- function setAllSocketList(data) {
55
- allSocketList = allSocketList.filter(i => i.name != data.name)
56
- allSocketList.push(data)
57
- }
58
-
59
- async function checkVersion(data) {
60
- if (Version.isTrss) {
61
- if (!data.uin) {
62
- logger.warn(`[ws-plugin] ${data.name} 缺少配置项uin 请删除连接后重新#ws添加连接`)
63
- return false
64
- } else {
65
- let log = false
66
- for (let i = 0; i < 20; i++) {
67
- if (Version.protocol.some(i => i == Bot[data.uin]?.version?.name)) {
68
- return true
69
- }
70
- if (!log) {
71
- logger.warn(`[ws-plugin] ${data.name} 暂未适配当前协议端或未连接对应协议端,20秒后重新判断`)
72
- log = true
73
- }
74
- await sleep(1000)
75
- }
76
- logger.warn(`[ws-plugin] ${data.name} 暂未适配当前协议端或未连接对应协议端 ${data.uin}`)
77
- return false
78
- }
79
- }
80
- return true
81
- }
82
-
83
- function modifyWebSocket(target) {
84
- // if (Version.isTrss) return
85
- switch (target.type) {
86
- case 'add':
87
- case 'open':
88
- if (target.data.type == 4) {
89
- const client = new Client(target.data)
90
- setAllSocketList(client)
91
- redAdapter.connect(client)
92
- } else {
93
- createWebSocket(target.data)
94
- }
95
- break;
96
- case 'del':
97
- case 'close':
98
- for (const i of allSocketList) {
99
- if (i.name == target.data.name) {
100
- i.close()
101
- break
102
- }
103
- }
104
- break
105
- default:
106
- return;
107
- }
108
- }
109
-
110
- function clearWebSocket() {
111
- for (const i of allSocketList) {
112
- i.close()
113
- }
114
- }
115
-
116
-
117
- function initWebSocket() {
118
- // if (Version.isTrss) return
119
- for (const i of Config.servers) {
120
- createWebSocket(i)
121
- }
122
- }
123
-
124
-
125
- export {
126
- initWebSocket,
127
- clearWebSocket,
128
- modifyWebSocket,
129
- allSocketList,
130
- setAllSocketList,
131
- sendSocketList,
132
- createWebSocket
133
- }
134
-
 
spaces/ClinBAY/Safeterm_Demo/send_email_request.py DELETED
@@ -1,98 +0,0 @@
1
- import os
2
- from dotenv import load_dotenv
3
- import msal
4
- import requests
5
- # import json
6
-
7
-
8
- def send_email(subject, email, name, organization, meddra_license, agree_terms, save_data) -> None:
9
- """
10
- Send an email with user settings
11
- @param save_data:
12
- @type save_data:
13
- @param agree_terms:
14
- @type agree_terms:
15
- @param meddra_license:
16
- @type meddra_license:
17
- @param organization:
18
- @type organization:
19
- @param name:
20
- @type name:
21
- @param email:
22
- @type email:
23
- @param subject:
24
- @type subject:
25
- @return:
26
- @rtype:
27
- """
28
-
29
- body = f"""
30
- Request for API Key - Safeterm
31
-
32
- Settings:
33
- - Free Demo (30 days, 50 terms limit)
34
- - Version: 26.0
35
- - Language: English
36
-
37
- Contact Information:
38
- - Email: {email}
39
- - Full Name: {name}
40
- - Organization: {organization}
41
-
42
- Terms of use:
43
- - Valid medDRA License: {meddra_license}
44
- - Agrees to Safeterm terms: {agree_terms}
45
- - Consent to data storage: {save_data}
46
- """
47
-
48
- load_dotenv()
49
-
50
- client_id = os.getenv("CLIENT_ID")
51
- client_secret = os.getenv("CLIENT_SECRET")
52
- tenant_id = os.getenv("TENANT_ID")
53
- authority = f"https://login.microsoftonline.com/{tenant_id}"
54
- sender = os.getenv("MAIL_SENDER")
55
- receiver = os.getenv("MAIL_RECIPIENT")
56
- cc_receiver = os.getenv("CC_RECIPIENT")
57
-
58
- app = msal.ConfidentialClientApplication(
59
- client_id=client_id,
60
- client_credential=client_secret,
61
- authority=authority)
62
-
63
- scopes = ["https://graph.microsoft.com/.default"]
64
-
65
- result = app.acquire_token_silent(scopes, account=None)
66
-
67
- if not result:
68
- print("No suitable token exists in cache. Let's get a new one from Azure Active Directory.")
69
- result = app.acquire_token_for_client(scopes=scopes)
70
-
71
- if "access_token" in result:
72
- endpoint = f'https://graph.microsoft.com/v1.0/users/{sender}/sendMail'
73
- email_msg = {
74
- 'Message': {
75
- 'Subject': subject,
76
- 'Body': {
77
- 'ContentType': 'Text',
78
- 'Content': body
79
- },
80
- 'ToRecipients': [{'EmailAddress': {'Address': receiver}}],
81
- 'CcRecipients': [{'EmailAddress': {'Address': cc_receiver}}] # Added CcRecipients here
82
- },
83
- 'SaveToSentItems': 'true'
84
- }
85
-
86
- r = requests.post(endpoint, headers={'Authorization': 'Bearer ' + result['access_token']}, json=email_msg)
87
-
88
- if r.ok:
89
- print('Sent email successfully')
90
- else:
91
- print(r.json())
92
- else:
93
- print(result.get("error"))
94
- print(result.get("error_description"))
95
- print(result.get("correlation_id"))
96
-
97
- # Sample usage
98
- # send_email("Test Email Hugging Face Demo", "This is a test email.")
 
spaces/CognitiveLabs/GPT-auto-webscraping/chains/output_format/base.py DELETED
@@ -1,19 +0,0 @@
- from langchain.chains import LLMChain
- from langchain.memory import ConversationBufferMemory
- from chains.output_format.templates import output_format_chat_prompt
-
-
- def chain_output_format(llm) -> LLMChain:
-     # memory
-     html_memory = ConversationBufferMemory(
-         input_key="html_content", memory_key="chat_history"
-     )
-
-     # chain
-     return LLMChain(
-         llm=llm,
-         prompt=output_format_chat_prompt,
-         verbose=True,
-         output_key="output_format",
-         memory=html_memory,
-     )
 
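A hedged sketch of how the factory above might be driven; the ChatOpenAI model name and HTML snippet are made up, and it assumes output_format_chat_prompt exposes an html_content input variable:

# Hypothetical caller; requires an OPENAI_API_KEY in the environment.
from langchain.chat_models import ChatOpenAI
from chains.output_format.base import chain_output_format

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
chain = chain_output_format(llm)

html = "<ul><li class='price'>$10</li><li class='price'>$12</li></ul>"
result = chain({"html_content": html})
print(result["output_format"])  # the proposed output schema for the scraped fields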
spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/ROIAlign.h DELETED
@@ -1,46 +0,0 @@
- // Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
- #pragma once
-
- #include "cpu/vision.h"
-
- #ifdef WITH_CUDA
- #include "cuda/vision.h"
- #endif
-
- // Interface for Python
- at::Tensor ROIAlign_forward(const at::Tensor& input,
-                             const at::Tensor& rois,
-                             const float spatial_scale,
-                             const int pooled_height,
-                             const int pooled_width,
-                             const int sampling_ratio) {
-   if (input.type().is_cuda()) {
- #ifdef WITH_CUDA
-     return ROIAlign_forward_cuda(input, rois, spatial_scale, pooled_height, pooled_width, sampling_ratio);
- #else
-     AT_ERROR("Not compiled with GPU support");
- #endif
-   }
-   return ROIAlign_forward_cpu(input, rois, spatial_scale, pooled_height, pooled_width, sampling_ratio);
- }
-
- at::Tensor ROIAlign_backward(const at::Tensor& grad,
-                              const at::Tensor& rois,
-                              const float spatial_scale,
-                              const int pooled_height,
-                              const int pooled_width,
-                              const int batch_size,
-                              const int channels,
-                              const int height,
-                              const int width,
-                              const int sampling_ratio) {
-   if (grad.type().is_cuda()) {
- #ifdef WITH_CUDA
-     return ROIAlign_backward_cuda(grad, rois, spatial_scale, pooled_height, pooled_width, batch_size, channels, height, width, sampling_ratio);
- #else
-     AT_ERROR("Not compiled with GPU support");
- #endif
-   }
-   AT_ERROR("Not implemented on the CPU");
- }
-
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/misc/etree.py DELETED
@@ -1,478 +0,0 @@
1
- """Shim module exporting the same ElementTree API for lxml and
2
- xml.etree backends.
3
-
4
- When lxml is installed, it is automatically preferred over the built-in
5
- xml.etree module.
6
- On Python 2.7, the cElementTree module is preferred over the pure-python
7
- ElementTree module.
8
-
9
- Besides exporting a unified interface, this also defines extra functions
10
- or subclasses built-in ElementTree classes to add features that are
11
- only availble in lxml, like OrderedDict for attributes, pretty_print and
12
- iterwalk.
13
- """
14
- from fontTools.misc.textTools import tostr
15
-
16
-
17
- XML_DECLARATION = """<?xml version='1.0' encoding='%s'?>"""
18
-
19
- __all__ = [
20
- # public symbols
21
- "Comment",
22
- "dump",
23
- "Element",
24
- "ElementTree",
25
- "fromstring",
26
- "fromstringlist",
27
- "iselement",
28
- "iterparse",
29
- "parse",
30
- "ParseError",
31
- "PI",
32
- "ProcessingInstruction",
33
- "QName",
34
- "SubElement",
35
- "tostring",
36
- "tostringlist",
37
- "TreeBuilder",
38
- "XML",
39
- "XMLParser",
40
- "register_namespace",
41
- ]
42
-
43
- try:
44
- from lxml.etree import *
45
-
46
- _have_lxml = True
47
- except ImportError:
48
- try:
49
- from xml.etree.cElementTree import *
50
-
51
- # the cElementTree version of XML function doesn't support
52
- # the optional 'parser' keyword argument
53
- from xml.etree.ElementTree import XML
54
- except ImportError: # pragma: no cover
55
- from xml.etree.ElementTree import *
56
- _have_lxml = False
57
-
58
- import sys
59
-
60
- # dict is always ordered in python >= 3.6 and on pypy
61
- PY36 = sys.version_info >= (3, 6)
62
- try:
63
- import __pypy__
64
- except ImportError:
65
- __pypy__ = None
66
- _dict_is_ordered = bool(PY36 or __pypy__)
67
- del PY36, __pypy__
68
-
69
- if _dict_is_ordered:
70
- _Attrib = dict
71
- else:
72
- from collections import OrderedDict as _Attrib
73
-
74
- if isinstance(Element, type):
75
- _Element = Element
76
- else:
77
- # in py27, cElementTree.Element cannot be subclassed, so
78
- # we need to import the pure-python class
79
- from xml.etree.ElementTree import Element as _Element
80
-
81
- class Element(_Element):
82
- """Element subclass that keeps the order of attributes."""
83
-
84
- def __init__(self, tag, attrib=_Attrib(), **extra):
85
- super(Element, self).__init__(tag)
86
- self.attrib = _Attrib()
87
- if attrib:
88
- self.attrib.update(attrib)
89
- if extra:
90
- self.attrib.update(extra)
91
-
92
- def SubElement(parent, tag, attrib=_Attrib(), **extra):
93
- """Must override SubElement as well otherwise _elementtree.SubElement
94
- fails if 'parent' is a subclass of Element object.
95
- """
96
- element = parent.__class__(tag, attrib, **extra)
97
- parent.append(element)
98
- return element
99
-
100
- def _iterwalk(element, events, tag):
101
- include = tag is None or element.tag == tag
102
- if include and "start" in events:
103
- yield ("start", element)
104
- for e in element:
105
- for item in _iterwalk(e, events, tag):
106
- yield item
107
- if include:
108
- yield ("end", element)
109
-
110
- def iterwalk(element_or_tree, events=("end",), tag=None):
111
- """A tree walker that generates events from an existing tree as
112
- if it was parsing XML data with iterparse().
113
- Drop-in replacement for lxml.etree.iterwalk.
114
- """
115
- if iselement(element_or_tree):
116
- element = element_or_tree
117
- else:
118
- element = element_or_tree.getroot()
119
- if tag == "*":
120
- tag = None
121
- for item in _iterwalk(element, events, tag):
122
- yield item
123
-
124
- _ElementTree = ElementTree
125
-
126
- class ElementTree(_ElementTree):
127
- """ElementTree subclass that adds 'pretty_print' and 'doctype'
128
- arguments to the 'write' method.
129
- Currently these are only supported for the default XML serialization
130
- 'method', and not also for "html" or "text", for these are delegated
131
- to the base class.
132
- """
133
-
134
- def write(
135
- self,
136
- file_or_filename,
137
- encoding=None,
138
- xml_declaration=False,
139
- method=None,
140
- doctype=None,
141
- pretty_print=False,
142
- ):
143
- if method and method != "xml":
144
- # delegate to super-class
145
- super(ElementTree, self).write(
146
- file_or_filename,
147
- encoding=encoding,
148
- xml_declaration=xml_declaration,
149
- method=method,
150
- )
151
- return
152
-
153
- if encoding is not None and encoding.lower() == "unicode":
154
- if xml_declaration:
155
- raise ValueError(
156
- "Serialisation to unicode must not request an XML declaration"
157
- )
158
- write_declaration = False
159
- encoding = "unicode"
160
- elif xml_declaration is None:
161
- # by default, write an XML declaration only for non-standard encodings
162
- write_declaration = encoding is not None and encoding.upper() not in (
163
- "ASCII",
164
- "UTF-8",
165
- "UTF8",
166
- "US-ASCII",
167
- )
168
- else:
169
- write_declaration = xml_declaration
170
-
171
- if encoding is None:
172
- encoding = "ASCII"
173
-
174
- if pretty_print:
175
- # NOTE this will modify the tree in-place
176
- _indent(self._root)
177
-
178
- with _get_writer(file_or_filename, encoding) as write:
179
- if write_declaration:
180
- write(XML_DECLARATION % encoding.upper())
181
- if pretty_print:
182
- write("\n")
183
- if doctype:
184
- write(_tounicode(doctype))
185
- if pretty_print:
186
- write("\n")
187
-
188
- qnames, namespaces = _namespaces(self._root)
189
- _serialize_xml(write, self._root, qnames, namespaces)
190
-
191
- import io
192
-
193
- def tostring(
194
- element,
195
- encoding=None,
196
- xml_declaration=None,
197
- method=None,
198
- doctype=None,
199
- pretty_print=False,
200
- ):
201
- """Custom 'tostring' function that uses our ElementTree subclass, with
202
- pretty_print support.
203
- """
204
- stream = io.StringIO() if encoding == "unicode" else io.BytesIO()
205
- ElementTree(element).write(
206
- stream,
207
- encoding=encoding,
208
- xml_declaration=xml_declaration,
209
- method=method,
210
- doctype=doctype,
211
- pretty_print=pretty_print,
212
- )
213
- return stream.getvalue()
214
-
215
- # serialization support
216
-
217
- import re
218
-
219
- # Valid XML strings can include any Unicode character, excluding control
220
- # characters, the surrogate blocks, FFFE, and FFFF:
221
- # Char ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
222
- # Here we reversed the pattern to match only the invalid characters.
223
- # For the 'narrow' python builds supporting only UCS-2, which represent
224
- # characters beyond BMP as UTF-16 surrogate pairs, we need to pass through
225
- # the surrogate block. I haven't found a more elegant solution...
226
- UCS2 = sys.maxunicode < 0x10FFFF
227
- if UCS2:
228
- _invalid_xml_string = re.compile(
229
- "[\u0000-\u0008\u000B-\u000C\u000E-\u001F\uFFFE-\uFFFF]"
230
- )
231
- else:
232
- _invalid_xml_string = re.compile(
233
- "[\u0000-\u0008\u000B-\u000C\u000E-\u001F\uD800-\uDFFF\uFFFE-\uFFFF]"
234
- )
235
-
236
- def _tounicode(s):
237
- """Test if a string is valid user input and decode it to unicode string
238
- using ASCII encoding if it's a bytes string.
239
- Reject all bytes/unicode input that contains non-XML characters.
240
- Reject all bytes input that contains non-ASCII characters.
241
- """
242
- try:
243
- s = tostr(s, encoding="ascii", errors="strict")
244
- except UnicodeDecodeError:
245
- raise ValueError(
246
- "Bytes strings can only contain ASCII characters. "
247
- "Use unicode strings for non-ASCII characters."
248
- )
249
- except AttributeError:
250
- _raise_serialization_error(s)
251
- if s and _invalid_xml_string.search(s):
252
- raise ValueError(
253
- "All strings must be XML compatible: Unicode or ASCII, "
254
- "no NULL bytes or control characters"
255
- )
256
- return s
257
-
258
- import contextlib
259
-
260
- @contextlib.contextmanager
261
- def _get_writer(file_or_filename, encoding):
262
- # returns text write method and release all resources after using
263
- try:
264
- write = file_or_filename.write
265
- except AttributeError:
266
- # file_or_filename is a file name
267
- f = open(
268
- file_or_filename,
269
- "w",
270
- encoding="utf-8" if encoding == "unicode" else encoding,
271
- errors="xmlcharrefreplace",
272
- )
273
- with f:
274
- yield f.write
275
- else:
276
- # file_or_filename is a file-like object
277
- # encoding determines if it is a text or binary writer
278
- if encoding == "unicode":
279
- # use a text writer as is
280
- yield write
281
- else:
282
- # wrap a binary writer with TextIOWrapper
283
- detach_buffer = False
284
- if isinstance(file_or_filename, io.BufferedIOBase):
285
- buf = file_or_filename
286
- elif isinstance(file_or_filename, io.RawIOBase):
287
- buf = io.BufferedWriter(file_or_filename)
288
- detach_buffer = True
289
- else:
290
- # This is to handle passed objects that aren't in the
291
- # IOBase hierarchy, but just have a write method
292
- buf = io.BufferedIOBase()
293
- buf.writable = lambda: True
294
- buf.write = write
295
- try:
296
- # TextIOWrapper uses this methods to determine
297
- # if BOM (for UTF-16, etc) should be added
298
- buf.seekable = file_or_filename.seekable
299
- buf.tell = file_or_filename.tell
300
- except AttributeError:
301
- pass
302
- wrapper = io.TextIOWrapper(
303
- buf,
304
- encoding=encoding,
305
- errors="xmlcharrefreplace",
306
- newline="\n",
307
- )
308
- try:
309
- yield wrapper.write
310
- finally:
311
- # Keep the original file open when the TextIOWrapper and
312
- # the BufferedWriter are destroyed
313
- wrapper.detach()
314
- if detach_buffer:
315
- buf.detach()
316
-
317
- from xml.etree.ElementTree import _namespace_map
318
-
319
- def _namespaces(elem):
320
- # identify namespaces used in this tree
321
-
322
- # maps qnames to *encoded* prefix:local names
323
- qnames = {None: None}
324
-
325
- # maps uri:s to prefixes
326
- namespaces = {}
327
-
328
- def add_qname(qname):
329
- # calculate serialized qname representation
330
- try:
331
- qname = _tounicode(qname)
332
- if qname[:1] == "{":
333
- uri, tag = qname[1:].rsplit("}", 1)
334
- prefix = namespaces.get(uri)
335
- if prefix is None:
336
- prefix = _namespace_map.get(uri)
337
- if prefix is None:
338
- prefix = "ns%d" % len(namespaces)
339
- else:
340
- prefix = _tounicode(prefix)
341
- if prefix != "xml":
342
- namespaces[uri] = prefix
343
- if prefix:
344
- qnames[qname] = "%s:%s" % (prefix, tag)
345
- else:
346
- qnames[qname] = tag # default element
347
- else:
348
- qnames[qname] = qname
349
- except TypeError:
350
- _raise_serialization_error(qname)
351
-
352
- # populate qname and namespaces table
353
- for elem in elem.iter():
354
- tag = elem.tag
355
- if isinstance(tag, QName):
356
- if tag.text not in qnames:
357
- add_qname(tag.text)
358
- elif isinstance(tag, str):
359
- if tag not in qnames:
360
- add_qname(tag)
361
- elif tag is not None and tag is not Comment and tag is not PI:
362
- _raise_serialization_error(tag)
363
- for key, value in elem.items():
364
- if isinstance(key, QName):
365
- key = key.text
366
- if key not in qnames:
367
- add_qname(key)
368
- if isinstance(value, QName) and value.text not in qnames:
369
- add_qname(value.text)
370
- text = elem.text
371
- if isinstance(text, QName) and text.text not in qnames:
372
- add_qname(text.text)
373
- return qnames, namespaces
374
-
375
- def _serialize_xml(write, elem, qnames, namespaces, **kwargs):
376
- tag = elem.tag
377
- text = elem.text
378
- if tag is Comment:
379
- write("<!--%s-->" % _tounicode(text))
380
- elif tag is ProcessingInstruction:
381
- write("<?%s?>" % _tounicode(text))
382
- else:
383
- tag = qnames[_tounicode(tag) if tag is not None else None]
384
- if tag is None:
385
- if text:
386
- write(_escape_cdata(text))
387
- for e in elem:
388
- _serialize_xml(write, e, qnames, None)
389
- else:
390
- write("<" + tag)
391
- if namespaces:
392
- for uri, prefix in sorted(
393
- namespaces.items(), key=lambda x: x[1]
394
- ): # sort on prefix
395
- if prefix:
396
- prefix = ":" + prefix
397
- write(' xmlns%s="%s"' % (prefix, _escape_attrib(uri)))
398
- attrs = elem.attrib
399
- if attrs:
400
- # try to keep existing attrib order
401
- if len(attrs) <= 1 or type(attrs) is _Attrib:
402
- items = attrs.items()
403
- else:
404
- # if plain dict, use lexical order
405
- items = sorted(attrs.items())
406
- for k, v in items:
407
- if isinstance(k, QName):
408
- k = _tounicode(k.text)
409
- else:
410
- k = _tounicode(k)
411
- if isinstance(v, QName):
412
- v = qnames[_tounicode(v.text)]
413
- else:
414
- v = _escape_attrib(v)
415
- write(' %s="%s"' % (qnames[k], v))
416
- if text is not None or len(elem):
417
- write(">")
418
- if text:
419
- write(_escape_cdata(text))
420
- for e in elem:
421
- _serialize_xml(write, e, qnames, None)
422
- write("</" + tag + ">")
423
- else:
424
- write("/>")
425
- if elem.tail:
426
- write(_escape_cdata(elem.tail))
427
-
428
- def _raise_serialization_error(text):
429
- raise TypeError("cannot serialize %r (type %s)" % (text, type(text).__name__))
430
-
431
- def _escape_cdata(text):
432
- # escape character data
433
- try:
434
- text = _tounicode(text)
435
- # it's worth avoiding do-nothing calls for short strings
436
- if "&" in text:
437
- text = text.replace("&", "&amp;")
438
- if "<" in text:
439
- text = text.replace("<", "&lt;")
440
- if ">" in text:
441
- text = text.replace(">", "&gt;")
442
- return text
443
- except (TypeError, AttributeError):
444
- _raise_serialization_error(text)
445
-
446
- def _escape_attrib(text):
447
- # escape attribute value
448
- try:
449
- text = _tounicode(text)
450
- if "&" in text:
451
- text = text.replace("&", "&amp;")
452
- if "<" in text:
453
- text = text.replace("<", "&lt;")
454
- if ">" in text:
455
- text = text.replace(">", "&gt;")
456
- if '"' in text:
457
- text = text.replace('"', "&quot;")
458
- if "\n" in text:
459
- text = text.replace("\n", "&#10;")
460
- return text
461
- except (TypeError, AttributeError):
462
- _raise_serialization_error(text)
463
-
464
- def _indent(elem, level=0):
465
- # From http://effbot.org/zone/element-lib.htm#prettyprint
466
- i = "\n" + level * " "
467
- if len(elem):
468
- if not elem.text or not elem.text.strip():
469
- elem.text = i + " "
470
- if not elem.tail or not elem.tail.strip():
471
- elem.tail = i
472
- for elem in elem:
473
- _indent(elem, level + 1)
474
- if not elem.tail or not elem.tail.strip():
475
- elem.tail = i
476
- else:
477
- if level and (not elem.tail or not elem.tail.strip()):
478
- elem.tail = i
 
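A short sketch of the extra behaviour this shim adds on top of the stock ElementTree API (ordered attributes and pretty_print); the element names are invented and the printed output is approximate:

from fontTools.misc import etree

root = etree.Element("font", {"name": "Demo", "version": "1.0"})  # attribute order is preserved
etree.SubElement(root, "glyph", name="A", advance="600")

xml = etree.tostring(root, encoding="unicode", pretty_print=True)
print(xml)
# <font name="Demo" version="1.0">
#   <glyph name="A" advance="600"/>
# </font>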
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/varLib/featureVars.py DELETED
@@ -1,605 +0,0 @@
1
- """Module to build FeatureVariation tables:
2
- https://docs.microsoft.com/en-us/typography/opentype/spec/chapter2#featurevariations-table
3
-
4
- NOTE: The API is experimental and subject to change.
5
- """
6
- from fontTools.misc.dictTools import hashdict
7
- from fontTools.misc.intTools import bit_count
8
- from fontTools.ttLib import newTable
9
- from fontTools.ttLib.tables import otTables as ot
10
- from fontTools.ttLib.ttVisitor import TTVisitor
11
- from fontTools.otlLib.builder import buildLookup, buildSingleSubstSubtable
12
- from collections import OrderedDict
13
-
14
- from .errors import VarLibError, VarLibValidationError
15
-
16
-
17
- def addFeatureVariations(font, conditionalSubstitutions, featureTag="rvrn"):
18
- """Add conditional substitutions to a Variable Font.
19
-
20
- The `conditionalSubstitutions` argument is a list of (Region, Substitutions)
21
- tuples.
22
-
23
- A Region is a list of Boxes. A Box is a dict mapping axisTags to
24
- (minValue, maxValue) tuples. Irrelevant axes may be omitted and they are
25
- interpretted as extending to end of axis in each direction. A Box represents
26
- an orthogonal 'rectangular' subset of an N-dimensional design space.
27
- A Region represents a more complex subset of an N-dimensional design space,
28
- ie. the union of all the Boxes in the Region.
29
- For efficiency, Boxes within a Region should ideally not overlap, but
30
- functionality is not compromised if they do.
31
-
32
- The minimum and maximum values are expressed in normalized coordinates.
33
-
34
- A Substitution is a dict mapping source glyph names to substitute glyph names.
35
-
36
- Example:
37
-
38
- # >>> f = TTFont(srcPath)
39
- # >>> condSubst = [
40
- # ... # A list of (Region, Substitution) tuples.
41
- # ... ([{"wdth": (0.5, 1.0)}], {"cent": "cent.rvrn"}),
42
- # ... ([{"wght": (0.5, 1.0)}], {"dollar": "dollar.rvrn"}),
43
- # ... ]
44
- # >>> addFeatureVariations(f, condSubst)
45
- # >>> f.save(dstPath)
46
- """
47
-
48
- processLast = featureTag != "rvrn"
49
-
50
- _checkSubstitutionGlyphsExist(
51
- glyphNames=set(font.getGlyphOrder()),
52
- substitutions=conditionalSubstitutions,
53
- )
54
-
55
- substitutions = overlayFeatureVariations(conditionalSubstitutions)
56
-
57
- # turn substitution dicts into tuples of tuples, so they are hashable
58
- conditionalSubstitutions, allSubstitutions = makeSubstitutionsHashable(
59
- substitutions
60
- )
61
- if "GSUB" not in font:
62
- font["GSUB"] = buildGSUB()
63
-
64
- # setup lookups
65
- lookupMap = buildSubstitutionLookups(
66
- font["GSUB"].table, allSubstitutions, processLast
67
- )
68
-
69
- # addFeatureVariationsRaw takes a list of
70
- # ( {condition}, [ lookup indices ] )
71
- # so rearrange our lookups to match
72
- conditionsAndLookups = []
73
- for conditionSet, substitutions in conditionalSubstitutions:
74
- conditionsAndLookups.append(
75
- (conditionSet, [lookupMap[s] for s in substitutions])
76
- )
77
-
78
- addFeatureVariationsRaw(font, font["GSUB"].table, conditionsAndLookups, featureTag)
79
-
80
-
81
- def _checkSubstitutionGlyphsExist(glyphNames, substitutions):
82
- referencedGlyphNames = set()
83
- for _, substitution in substitutions:
84
- referencedGlyphNames |= substitution.keys()
85
- referencedGlyphNames |= set(substitution.values())
86
- missing = referencedGlyphNames - glyphNames
87
- if missing:
88
- raise VarLibValidationError(
89
- "Missing glyphs are referenced in conditional substitution rules:"
90
- f" {', '.join(missing)}"
91
- )
92
-
93
-
94
- def overlayFeatureVariations(conditionalSubstitutions):
95
- """Compute overlaps between all conditional substitutions.
96
-
97
- The `conditionalSubstitutions` argument is a list of (Region, Substitutions)
98
- tuples.
99
-
100
- A Region is a list of Boxes. A Box is a dict mapping axisTags to
101
- (minValue, maxValue) tuples. Irrelevant axes may be omitted and they are
102
- interpretted as extending to end of axis in each direction. A Box represents
103
- an orthogonal 'rectangular' subset of an N-dimensional design space.
104
- A Region represents a more complex subset of an N-dimensional design space,
105
- ie. the union of all the Boxes in the Region.
106
- For efficiency, Boxes within a Region should ideally not overlap, but
107
- functionality is not compromised if they do.
108
-
109
- The minimum and maximum values are expressed in normalized coordinates.
110
-
111
- A Substitution is a dict mapping source glyph names to substitute glyph names.
112
-
113
- Returns data is in similar but different format. Overlaps of distinct
114
- substitution Boxes (*not* Regions) are explicitly listed as distinct rules,
115
- and rules with the same Box merged. The more specific rules appear earlier
116
- in the resulting list. Moreover, instead of just a dictionary of substitutions,
117
- a list of dictionaries is returned for substitutions corresponding to each
118
- unique space, with each dictionary being identical to one of the input
119
- substitution dictionaries. These dictionaries are not merged to allow data
120
- sharing when they are converted into font tables.
121
-
122
- Example::
123
-
124
- >>> condSubst = [
125
- ... # A list of (Region, Substitution) tuples.
126
- ... ([{"wght": (0.5, 1.0)}], {"dollar": "dollar.rvrn"}),
127
- ... ([{"wght": (0.5, 1.0)}], {"dollar": "dollar.rvrn"}),
128
- ... ([{"wdth": (0.5, 1.0)}], {"cent": "cent.rvrn"}),
129
- ... ([{"wght": (0.5, 1.0), "wdth": (-1, 1.0)}], {"dollar": "dollar.rvrn"}),
130
- ... ]
131
- >>> from pprint import pprint
132
- >>> pprint(overlayFeatureVariations(condSubst))
133
- [({'wdth': (0.5, 1.0), 'wght': (0.5, 1.0)},
134
- [{'dollar': 'dollar.rvrn'}, {'cent': 'cent.rvrn'}]),
135
- ({'wdth': (0.5, 1.0)}, [{'cent': 'cent.rvrn'}]),
136
- ({'wght': (0.5, 1.0)}, [{'dollar': 'dollar.rvrn'}])]
137
-
138
- """
139
-
140
- # Merge same-substitutions rules, as this creates fewer number oflookups.
141
- merged = OrderedDict()
142
- for value, key in conditionalSubstitutions:
143
- key = hashdict(key)
144
- if key in merged:
145
- merged[key].extend(value)
146
- else:
147
- merged[key] = value
148
- conditionalSubstitutions = [(v, dict(k)) for k, v in merged.items()]
149
- del merged
150
-
151
- # Merge same-region rules, as this is cheaper.
152
- # Also convert boxes to hashdict()
153
- #
154
- # Reversing is such that earlier entries win in case of conflicting substitution
155
- # rules for the same region.
156
- merged = OrderedDict()
157
- for key, value in reversed(conditionalSubstitutions):
158
- key = tuple(
159
- sorted(
160
- (hashdict(cleanupBox(k)) for k in key),
161
- key=lambda d: tuple(sorted(d.items())),
162
- )
163
- )
164
- if key in merged:
165
- merged[key].update(value)
166
- else:
167
- merged[key] = dict(value)
168
- conditionalSubstitutions = list(reversed(merged.items()))
169
- del merged
170
-
171
- # Overlay
172
- #
173
- # Rank is the bit-set of the index of all contributing layers.
174
- initMapInit = ((hashdict(), 0),) # Initializer representing the entire space
175
- boxMap = OrderedDict(initMapInit) # Map from Box to Rank
176
- for i, (currRegion, _) in enumerate(conditionalSubstitutions):
177
- newMap = OrderedDict(initMapInit)
178
- currRank = 1 << i
179
- for box, rank in boxMap.items():
180
- for currBox in currRegion:
181
- intersection, remainder = overlayBox(currBox, box)
182
- if intersection is not None:
183
- intersection = hashdict(intersection)
184
- newMap[intersection] = newMap.get(intersection, 0) | rank | currRank
185
- if remainder is not None:
186
- remainder = hashdict(remainder)
187
- newMap[remainder] = newMap.get(remainder, 0) | rank
188
- boxMap = newMap
189
-
190
- # Generate output
191
- items = []
192
- for box, rank in sorted(
193
- boxMap.items(), key=(lambda BoxAndRank: -bit_count(BoxAndRank[1]))
194
- ):
195
- # Skip any box that doesn't have any substitution.
196
- if rank == 0:
197
- continue
198
- substsList = []
199
- i = 0
200
- while rank:
201
- if rank & 1:
202
- substsList.append(conditionalSubstitutions[i][1])
203
- rank >>= 1
204
- i += 1
205
- items.append((dict(box), substsList))
206
- return items
207
-
208
-
209
- #
210
- # Terminology:
211
- #
212
- # A 'Box' is a dict representing an orthogonal "rectangular" bit of N-dimensional space.
213
- # The keys in the dict are axis tags, the values are (minValue, maxValue) tuples.
214
- # Missing dimensions (keys) are substituted by the default min and max values
215
- # from the corresponding axes.
216
- #
217
-
218
-
219
- def overlayBox(top, bot):
220
- """Overlays ``top`` box on top of ``bot`` box.
221
-
222
- Returns two items:
223
-
224
- * Box for intersection of ``top`` and ``bot``, or None if they don't intersect.
225
- * Box for remainder of ``bot``. Remainder box might not be exact (since the
226
- remainder might not be a simple box), but is inclusive of the exact
227
- remainder.
228
- """
229
-
230
- # Intersection
231
- intersection = {}
232
- intersection.update(top)
233
- intersection.update(bot)
234
- for axisTag in set(top) & set(bot):
235
- min1, max1 = top[axisTag]
236
- min2, max2 = bot[axisTag]
237
- minimum = max(min1, min2)
238
- maximum = min(max1, max2)
239
- if not minimum < maximum:
240
- return None, bot # Do not intersect
241
- intersection[axisTag] = minimum, maximum
242
-
243
- # Remainder
244
- #
245
- # Remainder is empty if bot's each axis range lies within that of intersection.
246
- #
247
- # Remainder is shrank if bot's each, except for exactly one, axis range lies
248
- # within that of intersection, and that one axis, it extrudes out of the
249
- # intersection only on one side.
250
- #
251
- # Bot is returned in full as remainder otherwise, as true remainder is not
252
- # representable as a single box.
253
-
254
- remainder = dict(bot)
255
- extruding = False
256
- fullyInside = True
257
- for axisTag in top:
258
- if axisTag in bot:
259
- continue
260
- extruding = True
261
- fullyInside = False
262
- break
263
- for axisTag in bot:
264
- if axisTag not in top:
265
- continue # Axis range lies fully within
266
- min1, max1 = intersection[axisTag]
267
- min2, max2 = bot[axisTag]
268
- if min1 <= min2 and max2 <= max1:
269
- continue # Axis range lies fully within
270
-
271
- # Bot's range doesn't fully lie within that of top's for this axis.
272
- # We know they intersect, so it cannot lie fully without either; so they
273
- # overlap.
274
-
275
- # If we have had an overlapping axis before, remainder is not
276
- # representable as a box, so return full bottom and go home.
277
- if extruding:
278
- return intersection, bot
279
- extruding = True
280
- fullyInside = False
281
-
282
- # Otherwise, cut remainder on this axis and continue.
283
- if min1 <= min2:
284
- # Right side survives.
285
- minimum = max(max1, min2)
286
- maximum = max2
287
- elif max2 <= max1:
288
- # Left side survives.
289
- minimum = min2
290
- maximum = min(min1, max2)
291
- else:
292
- # Remainder leaks out from both sides. Can't cut either.
293
- return intersection, bot
294
-
295
- remainder[axisTag] = minimum, maximum
296
-
297
- if fullyInside:
298
- # bot is fully within intersection. Remainder is empty.
299
- return intersection, None
300
-
301
- return intersection, remainder
302
-
303
-
304
- def cleanupBox(box):
305
- """Return a sparse copy of `box`, without redundant (default) values.
306
-
307
- >>> cleanupBox({})
308
- {}
309
- >>> cleanupBox({'wdth': (0.0, 1.0)})
310
- {'wdth': (0.0, 1.0)}
311
- >>> cleanupBox({'wdth': (-1.0, 1.0)})
312
- {}
313
-
314
- """
315
- return {tag: limit for tag, limit in box.items() if limit != (-1.0, 1.0)}
316
-
317
-
318
- #
319
- # Low level implementation
320
- #
321
-
322
-
323
- def addFeatureVariationsRaw(font, table, conditionalSubstitutions, featureTag="rvrn"):
324
- """Low level implementation of addFeatureVariations that directly
325
- models the possibilities of the FeatureVariations table."""
326
-
327
- processLast = featureTag != "rvrn"
328
-
329
- #
330
- # if there is no <featureTag> feature:
331
- # make empty <featureTag> feature
332
- # sort features, get <featureTag> feature index
333
- # add <featureTag> feature to all scripts
334
- # make lookups
335
- # add feature variations
336
- #
337
- if table.Version < 0x00010001:
338
- table.Version = 0x00010001 # allow table.FeatureVariations
339
-
340
- table.FeatureVariations = None # delete any existing FeatureVariations
341
-
342
- varFeatureIndices = []
343
- for index, feature in enumerate(table.FeatureList.FeatureRecord):
344
- if feature.FeatureTag == featureTag:
345
- varFeatureIndices.append(index)
346
-
347
- if not varFeatureIndices:
348
- varFeature = buildFeatureRecord(featureTag, [])
349
- table.FeatureList.FeatureRecord.append(varFeature)
350
- table.FeatureList.FeatureCount = len(table.FeatureList.FeatureRecord)
351
-
352
- sortFeatureList(table)
353
- varFeatureIndex = table.FeatureList.FeatureRecord.index(varFeature)
354
-
355
- for scriptRecord in table.ScriptList.ScriptRecord:
356
- if scriptRecord.Script.DefaultLangSys is None:
357
- raise VarLibError(
358
- "Feature variations require that the script "
359
- f"'{scriptRecord.ScriptTag}' defines a default language system."
360
- )
361
- langSystems = [lsr.LangSys for lsr in scriptRecord.Script.LangSysRecord]
362
- for langSys in [scriptRecord.Script.DefaultLangSys] + langSystems:
363
- langSys.FeatureIndex.append(varFeatureIndex)
364
- langSys.FeatureCount = len(langSys.FeatureIndex)
365
-
366
- varFeatureIndices = [varFeatureIndex]
367
-
368
- axisIndices = {
369
- axis.axisTag: axisIndex for axisIndex, axis in enumerate(font["fvar"].axes)
370
- }
371
-
372
- featureVariationRecords = []
373
- for conditionSet, lookupIndices in conditionalSubstitutions:
374
- conditionTable = []
375
- for axisTag, (minValue, maxValue) in sorted(conditionSet.items()):
376
- if minValue > maxValue:
377
- raise VarLibValidationError(
378
- "A condition set has a minimum value above the maximum value."
379
- )
380
- ct = buildConditionTable(axisIndices[axisTag], minValue, maxValue)
381
- conditionTable.append(ct)
382
- records = []
383
- for varFeatureIndex in varFeatureIndices:
384
- existingLookupIndices = table.FeatureList.FeatureRecord[
385
- varFeatureIndex
386
- ].Feature.LookupListIndex
387
- combinedLookupIndices = (
388
- existingLookupIndices + lookupIndices
389
- if processLast
390
- else lookupIndices + existingLookupIndices
391
- )
392
-
393
- records.append(
394
- buildFeatureTableSubstitutionRecord(
395
- varFeatureIndex, combinedLookupIndices
396
- )
397
- )
398
- featureVariationRecords.append(
399
- buildFeatureVariationRecord(conditionTable, records)
400
- )
401
-
402
- table.FeatureVariations = buildFeatureVariations(featureVariationRecords)
403
-
404
-
405
- #
406
- # Building GSUB/FeatureVariations internals
407
- #
408
-
409
-
410
- def buildGSUB():
411
- """Build a GSUB table from scratch."""
412
- fontTable = newTable("GSUB")
413
- gsub = fontTable.table = ot.GSUB()
414
- gsub.Version = 0x00010001 # allow gsub.FeatureVariations
415
-
416
- gsub.ScriptList = ot.ScriptList()
417
- gsub.ScriptList.ScriptRecord = []
418
- gsub.FeatureList = ot.FeatureList()
419
- gsub.FeatureList.FeatureRecord = []
420
- gsub.LookupList = ot.LookupList()
421
- gsub.LookupList.Lookup = []
422
-
423
- srec = ot.ScriptRecord()
424
- srec.ScriptTag = "DFLT"
425
- srec.Script = ot.Script()
426
- srec.Script.DefaultLangSys = None
427
- srec.Script.LangSysRecord = []
428
- srec.Script.LangSysCount = 0
429
-
430
- langrec = ot.LangSysRecord()
431
- langrec.LangSys = ot.LangSys()
432
- langrec.LangSys.ReqFeatureIndex = 0xFFFF
433
- langrec.LangSys.FeatureIndex = []
434
- srec.Script.DefaultLangSys = langrec.LangSys
435
-
436
- gsub.ScriptList.ScriptRecord.append(srec)
437
- gsub.ScriptList.ScriptCount = 1
438
- gsub.FeatureVariations = None
439
-
440
- return fontTable
441
-
442
-
443
- def makeSubstitutionsHashable(conditionalSubstitutions):
444
- """Turn all the substitution dictionaries into sorted tuples of tuples so
445
- they are hashable, to detect duplicates so we don't write out redundant
446
- data."""
447
- allSubstitutions = set()
448
- condSubst = []
449
- for conditionSet, substitutionMaps in conditionalSubstitutions:
450
- substitutions = []
451
- for substitutionMap in substitutionMaps:
452
- subst = tuple(sorted(substitutionMap.items()))
453
- substitutions.append(subst)
454
- allSubstitutions.add(subst)
455
- condSubst.append((conditionSet, substitutions))
456
- return condSubst, sorted(allSubstitutions)
457
-
458
-
459
- class ShifterVisitor(TTVisitor):
460
- def __init__(self, shift):
461
- self.shift = shift
462
-
463
-
464
- @ShifterVisitor.register_attr(ot.Feature, "LookupListIndex") # GSUB/GPOS
465
- def visit(visitor, obj, attr, value):
466
- shift = visitor.shift
467
- value = [l + shift for l in value]
468
- setattr(obj, attr, value)
469
-
470
-
471
- @ShifterVisitor.register_attr(
472
- (ot.SubstLookupRecord, ot.PosLookupRecord), "LookupListIndex"
473
- )
474
- def visit(visitor, obj, attr, value):
475
- setattr(obj, attr, visitor.shift + value)
476
-
477
-
478
- def buildSubstitutionLookups(gsub, allSubstitutions, processLast=False):
479
- """Build the lookups for the glyph substitutions, return a dict mapping
480
- the substitution to lookup indices."""
481
-
482
- # Insert lookups at the beginning of the lookup vector
483
- # https://github.com/googlefonts/fontmake/issues/950
484
-
485
- firstIndex = len(gsub.LookupList.Lookup) if processLast else 0
486
- lookupMap = {}
487
- for i, substitutionMap in enumerate(allSubstitutions):
488
- lookupMap[substitutionMap] = firstIndex + i
489
-
490
- if not processLast:
491
- # Shift all lookup indices in gsub by len(allSubstitutions)
492
- shift = len(allSubstitutions)
493
- visitor = ShifterVisitor(shift)
494
- visitor.visit(gsub.FeatureList.FeatureRecord)
495
- visitor.visit(gsub.LookupList.Lookup)
496
-
497
- for i, subst in enumerate(allSubstitutions):
498
- substMap = dict(subst)
499
- lookup = buildLookup([buildSingleSubstSubtable(substMap)])
500
- if processLast:
501
- gsub.LookupList.Lookup.append(lookup)
502
- else:
503
- gsub.LookupList.Lookup.insert(i, lookup)
504
- assert gsub.LookupList.Lookup[lookupMap[subst]] is lookup
505
- gsub.LookupList.LookupCount = len(gsub.LookupList.Lookup)
506
- return lookupMap
507
-
508
-
509
- def buildFeatureVariations(featureVariationRecords):
510
- """Build the FeatureVariations subtable."""
511
- fv = ot.FeatureVariations()
512
- fv.Version = 0x00010000
513
- fv.FeatureVariationRecord = featureVariationRecords
514
- fv.FeatureVariationCount = len(featureVariationRecords)
515
- return fv
516
-
517
-
518
- def buildFeatureRecord(featureTag, lookupListIndices):
519
- """Build a FeatureRecord."""
520
- fr = ot.FeatureRecord()
521
- fr.FeatureTag = featureTag
522
- fr.Feature = ot.Feature()
523
- fr.Feature.LookupListIndex = lookupListIndices
524
- fr.Feature.populateDefaults()
525
- return fr
526
-
527
-
528
- def buildFeatureVariationRecord(conditionTable, substitutionRecords):
529
- """Build a FeatureVariationRecord."""
530
- fvr = ot.FeatureVariationRecord()
531
- fvr.ConditionSet = ot.ConditionSet()
532
- fvr.ConditionSet.ConditionTable = conditionTable
533
- fvr.ConditionSet.ConditionCount = len(conditionTable)
534
- fvr.FeatureTableSubstitution = ot.FeatureTableSubstitution()
535
- fvr.FeatureTableSubstitution.Version = 0x00010000
536
- fvr.FeatureTableSubstitution.SubstitutionRecord = substitutionRecords
537
- fvr.FeatureTableSubstitution.SubstitutionCount = len(substitutionRecords)
538
- return fvr
539
-
540
-
541
- def buildFeatureTableSubstitutionRecord(featureIndex, lookupListIndices):
542
- """Build a FeatureTableSubstitutionRecord."""
543
- ftsr = ot.FeatureTableSubstitutionRecord()
544
- ftsr.FeatureIndex = featureIndex
545
- ftsr.Feature = ot.Feature()
546
- ftsr.Feature.LookupListIndex = lookupListIndices
547
- ftsr.Feature.LookupCount = len(lookupListIndices)
548
- return ftsr
549
-
550
-
551
- def buildConditionTable(axisIndex, filterRangeMinValue, filterRangeMaxValue):
552
- """Build a ConditionTable."""
553
- ct = ot.ConditionTable()
554
- ct.Format = 1
555
- ct.AxisIndex = axisIndex
556
- ct.FilterRangeMinValue = filterRangeMinValue
557
- ct.FilterRangeMaxValue = filterRangeMaxValue
558
- return ct
559
-
560
-
561
- def sortFeatureList(table):
562
- """Sort the feature list by feature tag, and remap the feature indices
563
- elsewhere. This is needed after the feature list has been modified.
564
- """
565
- # decorate, sort, undecorate, because we need to make an index remapping table
566
- tagIndexFea = [
567
- (fea.FeatureTag, index, fea)
568
- for index, fea in enumerate(table.FeatureList.FeatureRecord)
569
- ]
570
- tagIndexFea.sort()
571
- table.FeatureList.FeatureRecord = [fea for tag, index, fea in tagIndexFea]
572
- featureRemap = dict(
573
- zip([index for tag, index, fea in tagIndexFea], range(len(tagIndexFea)))
574
- )
575
-
576
- # Remap the feature indices
577
- remapFeatures(table, featureRemap)
578
-
579
-
580
- def remapFeatures(table, featureRemap):
581
- """Go through the scripts list, and remap feature indices."""
582
- for scriptIndex, script in enumerate(table.ScriptList.ScriptRecord):
583
- defaultLangSys = script.Script.DefaultLangSys
584
- if defaultLangSys is not None:
585
- _remapLangSys(defaultLangSys, featureRemap)
586
- for langSysRecordIndex, langSysRec in enumerate(script.Script.LangSysRecord):
587
- langSys = langSysRec.LangSys
588
- _remapLangSys(langSys, featureRemap)
589
-
590
- if hasattr(table, "FeatureVariations") and table.FeatureVariations is not None:
591
- for fvr in table.FeatureVariations.FeatureVariationRecord:
592
- for ftsr in fvr.FeatureTableSubstitution.SubstitutionRecord:
593
- ftsr.FeatureIndex = featureRemap[ftsr.FeatureIndex]
594
-
595
-
596
- def _remapLangSys(langSys, featureRemap):
597
- if langSys.ReqFeatureIndex != 0xFFFF:
598
- langSys.ReqFeatureIndex = featureRemap[langSys.ReqFeatureIndex]
599
- langSys.FeatureIndex = [featureRemap[index] for index in langSys.FeatureIndex]
600
-
601
-
602
- if __name__ == "__main__":
603
- import doctest, sys
604
-
605
- sys.exit(doctest.testmod().failed)
 
 
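The file deleted above is the builder half of fontTools' feature-variations support. As a rough orientation for readers of this diff, the sketch below shows how these builders are normally driven through the module's public entry point; the font path, axis range, and glyph names are made-up examples, and the call signature is quoted from memory rather than taken from this repository.

```python
# Hypothetical usage sketch: substitute "dollar" with "dollar.rvrn" whenever the
# weight axis sits in the upper half of its normalized (-1..1) design space.
from fontTools.ttLib import TTFont
from fontTools.varLib.featureVars import addFeatureVariations

font = TTFont("MyVariable.ttf")  # assumed variable font with an fvar table
conditionalSubstitutions = [
    # (condition set, substitution map); ranges use normalized axis coordinates
    ({"wght": (0.5, 1.0)}, {"dollar": "dollar.rvrn"}),
]
addFeatureVariations(font, conditionalSubstitutions, featureTag="rvrn")
font.save("MyVariable-rvrn.ttf")
```

Roughly speaking, such a call funnels into the builders shown in the deleted code: the substitution maps are deduplicated by makeSubstitutionsHashable, turned into single-substitution lookups by buildSubstitutionLookups, and attached to the GSUB table through the FeatureVariations records.
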
spaces/DeepFloyd/IF/model.py DELETED
@@ -1,313 +0,0 @@
1
- from __future__ import annotations
2
-
3
- import gc
4
- import json
5
- import tempfile
6
- from typing import Generator
7
-
8
- import numpy as np
9
- import PIL.Image
10
- import torch
11
- from diffusers import DiffusionPipeline, StableDiffusionUpscalePipeline
12
- from diffusers.pipelines.deepfloyd_if import (fast27_timesteps,
13
- smart27_timesteps,
14
- smart50_timesteps,
15
- smart100_timesteps,
16
- smart185_timesteps)
17
-
18
- from settings import (DISABLE_AUTOMATIC_CPU_OFFLOAD, DISABLE_SD_X4_UPSCALER,
19
- HF_TOKEN, MAX_NUM_IMAGES, MAX_NUM_STEPS, MAX_SEED,
20
- RUN_GARBAGE_COLLECTION)
21
-
22
-
23
- class Model:
24
- def __init__(self):
25
- self.device = torch.device(
26
- 'cuda:0' if torch.cuda.is_available() else 'cpu')
27
- self.pipe = None
28
- self.super_res_1_pipe = None
29
- self.super_res_2_pipe = None
30
- self.watermark_image = None
31
-
32
- if torch.cuda.is_available():
33
- self.load_weights()
34
- self.watermark_image = PIL.Image.fromarray(
35
- self.pipe.watermarker.watermark_image.to(
36
- torch.uint8).cpu().numpy(),
37
- mode='RGBA')
38
-
39
- def load_weights(self) -> None:
40
- self.pipe = DiffusionPipeline.from_pretrained(
41
- 'DeepFloyd/IF-I-XL-v1.0',
42
- torch_dtype=torch.float16,
43
- variant='fp16',
44
- use_safetensors=True,
45
- use_auth_token=HF_TOKEN)
46
- self.super_res_1_pipe = DiffusionPipeline.from_pretrained(
47
- 'DeepFloyd/IF-II-L-v1.0',
48
- text_encoder=None,
49
- torch_dtype=torch.float16,
50
- variant='fp16',
51
- use_safetensors=True,
52
- use_auth_token=HF_TOKEN)
53
-
54
- if not DISABLE_SD_X4_UPSCALER:
55
- self.super_res_2_pipe = StableDiffusionUpscalePipeline.from_pretrained(
56
- 'stabilityai/stable-diffusion-x4-upscaler',
57
- torch_dtype=torch.float16)
58
-
59
- if DISABLE_AUTOMATIC_CPU_OFFLOAD:
60
- self.pipe.to(self.device)
61
- self.super_res_1_pipe.to(self.device)
62
-
63
- self.pipe.unet.to(memory_format=torch.channels_last)
64
- self.pipe.unet = torch.compile(self.pipe.unet, mode="reduce-overhead", fullgraph=True)
65
-
66
- if not DISABLE_SD_X4_UPSCALER:
67
- self.super_res_2_pipe.to(self.device)
68
- else:
69
- self.pipe.enable_model_cpu_offload()
70
- self.super_res_1_pipe.enable_model_cpu_offload()
71
- if not DISABLE_SD_X4_UPSCALER:
72
- self.super_res_2_pipe.enable_model_cpu_offload()
73
-
74
- def apply_watermark_to_sd_x4_upscaler_results(
75
- self, images: list[PIL.Image.Image]) -> None:
76
- w, h = images[0].size
77
-
78
- stability_x4_upscaler_sample_size = 128
79
-
80
- coef = min(h / stability_x4_upscaler_sample_size,
81
- w / stability_x4_upscaler_sample_size)
82
- img_h, img_w = (int(h / coef), int(w / coef)) if coef < 1 else (h, w)
83
-
84
- S1, S2 = 1024**2, img_w * img_h
85
- K = (S2 / S1)**0.5
86
- watermark_size = int(K * 62)
87
- watermark_x = img_w - int(14 * K)
88
- watermark_y = img_h - int(14 * K)
89
-
90
- watermark_image = self.watermark_image.copy().resize(
91
- (watermark_size, watermark_size),
92
- PIL.Image.Resampling.BICUBIC,
93
- reducing_gap=None)
94
-
95
- for image in images:
96
- image.paste(watermark_image,
97
- box=(
98
- watermark_x - watermark_size,
99
- watermark_y - watermark_size,
100
- watermark_x,
101
- watermark_y,
102
- ),
103
- mask=watermark_image.split()[-1])
104
-
105
- @staticmethod
106
- def to_pil_images(images: torch.Tensor) -> list[PIL.Image.Image]:
107
- images = (images / 2 + 0.5).clamp(0, 1)
108
- images = images.cpu().permute(0, 2, 3, 1).float().numpy()
109
- images = np.round(images * 255).astype(np.uint8)
110
- return [PIL.Image.fromarray(image) for image in images]
111
-
112
- @staticmethod
113
- def check_seed(seed: int) -> None:
114
- if not 0 <= seed <= MAX_SEED:
115
- raise ValueError
116
-
117
- @staticmethod
118
- def check_num_images(num_images: int) -> None:
119
- if not 1 <= num_images <= MAX_NUM_IMAGES:
120
- raise ValueError
121
-
122
- @staticmethod
123
- def check_num_inference_steps(num_steps: int) -> None:
124
- if not 1 <= num_steps <= MAX_NUM_STEPS:
125
- raise ValueError
126
-
127
- @staticmethod
128
- def get_custom_timesteps(name: str) -> list[int] | None:
129
- if name == 'none':
130
- timesteps = None
131
- elif name == 'fast27':
132
- timesteps = fast27_timesteps
133
- elif name == 'smart27':
134
- timesteps = smart27_timesteps
135
- elif name == 'smart50':
136
- timesteps = smart50_timesteps
137
- elif name == 'smart100':
138
- timesteps = smart100_timesteps
139
- elif name == 'smart185':
140
- timesteps = smart185_timesteps
141
- else:
142
- raise ValueError
143
- return timesteps
144
-
145
- @staticmethod
146
- def run_garbage_collection():
147
- gc.collect()
148
- torch.cuda.empty_cache()
149
-
150
- def run_stage1(
151
- self,
152
- prompt: str,
153
- negative_prompt: str = '',
154
- seed: int = 0,
155
- num_images: int = 1,
156
- guidance_scale_1: float = 7.0,
157
- custom_timesteps_1: str = 'smart100',
158
- num_inference_steps_1: int = 100,
159
- ) -> tuple[list[PIL.Image.Image], str, str]:
160
- self.check_seed(seed)
161
- self.check_num_images(num_images)
162
- self.check_num_inference_steps(num_inference_steps_1)
163
-
164
- if RUN_GARBAGE_COLLECTION:
165
- self.run_garbage_collection()
166
-
167
- generator = torch.Generator(device=self.device).manual_seed(seed)
168
-
169
- prompt_embeds, negative_embeds = self.pipe.encode_prompt(
170
- prompt=prompt, negative_prompt=negative_prompt)
171
-
172
- timesteps = self.get_custom_timesteps(custom_timesteps_1)
173
-
174
- images = self.pipe(prompt_embeds=prompt_embeds,
175
- negative_prompt_embeds=negative_embeds,
176
- num_images_per_prompt=num_images,
177
- guidance_scale=guidance_scale_1,
178
- timesteps=timesteps,
179
- num_inference_steps=num_inference_steps_1,
180
- generator=generator,
181
- output_type='pt').images
182
- pil_images = self.to_pil_images(images)
183
- self.pipe.watermarker.apply_watermark(
184
- pil_images, self.pipe.unet.config.sample_size)
185
-
186
- stage1_params = {
187
- 'prompt': prompt,
188
- 'negative_prompt': negative_prompt,
189
- 'seed': seed,
190
- 'num_images': num_images,
191
- 'guidance_scale_1': guidance_scale_1,
192
- 'custom_timesteps_1': custom_timesteps_1,
193
- 'num_inference_steps_1': num_inference_steps_1,
194
- }
195
- with tempfile.NamedTemporaryFile(mode='w', delete=False) as param_file:
196
- param_file.write(json.dumps(stage1_params))
197
- stage1_result = {
198
- 'prompt_embeds': prompt_embeds,
199
- 'negative_embeds': negative_embeds,
200
- 'images': images,
201
- 'pil_images': pil_images,
202
- }
203
- with tempfile.NamedTemporaryFile(delete=False) as result_file:
204
- torch.save(stage1_result, result_file.name)
205
- return pil_images, param_file.name, result_file.name
206
-
207
- def run_stage2(
208
- self,
209
- stage1_result_path: str,
210
- stage2_index: int,
211
- seed_2: int = 0,
212
- guidance_scale_2: float = 4.0,
213
- custom_timesteps_2: str = 'smart50',
214
- num_inference_steps_2: int = 50,
215
- disable_watermark: bool = False,
216
- ) -> PIL.Image.Image:
217
- self.check_seed(seed_2)
218
- self.check_num_inference_steps(num_inference_steps_2)
219
-
220
- if RUN_GARBAGE_COLLECTION:
221
- self.run_garbage_collection()
222
-
223
- generator = torch.Generator(device=self.device).manual_seed(seed_2)
224
-
225
- stage1_result = torch.load(stage1_result_path)
226
- prompt_embeds = stage1_result['prompt_embeds']
227
- negative_embeds = stage1_result['negative_embeds']
228
- images = stage1_result['images']
229
- images = images[[stage2_index]]
230
-
231
- timesteps = self.get_custom_timesteps(custom_timesteps_2)
232
-
233
- out = self.super_res_1_pipe(image=images,
234
- prompt_embeds=prompt_embeds,
235
- negative_prompt_embeds=negative_embeds,
236
- num_images_per_prompt=1,
237
- guidance_scale=guidance_scale_2,
238
- timesteps=timesteps,
239
- num_inference_steps=num_inference_steps_2,
240
- generator=generator,
241
- output_type='pt',
242
- noise_level=250).images
243
- pil_images = self.to_pil_images(out)
244
-
245
- if disable_watermark:
246
- return pil_images[0]
247
-
248
- self.super_res_1_pipe.watermarker.apply_watermark(
249
- pil_images, self.super_res_1_pipe.unet.config.sample_size)
250
- return pil_images[0]
251
-
252
- def run_stage3(
253
- self,
254
- image: PIL.Image.Image,
255
- prompt: str = '',
256
- negative_prompt: str = '',
257
- seed_3: int = 0,
258
- guidance_scale_3: float = 9.0,
259
- num_inference_steps_3: int = 75,
260
- ) -> PIL.Image.Image:
261
- self.check_seed(seed_3)
262
- self.check_num_inference_steps(num_inference_steps_3)
263
-
264
- if RUN_GARBAGE_COLLECTION:
265
- self.run_garbage_collection()
266
-
267
- generator = torch.Generator(device=self.device).manual_seed(seed_3)
268
- out = self.super_res_2_pipe(image=image,
269
- prompt=prompt,
270
- negative_prompt=negative_prompt,
271
- num_images_per_prompt=1,
272
- guidance_scale=guidance_scale_3,
273
- num_inference_steps=num_inference_steps_3,
274
- generator=generator,
275
- noise_level=100).images
276
- self.apply_watermark_to_sd_x4_upscaler_results(out)
277
- return out[0]
278
-
279
- def run_stage2_3(
280
- self,
281
- stage1_result_path: str,
282
- stage2_index: int,
283
- seed_2: int = 0,
284
- guidance_scale_2: float = 4.0,
285
- custom_timesteps_2: str = 'smart50',
286
- num_inference_steps_2: int = 50,
287
- prompt: str = '',
288
- negative_prompt: str = '',
289
- seed_3: int = 0,
290
- guidance_scale_3: float = 9.0,
291
- num_inference_steps_3: int = 75,
292
- ) -> Generator[PIL.Image.Image]:
293
- self.check_seed(seed_3)
294
- self.check_num_inference_steps(num_inference_steps_3)
295
-
296
- out_image = self.run_stage2(
297
- stage1_result_path=stage1_result_path,
298
- stage2_index=stage2_index,
299
- seed_2=seed_2,
300
- guidance_scale_2=guidance_scale_2,
301
- custom_timesteps_2=custom_timesteps_2,
302
- num_inference_steps_2=num_inference_steps_2,
303
- disable_watermark=True)
304
- temp_image = out_image.copy()
305
- self.super_res_1_pipe.watermarker.apply_watermark(
306
- [temp_image], self.super_res_1_pipe.unet.config.sample_size)
307
- yield temp_image
308
- yield self.run_stage3(image=out_image,
309
- prompt=prompt,
310
- negative_prompt=negative_prompt,
311
- seed_3=seed_3,
312
- guidance_scale_3=guidance_scale_3,
313
- num_inference_steps_3=num_inference_steps_3)
 
 
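For context on the three-stage API that this deleted model.py exposed, here is a minimal driver sketch. It assumes a CUDA machine, a valid HF_TOKEN, and the x4 upscaler not being disabled via DISABLE_SD_X4_UPSCALER (otherwise run_stage3 has no pipeline to call); the prompt and settings are arbitrary.

```python
# Rough sketch of the stage-1 -> stage-2 -> stage-3 flow of the deleted Model class.
from model import Model

model = Model()  # weights are loaded in __init__ when CUDA is available

prompt = "a watercolor painting of a lighthouse at dawn"
# Stage 1: base IF model, returns small images plus paths to the saved params/result.
pil_images, params_path, stage1_path = model.run_stage1(prompt=prompt, seed=0, num_images=2)

# Stage 2: upscale one selected stage-1 sample with IF-II-L.
image_mid = model.run_stage2(stage1_path, stage2_index=0, seed_2=0)

# Stage 3: final x4 upscale with the Stable Diffusion upscaler.
image_final = model.run_stage3(image_mid, prompt=prompt, seed_3=0)
image_final.save("lighthouse.png")
```
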
spaces/DylanWolf/h2ogpt-api/app.py DELETED
@@ -1,12 +0,0 @@
1
- import os
2
-
3
- os.system("git clone https://github.com/oobabooga/text-generation-webui.git")
4
-
5
- os.chdir("text-generation-webui")
6
-
7
- os.system("pip install -r requirements.txt")
8
-
9
- with open("input.txt", "w") as f:
10
- f.write("N\n")
11
-
12
- os.system("./start_linux.sh < input.txt")
 
 
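The deleted launcher above chains os.system calls, so a failure in any step is silently ignored. A hedged alternative sketch of the same flow with subprocess (same repository, same piped "N" answer) would look like this:

```python
# Equivalent launcher sketch using subprocess so each step is checked for failure.
import subprocess

repo = "text-generation-webui"
subprocess.run(["git", "clone", "https://github.com/oobabooga/text-generation-webui.git"], check=True)
subprocess.run(["pip", "install", "-r", "requirements.txt"], cwd=repo, check=True)
# Feed the same "N" answer the original script wrote into input.txt.
subprocess.run(["./start_linux.sh"], cwd=repo, input="N\n", text=True, check=True)
```
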
spaces/ECCV2022/bytetrack/yolox/data/data_prefetcher.py DELETED
@@ -1,77 +0,0 @@
1
- #!/usr/bin/env python3
2
- # -*- coding:utf-8 -*-
3
- # Copyright (c) Megvii, Inc. and its affiliates.
4
-
5
- import torch
6
- import torch.distributed as dist
7
-
8
- from yolox.utils import synchronize
9
-
10
- import random
11
-
12
-
13
- class DataPrefetcher:
14
- """
15
- DataPrefetcher is inspired by the code in the following file:
16
- https://github.com/NVIDIA/apex/blob/master/examples/imagenet/main_amp.py
17
- It can speed up your PyTorch dataloader. For more information, please check
18
- https://github.com/NVIDIA/apex/issues/304#issuecomment-493562789.
19
- """
20
-
21
- def __init__(self, loader):
22
- self.loader = iter(loader)
23
- self.stream = torch.cuda.Stream()
24
- self.input_cuda = self._input_cuda_for_image
25
- self.record_stream = DataPrefetcher._record_stream_for_image
26
- self.preload()
27
-
28
- def preload(self):
29
- try:
30
- self.next_input, self.next_target, _, _ = next(self.loader)
31
- except StopIteration:
32
- self.next_input = None
33
- self.next_target = None
34
- return
35
-
36
- with torch.cuda.stream(self.stream):
37
- self.input_cuda()
38
- self.next_target = self.next_target.cuda(non_blocking=True)
39
-
40
- def next(self):
41
- torch.cuda.current_stream().wait_stream(self.stream)
42
- input = self.next_input
43
- target = self.next_target
44
- if input is not None:
45
- self.record_stream(input)
46
- if target is not None:
47
- target.record_stream(torch.cuda.current_stream())
48
- self.preload()
49
- return input, target
50
-
51
- def _input_cuda_for_image(self):
52
- self.next_input = self.next_input.cuda(non_blocking=True)
53
-
54
- @staticmethod
55
- def _record_stream_for_image(input):
56
- input.record_stream(torch.cuda.current_stream())
57
-
58
-
59
- def random_resize(data_loader, exp, epoch, rank, is_distributed):
60
- tensor = torch.LongTensor(1).cuda()
61
- if is_distributed:
62
- synchronize()
63
-
64
- if rank == 0:
65
- if epoch > exp.max_epoch - 10:
66
- size = exp.input_size
67
- else:
68
- size = random.randint(*exp.random_size)
69
- size = int(32 * size)
70
- tensor.fill_(size)
71
-
72
- if is_distributed:
73
- synchronize()
74
- dist.broadcast(tensor, 0)
75
-
76
- input_size = data_loader.change_input_dim(multiple=tensor.item(), random_range=None)
77
- return input_size
 
 
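As a reading aid for the deleted prefetcher, the sketch below shows the loop shape it is meant to be consumed with. The dataloader, model, and loss key are placeholders; the only behaviour taken from the code above is that the loader yields 4-tuples whose first two elements are the image batch and the targets, and that next() returns (None, None) once the loader is exhausted.

```python
# Hypothetical training-loop sketch around DataPrefetcher (CUDA is required,
# since the class creates a torch.cuda.Stream in __init__).
prefetcher = DataPrefetcher(train_loader)   # train_loader: a torch DataLoader (placeholder)
inps, targets = prefetcher.next()
while inps is not None:
    outputs = model(inps, targets)          # placeholder model call returning a loss dict
    loss = outputs["total_loss"]            # assumed key; depends on the model
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    inps, targets = prefetcher.next()       # the next batch was already copied asynchronously
```
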
spaces/ECCV2022/bytetrack/yolox/deepsort_tracker/track.py DELETED
@@ -1,158 +0,0 @@
1
- # vim: expandtab:ts=4:sw=4
2
-
3
-
4
- class TrackState:
5
- """
6
- Enumeration type for the single target track state. Newly created tracks are
7
- classified as `tentative` until enough evidence has been collected. Then,
8
- the track state is changed to `confirmed`. Tracks that are no longer alive
9
- are classified as `deleted` to mark them for removal from the set of active
10
- tracks.
11
- """
12
-
13
- Tentative = 1
14
- Confirmed = 2
15
- Deleted = 3
16
-
17
-
18
- class Track:
19
- """
20
- A single target track with state space `(x, y, a, h)` and associated
21
- velocities, where `(x, y)` is the center of the bounding box, `a` is the
22
- aspect ratio and `h` is the height.
23
- Parameters
24
- ----------
25
- mean : ndarray
26
- Mean vector of the initial state distribution.
27
- covariance : ndarray
28
- Covariance matrix of the initial state distribution.
29
- track_id : int
30
- A unique track identifier.
31
- n_init : int
32
- Number of consecutive detections before the track is confirmed. The
33
- track state is set to `Deleted` if a miss occurs within the first
34
- `n_init` frames.
35
- max_age : int
36
- The maximum number of consecutive misses before the track state is
37
- set to `Deleted`.
38
- feature : Optional[ndarray]
39
- Feature vector of the detection this track originates from. If not None,
40
- this feature is added to the `features` cache.
41
- Attributes
42
- ----------
43
- mean : ndarray
44
- Mean vector of the initial state distribution.
45
- covariance : ndarray
46
- Covariance matrix of the initial state distribution.
47
- track_id : int
48
- A unique track identifier.
49
- hits : int
50
- Total number of measurement updates.
51
- age : int
52
- Total number of frames since first occurrence.
53
- time_since_update : int
54
- Total number of frames since last measurement update.
55
- state : TrackState
56
- The current track state.
57
- features : List[ndarray]
58
- A cache of features. On each measurement update, the associated feature
59
- vector is added to this list.
60
- """
61
-
62
- def __init__(self, mean, covariance, track_id, class_id, n_init, max_age,
63
- feature=None):
64
- self.mean = mean
65
- self.covariance = covariance
66
- self.track_id = track_id
67
- self.class_id = class_id
68
- self.hits = 1
69
- self.age = 1
70
- self.time_since_update = 0
71
-
72
- self.state = TrackState.Tentative
73
- self.features = []
74
- if feature is not None:
75
- self.features.append(feature)
76
-
77
- self._n_init = n_init
78
- self._max_age = max_age
79
-
80
- def to_tlwh(self):
81
- """Get current position in bounding box format `(top left x, top left y,
82
- width, height)`.
83
- Returns
84
- -------
85
- ndarray
86
- The bounding box.
87
- """
88
- ret = self.mean[:4].copy()
89
- ret[2] *= ret[3]
90
- ret[:2] -= ret[2:] / 2
91
- return ret
92
-
93
- def to_tlbr(self):
94
- """Get current position in bounding box format `(min x, min y, max x,
95
- max y)`.
96
- Returns
97
- -------
98
- ndarray
99
- The bounding box.
100
- """
101
- ret = self.to_tlwh()
102
- ret[2:] = ret[:2] + ret[2:]
103
- return ret
104
-
105
- def increment_age(self):
106
- self.age += 1
107
- self.time_since_update += 1
108
-
109
- def predict(self, kf):
110
- """Propagate the state distribution to the current time step using a
111
- Kalman filter prediction step.
112
- Parameters
113
- ----------
114
- kf : kalman_filter.KalmanFilter
115
- The Kalman filter.
116
- """
117
- self.mean, self.covariance = kf.predict(self.mean, self.covariance)
118
- self.increment_age()
119
-
120
- def update(self, kf, detection):
121
- """Perform Kalman filter measurement update step and update the feature
122
- cache.
123
- Parameters
124
- ----------
125
- kf : kalman_filter.KalmanFilter
126
- The Kalman filter.
127
- detection : Detection
128
- The associated detection.
129
- """
130
- self.mean, self.covariance = kf.update(
131
- self.mean, self.covariance, detection.to_xyah())
132
- self.features.append(detection.feature)
133
-
134
- self.hits += 1
135
- self.time_since_update = 0
136
- if self.state == TrackState.Tentative and self.hits >= self._n_init:
137
- self.state = TrackState.Confirmed
138
-
139
- def mark_missed(self):
140
- """Mark this track as missed (no association at the current time step).
141
- """
142
- if self.state == TrackState.Tentative:
143
- self.state = TrackState.Deleted
144
- elif self.time_since_update > self._max_age:
145
- self.state = TrackState.Deleted
146
-
147
- def is_tentative(self):
148
- """Returns True if this track is tentative (unconfirmed).
149
- """
150
- return self.state == TrackState.Tentative
151
-
152
- def is_confirmed(self):
153
- """Returns True if this track is confirmed."""
154
- return self.state == TrackState.Confirmed
155
-
156
- def is_deleted(self):
157
- """Returns True if this track is dead and should be deleted."""
158
- return self.state == TrackState.Deleted
 
 
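To make the state machine of the deleted Track class easier to follow, here is a toy lifecycle sketch. The KalmanFilter import path and its initiate() signature are assumptions based on the upstream deep_sort layout this file was adapted from; the measurement values are arbitrary.

```python
# Toy sketch: a brand-new (tentative) track that never gets matched again.
import numpy as np
from yolox.deepsort_tracker.kalman_filter import KalmanFilter  # assumed sibling module

kf = KalmanFilter()
mean, covariance = kf.initiate(np.array([320.0, 240.0, 0.5, 80.0]))  # (x, y, a, h)
track = Track(mean, covariance, track_id=1, class_id=0, n_init=3, max_age=30)

track.predict(kf)          # propagate the state one frame ahead
track.mark_missed()        # no detection was associated this frame
print(track.is_deleted())  # True: a still-tentative track is dropped on its first miss
```

With matched detections instead, update() would be called each frame and the track would flip to Confirmed once hits reaches n_init.
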
spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.cpp DELETED
@@ -1,46 +0,0 @@
1
- /*!
2
- **************************************************************************************************
3
- * Deformable DETR
4
- * Copyright (c) 2020 SenseTime. All Rights Reserved.
5
- * Licensed under the Apache License, Version 2.0 [see LICENSE for details]
6
- **************************************************************************************************
7
- * Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
8
- **************************************************************************************************
9
- */
10
-
11
- /*!
12
- * Copyright (c) Facebook, Inc. and its affiliates.
13
- * Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR
14
- */
15
-
16
- #include <vector>
17
-
18
- #include <ATen/ATen.h>
19
- #include <ATen/cuda/CUDAContext.h>
20
-
21
-
22
- at::Tensor
23
- ms_deform_attn_cpu_forward(
24
- const at::Tensor &value,
25
- const at::Tensor &spatial_shapes,
26
- const at::Tensor &level_start_index,
27
- const at::Tensor &sampling_loc,
28
- const at::Tensor &attn_weight,
29
- const int im2col_step)
30
- {
31
- AT_ERROR("Not implemented on CPU");
32
- }
33
-
34
- std::vector<at::Tensor>
35
- ms_deform_attn_cpu_backward(
36
- const at::Tensor &value,
37
- const at::Tensor &spatial_shapes,
38
- const at::Tensor &level_start_index,
39
- const at::Tensor &sampling_loc,
40
- const at::Tensor &attn_weight,
41
- const at::Tensor &grad_output,
42
- const int im2col_step)
43
- {
44
- AT_ERROR("Not implemented on CPU");
45
- }
46
-
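Both CPU entry points in this deleted file simply raise, so on CPU the multi-scale deformable attention has to fall back to plain PyTorch. The sketch below is the grid_sample-based reference computation, reproduced from memory from the upstream Deformable DETR code these kernels are modified from; treat the exact shapes and the function name as assumptions rather than as this repository's API.

```python
# Pure-PyTorch sketch of multi-scale deformable attention (CPU-friendly, no custom op).
# value:                (N, S, M, D)        flattened multi-level features, S = sum(H*W)
# value_spatial_shapes: list of (H, W) per feature level
# sampling_locations:   (N, Lq, M, L, P, 2) in normalized [0, 1] coordinates
# attention_weights:    (N, Lq, M, L, P)
import torch
import torch.nn.functional as F

def ms_deform_attn_pytorch(value, value_spatial_shapes, sampling_locations, attention_weights):
    N, S, M, D = value.shape
    _, Lq, M, L, P, _ = sampling_locations.shape
    value_list = value.split([H * W for H, W in value_spatial_shapes], dim=1)
    sampling_grids = 2 * sampling_locations - 1          # grid_sample expects [-1, 1]
    sampling_value_list = []
    for level, (H, W) in enumerate(value_spatial_shapes):
        # (N, H*W, M, D) -> (N*M, D, H, W)
        value_l = value_list[level].flatten(2).transpose(1, 2).reshape(N * M, D, H, W)
        # (N, Lq, M, P, 2) -> (N*M, Lq, P, 2)
        grid_l = sampling_grids[:, :, :, level].transpose(1, 2).flatten(0, 1)
        # bilinear sampling of P points per query and head -> (N*M, D, Lq, P)
        sampling_value_list.append(
            F.grid_sample(value_l, grid_l, mode="bilinear",
                          padding_mode="zeros", align_corners=False))
    # (N, Lq, M, L, P) -> (N*M, 1, Lq, L*P)
    attention_weights = attention_weights.transpose(1, 2).reshape(N * M, 1, Lq, L * P)
    output = (torch.stack(sampling_value_list, dim=-2).flatten(-2)
              * attention_weights).sum(-1).view(N, M * D, Lq)
    return output.transpose(1, 2).contiguous()            # (N, Lq, M*D)
```

The CUDA kernels that accompany this file compute the same weighted bilinear sampling; the Python fallback is mainly useful for debugging and for machines without a compiled extension.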