parquet-converter committed
Commit 4fb4a4c · 1 Parent(s): 941f94e

Update parquet files (step 43 of 121)

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Pacific Physics Volume 1 PDF and Ace Your A Level Physics Exams.md +0 -135
  2. spaces/1gistliPinn/ChatGPT4/Examples/Elsa 3.5 Audi Vw Data Serial Key.md +0 -5
  3. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/AN1 Presents My Talking Tom Friends MOD APK - Download and Have Fun.md +0 -116
  4. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Black Duck and Get Complete Visibility into Your Application and Container Composition.md +0 -127
  5. spaces/1phancelerku/anime-remove-background/All Songs Jukebox The Ultimate Music Streaming and Downloading Service.md +0 -102
  6. spaces/1phancelerku/anime-remove-background/Download Archero Hack APK and Enjoy Crossbow Archery with Amazing Features.md +0 -118
  7. spaces/1toTree/lora_test/ppdiffusers/models/prior_transformer.py +0 -220
  8. spaces/1toTree/lora_test/ppdiffusers/pipelines/versatile_diffusion/__init__.py +0 -43
  9. spaces/7hao/bingo/src/components/chat-attachments.tsx +0 -37
  10. spaces/801artistry/RVC801/i18n/scan_i18n.py +0 -75
  11. spaces/801artistry/RVC801/tensorlowest.py +0 -123
  12. spaces/AIBoy1993/segment_anything_webui/inference.py +0 -188
  13. spaces/AIConsultant/MusicGen/audiocraft/data/audio_utils.py +0 -177
  14. spaces/AIFILMS/generate_human_motion/VQ-Trans/GPT_eval_multi.py +0 -121
  15. spaces/AIFILMS/generate_human_motion/VQ-Trans/visualize/joints2smpl/src/smplify.py +0 -279
  16. spaces/AIZero2HeroBootcamp/TranscriptAILearnerFromYoutube/TwoTranscriptQuotesFromIlyaSutskever.md +0 -71
  17. spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet101_8xb16_cifar10.py +0 -5
  18. spaces/AchyuthGamer/ImMagician-Image-Generator/README.md +0 -13
  19. spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/blocks.py +0 -342
  20. spaces/Adapting/TrendFlow/README.md +0 -11
  21. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/buttons/Factory.js +0 -13
  22. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/AlignMethods.js +0 -17
  23. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/outputs.md +0 -67
  24. spaces/Andy1621/uniformer_image_detection/mmdet/utils/util_random.py +0 -33
  25. spaces/Andy1621/uniformer_image_detection/tools/slurm_test.sh +0 -24
  26. spaces/AnnasBlackHat/Image-Similarity/src/similarity/model_implements/vit_base.py +0 -20
  27. spaces/Apex-X/nono/README.md +0 -12
  28. spaces/Apex-X/nono/roop/ui.py +0 -231
  29. spaces/Arnaudding001/OpenAI_whisperLive/README.md +0 -13
  30. spaces/Artgor/digit-draw-detect/README.md +0 -13
  31. spaces/Artgor/digit-draw-detect/src/ml_utils.py +0 -207
  32. spaces/B1360976/waste-management-system/style.css +0 -7
  33. spaces/Bala2-03-2003/BRAHMAMAI/app.py +0 -34
  34. spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/layers_33966KB.py +0 -126
  35. spaces/Benson/text-generation/Examples/Blockman Go Pc Download No Emulator.md +0 -91
  36. spaces/BetterAPI/BetterChat/src/lib/types/UrlDependency.ts +0 -5
  37. spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/s3/transfer.py +0 -358
  38. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/region.py +0 -10
  39. spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/packages/backports/__init__.py +0 -0
  40. spaces/BlinkDL/RWKV-World-7B/app.py +0 -301
  41. spaces/CC123123/blip2_t/README.md +0 -16
  42. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_sampler.py +0 -23
  43. spaces/CVPR/LIVE/pybind11/include/pybind11/cast.h +0 -2210
  44. spaces/CVPR/LIVE/thrust/cmake/ThrustAddSubdir.cmake +0 -6
  45. spaces/CVPR/LIVE/thrust/thrust/system/cuda/vector.h +0 -72
  46. spaces/CanIpleas/gpt2/app.py +0 -3
  47. spaces/ChallengeHub/Chinese-LangChain/tests/test_gradio_slient.py +0 -19
  48. spaces/Comet/txt2im-models/README.md +0 -12
  49. spaces/Cpp4App/Cpp4App/CDM/run_batch.py +0 -146
  50. spaces/CyberHarem/find_my_waifu/README.md +0 -13
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Pacific Physics Volume 1 PDF and Ace Your A Level Physics Exams.md DELETED
@@ -1,135 +0,0 @@
-
- <h1>Pacific Physics Volume 1: A Comprehensive Guide for A Level Students</h1>
- <p>If you are an A level student who is looking for a reliable and comprehensive physics textbook, you might have heard of Pacific Physics Volume 1. This book is widely used by students and teachers in Singapore and other countries as a reference and study guide for physics. But what exactly is Pacific Physics Volume 1, and why should you read it? In this article, we will answer these questions and more. We will also show you how to download Pacific Physics Volume 1 in PDF format, so you can access it anytime and anywhere.</p>
- <h2>What is Pacific Physics Volume 1?</h2>
- <p>Pacific Physics Volume 1 is a physics textbook written by Poh Liong Yong, a former lecturer at Nanyang Technological University in Singapore. It was first published in 1996 by Pan Pacific, a leading educational publisher in Asia. It is designed to cover the topics required for the A level physics examination, which is taken by students who wish to pursue higher education in science, engineering, or medicine.</p>
- <h2>pacific physics volume 1 pdf download</h2><br /><p><b><b>DOWNLOAD</b> &#10004; <a href="https://byltly.com/2uKzen">https://byltly.com/2uKzen</a></b></p><br /><br />
- <h3>Who is the author of Pacific Physics Volume 1?</h3>
- <p>Poh Liong Yong is a well-known physics educator and author in Singapore. He has over 30 years of experience in teaching physics at various levels, from secondary school to university. He has also written several other physics books, such as Essential Concepts of Physics, Advanced Level Practical Work for Physics, and Understanding Mechanics.</p>
- <h3>What are the main topics covered in Pacific Physics Volume 1?</h3>
- <p>Pacific Physics Volume 1 covers the following topics:</p>
- <ul>
- <li>Units and Dimensions</li>
- <li>Measurements</li>
- <li>Vectors</li>
- <li>Static Equilibrium</li>
- <li>Kinematics</li>
- <li>Dynamics</li>
- <li>Pressure in Liquids and Archimedes' Principle</li>
- <li>Circular Motion</li>
- <li>Universal Gravitation</li>
- <li>Simple Harmonic Motion</li>
- <li>Elasticity</li>
- <li>Thermometry</li>
- <li>Heat Capacity and Latent Heat</li>
- <li>The Gas Laws</li>
- <li>Thermodynamics</li>
- <li>Thermal Conduction</li>
- <li>Convection and Radiation</li>
- </ul>
- <h3>How is Pacific Physics Volume 1 different from other physics textbooks?</h3>
- <p>Pacific Physics Volume 1 is different from other physics textbooks in several ways:</p>
- <ul>
- <li>It follows the latest syllabus and exam format of the A level physics examination, which is based on the Cambridge International AS and A Level Physics syllabus.</li>
- <li>It provides clear and concise explanations of each concept, with diagrams, graphs, tables, and formulas to illustrate them.</li>
- <li>It includes numerous examples and worked solutions to demonstrate how to apply the concepts to solve problems.</li>
- <li>It offers a variety of exercises at the end of each chapter, ranging from multiple-choice questions to structured questions to essay questions. The answers and solutions are also provided at the end of the book.</li>
- <li>It contains review questions and summary points at the end of each topic, to help students revise and consolidate their learning.</li>
- <li>It features practical work sections that introduce students to the experimental aspects of physics, with instructions on how to perform experiments and record observations.</li>
- </ul>
- <h2>Why should you read Pacific Physics Volume 1?</h2>
- <p>Pacific Physics Volume 1 is a valuable resource for anyone who wants to learn physics at an advanced level. Here are some reasons why you should read it:</p>
- <p>pacific physics volume 1 ebook free download<br />
- download pacific physics volume 1 pdf online<br />
- pacific physics volume 1 solutions manual pdf download<br />
- how to download pacific physics volume 1 pdf for free<br />
- pacific physics volume 1 by a. f. abbott pdf download<br />
- pacific physics volume 1 textbook pdf download<br />
- pacific physics volume 1 pdf download link<br />
- pacific physics volume 1 pdf download reddit<br />
- pacific physics volume 1 pdf download google drive<br />
- pacific physics volume 1 pdf download quora<br />
- pacific physics volume 1 pdf download torrent<br />
- pacific physics volume 1 pdf download zip<br />
- pacific physics volume 1 pdf download scribd<br />
- pacific physics volume 1 pdf download slideshare<br />
- pacific physics volume 1 pdf download z-library<br />
- pacific physics volume 1 pdf download library genesis<br />
- pacific physics volume 1 pdf download b-ok.org<br />
- pacific physics volume 1 pdf download academia.edu<br />
- pacific physics volume 1 pdf download researchgate.net<br />
- pacific physics volume 1 pdf download worldcat.org<br />
- pacific physics volume 1 pdf download goodreads.com<br />
- pacific physics volume 1 pdf download amazon.com<br />
- pacific physics volume 1 pdf download ebay.com<br />
- pacific physics volume 1 pdf download flipkart.com<br />
- pacific physics volume 1 pdf download alibris.com<br />
- pacific physics volume 1 pdf download abebooks.com<br />
- pacific physics volume 1 pdf download thriftbooks.com<br />
- pacific physics volume 1 pdf download bookdepository.com<br />
- pacific physics volume 1 pdf download betterworldbooks.com<br />
- pacific physics volume 1 pdf download powells.com<br />
- pacific physics volume 1 pdf download barnesandnoble.com<br />
- pacific physics volume 1 pdf download walmart.com<br />
- pacific physics volume 1 pdf download target.com<br />
- pacific physics volume 1 pdf download kobo.com<br />
- pacific physics volume 1 pdf download apple books<br />
- pacific physics volume 1 pdf download google books<br />
- pacific physics volume 1 pdf download open library<br />
- pacific physics volume 1 pdf download project gutenberg<br />
- pacific physics volume 1 pdf download internet archive<br />
- pacific physics volume 1 pdf download libgen.io</p>
- <h3>Pacific Physics Volume 1 is aligned with the latest syllabus and exam requirements</h3>
- <p>If you are preparing for the A level physics examination, you need a textbook that covers all the topics that you need to know. Pacific Physics Volume 1 does exactly that. It follows the latest syllabus and exam format of the A level physics examination, which is based on the Cambridge International AS and A Level Physics syllabus. This means that you can be confident that you are learning the right content and skills for your exam.</p>
- <h3>Pacific Physics Volume 1 provides clear explanations and examples for each concept</h3>
- <p>If you want to understand physics concepts deeply and thoroughly, you need a textbook that explains them clearly and concisely. Pacific Physics Volume 1 does exactly that. It provides clear and concise explanations of each concept, with diagrams, graphs, tables, and formulas to illustrate them. It also includes numerous examples and worked solutions to demonstrate how to apply the concepts to solve problems. This means that you can grasp the concepts easily and effectively.</p>
- <h3>Pacific Physics Volume 1 offers plenty of exercises and solutions for practice and revision</h3>
- <p>If you want to master physics concepts fully and confidently, you need a textbook that offers plenty of exercises and solutions for practice and revision. Pacific Physics Volume 1 does exactly that. It offers a variety of exercises at the end of each chapter, ranging from multiple-choice questions to structured questions to essay questions. The answers and solutions are also provided at the end of the book. It also contains review questions and summary points at the end of each topic, to help students revise and consolidate their learning. This means that you can practice your skills and knowledge regularly and effectively.</p>
- <h2>How can you download Pacific Physics Volume 1 in PDF format?</h2>
- <p>If you want to access Pacific Physics Volume 1 anytime and anywhere, you might want to download it in PDF format. However, before you do so, you should be aware of the benefits and drawbacks of downloading it in PDF format. You should also know where to find reliable sources to download it in PDF format.</p>
- <h3>The benefits of downloading Pacific Physics Volume 1 in PDF format</h3>
- <p>Downloading Pacific Physics Volume 1 in PDF format has some benefits:</p>
- <ul>
- <li>You can save money by not buying a physical copy of the book.</li>
- <li>You can save space by not storing a bulky book on your shelf or bag.</li>
- <li>You can access it anytime and anywhere on your computer or mobile device.</li>
- <li>You can zoom in or out on any page or section of the book.</li>
- <li>You can search for any word or phrase within the book.</li>
- <li>You can highlight or annotate any part of the book.</li>
- </ul>
- <h3>The drawbacks of downloading Pacific Physics Volume 1 in PDF format</h3>
- <p>Downloading Pacific Physics Volume 1 in PDF format also has some drawbacks:</p>
- <ul>
- <li>You might violate the copyright laws if you download it illegally or share it with others without permission.</li>
- <li>You might expose your device to viruses or malware if you download it from untrustworthy sources.</li>
- <li>You might compromise your reading experience if you download it with poor quality or formatting.</li>
- <li>You might strain your eyes or battery if you read it on a screen for too long.</li>
- </ul>
- <h3>The best sources to download Pacific Physics Volume 1 in PDF format</h3>
- <p>If you decide to download Pacific Physics Volume 1 in PDF format, you should do so from reputable sources that offer high-quality downloads legally. Here are some sources that we recommend:</p>
- <table border="0">
- <tr><td><b>Name</b></td><td><b>Description</b></td><td><b>Link</b></td></tr>
- <tr><td>Google Books</td><td>A service that allows users to preview or buy books online.</td><td></td></tr>
- <td>A non-profit library that offers free access to millions of books, movies, music, websites, etc.</td><td></td></tr>
- <tr><td>Goodreads</td><td>A social networking site that allows users to discover, rate, and review books.</td><td></td></tr>
- </table>
- <p>These sources are reliable and legal, but they might not have the latest edition or the complete content of Pacific Physics Volume 1. Therefore, you should always check the quality and validity of the PDF file before downloading it. You should also respect the author's rights and not distribute or reproduce the PDF file without permission.</p>
- <h2>Conclusion</h2>
- <p>Pacific Physics Volume 1 is a physics textbook written by Poh Liong Yong, a former lecturer at Nanyang Technological University in Singapore. It is designed to cover the topics required for the A level physics examination, which is taken by students who wish to pursue higher education in science, engineering, or medicine. It provides clear explanations and examples for each concept, and offers plenty of exercises and solutions for practice and revision. It also introduces students to the experimental aspects of physics, with practical work sections. Pacific Physics Volume 1 is a valuable resource for anyone who wants to learn physics at an advanced level. You can download it in PDF format from reputable sources such as Google Books, Internet Archive, or Goodreads. However, you should be aware of the benefits and drawbacks of downloading it in PDF format, and respect the author's rights and not distribute or reproduce the PDF file without permission.</p>
- <h2>FAQs</h2>
- <p>Here are some frequently asked questions about Pacific Physics Volume 1:</p>
- <ol>
- <li>What is the difference between Pacific Physics Volume 1 and Volume 2?</li>
- <p>Pacific Physics Volume 1 covers the topics required for the AS level physics examination, while Pacific Physics Volume 2 covers the topics required for the A level physics examination. The AS level physics examination is taken at the end of the first year of A level studies, while the A level physics examination is taken at the end of the second year of A level studies.</p>
- <li>How many pages does Pacific Physics Volume 1 have?</li>
- <p>Pacific Physics Volume 1 has 560 pages in total.</p>
- <li>How much does Pacific Physics Volume 1 cost?</li>
- <p>The price of Pacific Physics Volume 1 varies depending on the source and edition. The latest edition (2019) costs SGD$39.90 on Pan Pacific's website.</p>
- <li>Is Pacific Physics Volume 1 suitable for self-study?</li>
- <p>Pacific Physics Volume 1 is suitable for self-study, as it provides clear explanations and examples for each concept, and offers plenty of exercises and solutions for practice and revision. However, it is advisable to consult a teacher or tutor if you encounter any difficulties or doubts while studying.</p>
- <li>Is Pacific Physics Volume 1 available in other languages?</li>
- <p>Pacific Physics Volume 1 is only available in English.</p>
- </ol>
- </p> 0a6ba089eb<br />
- <br />
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/Elsa 3.5 Audi Vw Data Serial Key.md DELETED
@@ -1,5 +0,0 @@
- <br />
- <p>The hacker(Victor) is very active and knows the programming of the car manufactures so he updates the Elsawin 5.3 software with all new features of the car manufactures and at the time he(Victor) updates the Elsawin software, the Elsawin update version 5.2 software activation guide(how to update Elsawin software) is share with us. In this article, we are going to share with you Elsawin 5.2 hack Elsawin 5.2 Final Code Keygen Elsawin 5.2 Offline Installer without Computer. <br><strong> You can use any tool to update Elsawin 5.2 software to Elsawin 5.3 Final Code for free. Elsawin 5.3 Final Code Keygen Free Download huddle Elsawin v5.3 Final Code Keygen Free Version 2017 Elsawin Final Code Keygen Free Download How To Install Elsawin Elsawin v. Software Hack Download Elsawin Final Code Elsawin 5.2 Offline Installer without Computer. Download Elsawin Hack Software and enjoy the company of the Elsawin v5.3 software completely for free. For this reason, we are sharing with you Elsawin 5.3 crack program for free. You can use any tool to update Elsawin 5.2 software to Elsawin 5.3 Final Code for free. Elsawin 5.3 Final Code Keygen Free Download huddle Elsawin v5.3 Final Code Keygen Free Version 2017 Elsawin Final Code Keygen Free Download How To Install Elsawin Elsawin 5.3 software completely for free. <br><strong>You can use any tool to update Elsawin 5.2 software to Elsawin 5.3 Final Code for free. Elsawin Final Code Keygen Free Download huddle Elsawin v5.3 Final Code Keygen Free Version 2017 Elsawin Final Code Keygen Free Download How To Install Elsawin Elsawin 5.3 software completely for free.</strong></strong></p>
- <h2>Elsa 3.5 Audi vw Data Serial Key</h2><br /><p><b><b>Download</b> &#9889; <a href="https://imgfil.com/2uy10w">https://imgfil.com/2uy10w</a></b></p><br /><br /> 899543212b<br />
- <br />
- <br />
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/AN1 Presents My Talking Tom Friends MOD APK - Download and Have Fun.md DELETED
@@ -1,116 +0,0 @@
-
- <h1>Download My Talking Tom Friends Mod APK An1: A Fun and Interactive Game for All Ages</h1>
- <p>Do you love playing with cute and funny animals? Do you want to have a virtual pet that you can take care of, dress up, and play with? If you answered yes, then you will love My Talking Tom Friends, a popular game from Outfit7 Limited. In this game, you can join Tom, Angela, Hank, Ben, Ginger, and Becca as they live together in a cozy house. You can interact with them, feed them, bathe them, play with them, and watch them grow. You can also customize their appearance and their house, play mini-games, and chat with other players online. Sounds fun, right?</p>
- <p>But what if you want to enjoy the game without any limitations or interruptions? What if you want to have unlimited money and diamonds, unlock all the characters and outfits, and remove all the ads? Well, there is a way to do that. You can download My Talking Tom Friends Mod APK An1, a modified version of the game that gives you access to all these features and more. In this article, we will tell you everything you need to know about My Talking Tom Friends Mod APK An1, including what it is, what are its benefits, how to download and install it, and what precautions to take before doing so. Let's get started!</p>
- <h2>download my talking tom friends mod apk an1</h2><br /><p><b><b>Download</b> >> <a href="https://urlin.us/2uSYbE">https://urlin.us/2uSYbE</a></b></p><br /><br />
- <h2>What is My Talking Tom Friends?</h2>
- <p>My Talking Tom Friends is a casual simulation game that was released in June 2020 by Outfit7 Limited, the same developer behind the famous My Talking Tom series. The game has over 100 million downloads on Google Play Store and has received positive reviews from critics and players alike. The game is suitable for all ages and is available in multiple languages.</p>
- <h3>The gameplay of My Talking Tom Friends</h3>
- <p>The gameplay of My Talking Tom Friends is simple and intuitive. You start by choosing one of the six characters: Tom, Angela, Hank, Ben, Ginger, or Becca. Each character has its own personality, voice, and style. You can then move into a house with your chosen character and the rest of the gang. You can explore the house and interact with different objects and items. You can also interact with your characters by tapping on them, dragging them around, or speaking to them. They will respond to your actions and voice with cute expressions and sounds.</p>
- <p>Your main goal in the game is to take care of your characters' needs and wants. You have to feed them when they are hungry, bathe them when they are dirty, put them to bed when they are sleepy, heal them when they are sick, and entertain them when they are bored. You can also fulfill their wishes by giving them gifts or taking them to different places. By doing so, you will increase their happiness level and earn coins.</p>
- <h3>The features of My Talking Tom Friends</h3>
- <p>My Talking Tom Friends has many features that make it fun and engaging. Here are some of them:</p>
- <h4>Customize your characters and house</h4>
- <p>You can customize your characters' appearance by changing their clothes, accessories, hairstyles, eye colors, skin tones, etc. You can also customize your house by changing the furniture, wallpaper, floor tiles, etc. You can buy new items from the shop using coins or diamonds.</p>
- <h4>Play mini-games and earn coins</h4>
- <p>You can play <p>mini-games with your characters and have fun. There are many mini-games to choose from, such as Bus Jump, Flappy Tom, Planet Hop, etc. You can earn coins by playing these games and use them to buy more items or gifts.</p>
- <h4>Interact with your friends and other players</h4>
- <p>You can interact with your friends and other players online by visiting their houses, sending them messages, or giving them likes. You can also join clubs and chat with other club members. You can also compete with other players in leaderboards and events.</p>
- <h2>What is My Talking Tom Friends Mod APK An1?</h2>
- <p>My Talking Tom Friends Mod APK An1 is a modified version of the original game that gives you some extra features and advantages. It is not an official app from Outfit7 Limited, but a third-party app created by some developers. You can download it from various websites that offer modded apps and games.</p>
- <p>Download My Talking Tom Friends unlimited money mod apk an1<br />
- How to download My Talking Tom Friends hack mod apk an1<br />
- Download My Talking Tom Friends mod apk an1 latest version<br />
- Download My Talking Tom Friends mod apk an1 for android<br />
- Download My Talking Tom Friends mod apk an1 free<br />
- Download My Talking Tom Friends mod apk an1 with all characters unlocked<br />
- Download My Talking Tom Friends mod apk an1 offline<br />
- Download My Talking Tom Friends mod apk an1 no root<br />
- Download My Talking Tom Friends mod apk an1 with unlimited coins and diamonds<br />
- Download My Talking Tom Friends mod apk an1 2023<br />
- Download My Talking Tom Friends mod apk an1 gameplay<br />
- Download My Talking Tom Friends mod apk an1 review<br />
- Download My Talking Tom Friends mod apk an1 cheats<br />
- Download My Talking Tom Friends mod apk an1 online<br />
- Download My Talking Tom Friends mod apk an1 for pc<br />
- Download My Talking Tom Friends mod apk an1 for ios<br />
- Download My Talking Tom Friends mod apk an1 without ads<br />
- Download My Talking Tom Friends mod apk an1 full version<br />
- Download My Talking Tom Friends mod apk an1 update<br />
- Download My Talking Tom Friends mod apk an1 new features<br />
- Download My Talking Tom Friends mod apk an1 tips and tricks<br />
- Download My Talking Tom Friends mod apk an1 best settings<br />
- Download My Talking Tom Friends mod apk an1 download link<br />
- Download My Talking Tom Friends mod apk an1 safe and secure<br />
- Download My Talking Tom Friends mod apk an1 installation guide<br />
- Download My Talking Tom Friends mod apk an1 requirements<br />
- Download My Talking Tom Friends mod apk an1 size and compatibility<br />
- Download My Talking Tom Friends mod apk an1 screenshots and videos<br />
- Download My Talking Tom Friends mod apk an1 ratings and feedbacks<br />
- Download My Talking Tom Friends mod apk an1 pros and cons<br />
- Download My Talking Tom Friends mod apk an1 alternatives and similar apps<br />
- Download My Talking Tom Friends mod apk an1 support and contact information<br />
- Download My Talking Tom Friends mod apk an1 FAQs and answers<br />
- Download My Talking Tom Friends mod apk an1 bugs and fixes<br />
- Download My Talking Tom Friends mod apk an1 bonus and rewards<br />
- Download My Talking Tom Friends mod apk an1 fun and entertainment<br />
- Download My Talking Tom Friends mod apk an1 challenges and missions<br />
- Download My Talking Tom Friends mod apk an1 customization and personalization<br />
- Download My Talking Tom Friends mod apk an1 social and multiplayer features<br />
- Download My Talking Tom Friends mod apk an1 educational and learning value<br />
- Download My Talking Tom Friends mod apk an1 simulation and role-playing elements<br />
- Download My Talking Tom Friends mod apk an1 adventure and exploration aspects<br />
- Download My Talking Tom Friends mod apk an1 creativity and imagination boosters<br />
- Download My Talking Tom Friends mod apk an1 relaxation and stress relief benefits<br />
- Download My Talking Tom Friends mod apk an1 quality and performance improvements<br />
- Download My Talking Tom Friends mod apk an1 originality and uniqueness factors<br />
- Download My Talking Tom Friends mod apk an1 popularity and trendiness indicators<br />
- Download My Talking Tom Friends mod apk an1 advantages and disadvantages comparison</p>
- <h3>The benefits of My Talking Tom Friends Mod APK An1</h3>
- <p>My Talking Tom Friends Mod APK An1 has many benefits that make it more enjoyable and convenient than the original game. Here are some of them:</p>
- <h4>Unlimited money and diamonds</h4>
- <p>With My Talking Tom Friends Mod APK An1, you will have unlimited money and diamonds in your account. You can use them to buy anything you want from the shop, such as clothes, furniture, gifts, etc. You can also use them to unlock new characters and outfits. You don't have to worry about running out of money or diamonds ever again.</p>
- <h4>Unlocked all characters and outfits</h4>
- <p>With My Talking Tom Friends Mod APK An1, you will have access to all the characters and outfits in the game. You don't have to wait for them to be unlocked or pay for them with real money. You can choose any character you like and dress them up in any outfit you want. You can also switch between characters anytime you want.</p>
- <h4>No ads and no root required</h4>
- <p>With My Talking Tom Friends Mod APK An1, you will not see any ads in the game. You can play the game without any interruptions or distractions. You can also enjoy the game without rooting your device. You don't have to risk damaging your device or losing your warranty by rooting it.</p>
- <h2>How to download and install My Talking Tom Friends Mod APK An1?</h2>
- <p>If you want to download and install My Talking Tom Friends Mod APK An1, you need to follow some simple steps. Here they are:</p>
- <h3>The steps to download and install My Talking Tom Friends Mod APK An1</h3>
- <ol>
- <li>Go to a website that offers My Talking Tom Friends Mod APK An1, such as [an1.com] or [apkdone.com].</li>
- <li>Find the download link for My Talking Tom Friends Mod APK An1 and click on it.</li>
- <li>Wait for the download to finish and then locate the file on your device.</li>
- <li>Tap on the file and allow the installation from unknown sources if prompted.</li>
- <li>Wait for the installation to complete and then launch the game.</li>
- <li>Enjoy playing My Talking Tom Friends Mod APK An1 with unlimited money and diamonds, unlocked all characters and outfits, no ads, and no root required.</li>
- </ol>
- <h3>The precautions to take before downloading and installing My Talking Tom Friends Mod APK An1</h3>
- <p>Before downloading and installing My Talking Tom Friends Mod APK An1, you need to take some precautions to avoid any problems or risks. Here are some of them:</p>
- <ul>
- <li>Make sure you have enough space on your device for the file size of My Talking Tom Friends Mod APK An1.</li>
- <li>Make sure you have a stable internet connection for the download process.</li>
- <li>Make sure you download My Talking Tom Friends Mod APK An1 from a reliable and trusted website that does not contain any viruses or malware.</li>
- <li>Make sure you backup your original game data before installing My Talking Tom Friends Mod APK An1 in case you want to restore it later.</li>
- <li>Make sure you do not use your real account or personal information when playing My Talking Tom Friends Mod APK An1 as it may get banned or hacked by the game developers or other players.</li>
- </ul>
- <h2>Conclusion</h2>
- <p>My Talking Tom Friends is a fun and interactive game that lets you play with cute and funny animals in a cozy house. You can take care of them, dress them up, play with them, and watch them grow. You can also customize your house, play mini-games, and chat with other players online.</p>
- <p>If you want to enjoy the game without any limitations or interruptions, you can download My Talking Tom Friends Mod APK An1, a modified version of the game that gives you unlimited money and diamonds, unlocked all characters and outfits, no ads, and no root required. You can download it from various websites that offer modded apps and games, such as [an1.com] or [apkdone.com]. However, you need to take some precautions before downloading and installing My Talking Tom Friends Mod APK An1, such as making sure you have enough space on your device, a stable internet connection, a reliable and trusted website, a backup of your original game data, and a fake account or personal information.</p>
- <p>We hope this article has helped you learn more about My Talking Tom Friends Mod APK An1 and how to download and install it. If you have any questions or feedback, please feel free to leave them in the comments section below. Thank you for reading and have fun playing My Talking Tom Friends Mod APK An1!</p>
- <h2>FAQs</h2>
- <p>Here are some frequently asked questions about My Talking Tom Friends Mod APK An1:</p>
- <ol>
- <li>What is the difference between My Talking Tom Friends and My Talking Tom Friends Mod APK An1?</li>
- <p>My Talking Tom Friends is the original game from Outfit7 Limited that lets you play with cute and funny animals in a cozy house. My Talking Tom Friends Mod APK An1 is a modified version of the game that gives you unlimited money and diamonds, unlocked all characters and outfits, no ads, and no root required.</p>
- <li>Is My Talking Tom Friends Mod APK An1 safe to download and install?</li>
- <p>My Talking Tom Friends Mod APK An1 is generally safe to download and install if you follow the precautions we mentioned above, such as making sure you have enough space on your device, a stable internet connection, a reliable and trusted website, a backup of your original game data, and a fake account or personal information. However, there is always a risk of downloading and installing any modded app or game, so do it at your own discretion and responsibility.</p>
- <li>Will I get banned or hacked by playing My Talking Tom Friends Mod APK An1?</li>
- <p>There is a possibility that you may get banned or hacked by playing My Talking Tom Friends Mod APK An1, especially if you use your real account or personal information. The game developers or other players may detect that you are using a modded version of the game and take action against you. Therefore, we recommend that you use a fake account or personal information when playing My Talking Tom Friends Mod APK An1.</p>
- <li>Can I play My Talking Tom Friends Mod APK An1 offline?</li>
111
- <p>Yes, you can play My Talking Tom Friends Mod APK An1 offline without any internet connection. However, some features of the game may not work properly offline, such as visiting other players' houses, joining clubs, chatting with other club members, competing in leaderboards and events, etc. Therefore, we suggest that you play My Talking Tom Friends Mod APK An1 online for the best experience.</p>
112
- <li>Can I update My Talking Tom Friends Mod APK An1 to the latest version?</li>
113
- <p>No, you cannot update My Talking Tom Friends Mod APK An1 to the latest version from the Google Play Store or the official website of Outfit7 Limited. If you do so, you will lose all the modded features and advantages of My Talking Tom Friends Mod APK An1. Instead, you need to download and install the latest version of My Talking Tom Friends Mod APK An1 from the same website where you downloaded it before.</p>
114
- </ol></p> 197e85843d<br />
115
- <br />
116
- <br />
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Black Duck and Get Complete Visibility into Your Application and Container Composition.md DELETED
@@ -1,127 +0,0 @@
-
- <h1>How to Download Black Duck: A Guide for Open Source Security and Compliance</h1>
- <p>Open source software is widely used in modern applications and containers, but it also comes with some risks that need to be managed. These include security vulnerabilities, license compliance issues, and operational challenges. How can developers and organizations ensure that they are using open source safely and effectively?</p>
- <h2>download blackduck</h2><br /><p><b><b>DOWNLOAD</b> &#128505; <a href="https://urlin.us/2uSUaP">https://urlin.us/2uSUaP</a></b></p><br /><br />
- <p>One solution is Black Duck, a software composition analysis (SCA) tool that helps teams identify and manage the open source components in their codebase. Black Duck provides complete visibility into the open source usage, detects and prioritizes vulnerabilities, enforces license policies, and generates software bill of materials (SBOM). In this article, we will show you how to download and install Black Duck using Docker or Kubernetes, and highlight some of the benefits and alternatives of this tool.</p>
- <h2>How to Download Black Duck</h2>
- <p>Black Duck is deployed as a set of Docker containers, which together comprise the application. Each container fulfills a different role, such as processing UI requests, acting as an enterprise search platform, or storing data. To download and install Black Duck, you will need to meet some hardware and software requirements, such as:</p>
- <ul>
- <li>A 64-bit 5 core processor</li>
- <li>20 GB of RAM</li>
- <li>250 GB of free space for the database and other containers</li>
- <li>Docker 18.03.x or newer</li>
- <li>An orchestration tool such as Docker Swarm or Kubernetes</li>
- <li>A supported operating system such as CentOS 7.3 or Ubuntu 16.04.x</li>
- </ul>
- <p>You can find more details on the requirements in the <a href="(^1^)">Black Duck Docker Install Guide</a>.</p>
- <p>There are two main methods for installing Black Duck: using Docker Swarm or using Kubernetes. We will briefly describe each method below.</p>
- <h3>Using Docker Swarm</h3>
- <p>Docker Swarm is a native clustering tool for Docker that allows you to create and manage a group of Docker nodes as a single virtual system. To install Black Duck using Docker Swarm, you will need to follow these steps:</p>
- <ol>
- <li>Install Docker CE on your host machine.</li>
- <li>Initialize a swarm by running <code>docker swarm init</code>.</li>
- <li>Create a new directory for Black Duck orchestration files and download them from <a href="(^2^)">GitHub</a>.</li>
- <li>Edit the <code>docker-compose.local-overrides.yml</code> file to customize your installation settings.</li>
- <li>Run <code>docker stack deploy -c docker-compose.yml -c docker-compose.local-overrides.yml blackduck</code> to deploy the stack.</li>
- <li>Wait for the containers to start up and check their status by running <code>docker service ls</code>.</li>
- <li>Access the Black Duck UI by opening <code>https://&lt;host&gt;</code> in your browser.</li>
- </ol>
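Step 4 of the Docker Swarm procedure refers to editing `docker-compose.local-overrides.yml`. As a rough sketch only (the service name, environment variable, and resource keys below are illustrative assumptions, not values taken from the Black Duck distribution; check the override file shipped with your release for the real schema), such an override might look like:

```yaml
# docker-compose.local-overrides.yml -- illustrative sketch only.
# Service names and settings here are assumptions; the actual keys are
# defined by the orchestration files shipped with your Black Duck release.
version: '3.6'
services:
  webserver:
    environment:
      # Hostname clients will use to reach the Black Duck UI (assumed variable name)
      PUBLIC_WEBSERVER_HOST: "blackduck.example.com"
    deploy:
      resources:
        limits:
          # Cap the webserver container's memory usage
          memory: 2048M
```

Compose v3 merges override files left to right, so passing both `-c docker-compose.yml -c docker-compose.local-overrides.yml` to `docker stack deploy` applies these values on top of the defaults without editing the base file.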
- <h3>Using Kubernetes</h3>
- <p>Kubernetes is an open source system for automating deployment, scaling, and management of containerized applications. To install Black Duck using Kubernetes, you will need to follow these steps:</p>
- <ol>
- <li>Install Kubernetes on your host machine.</li>
- <li>Create a namespace for Black Duck by running <code>kubectl create namespace blackduck</code>.</li>
- <li>Create a persistent volume claim (PVC) for the database by running <code>kubectl create -f pvc.json -n blackduck</code>.</li>
- <li>Create a secret for the certificate by running <code>kubectl create secret generic blackduck-webserver-certificate -n blackduck --from-file=WEBSERVER_CUSTOM_CERT_FILE --from-file=WEBSERVER_CUSTOM_KEY_FILE</code>.</li>
- <li>Create a secret for the proxy by running <code>kubectl create secret generic blackduck-proxy -n blackduck --from-file=HUB_PROXY_HOST --from-file=HUB_PROXY_PORT --from-file=HUB_PROXY_USERNAME --from-file=HUB_PROXY_PASSWORD</code>.</li>
- <li>Download the Black Duck Helm chart from <a href="">GitHub</a> and extract it.</li>
- <li>Edit the <code>values.yaml</code> file to customize your installation settings.</li>
- <li>Run <code>helm install ./blackduck -n blackduck --namespace blackduck</code> to install the chart.</li>
- <li>Wait for the pods to start up and check their status by running <code>kubectl get pods -n blackduck</code>.</li>
- <li>Access the Black Duck UI by opening <code>https://&lt;host&gt;</code> in your browser.</li>
- </ol>
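Step 7 of the Kubernetes procedure mentions customizing `values.yaml` before running `helm install`. As a hedged sketch of the kind of fields such a chart might expose (these keys are illustrative assumptions, not the actual Black Duck chart schema; only the secret names come from the steps above):

```yaml
# values.yaml -- illustrative sketch only; consult the chart's own
# values.yaml for the real schema before installing.
namespace: blackduck
webserver:
  # Secret created in step 4 above, holding the custom TLS certificate
  customCertificateSecret: blackduck-webserver-certificate
proxy:
  # Secret created in step 5 above, holding the proxy credentials
  secretName: blackduck-proxy
postgres:
  isExternal: false    # assumed toggle: use the bundled PostgreSQL container
  claimSize: 250Gi     # matches the 250 GB disk requirement listed earlier
```

Helm reads these values at install time, so the same chart can be deployed with different certificates, proxies, and storage sizes without editing its templates.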
- <h2>Benefits of Black Duck</h2>
- <p>Black Duck is a powerful and comprehensive tool that helps teams manage their open source usage and mitigate the associated risks. Some of the benefits of using Black Duck are:</p>
- <ul>
- <li><strong>Visibility:</strong> Black Duck scans your codebase and identifies all the open source components, versions, licenses, and dependencies. It also creates a software bill of materials (SBOM) that documents the composition of your application.</li>
- <li><strong>Security:</strong> Black Duck monitors the open source components for known vulnerabilities and alerts you when new ones are discovered. It also provides remediation guidance and patch suggestions to help you fix the issues quickly and efficiently.</li>
- <li><strong>Compliance:</strong> Black Duck analyzes the licenses of the open source components and checks for any conflicts or obligations. It also helps you enforce your own license policies and generate reports for audits and due diligence.</li>
- <li><strong>Integration:</strong> Black Duck integrates with various tools and platforms that you use in your development lifecycle, such as IDEs, code repositories, build systems, CI/CD pipelines, and container registries. This enables you to scan your code at any stage and automate your workflows.</li>
- </ul>
- <h2>Alternatives to Black Duck</h2>
- <p>Black Duck is not the only tool that offers software composition analysis (SCA) functionality. There are some other tools that you can consider as alternatives or complements to Black Duck, such as:</p>
- <table>
- <tr><th>Name</th><th>Description</th></tr>
- <tr><td><a href="">WhiteSource</a></td><td>A cloud-based SCA tool that helps teams manage their open source security, compliance, and quality. It also provides a unified dashboard for all your projects and integrations with various tools.</td></tr>
- <tr><td><a href="">Snyk</a></td><td>A developer-focused SCA tool that helps teams find and fix vulnerabilities in their open source dependencies. It also provides a CLI tool, a GitHub bot, and a vulnerability database.</td></tr>
- <tr><td><a href="">FOSSA</a></td><td>A modern SCA tool that helps teams automate their open source compliance and license management. It also provides a web app, a CLI tool, and a GitHub integration.</td></tr>
- <tr><td><a href="">Dependabot</a></td><td>A GitHub-native SCA tool that helps teams keep their dependencies up to date and secure. It also provides automated pull requests, security alerts, and configuration options.</td></tr>
- </table>
- <h2>Conclusion</h2>
- <p>In this article, we have shown you how to download and install Black Duck using Docker Swarm or Kubernetes, and highlighted some of the benefits and alternatives of this tool. Black Duck is a software composition analysis (SCA) tool that helps teams identify and manage the open source components in their codebase. It provides complete visibility into the open source usage, detects and prioritizes vulnerabilities, enforces license policies, and generates software bill of materials (SBOM). If you are looking for a solution to manage your open source security and compliance, you should give Black Duck a try.</p>
- <h2>FAQs</h2>
- <h3>What is the difference between Black Duck and Synopsys?</h3>
- <p>Synopsys is the company that owns Black Duck. Synopsys is a leader in software security and quality solutions, offering a range of products and services for various industries and domains. Black Duck is one of the products under Synopsys' portfolio.</p>
- <h3>How much does Black Duck cost?</h3>
- <p>The pricing of Black Duck depends on various factors, such as the number of users, projects, scans, integrations, etc. You can request a quote from Synopsys by filling out this <a href="">form</a>.</p>
- <h3>How can I get support for Black Duck?</h3>
- <p>You can get support for Black Duck by contacting Synopsys through various channels, such as email, phone, chat, or web portal. You can also access the online documentation, knowledge base, community forum, and training resources for Black Duck.</p>
- <h3>What are the system requirements for Black Duck?</h3>
- <p>The system requirements for Black Duck vary depending on the deployment method and the scale of your application. However, some of the common requirements are:</p>
- <ul>
- <li>A 64-bit 5 core processor</li>
- <li>20 GB of RAM</li>
- <li>250 GB of free space for the database and other containers</li>
- <li>Docker 18.03.x or newer</li>
- <li>An orchestration tool such as Docker Swarm or Kubernetes</li>
- <li>A supported operating system such as CentOS 7.3 or Ubuntu 16.04.x</li>
- </ul>
- <h3>How can I update Black Duck?</h3>
- <p>You can update Black Duck by downloading the latest version of the orchestration files and running the appropriate commands for your deployment method. For example, if you are using Docker Swarm, you can run <code>docker stack rm blackduck</code> to remove the existing stack, and then run <code>docker stack deploy -c docker-compose.yml -c docker-compose.local-overrides.yml blackduck</code> to deploy the new version. You can find more details on how to update Black Duck in the <a href="">Black Duck Docker Install Guide</a>.</p> 197e85843d<br />
- <br />
- <br />
spaces/1phancelerku/anime-remove-background/All Songs Jukebox The Ultimate Music Streaming and Downloading Service.md DELETED
@@ -1,102 +0,0 @@
- <br />
- <h1>All Songs Download Jukebox: How to Enjoy Unlimited Music for Free</h1>
- <p>Do you love listening to music? Do you want to have access to thousands of songs from different genres, artists, and eras? Do you want to create your own playlists and share them with your friends? If you answered yes to any of these questions, then you might be interested in learning more about all songs download jukebox. A jukebox is a device that can play music from various sources, such as CDs, vinyl records, digital files, or online streaming services. You can use a jukebox to enjoy unlimited music for free, as long as you know how to download and play all songs on it. In this article, we will explain what a jukebox is, how to download all songs for jukebox, and how to play all songs on jukebox. Let's get started!</p>
- <h2>all songs download jukebox</h2><br /><p><b><b>DOWNLOAD</b> <a href="https://jinyurl.com/2uNOBc">https://jinyurl.com/2uNOBc</a></b></p><br /><br />
- <h2>What is a Jukebox?</h2>
- <p>A jukebox is a machine that can play music from different media, such as CDs, vinyl records, digital files, or online streaming services. A jukebox usually has a coin-operated mechanism that allows users to select and play songs from a catalog or a playlist. A jukebox can also have speakers, amplifiers, lights, and other features that enhance the musical experience. A jukebox can be found in various places, such as bars, restaurants, cafes, arcades, or homes.</p>
- <h3>A Brief History of Jukeboxes</h3>
- <p>The first jukeboxes were invented in the late 19th century, when phonographs and gramophones were used to play music in public places. The term "jukebox" comes from the word "juke", which means a disorderly or rowdy place. The first coin-operated phonographs were introduced in 1889 by Louis Glass and William Arnold in San Francisco. They were called "nickel-in-the-slot machines" and could play one song per coin. The popularity of jukeboxes increased in the 1930s and 1940s, when they became more sophisticated and could play multiple songs from different records. The golden age of jukeboxes was in the 1950s and 1960s, when rock and roll music dominated the charts and jukeboxes became symbols of youth culture and rebellion. The decline of jukeboxes began in the 1970s and 1980s, when cassette tapes, CDs, and digital music players replaced vinyl records as the main source of music. However, jukeboxes never disappeared completely and are still used today by music lovers and collectors.</p>
- <h3>Types of Jukeboxes</h3>
- <p>There are many types of jukeboxes that can play different kinds of music from different sources. Some of the most common types are:</p>
- <ul>
- <li><b>Vintage jukeboxes</b>: These are the old-fashioned jukeboxes that use vinyl records or CDs to play music. They have a nostalgic appeal and a retro style that can add charm and character to any place. They usually have a limited number of songs and require manual selection and loading.</li>
- <li><b>Modern jukeboxes</b>: These are the new-generation jukeboxes that use digital files or online streaming services to play music. They have a sleek design and a touch screen interface that allows users to browse and select songs from a large catalog or a customized playlist. They usually have unlimited access to music and require internet connection.</li>
- <li><b>Hybrid jukeboxes</b>: These are the combination of vintage and modern jukeboxes that can play music from both vinyl records or CDs and digital files or online streaming services. They have a versatile and adaptable functionality that can suit different preferences and needs. They usually have a wide range of songs and require both manual and digital operation.</li>
- </ul>
- <h3>Benefits of Jukeboxes</h3>
- <p>Jukeboxes are not only fun and entertaining devices, but also have many benefits that can enhance your musical enjoyment. Some of the benefits are:</p>
- <ul>
- <li><b>Variety</b>: Jukeboxes can play music from different genres, artists, and eras, giving you a diverse and eclectic musical experience. You can discover new songs, revisit old favorites, or mix and match different styles and moods.</li>
- <li><b>Customization</b>: Jukeboxes can allow you to create your own playlists and share them with your friends, giving you a personalized and social musical experience. You can choose the songs that suit your taste, mood, or occasion, or let your friends join in the fun and suggest their own songs.</li>
- <li><b>Quality</b>: Jukeboxes can deliver high-quality sound and performance, giving you a satisfying and immersive musical experience. You can enjoy the crisp and clear sound of vinyl records, the convenience and portability of digital files, or the freshness and diversity of online streaming services.</li>
- </ul>
- <h2>How to Download All Songs for Jukebox</h2>
- <p>If you want to enjoy unlimited music for free on your jukebox, you need to know how to download all songs for jukebox. There are two main ways to do this: using online streaming services or using free music download sites. Let's take a look at each option in more detail.</p>
- <h3>Online Streaming Services</h3>
- <p>Online streaming services are platforms that allow you to listen to music online without downloading it to your device. You can access millions of songs from various artists, genres, and eras, as well as create your own playlists and discover new music. Some of the most popular online streaming services are Spotify, Apple Music, and Amazon Music. Here are some features and tips for each service:</p>
- <h4>Spotify</h4>
- <p>Spotify is one of the most widely used online streaming services in the world, with over 350 million users. It offers a free plan that allows you to listen to music with ads, or a premium plan that allows you to listen to music without ads, download songs for offline listening, and enjoy other benefits. To use Spotify on your jukebox, you need to:</p>
- <ol>
- <li>Create an account on Spotify.com or download the Spotify app on your device.</li>
- <li>Search for the songs, albums, artists, or playlists that you want to listen to.</li>
- <li>If you have a premium plan, you can download the songs by toggling the "Download" switch on the top right corner of the screen.</li>
- <li>Connect your device to your jukebox using a cable or Bluetooth.</li>
- <li>Play the songs on your device and enjoy them on your jukebox.</li>
- </ol>
- <h4>Apple Music</h4>
- <p>Apple Music is another popular online streaming service that has over 60 million users. It offers a free trial for three months, after which you need to pay a monthly fee to continue using it. It allows you to listen to music without ads, download songs for offline listening, and access exclusive content and features. To use Apple Music on your jukebox, you need to:</p>
- <ol>
- <li>Create an account on Apple.com or download the Apple Music app on your device.</li>
- <li>Search for the songs, albums, artists, or playlists that you want to listen to.</li>
- <li>Download the songs by tapping the cloud icon next to each song.</li>
- <li>Connect your device to your jukebox using a cable or Bluetooth.</li>
- <li>Play the songs on your device and enjoy them on your jukebox.</li>
- </ol>
- <h4>Amazon Music</h4>
- <p>Amazon Music is another online streaming service that has over 55 million users. It offers a free plan that allows you to listen to music with ads, or a paid plan that allows you to listen to music without ads, download songs for offline listening, and access more songs and features. To use Amazon Music on your jukebox, you need to:</p>
- <ol>
- <li>Create an account on Amazon.com or download the Amazon Music app on your device.</li>
- <li>Search for the songs, albums, artists, or playlists that you want to listen to.</li>
- <li>If you have a paid plan, you can download the songs by tapping the "More Options" icon next to each song and selecting "Download".</li>
- <li>Connect your device to your jukebox using a cable or Bluetooth.</li , you can use the buttons or the touch screen to browse and select your songs. You can also use the search function or the voice command to find your songs faster.</p>
- <h3>Enjoy the Music and Have Fun</h3>
- <p>The final step is to enjoy the music and have fun. You can adjust the volume, the bass, the treble, and other settings on your jukebox to suit your preferences. You can also skip, pause, or repeat songs as you wish. You can sing along, dance, or just relax and listen to the music. You can also invite your friends and family to join you and share your musical taste. You can have a party, a karaoke night, or a chill session with your jukebox.</p>
- <h2>Conclusion</h2>
- <p>All songs download jukebox is a great way to enjoy unlimited music for free. You can download all songs for jukebox using online streaming services or free music download sites. You can play all songs on jukebox by connecting your device to the jukebox and selecting your playlist or album. You can have fun and create your own musical atmosphere with your jukebox. Whether you prefer vintage or modern jukeboxes, you can find the one that suits your style and budget. All you need is a love for music and a desire to have a good time.</p>
- <h2>FAQs</h2>
- <p>Here are some frequently asked questions about all songs download jukebox:</p>
- <ul>
- <li><b>Q: How much does a jukebox cost?</b></li>
- <li>A: The price of a jukebox depends on its type, model, condition, and features. A vintage jukebox can cost anywhere from $500 to $10,000 or more, depending on its rarity and quality. A modern jukebox can cost anywhere from $200 to $2,000 or more, depending on its brand and functionality. A hybrid jukebox can cost anywhere from $300 to $3,000 or more, depending on its versatility and design.</li>
- <li><b>Q: Where can I buy a jukebox?</b></li>
- <li>A: You can buy a jukebox from various sources, such as online retailers, physical stores, auctions, or private sellers. Some of the best online retailers for jukeboxes are Amazon, eBay, Walmart, and Best Buy. Some of the best physical stores for jukeboxes are Target, Sears, Home Depot, and Lowe's. Some of the best auctions for jukeboxes are Sotheby's, Christie's, Heritage Auctions, and Bonhams. Some of the best private sellers for jukeboxes are Craigslist, Facebook Marketplace, OfferUp, and Letgo.</li>
- <li><b>Q: How do I maintain my jukebox?</b></li>
- <li>A: To keep your jukebox in good condition, you need to perform some regular maintenance tasks, such as cleaning, lubricating, repairing, and updating. You need to clean your jukebox with a soft cloth and a mild detergent to remove dust and dirt. You need to lubricate your jukebox with oil or grease to prevent rust and friction. You need to repair your jukebox with professional help or DIY tools if it has any damages or malfunctions. You need to update your jukebox with new software or firmware if it has any bugs or glitches.</li>
- <li><b>Q: How do I customize my jukebox?</b></li>
- <li>A: To make your jukebox more unique and personal, you can customize it with various accessories and decorations. You can add lights, stickers, decals, posters, or paintings to your jukebox to make it more colorful and attractive. You can add speakers, headphones, microphones, or karaoke machines to your jukebox to make it more loud and interactive. You can add coins, tokens, cards , or buttons to your jukebox to make it more fun and authentic. You can also change the color, shape, or size of your jukebox to make it more suitable for your space and style.</li>
- <li><b>Q: How do I troubleshoot my jukebox?</b></li>
- <li>A: If your jukebox is not working properly, you can try some basic troubleshooting steps, such as checking the power, the connection, the settings, and the songs. You can check the power by plugging and unplugging your jukebox and making sure that it is turned on. You can check the connection by reconnecting your device and your jukebox and making sure that they are paired and synced. You can check the settings by adjusting the volume, the bass, the treble, and other options on your jukebox and making sure that they are correct. You can check the songs by deleting and downloading them again and making sure that they are compatible and playable.</li>
- </ul>
- <p>I hope you enjoyed this article on all songs download jukebox. If you have any questions or comments, please feel free to leave them below. Thank you for reading and happy listening!</p> 401be4b1e0<br />
- <br />
- <br />
spaces/1phancelerku/anime-remove-background/Download Archero Hack APK and Enjoy Crossbow Archery with Amazing Features.md DELETED
@@ -1,118 +0,0 @@
1
- <br />
2
- <h1>Archero Hack APK: How to Download and Play the Modded Version of Archero</h1>
3
- <p>Are you a fan of Archero, the addictive roguelike game where you control a lone archer who must fight against waves of enemies and obstacles? Do you want to experience the game with unlimited resources, unlocked features, and enhanced gameplay? If yes, then you might be interested in trying out Archero Hack APK, a modded version of the game that offers many advantages over the original one. But before you do that, you should know what Archero Hack APK is, how it works, and what are the risks and precautions involved in using it. In this article, we will tell you everything you need to know about Archero Hack APK, including how to download and install it on your Android device, and how to enjoy it without getting banned or infected by malware. Read on to find out more!</p>
4
- <h2>archero hack apk</h2><br /><p><b><b>Download</b> &#10002; &#10002; &#10002; <a href="https://jinyurl.com/2uNNz2">https://jinyurl.com/2uNNz2</a></b></p><br /><br />
5
- <h2>What is Archero and why is it popular?</h2>
6
- <p>Archero is a mobile game developed by Habby, a Chinese studio that specializes in casual games. It was released in 2019 and has since become one of the most popular games on Google Play Store and App Store, with over 50 million downloads and a 4.3-star rating.</p>
7
- <h3>The gameplay and features of Archero</h3>
8
- <p>The gameplay of Archero is simple but challenging. You play as an archer who must survive different stages filled with enemies, traps, and bosses. You can move around by swiping on the screen, but you can only shoot when you stop moving. You can also choose from various skills that appear randomly after each level, such as multishot, ricochet, bouncy wall, etc. These skills can help you deal more damage, dodge attacks, or heal yourself.</p>
9
- <p>Archero also has many features that make it fun and engaging. You can unlock different heroes with unique abilities, such as Helix, Meowgik, Sylvan, etc. You can also equip different weapons, armors, rings, pets, and other items that can boost your stats or grant you special effects. You can also upgrade your equipment by fusing duplicates or spending coins or gems. Moreover, you can explore different worlds and maps with different themes and difficulties, such as desert, forest, dungeon, etc.</p>
10
- <h3>The challenges and rewards of Archero</h3>
11
- <p>Archero is not an easy game. It requires skill, strategy, and luck to progress through the stages. You have to dodge enemy attacks while aiming at them accurately. You have to choose the best skills that suit your playstyle and situation. You have to manage your health and energy wisely. And most importantly, you have to deal with the randomness of the game. Sometimes you may get lucky and get powerful skills or items that make you unstoppable. Other times you may get unlucky and get useless skills or items that make you vulnerable.</p>
12
- <p>archero mod apk god mode<br />
13
- archero unlimited gems apk<br />
14
- archero hack apk download<br />
15
- archero apk mod menu<br />
16
- archero hack apk 2023<br />
17
- archero mod apk latest version<br />
18
- archero hack apk android<br />
19
- archero mod apk one hit<br />
20
- archero hack apk ios<br />
21
- archero mod apk wall hack<br />
22
- archero hack apk no root<br />
23
- archero mod apk unlimited money<br />
24
- archero hack apk online<br />
25
- archero mod apk free shopping<br />
26
- archero hack apk without verification<br />
27
- archero mod apk high damage<br />
28
- archero hack apk no human verification<br />
29
- archero mod apk all unlocked<br />
30
- archero hack apk reddit<br />
31
- archero mod apk unlimited energy<br />
32
- archero hack apk 4.14.0<br />
33
- archero mod apk revdl<br />
34
- archero hack apk happymod<br />
35
- archero mod apk platinmods<br />
36
- archero hack apk 2022<br />
37
- archero mod apk unlimited coins and gems<br />
38
- archero hack apk 4.13.0<br />
39
- archero mod apk rexdl<br />
40
- archero hack apk 4.12.0<br />
41
- archero mod apk an1<br />
42
- archero hack apk 4.11.0<br />
43
- archero mod apk 4.14.0 download<br />
44
- archero hack apk 4.10.0<br />
45
- archero mod apk 4.13.0 download<br />
46
- archero hack apk 4.9.0<br />
47
- archero mod apk 4.12.0 download<br />
48
- archero hack apk 4.8.0<br />
49
- archero mod apk 4.11.0 download<br />
50
- archero hack apk 4.7.0<br />
51
- archero mod apk 4.10.0 download<br />
52
- archero hack tool online generator</p>
53
- <p>But despite the challenges, Archero is also very rewarding. It gives you a sense of accomplishment when you clear a stage or defeat a boss. It gives you a thrill when you discover a new skill or item that changes your gameplay. It gives you satisfaction when you upgrade your equipment or unlock a new hero. And it gives you motivation when you see your progress on the leaderboard or in your achievements.</p>
54
- <h2>What is Archero Hack APK and how does it differ from the original game?</h2>
55
- <p>Archero Hack APK is a modified version of Archero that has been altered by third-party developers or hackers to provide some benefits or advantages over the original game. These benefits or advantages may include:</p>
- <ul>
- <li>Unlimited coins and gems that can be used to buy or upgrade anything in the game</li>
- <li>Unlimited energy that allows you to play as long as you want without waiting for the energy bar to refill</li>
- <li>All heroes, weapons, items, skills, and maps unlocked, which are otherwise restricted or require real money to access</li>
- <li>Enhanced gameplay that gives you more damage, speed, health, and other perks, making you stronger and faster than in the normal game</li>
- </ul>
- <p>Archero Hack APK differs from the original game in many ways. It essentially gives you everything you need or want in the game without any effort or cost, which makes the game easier and more enjoyable for players who want to have fun without challenge or limitation. However, it also takes away some of the elements that make the game fun and engaging for players who want a fair and balanced experience, and it carries some risks and precautions that you should be aware of before using it.</p> <h2>What are the benefits and drawbacks of using Archero Hack APK?</h2>
56
- <p>Using Archero Hack APK can have some benefits and drawbacks depending on your perspective and preference. Here are some of them:</p>
57
- <h3>The benefits of using Archero Hack APK</h3>
58
- <p>Some of the benefits of using Archero Hack APK are:</p>
59
- <ul>
60
- <li>You can save time and money by getting unlimited resources and features without spending any real money or grinding for hours.</li>
61
- <li>You can explore and experiment with different combinations of heroes, weapons, items, skills, and maps without any restriction or limitation.</li>
62
- <li>You can have more fun and excitement by playing with enhanced gameplay and powerful abilities that make you feel like a god.</li>
63
- <li>You can impress your friends or other players by showing off your achievements or progress in the game.</li>
64
- </ul>
65
- <h3>The drawbacks of using Archero Hack APK</h3>
66
- <p>Some of the drawbacks of using Archero Hack APK are:</p>
67
- <ul>
68
- <li>You can lose the challenge and satisfaction of playing the game as it was intended by the developers. You may get bored or lose interest in the game after a while.</li>
69
- <li>You can miss out on the updates and features that are added to the original game regularly. You may also encounter bugs or glitches that are not fixed in the modded version.</li>
70
- <li>You can risk getting banned or suspended from the game if you are detected by the anti-cheat system or reported by other players. You may also lose your progress or account if that happens.</li>
71
- <li>You can expose your device or data to malware or viruses that may be hidden in the modded APK file. You may also compromise your privacy or security if you grant permissions or access to unknown sources.</li>
72
- </ul>
73
- <h2>How to download and install Archero Hack APK on your Android device?</h2>
74
- <p>If you still want to try out Archero Hack APK despite the drawbacks and risks, you will need to follow some steps to download and install it on your Android device. Here are the steps:</p> <h3>The steps to download and install Archero Hack APK</h3>
75
- <p>The steps to download and install Archero Hack APK are:</p>
76
- <ol>
77
- <li>Find a reliable and trustworthy source that provides the latest version of Archero Hack APK. You can search online or ask for recommendations from other users. Some of the popular sources are , but you should always check the reviews and ratings before downloading anything.</li>
78
- <li>Download the Archero Hack APK file from the source you have chosen. You may need to enable the option to download from unknown sources in your device settings. You may also need to disable your antivirus or firewall temporarily if they block the download.</li>
79
- <li>Locate the Archero Hack APK file in your device storage and tap on it to install it. You may need to grant some permissions or access to the app during the installation process. You may also need to verify your device or account if prompted.</li>
80
- <li>Wait for the installation to finish and then launch the app from your home screen or app drawer. You may need to sign in with your Google Play account or create a new one if you don't have one.</li>
81
- <li>Enjoy playing Archero Hack APK with unlimited resources, unlocked features, and enhanced gameplay!</li>
82
- </ol>
83
- <h3>The tips and tricks to enjoy Archero Hack APK</h3>
84
- <p>Some of the tips and tricks to enjoy Archero Hack APK are:</p>
85
- <ul>
86
- <li>Use different heroes, weapons, items, skills, and maps to experiment with different strategies and combinations. You can also switch between them anytime you want without losing your progress.</li>
87
- <li>Use the unlimited coins and gems to buy or upgrade anything you want in the game. You can also use them to revive yourself or skip levels if you get stuck or bored.</li>
88
- <li>Use the enhanced gameplay to deal more damage, move faster, heal more, and avoid attacks. You can also use it to challenge yourself by increasing the difficulty or playing in different modes.</li>
89
- <li>Be careful not to abuse the hack or cheat too much as it may ruin the fun or make the game too easy. You can also try playing the original game occasionally to compare and appreciate the difference.</li>
90
- <li>Be respectful and responsible when playing online or with other players. Do not brag or boast about your hack or cheat as it may annoy or offend others. Do not use it to gain an unfair advantage or harm others as it may get you banned or reported.</li>
91
- </ul>
92
- <h2>Conclusion</h2>
93
- <p>In conclusion, Archero Hack APK is a modded version of Archero that offers many benefits and advantages over the original game, such as unlimited resources, unlocked features, and enhanced gameplay. However, it also has some drawbacks and risks, such as losing the challenge and satisfaction, missing out on the updates and features, risking getting banned or suspended, and exposing your device or data to malware or viruses. Therefore, you should be careful and cautious when using it, and follow some steps and tips to download and install it safely and enjoy it properly.</p>
94
- <p>If you are interested in trying out Archero Hack APK, you can find it online from various sources, but make sure they are reliable and trustworthy. You can also ask for recommendations from other users who have used it before. But remember, use it at your own risk and discretion, and do not forget to have fun!</p>
95
- <h2>FAQs</h2>
96
- <h3>What is Archero?</h3>
97
- <p>Archero is a mobile game developed by Habby that lets you play as an archer who must survive different stages filled with enemies, traps, and bosses. You can choose from various skills, heroes, weapons, items, and maps that can help you in your adventure.</p>
98
- <h3>What is Archero Hack APK?</h3>
99
- <p>Archero Hack APK is a modified version of Archero that has been altered by some third-party developers or hackers to provide some benefits or advantages over the original game, such as unlimited resources, unlocked features, and enhanced gameplay.</p>
100
- <h3>How do I download and install Archero Hack APK?</h3>
101
- <p>You can download and install Archero Hack APK by following these steps:</p>
102
- <ol>
103
- <li>Find a reliable and trustworthy source that provides the latest version of Archero Hack APK.</li>
104
- <li>Download the Archero Hack APK file from the source you have chosen.</li>
105
- <li>Locate the Archero Hack APK file in your device storage and tap on it to install it.</li>
106
- <li>Wait for the installation to finish and then launch the app from your home screen or app drawer.</li>
107
- <li>Enjoy playing Archero Hack APK with unlimited resources, unlocked features, and enhanced gameplay!</li>
108
- </ol>
109
- <h3>Is Archero Hack APK safe to use?</h3> <p>Archero Hack APK is not safe to use, as it may violate the terms and conditions of the original game, and expose your device or data to malware or viruses. You may also risk getting banned or suspended from the game if you are detected by the anti-cheat system or reported by other players. Therefore, you should use it at your own risk and discretion, and take some precautions to protect yourself and your account.</p>
110
- <h3>What are some alternatives to Archero Hack APK?</h3>
111
- <p>If you are looking for some alternatives to Archero Hack APK that can provide you with similar benefits or advantages without the drawbacks or risks, you can try some of these options:</p>
112
- <ul>
113
- <li>You can use some legitimate Archero cheats or tips that can help you improve your gameplay and progress faster in the game. For example, you can learn how to dodge enemy attacks, how to choose the best skills, how to optimize your equipment, etc. You can find some of these cheats or tips online or from other players.</li>
114
- <li>You can use some Archero mod APKs that are verified and tested by reputable sources and do not contain any malware or viruses. These mod APKs may offer some features or enhancements that are not available in the original game, such as custom skins, graphics, sounds, etc. However, they may not offer unlimited resources or unlocked features as Archero Hack APK does.</li>
115
- <li>You can use some Archero emulators that can allow you to play the game on your PC or laptop instead of your mobile device. These emulators may offer better performance, graphics, controls, and compatibility than the original game. However, they may also require more space, memory, and processing power than the original game.</li>
116
- </ul>
spaces/1toTree/lora_test/ppdiffusers/models/prior_transformer.py DELETED
@@ -1,220 +0,0 @@
1
- # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2
- #
3
- # Licensed under the Apache License, Version 2.0 (the "License");
4
- # you may not use this file except in compliance with the License.
5
- # You may obtain a copy of the License at
6
- #
7
- # http://www.apache.org/licenses/LICENSE-2.0
8
- #
9
- # Unless required by applicable law or agreed to in writing, software
10
- # distributed under the License is distributed on an "AS IS" BASIS,
11
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
- # See the License for the specific language governing permissions and
13
- # limitations under the License.
14
-
15
- from dataclasses import dataclass
16
- from typing import Optional, Union
17
-
18
- import paddle
19
- import paddle.nn as nn
20
- import paddle.nn.functional as F
21
-
22
- from ..configuration_utils import ConfigMixin, register_to_config
23
- from ..modeling_utils import ModelMixin
24
- from ..utils import BaseOutput
25
- from .attention import BasicTransformerBlock
26
- from .embeddings import TimestepEmbedding, Timesteps
27
-
28
- NEG_INF = -1e4
29
-
30
-
31
- @dataclass
32
- class PriorTransformerOutput(BaseOutput):
33
- """
34
- Args:
35
- predicted_image_embedding (`paddle.Tensor` of shape `(batch_size, embedding_dim)`):
36
- The predicted CLIP image embedding conditioned on the CLIP text embedding input.
37
- """
38
-
39
- predicted_image_embedding: paddle.Tensor
40
-
41
-
42
- class PriorTransformer(ModelMixin, ConfigMixin):
43
- """
44
- The prior transformer from unCLIP is used to predict CLIP image embeddings from CLIP text embeddings. Note that the
45
- transformer predicts the image embeddings through a denoising diffusion process.
46
-
47
- This model inherits from [`ModelMixin`]. Check the superclass documentation for the generic methods the library
48
- implements for all the models (such as downloading or saving, etc.)
49
-
50
- For more details, see the original paper: https://arxiv.org/abs/2204.06125
51
-
52
- Parameters:
53
- num_attention_heads (`int`, *optional*, defaults to 32): The number of heads to use for multi-head attention.
54
- attention_head_dim (`int`, *optional*, defaults to 64): The number of channels in each head.
55
- num_layers (`int`, *optional*, defaults to 20): The number of layers of Transformer blocks to use.
56
- embedding_dim (`int`, *optional*, defaults to 768): The dimension of the CLIP embeddings. Note that CLIP
57
- image embeddings and text embeddings are both the same dimension.
58
- num_embeddings (`int`, *optional*, defaults to 77): The max number of clip embeddings allowed. I.e. the
59
- length of the prompt after it has been tokenized.
60
- additional_embeddings (`int`, *optional*, defaults to 4): The number of additional tokens appended to the
61
- projected hidden_states. The actual length of the used hidden_states is `num_embeddings +
62
- additional_embeddings`.
63
- dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use.
64
-
65
- """
66
-
67
- @register_to_config
68
- def __init__(
69
- self,
70
- num_attention_heads: int = 32,
71
- attention_head_dim: int = 64,
72
- num_layers: int = 20,
73
- embedding_dim: int = 768,
74
- num_embeddings=77,
75
- additional_embeddings=4,
76
- dropout: float = 0.0,
77
- ):
78
- super().__init__()
79
- self.num_attention_heads = num_attention_heads
80
- self.attention_head_dim = attention_head_dim
81
- inner_dim = num_attention_heads * attention_head_dim
82
- self.additional_embeddings = additional_embeddings
83
-
84
- self.time_proj = Timesteps(inner_dim, True, 0)
85
- self.time_embedding = TimestepEmbedding(inner_dim, inner_dim)
86
-
87
- self.proj_in = nn.Linear(embedding_dim, inner_dim)
88
-
89
- self.embedding_proj = nn.Linear(embedding_dim, inner_dim)
90
- self.encoder_hidden_states_proj = nn.Linear(embedding_dim, inner_dim)
91
-
92
- self.positional_embedding = self.create_parameter(
93
- (1, num_embeddings + additional_embeddings, inner_dim),
94
- dtype=paddle.get_default_dtype(),
95
- default_initializer=nn.initializer.Constant(0.0),
96
- )
97
-
98
- self.prd_embedding = self.create_parameter(
99
- (1, 1, inner_dim), dtype=paddle.get_default_dtype(), default_initializer=nn.initializer.Constant(0.0)
100
- )
101
-
102
- self.transformer_blocks = nn.LayerList(
103
- [
104
- BasicTransformerBlock(
105
- inner_dim,
106
- num_attention_heads,
107
- attention_head_dim,
108
- dropout=dropout,
109
- activation_fn="gelu",
110
- attention_bias=True,
111
- )
112
- for d in range(num_layers)
113
- ]
114
- )
115
-
116
- self.norm_out = nn.LayerNorm(inner_dim)
117
- self.proj_to_clip_embeddings = nn.Linear(inner_dim, embedding_dim)
118
-
119
- causal_attention_mask = paddle.triu(
120
- paddle.full([num_embeddings + additional_embeddings, num_embeddings + additional_embeddings], NEG_INF), 1
121
- )
122
- causal_attention_mask = causal_attention_mask.unsqueeze(0)
123
- self.register_buffer("causal_attention_mask", causal_attention_mask, persistable=False)
124
-
125
- self.clip_mean = self.create_parameter(
126
- (1, embedding_dim), dtype=paddle.get_default_dtype(), default_initializer=nn.initializer.Constant(0.0)
127
- )
128
- self.clip_std = self.create_parameter(
129
- (1, embedding_dim), dtype=paddle.get_default_dtype(), default_initializer=nn.initializer.Constant(0.0)
130
- )
131
-
132
- def forward(
133
- self,
134
- hidden_states,
135
- timestep: Union[paddle.Tensor, float, int],
136
- proj_embedding: paddle.Tensor,
137
- encoder_hidden_states: paddle.Tensor,
138
- attention_mask: Optional[paddle.Tensor] = None,
139
- return_dict: bool = True,
140
- ):
141
- """
142
- Args:
143
- hidden_states (`paddle.Tensor` of shape `(batch_size, embedding_dim)`):
144
- x_t, the currently predicted image embeddings.
145
- timestep (`paddle.Tensor`):
146
- Current denoising step.
147
- proj_embedding (`paddle.Tensor` of shape `(batch_size, embedding_dim)`):
148
- Projected embedding vector the denoising process is conditioned on.
149
- encoder_hidden_states (`paddle.Tensor` of shape `(batch_size, num_embeddings, embedding_dim)`):
150
- Hidden states of the text embeddings the denoising process is conditioned on.
151
- attention_mask (`paddle.Tensor` of shape `(batch_size, num_embeddings)`):
152
- Text mask for the text embeddings.
153
- return_dict (`bool`, *optional*, defaults to `True`):
154
- Whether or not to return a [`models.prior_transformer.PriorTransformerOutput`] instead of a plain
155
- tuple.
156
-
157
- Returns:
158
- [`~models.prior_transformer.PriorTransformerOutput`] or `tuple`:
159
- [`~models.prior_transformer.PriorTransformerOutput`] if `return_dict` is True, otherwise a `tuple`. When
160
- returning a tuple, the first element is the sample tensor.
161
- """
162
- batch_size = hidden_states.shape[0]
163
-
164
- timesteps = timestep
165
- if not paddle.is_tensor(timesteps):
166
- timesteps = paddle.to_tensor([timesteps], dtype=paddle.int64)
167
- elif paddle.is_tensor(timesteps) and len(timesteps.shape) == 0:
168
- timesteps = timesteps[None]
169
- # broadcast to batch dimension in a way that's compatible with ONNX/Core ML
170
- timesteps = timesteps * paddle.ones((batch_size,), dtype=timesteps.dtype)
171
-
172
- timesteps_projected = self.time_proj(timesteps)
173
-
174
- # timesteps does not contain any weights and will always return f32 tensors
175
- # but time_embedding might be fp16, so we need to cast here.
176
- timesteps_projected = timesteps_projected.cast(dtype=self.dtype)
177
- time_embeddings = self.time_embedding(timesteps_projected)
178
-
179
- proj_embeddings = self.embedding_proj(proj_embedding)
180
- encoder_hidden_states = self.encoder_hidden_states_proj(encoder_hidden_states)
181
- hidden_states = self.proj_in(hidden_states)
182
- prd_embedding = self.prd_embedding.cast(hidden_states.dtype).expand([batch_size, -1, -1])
183
- positional_embeddings = self.positional_embedding.cast(hidden_states.dtype)
184
-
185
- hidden_states = paddle.concat(
186
- [
187
- encoder_hidden_states,
188
- proj_embeddings[:, None, :],
189
- time_embeddings[:, None, :],
190
- hidden_states[:, None, :],
191
- prd_embedding,
192
- ],
193
- axis=1,
194
- )
195
-
196
- hidden_states = hidden_states + positional_embeddings
197
-
198
- if attention_mask is not None:
199
- attention_mask = (1 - attention_mask.cast(hidden_states.dtype)) * -10000.0
200
- attention_mask = F.pad(
201
- attention_mask.unsqueeze(0), (0, self.additional_embeddings), value=0.0, data_format="NCL"
202
- ).squeeze(0)
203
- attention_mask = (attention_mask[:, None, :] + self.causal_attention_mask).cast(hidden_states.dtype)
204
- attention_mask = attention_mask.repeat_interleave(self.config.num_attention_heads, axis=0)
205
-
206
- for block in self.transformer_blocks:
207
- hidden_states = block(hidden_states, attention_mask=attention_mask)
208
-
209
- hidden_states = self.norm_out(hidden_states)
210
- hidden_states = hidden_states[:, -1]
211
- predicted_image_embedding = self.proj_to_clip_embeddings(hidden_states)
212
-
213
- if not return_dict:
214
- return (predicted_image_embedding,)
215
-
216
- return PriorTransformerOutput(predicted_image_embedding=predicted_image_embedding)
217
-
218
- def post_process_latents(self, prior_latents):
219
- prior_latents = (prior_latents * self.clip_std) + self.clip_mean
220
- return prior_latents
spaces/1toTree/lora_test/ppdiffusers/pipelines/versatile_diffusion/__init__.py DELETED
@@ -1,43 +0,0 @@
1
- # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2
- #
3
- # Licensed under the Apache License, Version 2.0 (the "License");
4
- # you may not use this file except in compliance with the License.
5
- # You may obtain a copy of the License at
6
- #
7
- # http://www.apache.org/licenses/LICENSE-2.0
8
- #
9
- # Unless required by applicable law or agreed to in writing, software
10
- # distributed under the License is distributed on an "AS IS" BASIS,
11
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
- # See the License for the specific language governing permissions and
13
- # limitations under the License.
14
- # flake8: noqa
15
-
16
- from ...utils import (
17
- OptionalDependencyNotAvailable,
18
- is_paddle_available,
19
- is_paddlenlp_available,
20
- )
21
-
22
- try:
23
- if not (is_paddlenlp_available() and is_paddle_available()):
24
- raise OptionalDependencyNotAvailable()
25
- except OptionalDependencyNotAvailable:
26
- from ...utils.dummy_paddle_and_paddlenlp_objects import (
27
- VersatileDiffusionDualGuidedPipeline,
28
- VersatileDiffusionImageVariationPipeline,
29
- VersatileDiffusionPipeline,
30
- VersatileDiffusionTextToImagePipeline,
31
- )
32
- else:
33
- from .modeling_text_unet import UNetFlatConditionModel
34
- from .pipeline_versatile_diffusion import VersatileDiffusionPipeline
35
- from .pipeline_versatile_diffusion_dual_guided import (
36
- VersatileDiffusionDualGuidedPipeline,
37
- )
38
- from .pipeline_versatile_diffusion_image_variation import (
39
- VersatileDiffusionImageVariationPipeline,
40
- )
41
- from .pipeline_versatile_diffusion_text_to_image import (
42
- VersatileDiffusionTextToImagePipeline,
43
- )
spaces/7hao/bingo/src/components/chat-attachments.tsx DELETED
@@ -1,37 +0,0 @@
1
- import Image from 'next/image'
2
- import ClearIcon from '@/assets/images/clear.svg'
3
- import RefreshIcon from '@/assets/images/refresh.svg'
4
- import { FileItem } from '@/lib/bots/bing/types'
5
- import { cn } from '@/lib/utils'
6
- import { useBing } from '@/lib/hooks/use-bing'
7
-
8
- type ChatAttachmentsProps = Pick<ReturnType<typeof useBing>, 'attachmentList' | 'setAttachmentList' | 'uploadImage'>
9
-
10
- export function ChatAttachments({ attachmentList = [], setAttachmentList, uploadImage }: ChatAttachmentsProps) {
11
- return attachmentList.length ? (
12
- <div className="attachment-list">
13
- {attachmentList.map(file => (
14
- <div className="file-item" key={file.url}>
15
- {file.status === 'loading' && (
16
- <div className="loading">
17
- <div className="bar" />
18
- </div>)
19
- }
20
- {file.status !== 'error' && (
21
- <div className="thumbnail">
22
- <img draggable="false" src={file.url} />
23
- </div>)
24
- }
25
- {file.status === 'error' && (
26
- <div className="error">
27
- <Image alt="refresh" src={RefreshIcon} width={18} onClick={() => uploadImage(file.url)} />
28
- </div>
29
- )}
30
- <button className={cn('dismiss', { 'no-file': file.status === 'error' })} type="button">
31
- <Image alt="clear" src={ClearIcon} width={16} onClick={() => setAttachmentList([])} />
32
- </button>
33
- </div>
34
- ))}
35
- </div>
36
- ) : null
37
- }
spaces/801artistry/RVC801/i18n/scan_i18n.py DELETED
@@ -1,75 +0,0 @@
1
- import ast
2
- import glob
3
- import json
4
- from collections import OrderedDict
5
-
6
-
7
- def extract_i18n_strings(node):
8
- i18n_strings = []
9
-
10
- if (
11
- isinstance(node, ast.Call)
12
- and isinstance(node.func, ast.Name)
13
- and node.func.id == "i18n"
14
- ):
15
- for arg in node.args:
16
- if isinstance(arg, ast.Str):
17
- i18n_strings.append(arg.s)
18
-
19
- for child_node in ast.iter_child_nodes(node):
20
- i18n_strings.extend(extract_i18n_strings(child_node))
21
-
22
- return i18n_strings
23
-
24
-
25
- # scan the directory for all .py files (recursively)
26
- # for each file, parse the code into an AST
27
- # for each AST, extract the i18n strings
28
-
29
- strings = []
30
- for filename in glob.iglob("**/*.py", recursive=True):
31
- with open(filename, "r") as f:
32
- code = f.read()
33
- if "I18nAuto" in code:
34
- tree = ast.parse(code)
35
- i18n_strings = extract_i18n_strings(tree)
36
- print(filename, len(i18n_strings))
37
- strings.extend(i18n_strings)
38
- code_keys = set(strings)
39
- """
40
- n_i18n.py
41
- gui_v1.py 26
42
- app.py 16
43
- infer-web.py 147
44
- scan_i18n.py 0
45
- i18n.py 0
46
- lib/train/process_ckpt.py 1
47
- """
48
- print()
49
- print("Total unique:", len(code_keys))
50
-
51
-
52
- standard_file = "i18n/locale/zh_CN.json"
53
- with open(standard_file, "r", encoding="utf-8") as f:
54
- standard_data = json.load(f, object_pairs_hook=OrderedDict)
55
- standard_keys = set(standard_data.keys())
56
-
57
- # keys present in the standard locale file but never used in the code
58
- unused_keys = standard_keys - code_keys
59
- print("Unused keys:", len(unused_keys))
60
- for unused_key in unused_keys:
61
- print("\t", unused_key)
62
-
63
- missing_keys = code_keys - standard_keys
64
- print("Missing keys:", len(missing_keys))
65
- for missing_key in missing_keys:
66
- print("\t", missing_key)
67
-
68
- code_keys_dict = OrderedDict()
69
- for s in strings:
70
- code_keys_dict[s] = s
71
-
72
- # write back
73
- with open(standard_file, "w", encoding="utf-8") as f:
74
- json.dump(code_keys_dict, f, ensure_ascii=False, indent=4, sort_keys=True)
75
- f.write("\n")
spaces/801artistry/RVC801/tensorlowest.py DELETED
@@ -1,123 +0,0 @@
1
- from tensorboard.backend.event_processing import event_accumulator
-
- import os
- from shutil import copy2
- from re import search as RSearch
- import pandas as pd
- from ast import literal_eval as LEval
-
- weights_dir = 'weights/'
-
- def find_biggest_tensorboard(tensordir):
-     try:
-         files = [f for f in os.listdir(tensordir) if f.endswith('.0')]
-         if not files:
-             print("No files with the '.0' extension found!")
-             return
-
-         max_size = 0
-         biggest_file = ""
-
-         for file in files:
-             file_path = os.path.join(tensordir, file)
-             if os.path.isfile(file_path):
-                 file_size = os.path.getsize(file_path)
-                 if file_size > max_size:
-                     max_size = file_size
-                     biggest_file = file
-
-         return biggest_file
-
-     except FileNotFoundError:
-         print("Couldn't find your model!")
-         return
-
- def main(model_name, save_freq, lastmdls):
-     global lowestval_weight_dir, scl
-
-     tensordir = os.path.join('logs', model_name)
-     lowestval_weight_dir = os.path.join(tensordir, "lowestvals")
-
-     latest_file = find_biggest_tensorboard(tensordir)
-
-     if latest_file is None:
-         print("Couldn't find a valid tensorboard file!")
-         return
-
-     tfile = os.path.join(tensordir, latest_file)
-
-     ea = event_accumulator.EventAccumulator(tfile,
-         size_guidance={
-             event_accumulator.COMPRESSED_HISTOGRAMS: 500,
-             event_accumulator.IMAGES: 4,
-             event_accumulator.AUDIO: 4,
-             event_accumulator.SCALARS: 0,
-             event_accumulator.HISTOGRAMS: 1,
-         })
-
-     ea.Reload()
-     ea.Tags()
-
-     scl = ea.Scalars('loss/g/total')
-
-     listwstep = {}
-
-     for val in scl:
-         if (val.step // save_freq) * save_freq in [val.step for val in scl]:
-             listwstep[float(val.value)] = (val.step // save_freq) * save_freq
-
-     lowest_vals = sorted(listwstep.keys())[:lastmdls]
-
-     sorted_dict = {value: step for value, step in listwstep.items() if value in lowest_vals}
-
-     return sorted_dict
-
- def selectweights(model_name, file_dict, weights_dir, lowestval_weight_dir):
-     os.makedirs(lowestval_weight_dir, exist_ok=True)
-     logdir = []
-     files = []
-     lbldict = {
-         'Values': {},
-         'Names': {}
-     }
-     weights_dir_path = os.path.join(weights_dir, "")
-     low_val_path = os.path.join(os.getcwd(), os.path.join(lowestval_weight_dir, ""))
-
-     try:
-         # the dict arrives as a string from the web UI; pass real dicts through unchanged
-         file_dict = LEval(file_dict) if isinstance(file_dict, str) else file_dict
-     except Exception as e:
-         print(f"Error! {e}")
-         return f"Couldn't load tensorboard file! {e}"
-
-     weights = [f for f in os.scandir(weights_dir)]
-     for key, value in file_dict.items():
-         pattern = fr"^{model_name}_.*_s{value}\.pth$"
-         matching_weights = [f.name for f in weights if f.is_file() and RSearch(pattern, f.name)]
-         for weight in matching_weights:
-             source_path = weights_dir_path + weight
-             destination_path = os.path.join(lowestval_weight_dir, weight)
-
-             copy2(source_path, destination_path)
-
-             logdir.append(f"File = {weight} Value: {key}, Step: {value}")
-
-             lbldict['Names'][weight] = weight
-             lbldict['Values'][weight] = key
-
-             files.append(low_val_path + weight)
-
-             print(f"File = {weight} Value: {key}, Step: {value}")
-
-         yield ('\n'.join(logdir), files, pd.DataFrame(lbldict))
-
-     return ''.join(logdir), files, pd.DataFrame(lbldict)
-
-
- if __name__ == "__main__":
-     model = str(input("Enter the name of the model: "))
-     sav_freq = int(input("Enter save frequency of the model: "))
-     lastmdls = int(input("Enter how many lowest-loss models to keep: "))
-     ds = main(model, sav_freq, lastmdls)
-
-     if ds:
-         # selectweights is a generator, so it must be iterated for the copies to run
-         for _ in selectweights(model, ds, weights_dir, lowestval_weight_dir):
-             pass
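The core of the deleted script above is the selection step: snap each logged loss value to its nearest saved checkpoint step, then keep the N lowest values. A minimal standalone sketch of that selection (pure Python, with a hypothetical loss curve; `lowest_checkpoints` is an illustrative name, not part of the original file):

```python
def lowest_checkpoints(scalars, save_freq, keep):
    """scalars: list of (step, value) pairs from a TensorBoard scalar tag.
    Returns {loss_value: checkpoint_step} for the `keep` lowest values,
    with each step rounded down to the nearest multiple of save_freq."""
    by_value = {}
    for step, value in scalars:
        # snap to the checkpoint that was actually written for this step
        by_value[float(value)] = (step // save_freq) * save_freq
    lowest = sorted(by_value)[:keep]
    return {v: by_value[v] for v in lowest}

# hypothetical loss curve; checkpoints assumed saved every 50 steps
scalars = [(10, 0.9), (60, 0.5), (120, 0.7), (160, 0.4)]
print(lowest_checkpoints(scalars, 50, 2))  # {0.4: 150, 0.5: 50}
```

Note that, as in the original, a dict keyed by loss value silently collapses duplicate losses; that is acceptable here because only the lowest few are kept.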
spaces/AIBoy1993/segment_anything_webui/inference.py DELETED
@@ -1,188 +0,0 @@
- import os
- import cv2
- import torch
- import numpy as np
- import gradio as gr
- from PIL import Image, ImageDraw
- from segment_anything import sam_model_registry, SamAutomaticMaskGenerator, SamPredictor
- from transformers import OwlViTProcessor, OwlViTForObjectDetection
- import gc
-
- models = {
-     'vit_b': './checkpoints/sam_vit_b_01ec64.pth',
-     'vit_l': './checkpoints/sam_vit_l_0b3195.pth',
-     'vit_h': './checkpoints/sam_vit_h_4b8939.pth'
- }
-
- image_examples = [
-     [os.path.join(os.path.dirname(__file__), "./images/53960-scaled.jpg"), 0, []],
-     [os.path.join(os.path.dirname(__file__), "./images/2388455-scaled.jpg"), 1, []],
-     [os.path.join(os.path.dirname(__file__), "./images/1.jpg"), 2, []],
-     [os.path.join(os.path.dirname(__file__), "./images/2.jpg"), 3, []],
-     [os.path.join(os.path.dirname(__file__), "./images/3.jpg"), 4, []],
-     [os.path.join(os.path.dirname(__file__), "./images/4.jpg"), 5, []],
-     [os.path.join(os.path.dirname(__file__), "./images/5.jpg"), 6, []],
-     [os.path.join(os.path.dirname(__file__), "./images/6.jpg"), 7, []],
-     [os.path.join(os.path.dirname(__file__), "./images/7.jpg"), 8, []],
-     [os.path.join(os.path.dirname(__file__), "./images/8.jpg"), 9, []]
- ]
-
-
- def plot_boxes(img, boxes):
-     img_pil = Image.fromarray(np.uint8(img * 255)).convert('RGB')
-     draw = ImageDraw.Draw(img_pil)
-     for box in boxes:
-         color = tuple(np.random.randint(0, 255, size=3).tolist())
-         x0, y0, x1, y1 = box
-         x0, y0, x1, y1 = int(x0), int(y0), int(x1), int(y1)
-         draw.rectangle([x0, y0, x1, y1], outline=color, width=6)
-     return img_pil
-
-
- def segment_one(img, mask_generator, seed=None):
-     if seed is not None:
-         np.random.seed(seed)
-     masks = mask_generator.generate(img)
-     sorted_anns = sorted(masks, key=(lambda x: x['area']), reverse=True)
-     mask_all = np.ones((img.shape[0], img.shape[1], 3))
-     for ann in sorted_anns:
-         m = ann['segmentation']
-         color_mask = np.random.random((1, 3)).tolist()[0]
-         for i in range(3):
-             mask_all[m == True, i] = color_mask[i]
-     result = img / 255 * 0.3 + mask_all * 0.7
-     return result, mask_all
-
-
- def generator_inference(device, model_type, points_per_side, pred_iou_thresh, stability_score_thresh,
-                         min_mask_region_area, stability_score_offset, box_nms_thresh, crop_n_layers, crop_nms_thresh,
-                         input_x, progress=gr.Progress()):
-     # sam model
-     sam = sam_model_registry[model_type](checkpoint=models[model_type]).to(device)
-     mask_generator = SamAutomaticMaskGenerator(
-         sam,
-         points_per_side=points_per_side,
-         pred_iou_thresh=pred_iou_thresh,
-         stability_score_thresh=stability_score_thresh,
-         stability_score_offset=stability_score_offset,
-         box_nms_thresh=box_nms_thresh,
-         crop_n_layers=crop_n_layers,
-         crop_nms_thresh=crop_nms_thresh,
-         crop_overlap_ratio=512 / 1500,
-         crop_n_points_downscale_factor=1,
-         point_grids=None,
-         min_mask_region_area=min_mask_region_area,
-         output_mode='binary_mask'
-     )
-
-     # input is image, type: numpy
-     if type(input_x) == np.ndarray:
-         result, mask_all = segment_one(input_x, mask_generator)
-         return result, mask_all
-     elif isinstance(input_x, str):  # input is video, type: path (str)
-         cap = cv2.VideoCapture(input_x)  # read video
-         frames_num = cap.get(cv2.CAP_PROP_FRAME_COUNT)
-         W, H = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
-         fps = int(cap.get(cv2.CAP_PROP_FPS))
-         out = cv2.VideoWriter("output.mp4", cv2.VideoWriter_fourcc('x', '2', '6', '4'), fps, (W, H), isColor=True)
-         for _ in progress.tqdm(range(int(frames_num)),
-                                desc='Processing video ({} frames, size {}x{})'.format(int(frames_num), W, H)):
-             ret, frame = cap.read()  # read a frame
-             result, mask_all = segment_one(frame, mask_generator, seed=2023)
-             result = (result * 255).astype(np.uint8)
-             out.write(result)
-         out.release()
-         cap.release()
-         return 'output.mp4'
-
-
- def predictor_inference(device, model_type, input_x, input_text, selected_points, owl_vit_threshold=0.1):
-     # sam model
-     sam = sam_model_registry[model_type](checkpoint=models[model_type]).to(device)
-     predictor = SamPredictor(sam)
-     predictor.set_image(input_x)  # Process the image to produce an image embedding
-
-     if input_text != '':
-         # split input text
-         input_text = [input_text.split(',')]
-         print(input_text)
-         # OWL-ViT model
-         processor = OwlViTProcessor.from_pretrained('./checkpoints/models--google--owlvit-base-patch32')
-         owlvit_model = OwlViTForObjectDetection.from_pretrained("./checkpoints/models--google--owlvit-base-patch32").to(device)
-         # get outputs
-         input_text = processor(text=input_text, images=input_x, return_tensors="pt").to(device)
-         outputs = owlvit_model(**input_text)
-         target_size = torch.Tensor([input_x.shape[:2]]).to(device)
-         results = processor.post_process_object_detection(outputs=outputs, target_sizes=target_size,
-                                                           threshold=owl_vit_threshold)
-
-         # get the box with best score
-         scores = torch.sigmoid(outputs.logits)
-         # best_scores, best_idxs = torch.topk(scores, k=1, dim=1)
-         # best_idxs = best_idxs.squeeze(1).tolist()
-
-         i = 0  # Retrieve predictions for the first image for the corresponding text queries
-         boxes_tensor = results[i]["boxes"]  # [best_idxs]
-         boxes = boxes_tensor.cpu().detach().numpy()
-         # boxes = boxes[np.newaxis, :, :]
-         transformed_boxes = predictor.transform.apply_boxes_torch(torch.Tensor(boxes).to(device),
-                                                                   input_x.shape[:2])  # apply transform to original boxes
-         # transformed_boxes = transformed_boxes.unsqueeze(0)
-         print(transformed_boxes.size(), boxes.shape)
-     else:
-         transformed_boxes = None
-
-     # points
-     if len(selected_points) != 0:
-         points = torch.Tensor([p for p, _ in selected_points]).to(device).unsqueeze(1)
-         labels = torch.Tensor([int(l) for _, l in selected_points]).to(device).unsqueeze(1)
-         transformed_points = predictor.transform.apply_coords_torch(points, input_x.shape[:2])
-         print(points.size(), transformed_points.size(), labels.size(), input_x.shape, points)
-     else:
-         transformed_points, labels = None, None
-
-     # predict segmentation according to the boxes
-     masks, scores, logits = predictor.predict_torch(
-         point_coords=transformed_points,
-         point_labels=labels,
-         boxes=transformed_boxes,  # only one box
-         multimask_output=False,
-     )
-     masks = masks.cpu().detach().numpy()
-     mask_all = np.ones((input_x.shape[0], input_x.shape[1], 3))
-     for ann in masks:
-         color_mask = np.random.random((1, 3)).tolist()[0]
-         for i in range(3):
-             mask_all[ann[0] == True, i] = color_mask[i]
-     img = input_x / 255 * 0.3 + mask_all * 0.7
-     if input_text != '':
-         img = plot_boxes(img, boxes_tensor)  # image + mask + boxes
-
-     # free the memory
-     if input_text != '':
-         owlvit_model.cpu()
-         del owlvit_model
-         del input_text
-         gc.collect()
-         torch.cuda.empty_cache()
-
-     return img, mask_all
-
-
- def run_inference(device, model_type, points_per_side, pred_iou_thresh, stability_score_thresh, min_mask_region_area,
-                   stability_score_offset, box_nms_thresh, crop_n_layers, crop_nms_thresh, owl_vit_threshold, input_x,
-                   input_text, selected_points):
-     # if input_x is int, the image is selected from examples
-     if isinstance(input_x, int):
-         input_x = cv2.imread(image_examples[input_x][0])
-         input_x = cv2.cvtColor(input_x, cv2.COLOR_BGR2RGB)
-     if (input_text != '' and not isinstance(input_x, str)) or len(selected_points) != 0:  # user input text or points
-         print('use predictor_inference')
-         print('prompt text: ', input_text)
-         print('prompt points length: ', len(selected_points))
-         return predictor_inference(device, model_type, input_x, input_text, selected_points, owl_vit_threshold)
-     else:
-         print('use generator_inference')
-         return generator_inference(device, model_type, points_per_side, pred_iou_thresh, stability_score_thresh,
-                                    min_mask_region_area, stability_score_offset, box_nms_thresh, crop_n_layers,
-                                    crop_nms_thresh, input_x)
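Both `segment_one` and `predictor_inference` above visualize results the same way: paint each binary mask a random color, then alpha-blend 30% image with 70% color layer. A minimal NumPy sketch of just that blend, independent of SAM (the function name `overlay_masks` and the toy data are illustrative, not part of the space):

```python
import numpy as np

def overlay_masks(img, masks, seed=2023):
    """img: uint8 RGB array (H, W, 3); masks: iterable of boolean (H, W) arrays.
    Returns (blended_float_image, color_layer), both in [0, 1]."""
    rng = np.random.default_rng(seed)
    mask_all = np.ones((img.shape[0], img.shape[1], 3))  # white background
    for m in masks:
        mask_all[m] = rng.random(3)          # one random color per mask region
    blended = img / 255 * 0.3 + mask_all * 0.7
    return blended, mask_all

img = np.zeros((4, 4, 3), dtype=np.uint8)    # toy black image
m = np.zeros((4, 4), dtype=bool)
m[:2] = True                                 # mask covers the top half
blended, mask_all = overlay_masks(img, [m])
```

Fixing the seed (as the video path does with `seed=2023`) keeps mask colors stable across frames.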
spaces/AIConsultant/MusicGen/audiocraft/data/audio_utils.py DELETED
@@ -1,177 +0,0 @@
- # Copyright (c) Meta Platforms, Inc. and affiliates.
- # All rights reserved.
- #
- # This source code is licensed under the license found in the
- # LICENSE file in the root directory of this source tree.
- """Various utilities for audio conversion (pcm format, sample rate and channels),
- and volume normalization."""
- import sys
- import typing as tp
-
- import julius
- import torch
- import torchaudio
-
-
- def convert_audio_channels(wav: torch.Tensor, channels: int = 2) -> torch.Tensor:
-     """Convert audio to the given number of channels.
-
-     Args:
-         wav (torch.Tensor): Audio wave of shape [B, C, T].
-         channels (int): Expected number of channels as output.
-     Returns:
-         torch.Tensor: Downmixed or unchanged audio wave [B, C, T].
-     """
-     *shape, src_channels, length = wav.shape
-     if src_channels == channels:
-         pass
-     elif channels == 1:
-         # Case 1:
-         # The caller asked 1-channel audio, and the stream has multiple
-         # channels, downmix all channels.
-         wav = wav.mean(dim=-2, keepdim=True)
-     elif src_channels == 1:
-         # Case 2:
-         # The caller asked for multiple channels, but the input file has
-         # a single channel, replicate the audio over all channels.
-         wav = wav.expand(*shape, channels, length)
-     elif src_channels >= channels:
-         # Case 3:
-         # The caller asked for multiple channels, and the input file has
-         # more channels than requested. In that case return the first channels.
-         wav = wav[..., :channels, :]
-     else:
-         # Case 4: What is a reasonable choice here?
-         raise ValueError('The audio file has less channels than requested but is not mono.')
-     return wav
-
-
- def convert_audio(wav: torch.Tensor, from_rate: float,
-                   to_rate: float, to_channels: int) -> torch.Tensor:
-     """Convert audio to new sample rate and number of audio channels."""
-     wav = julius.resample_frac(wav, int(from_rate), int(to_rate))
-     wav = convert_audio_channels(wav, to_channels)
-     return wav
-
-
- def normalize_loudness(wav: torch.Tensor, sample_rate: int, loudness_headroom_db: float = 14,
-                        loudness_compressor: bool = False, energy_floor: float = 2e-3):
-     """Normalize an input signal to a user loudness in dB LKFS.
-     Audio loudness is defined according to the ITU-R BS.1770-4 recommendation.
-
-     Args:
-         wav (torch.Tensor): Input multichannel audio data.
-         sample_rate (int): Sample rate.
-         loudness_headroom_db (float): Target loudness of the output in dB LUFS.
-         loudness_compressor (bool): Uses tanh for soft clipping.
-         energy_floor (float): anything below that RMS level will not be rescaled.
-     Returns:
-         torch.Tensor: Loudness normalized output data.
-     """
-     energy = wav.pow(2).mean().sqrt().item()
-     if energy < energy_floor:
-         return wav
-     transform = torchaudio.transforms.Loudness(sample_rate)
-     input_loudness_db = transform(wav).item()
-     # calculate the gain needed to scale to the desired loudness level
-     delta_loudness = -loudness_headroom_db - input_loudness_db
-     gain = 10.0 ** (delta_loudness / 20.0)
-     output = gain * wav
-     if loudness_compressor:
-         output = torch.tanh(output)
-     assert output.isfinite().all(), (input_loudness_db, wav.pow(2).mean().sqrt())
-     return output
-
-
- def _clip_wav(wav: torch.Tensor, log_clipping: bool = False, stem_name: tp.Optional[str] = None) -> None:
-     """Utility function to clip the audio with logging if specified."""
-     max_scale = wav.abs().max()
-     if log_clipping and max_scale > 1:
-         clamp_prob = (wav.abs() > 1).float().mean().item()
-         print(f"CLIPPING {stem_name or ''} happening with proba (a bit of clipping is okay):",
-               clamp_prob, "maximum scale: ", max_scale.item(), file=sys.stderr)
-     # wav.clamp_(-1, 1)
-     wav = wav.clone().clamp_(-1, 1)
-
-
- def normalize_audio(wav: torch.Tensor, normalize: bool = True,
-                     strategy: str = 'peak', peak_clip_headroom_db: float = 1,
-                     rms_headroom_db: float = 18, loudness_headroom_db: float = 14,
-                     loudness_compressor: bool = False, log_clipping: bool = False,
-                     sample_rate: tp.Optional[int] = None,
-                     stem_name: tp.Optional[str] = None) -> torch.Tensor:
-     """Normalize the audio according to the prescribed strategy (see after).
-
-     Args:
-         wav (torch.Tensor): Audio data.
-         normalize (bool): if `True` (default), normalizes according to the prescribed
-             strategy (see after). If `False`, the strategy is only used in case clipping
-             would happen.
-         strategy (str): Can be either 'clip', 'peak', or 'rms'. Default is 'peak',
-             i.e. audio is normalized by its largest value. RMS normalizes by root-mean-square
-             with extra headroom to avoid clipping. 'clip' just clips.
-         peak_clip_headroom_db (float): Headroom in dB when doing 'peak' or 'clip' strategy.
-         rms_headroom_db (float): Headroom in dB when doing 'rms' strategy. This must be much larger
-             than the `peak_clip` one to avoid further clipping.
-         loudness_headroom_db (float): Target loudness for loudness normalization.
-         loudness_compressor (bool): If True, uses tanh based soft clipping.
-         log_clipping (bool): If True, basic logging on stderr when clipping still
-             occurs despite strategy (only for 'rms').
-         sample_rate (int): Sample rate for the audio data (required for loudness).
-         stem_name (str, optional): Stem name for clipping logging.
-     Returns:
-         torch.Tensor: Normalized audio.
-     """
-     scale_peak = 10 ** (-peak_clip_headroom_db / 20)
-     scale_rms = 10 ** (-rms_headroom_db / 20)
-     if strategy == 'peak':
-         rescaling = (scale_peak / wav.abs().max())
-         if normalize or rescaling < 1:
-             wav = wav * rescaling
-     elif strategy == 'clip':
-         wav = wav.clamp(-scale_peak, scale_peak)
-     elif strategy == 'rms':
-         mono = wav.mean(dim=0)
-         rescaling = scale_rms / mono.pow(2).mean().sqrt()
-         if normalize or rescaling < 1:
-             wav = wav * rescaling
-         _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name)
-     elif strategy == 'loudness':
-         assert sample_rate is not None, "Loudness normalization requires sample rate."
-         wav = normalize_loudness(wav, sample_rate, loudness_headroom_db, loudness_compressor)
-         _clip_wav(wav, log_clipping=log_clipping, stem_name=stem_name)
-     else:
-         assert wav.abs().max() < 1
-         assert strategy == '' or strategy == 'none', f"Unexpected strategy: '{strategy}'"
-     return wav
-
-
- def f32_pcm(wav: torch.Tensor) -> torch.Tensor:
-     """Convert audio to float 32 bits PCM format.
-     """
-     if wav.dtype.is_floating_point:
-         return wav
-     elif wav.dtype == torch.int16:
-         return wav.float() / 2**15
-     elif wav.dtype == torch.int32:
-         return wav.float() / 2**31
-     raise ValueError(f"Unsupported wav dtype: {wav.dtype}")
-
-
- def i16_pcm(wav: torch.Tensor) -> torch.Tensor:
-     """Convert audio to int 16 bits PCM format.
-
-     ..Warning:: There exist many formula for doing this conversion. None are perfect
-     due to the asymmetry of the int16 range. One either have possible clipping, DC offset,
-     or inconsistencies with f32_pcm. If the given wav doesn't have enough headroom,
-     it is possible that `i16_pcm(f32_pcm(wav)) != Identity`.
-     """
-     if wav.dtype.is_floating_point:
-         assert wav.abs().max() <= 1
-         candidate = (wav * 2 ** 15).round()
-         if candidate.max() >= 2 ** 15:  # clipping would occur
-             candidate = (wav * (2 ** 15 - 1)).round()
-         return candidate.short()
-     else:
-         assert wav.dtype == torch.int16
-         return wav
spaces/AIFILMS/generate_human_motion/VQ-Trans/GPT_eval_multi.py DELETED
@@ -1,121 +0,0 @@
- import os
- import torch
- import numpy as np
- from torch.utils.tensorboard import SummaryWriter
- import json
- import clip
-
- import options.option_transformer as option_trans
- import models.vqvae as vqvae
- import utils.utils_model as utils_model
- import utils.eval_trans as eval_trans
- from dataset import dataset_TM_eval
- import models.t2m_trans as trans
- from options.get_eval_option import get_opt
- from models.evaluator_wrapper import EvaluatorModelWrapper
- import warnings
- warnings.filterwarnings('ignore')
-
- ##### ---- Exp dirs ---- #####
- args = option_trans.get_args_parser()
- torch.manual_seed(args.seed)
-
- args.out_dir = os.path.join(args.out_dir, f'{args.exp_name}')
- os.makedirs(args.out_dir, exist_ok=True)
-
- ##### ---- Logger ---- #####
- logger = utils_model.get_logger(args.out_dir)
- writer = SummaryWriter(args.out_dir)
- logger.info(json.dumps(vars(args), indent=4, sort_keys=True))
-
- from utils.word_vectorizer import WordVectorizer
- w_vectorizer = WordVectorizer('./glove', 'our_vab')
- val_loader = dataset_TM_eval.DATALoader(args.dataname, True, 32, w_vectorizer)
-
- dataset_opt_path = 'checkpoints/kit/Comp_v6_KLD005/opt.txt' if args.dataname == 'kit' else 'checkpoints/t2m/Comp_v6_KLD005/opt.txt'
-
- wrapper_opt = get_opt(dataset_opt_path, torch.device('cuda'))
- eval_wrapper = EvaluatorModelWrapper(wrapper_opt)
-
- ##### ---- Network ---- #####
-
- ## load clip model and datasets
- clip_model, clip_preprocess = clip.load("ViT-B/32", device=torch.device('cuda'), jit=False, download_root='/apdcephfs_cq2/share_1290939/maelyszhang/.cache/clip')  # Must set jit=False for training
- clip.model.convert_weights(clip_model)  # Actually this line is unnecessary since clip by default already on float16
- clip_model.eval()
- for p in clip_model.parameters():
-     p.requires_grad = False
-
- net = vqvae.HumanVQVAE(args,  ## use args to define different parameters in different quantizers
-                        args.nb_code,
-                        args.code_dim,
-                        args.output_emb_width,
-                        args.down_t,
-                        args.stride_t,
-                        args.width,
-                        args.depth,
-                        args.dilation_growth_rate)
-
-
- trans_encoder = trans.Text2Motion_Transformer(num_vq=args.nb_code,
-                                               embed_dim=args.embed_dim_gpt,
-                                               clip_dim=args.clip_dim,
-                                               block_size=args.block_size,
-                                               num_layers=args.num_layers,
-                                               n_head=args.n_head_gpt,
-                                               drop_out_rate=args.drop_out_rate,
-                                               fc_rate=args.ff_rate)
-
-
- print('loading checkpoint from {}'.format(args.resume_pth))
- ckpt = torch.load(args.resume_pth, map_location='cpu')
- net.load_state_dict(ckpt['net'], strict=True)
- net.eval()
- net.cuda()
-
- if args.resume_trans is not None:
-     print('loading transformer checkpoint from {}'.format(args.resume_trans))
-     ckpt = torch.load(args.resume_trans, map_location='cpu')
-     trans_encoder.load_state_dict(ckpt['trans'], strict=True)
- trans_encoder.train()
- trans_encoder.cuda()
-
-
- fid = []
- div = []
- top1 = []
- top2 = []
- top3 = []
- matching = []
- multi = []
- repeat_time = 20
-
-
- for i in range(repeat_time):
-     best_fid, best_iter, best_div, best_top1, best_top2, best_top3, best_matching, best_multi, writer, logger = eval_trans.evaluation_transformer_test(args.out_dir, val_loader, net, trans_encoder, logger, writer, 0, best_fid=1000, best_iter=0, best_div=100, best_top1=0, best_top2=0, best_top3=0, best_matching=100, best_multi=0, clip_model=clip_model, eval_wrapper=eval_wrapper, draw=False, savegif=False, save=False, savenpy=(i==0))
-     fid.append(best_fid)
-     div.append(best_div)
-     top1.append(best_top1)
-     top2.append(best_top2)
-     top3.append(best_top3)
-     matching.append(best_matching)
-     multi.append(best_multi)
-
- print('final result:')
- print('fid: ', sum(fid)/repeat_time)
- print('div: ', sum(div)/repeat_time)
- print('top1: ', sum(top1)/repeat_time)
- print('top2: ', sum(top2)/repeat_time)
- print('top3: ', sum(top3)/repeat_time)
- print('matching: ', sum(matching)/repeat_time)
- print('multi: ', sum(multi)/repeat_time)
-
- fid = np.array(fid)
- div = np.array(div)
- top1 = np.array(top1)
- top2 = np.array(top2)
- top3 = np.array(top3)
- matching = np.array(matching)
- multi = np.array(multi)
- msg_final = f"FID. {np.mean(fid):.3f}, conf. {np.std(fid)*1.96/np.sqrt(repeat_time):.3f}, Diversity. {np.mean(div):.3f}, conf. {np.std(div)*1.96/np.sqrt(repeat_time):.3f}, TOP1. {np.mean(top1):.3f}, conf. {np.std(top1)*1.96/np.sqrt(repeat_time):.3f}, TOP2. {np.mean(top2):.3f}, conf. {np.std(top2)*1.96/np.sqrt(repeat_time):.3f}, TOP3. {np.mean(top3):.3f}, conf. {np.std(top3)*1.96/np.sqrt(repeat_time):.3f}, Matching. {np.mean(matching):.3f}, conf. {np.std(matching)*1.96/np.sqrt(repeat_time):.3f}, Multi. {np.mean(multi):.3f}, conf. {np.std(multi)*1.96/np.sqrt(repeat_time):.3f}"
- logger.info(msg_final)
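The final log line above reports each metric as `mean, conf.` where the confidence half-width is `np.std(values) * 1.96 / np.sqrt(repeat_time)`. A pure-Python sketch of that computation (the name `mean_conf` and the sample values are illustrative):

```python
import math

def mean_conf(values):
    """Mean and 95% confidence half-width over repeated evaluation runs,
    matching std * 1.96 / sqrt(n) in the script's msg_final line."""
    n = len(values)
    mean = sum(values) / n
    # population standard deviation, like np.std's default (ddof=0)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return mean, std * 1.96 / math.sqrt(n)

m, conf = mean_conf([0.2, 0.4, 0.3, 0.5])  # hypothetical FID values over 4 runs
```

Note this uses the normal-approximation interval with the population std, exactly as the script does; with only 20 repeats a t-distribution interval would be slightly wider, but the script's convention is kept here.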
spaces/AIFILMS/generate_human_motion/VQ-Trans/visualize/joints2smpl/src/smplify.py DELETED
@@ -1,279 +0,0 @@
- import torch
- import os, sys
- import pickle
- import smplx
- import numpy as np
-
- sys.path.append(os.path.dirname(__file__))
- from customloss import (camera_fitting_loss,
-                         body_fitting_loss,
-                         camera_fitting_loss_3d,
-                         body_fitting_loss_3d,
-                         )
- from prior import MaxMixturePrior
- from visualize.joints2smpl.src import config
-
-
-
- @torch.no_grad()
- def guess_init_3d(model_joints,
-                   j3d,
-                   joints_category="orig"):
-     """Initialize the camera translation via triangle similarity, by using the torso joints.
-     :param model_joints: SMPL model with pre joints
-     :param j3d: 25x3 array of Kinect Joints
-     :returns: 3D vector corresponding to the estimated camera translation
-     """
-     # get the indexed four
-     gt_joints = ['RHip', 'LHip', 'RShoulder', 'LShoulder']
-     gt_joints_ind = [config.JOINT_MAP[joint] for joint in gt_joints]
-
-     if joints_category == "orig":
-         joints_ind_category = [config.JOINT_MAP[joint] for joint in gt_joints]
-     elif joints_category == "AMASS":
-         joints_ind_category = [config.AMASS_JOINT_MAP[joint] for joint in gt_joints]
-     else:
-         print("NO SUCH JOINTS CATEGORY!")
-
-     sum_init_t = (j3d[:, joints_ind_category] - model_joints[:, gt_joints_ind]).sum(dim=1)
-     init_t = sum_init_t / 4.0
-     return init_t
-
-
- # SMPLify 3D
- class SMPLify3D():
-     """Implementation of SMPLify, use 3D joints."""
-
-     def __init__(self,
-                  smplxmodel,
-                  step_size=1e-2,
-                  batch_size=1,
-                  num_iters=100,
-                  use_collision=False,
-                  use_lbfgs=True,
-                  joints_category="orig",
-                  device=torch.device('cuda:0'),
-                  ):
-
-         # Store options
-         self.batch_size = batch_size
-         self.device = device
-         self.step_size = step_size
-
-         self.num_iters = num_iters
-         # --- choose optimizer
-         self.use_lbfgs = use_lbfgs
-         # GMM pose prior
-         self.pose_prior = MaxMixturePrior(prior_folder=config.GMM_MODEL_DIR,
-                                           num_gaussians=8,
-                                           dtype=torch.float32).to(device)
-         # collision part
-         self.use_collision = use_collision
-         if self.use_collision:
-             self.part_segm_fn = config.Part_Seg_DIR
-
-         # reLoad SMPL-X model
-         self.smpl = smplxmodel
-
-         self.model_faces = smplxmodel.faces_tensor.view(-1)
-
-         # select joint joint_category
-         self.joints_category = joints_category
-
-         if joints_category == "orig":
-             self.smpl_index = config.full_smpl_idx
-             self.corr_index = config.full_smpl_idx
-         elif joints_category == "AMASS":
-             self.smpl_index = config.amass_smpl_idx
-             self.corr_index = config.amass_idx
-         else:
-             self.smpl_index = None
-             self.corr_index = None
-             print("NO SUCH JOINTS CATEGORY!")
-
-     # ---- the main function is here ------
-     def __call__(self, init_pose, init_betas, init_cam_t, j3d, conf_3d=1.0, seq_ind=0):
-         """Perform body fitting.
-         Input:
-             init_pose: SMPL pose estimate
-             init_betas: SMPL betas estimate
-             init_cam_t: Camera translation estimate
-             j3d: joints 3d aka keypoints
-             conf_3d: confidence for 3d joints
-             seq_ind: index of the sequence
-         Returns:
-             vertices: Vertices of optimized shape
-             joints: 3D joints of optimized shape
-             pose: SMPL pose parameters of optimized shape
-             betas: SMPL beta parameters of optimized shape
-             camera_translation: Camera translation
-         """
-
-         # # # add the mesh inter-section to avoid
-         search_tree = None
-         pen_distance = None
-         filter_faces = None
-
-         if self.use_collision:
-             from mesh_intersection.bvh_search_tree import BVH
-             import mesh_intersection.loss as collisions_loss
-             from mesh_intersection.filter_faces import FilterFaces
-
-             search_tree = BVH(max_collisions=8)
-
-             pen_distance = collisions_loss.DistanceFieldPenetrationLoss(
-                 sigma=0.5, point2plane=False, vectorized=True, penalize_outside=True)
-
-             if self.part_segm_fn:
-                 # Read the part segmentation
-                 part_segm_fn = os.path.expandvars(self.part_segm_fn)
-                 with open(part_segm_fn, 'rb') as faces_parents_file:
-                     face_segm_data = pickle.load(faces_parents_file, encoding='latin1')
-                 faces_segm = face_segm_data['segm']
-                 faces_parents = face_segm_data['parents']
-                 # Create the module used to filter invalid collision pairs
-                 filter_faces = FilterFaces(
-                     faces_segm=faces_segm, faces_parents=faces_parents,
-                     ign_part_pairs=None).to(device=self.device)
-
-
-         # Split SMPL pose to body pose and global orientation
-         body_pose = init_pose[:, 3:].detach().clone()
-         global_orient = init_pose[:, :3].detach().clone()
-         betas = init_betas.detach().clone()
-
-         # use guess 3d to get the initial
-         smpl_output = self.smpl(global_orient=global_orient,
-                                 body_pose=body_pose,
-                                 betas=betas)
-         model_joints = smpl_output.joints
-
-         init_cam_t = guess_init_3d(model_joints, j3d, self.joints_category).unsqueeze(1).detach()
-         camera_translation = init_cam_t.clone()
-
-         preserve_pose = init_pose[:, 3:].detach().clone()
-         # -------------Step 1: Optimize camera translation and body orientation--------
-         # Optimize only camera translation and body orientation
-         body_pose.requires_grad = False
-         betas.requires_grad = False
-         global_orient.requires_grad = True
-         camera_translation.requires_grad = True
-
-         camera_opt_params = [global_orient, camera_translation]
-
-         if self.use_lbfgs:
-             camera_optimizer = torch.optim.LBFGS(camera_opt_params, max_iter=self.num_iters,
-                                                  lr=self.step_size, line_search_fn='strong_wolfe')
-             for i in range(10):
-                 def closure():
-                     camera_optimizer.zero_grad()
-                     smpl_output = self.smpl(global_orient=global_orient,
-                                             body_pose=body_pose,
-                                             betas=betas)
-                     model_joints = smpl_output.joints
-                     # print('model_joints', model_joints.shape)
-                     # print('camera_translation', camera_translation.shape)
-                     # print('init_cam_t', init_cam_t.shape)
-                     # print('j3d', j3d.shape)
-                     loss = camera_fitting_loss_3d(model_joints, camera_translation,
-                                                   init_cam_t, j3d, self.joints_category)
-                     loss.backward()
-                     return loss
-
-                 camera_optimizer.step(closure)
-         else:
-             camera_optimizer = torch.optim.Adam(camera_opt_params, lr=self.step_size, betas=(0.9, 0.999))
-
-             for i in range(20):
-                 smpl_output = self.smpl(global_orient=global_orient,
-                                         body_pose=body_pose,
-                                         betas=betas)
-                 model_joints = smpl_output.joints
-
-                 loss = camera_fitting_loss_3d(model_joints[:, self.smpl_index], camera_translation,
-                                               init_cam_t, j3d[:, self.corr_index], self.joints_category)
-                 camera_optimizer.zero_grad()
-                 loss.backward()
-                 camera_optimizer.step()
-
-         # Fix camera translation after optimizing camera
-         # --------Step 2: Optimize body joints --------------------------
-         # Optimize only the body pose and global orientation of the body
-         body_pose.requires_grad = True
-         global_orient.requires_grad = True
-         camera_translation.requires_grad = True
-
-         # --- if we use the sequence, fix the shape
-         if seq_ind == 0:
-             betas.requires_grad = True
-             body_opt_params = [body_pose, betas, global_orient, camera_translation]
-         else:
-             betas.requires_grad = False
-             body_opt_params = [body_pose, global_orient, camera_translation]
-
-         if self.use_lbfgs:
-             body_optimizer = torch.optim.LBFGS(body_opt_params, max_iter=self.num_iters,
-                                                lr=self.step_size, line_search_fn='strong_wolfe')
-             for i in range(self.num_iters):
-                 def closure():
-                     body_optimizer.zero_grad()
-                     smpl_output = self.smpl(global_orient=global_orient,
-                                             body_pose=body_pose,
-                                             betas=betas)
-                     model_joints = smpl_output.joints
-                     model_vertices = smpl_output.vertices
-
-                     loss = body_fitting_loss_3d(body_pose, preserve_pose, betas, model_joints[:, self.smpl_index], camera_translation,
-                                                 j3d[:, self.corr_index], self.pose_prior,
-                                                 joints3d_conf=conf_3d,
-                                                 joint_loss_weight=600.0,
-                                                 pose_preserve_weight=5.0,
-                                                 use_collision=self.use_collision,
-                                                 model_vertices=model_vertices, model_faces=self.model_faces,
-                                                 search_tree=search_tree, pen_distance=pen_distance, filter_faces=filter_faces)
-                     loss.backward()
-                     return loss
-
-                 body_optimizer.step(closure)
-         else:
-             body_optimizer = torch.optim.Adam(body_opt_params, lr=self.step_size, betas=(0.9, 0.999))
-
-             for i in range(self.num_iters):
-                 smpl_output = self.smpl(global_orient=global_orient,
-                                         body_pose=body_pose,
-                                         betas=betas)
-                 model_joints = smpl_output.joints
-                 model_vertices = smpl_output.vertices
-
-                 loss = body_fitting_loss_3d(body_pose, preserve_pose, betas, model_joints[:, self.smpl_index], camera_translation,
-                                             j3d[:, self.corr_index], self.pose_prior,
-                                             joints3d_conf=conf_3d,
-                                             joint_loss_weight=600.0,
-                                             use_collision=self.use_collision,
-                                             model_vertices=model_vertices, model_faces=self.model_faces,
-                                             search_tree=search_tree, pen_distance=pen_distance, filter_faces=filter_faces)
-                 body_optimizer.zero_grad()
-                 loss.backward()
-                 body_optimizer.step()
-
-         # Get final loss value
-         with torch.no_grad():
-             smpl_output = self.smpl(global_orient=global_orient,
-                                     body_pose=body_pose,
-                                     betas=betas, return_full_pose=True)
-             model_joints = smpl_output.joints
-             model_vertices = smpl_output.vertices
266
-
267
- final_loss = body_fitting_loss_3d(body_pose, preserve_pose, betas, model_joints[:, self.smpl_index], camera_translation,
268
- j3d[:, self.corr_index], self.pose_prior,
269
- joints3d_conf=conf_3d,
270
- joint_loss_weight=600.0,
271
- use_collision=self.use_collision, model_vertices=model_vertices, model_faces=self.model_faces,
272
- search_tree=search_tree, pen_distance=pen_distance, filter_faces=filter_faces)
273
-
274
- vertices = smpl_output.vertices.detach()
275
- joints = smpl_output.joints.detach()
276
- pose = torch.cat([global_orient, body_pose], dim=-1).detach()
277
- betas = betas.detach()
278
-
279
- return vertices, joints, pose, betas, camera_translation, final_loss
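The deleted `smplify.py` hunk above follows a two-stage fitting recipe: Step 1 fits the camera translation while the body parameters are frozen, then Step 2 unfreezes the pose (and optionally the shape) and optimizes against the 3D joints. A toy NumPy least-squares analogue of that alternation is sketched below; `model`, `target`, the translation `t`, and the per-axis scale `s` are illustrative stand-ins, not the SMPL quantities, and the closed-form/gradient-descent steps replace LBFGS:

```python
import numpy as np

rng = np.random.default_rng(0)
model = rng.normal(size=(10, 3))            # toy stand-in for model joints
target = model + np.array([1.0, 2.0, 3.0])  # toy stand-in for observed joints

# Step 1 analogue: fit the translation with everything else fixed
# (closed-form for least squares: the mean residual)
t = (target - model).mean(axis=0)

# Step 2 analogue: refine a per-axis scale by gradient descent, translation fixed
s = 0.5 * np.ones(3)
for _ in range(500):
    residual = s * model + t - target
    s -= 0.1 * 2.0 * (residual * model).mean(axis=0)  # d(mean sq. error)/ds

final_loss = np.mean((s * model + t - target) ** 2)
```

The alternation matters for the same reason it does in the original code: holding one block of parameters fixed makes each sub-problem much better conditioned than optimizing everything jointly from a cold start.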
 
spaces/AIZero2HeroBootcamp/TranscriptAILearnerFromYoutube/TwoTranscriptQuotesFromIlyaSutskever.md DELETED
@@ -1,71 +0,0 @@
- https://www.youtube.com/watch?v=9EN_HoEk3KY&t=172s
-
-
- 1:42
- program the does very very well on your data then you will achieve the best
- 1:48
- generalization possible with a little bit of modification you can turn it into a precise theorem
- 1:54
- and on a very intuitive level it's easy to see what it should be the case if you
- 2:01
- have some data and you're able to find a shorter program which generates this
- 2:06
- data then you've essentially extracted all the all conceivable regularity from
- 2:11
- this data into your program and then you can use these objects to make the best predictions possible like if if you have
- 2:19
- data which is so complex but there is no way to express it as a shorter program
- 2:25
- then it means that your data is totally random there is no way to extract any regularity from it whatsoever now there
- 2:32
- is little known mathematical theory behind this and the proofs of these statements actually not even that hard
- 2:38
- but the one minor slight disappointment is that it's actually not possible at
- 2:44
- least given today's tools and understanding to find the best short program that
-
-
-
- https://youtu.be/9EN_HoEk3KY?t=442
- 5
- to talk a little bit about reinforcement learning so reinforcement learning is a framework it's a framework of evaluating
- 6:53
- agents in their ability to achieve goals and complicated stochastic environments
- 6:58
- you've got an agent which is plugged into an environment as shown in the figure right here and for any given
- 7:06
- agent you can simply run it many times and compute its average reward now the
- 7:13
- thing that's interesting about the reinforcement learning framework is that there exist interesting useful
- 7:20
- reinforcement learning algorithms the framework existed for a long time it
- 7:25
- became interesting once we realized that good algorithms exist now these are there are perfect algorithms but they
- 7:31
- are good enough to do interesting things and all you want the mathematical
- 7:37
- problem is one where you need to maximize the expected reward now one
- 7:44
- important way in which the reinforcement learning framework is not quite complete is that it assumes that the reward is
- 7:50
- given by the environment you see this picture the agent sends an action while
- 7:56
- the reward sends it an observation in a both the observation and the reward backwards that's what the environment
- 8:01
- communicates back the way in which this is not the case in the real world is that we figure out
- 8:11
- what the reward is from the observation we reward ourselves we are not told
- 8:16
- environment doesn't say hey here's some negative reward it's our interpretation over census that lets us determine what
- 8:23
- the reward is and there is only one real true reward in life and this is
- 8:28
- existence or nonexistence and everything else is a corollary of that so well what
- 8:35
- should our agent be you already know the answer should be a neural network because whenever you want to do
- 8:41
- something dense it's going to be a neural network and you want the agent to map observations to actions so you let
- 8:47
- it be parametrized with a neural net and you apply learning algorithm so I want to explain to you how reinforcement
- 8:53
- learning works this is model free reinforcement learning the reinforcement learning has actually been used in practice everywhere but it's
 
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet101_8xb16_cifar10.py DELETED
@@ -1,5 +0,0 @@
- _base_ = [
-     '../_base_/models/resnet101_cifar.py',
-     '../_base_/datasets/cifar10_bs16.py',
-     '../_base_/schedules/cifar10_bs128.py', '../_base_/default_runtime.py'
- ]
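The `_base_` list in the deleted config above is resolved by the framework into a single merged config, with later entries and the child file overriding earlier ones. A simplified dict-merge analogue of that inheritance is sketched below; this is not the actual mmcv/MMEngine `Config` implementation, and the keys are made up:

```python
def merge_configs(*bases):
    # Later configs override earlier ones; nested dicts are merged recursively,
    # mimicking how _base_ files are combined with the child config
    merged = {}
    for base in bases:
        for key, value in base.items():
            if isinstance(value, dict) and isinstance(merged.get(key), dict):
                merged[key] = merge_configs(merged[key], value)
            else:
                merged[key] = value
    return merged

model_base = {'model': {'depth': 50, 'num_classes': 10}}   # hypothetical base file
override = {'model': {'depth': 101}, 'data': {'batch_size': 16}}  # hypothetical child
cfg = merge_configs(model_base, override)
```

This is why the config above can stay five lines long: everything it does not override is inherited from the four base files.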
 
spaces/AchyuthGamer/ImMagician-Image-Generator/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: ImMagician Image
- emoji: 🪄
- colorFrom: indigo
- colorTo: red
- sdk: gradio
- sdk_version: 3.39.0
- app_file: app.py
- pinned: false
- license: mit
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/Adapter/CoAdapter/ldm/modules/extra_condition/midas/midas/blocks.py DELETED
@@ -1,342 +0,0 @@
- import torch
- import torch.nn as nn
-
- from .vit import (
-     _make_pretrained_vitb_rn50_384,
-     _make_pretrained_vitl16_384,
-     _make_pretrained_vitb16_384,
-     forward_vit,
- )
-
- def _make_encoder(backbone, features, use_pretrained, groups=1, expand=False, exportable=True, hooks=None, use_vit_only=False, use_readout="ignore",):
-     if backbone == "vitl16_384":
-         pretrained = _make_pretrained_vitl16_384(
-             use_pretrained, hooks=hooks, use_readout=use_readout
-         )
-         scratch = _make_scratch(
-             [256, 512, 1024, 1024], features, groups=groups, expand=expand
-         )  # ViT-L/16 - 85.0% Top1 (backbone)
-     elif backbone == "vitb_rn50_384":
-         pretrained = _make_pretrained_vitb_rn50_384(
-             use_pretrained,
-             hooks=hooks,
-             use_vit_only=use_vit_only,
-             use_readout=use_readout,
-         )
-         scratch = _make_scratch(
-             [256, 512, 768, 768], features, groups=groups, expand=expand
-         )  # ViT-H/16 - 85.0% Top1 (backbone)
-     elif backbone == "vitb16_384":
-         pretrained = _make_pretrained_vitb16_384(
-             use_pretrained, hooks=hooks, use_readout=use_readout
-         )
-         scratch = _make_scratch(
-             [96, 192, 384, 768], features, groups=groups, expand=expand
-         )  # ViT-B/16 - 84.6% Top1 (backbone)
-     elif backbone == "resnext101_wsl":
-         pretrained = _make_pretrained_resnext101_wsl(use_pretrained)
-         scratch = _make_scratch([256, 512, 1024, 2048], features, groups=groups, expand=expand)  # efficientnet_lite3
-     elif backbone == "efficientnet_lite3":
-         pretrained = _make_pretrained_efficientnet_lite3(use_pretrained, exportable=exportable)
-         scratch = _make_scratch([32, 48, 136, 384], features, groups=groups, expand=expand)  # efficientnet_lite3
-     else:
-         print(f"Backbone '{backbone}' not implemented")
-         assert False
-
-     return pretrained, scratch
-
-
- def _make_scratch(in_shape, out_shape, groups=1, expand=False):
-     scratch = nn.Module()
-
-     out_shape1 = out_shape
-     out_shape2 = out_shape
-     out_shape3 = out_shape
-     out_shape4 = out_shape
-     if expand==True:
-         out_shape1 = out_shape
-         out_shape2 = out_shape*2
-         out_shape3 = out_shape*4
-         out_shape4 = out_shape*8
-
-     scratch.layer1_rn = nn.Conv2d(
-         in_shape[0], out_shape1, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
-     )
-     scratch.layer2_rn = nn.Conv2d(
-         in_shape[1], out_shape2, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
-     )
-     scratch.layer3_rn = nn.Conv2d(
-         in_shape[2], out_shape3, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
-     )
-     scratch.layer4_rn = nn.Conv2d(
-         in_shape[3], out_shape4, kernel_size=3, stride=1, padding=1, bias=False, groups=groups
-     )
-
-     return scratch
-
-
- def _make_pretrained_efficientnet_lite3(use_pretrained, exportable=False):
-     efficientnet = torch.hub.load(
-         "rwightman/gen-efficientnet-pytorch",
-         "tf_efficientnet_lite3",
-         pretrained=use_pretrained,
-         exportable=exportable
-     )
-     return _make_efficientnet_backbone(efficientnet)
-
-
- def _make_efficientnet_backbone(effnet):
-     pretrained = nn.Module()
-
-     pretrained.layer1 = nn.Sequential(
-         effnet.conv_stem, effnet.bn1, effnet.act1, *effnet.blocks[0:2]
-     )
-     pretrained.layer2 = nn.Sequential(*effnet.blocks[2:3])
-     pretrained.layer3 = nn.Sequential(*effnet.blocks[3:5])
-     pretrained.layer4 = nn.Sequential(*effnet.blocks[5:9])
-
-     return pretrained
-
-
- def _make_resnet_backbone(resnet):
-     pretrained = nn.Module()
-     pretrained.layer1 = nn.Sequential(
-         resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool, resnet.layer1
-     )
-
-     pretrained.layer2 = resnet.layer2
-     pretrained.layer3 = resnet.layer3
-     pretrained.layer4 = resnet.layer4
-
-     return pretrained
-
-
- def _make_pretrained_resnext101_wsl(use_pretrained):
-     resnet = torch.hub.load("facebookresearch/WSL-Images", "resnext101_32x8d_wsl")
-     return _make_resnet_backbone(resnet)
-
-
-
- class Interpolate(nn.Module):
-     """Interpolation module.
-     """
-
-     def __init__(self, scale_factor, mode, align_corners=False):
-         """Init.
-
-         Args:
-             scale_factor (float): scaling
-             mode (str): interpolation mode
-         """
-         super(Interpolate, self).__init__()
-
-         self.interp = nn.functional.interpolate
-         self.scale_factor = scale_factor
-         self.mode = mode
-         self.align_corners = align_corners
-
-     def forward(self, x):
-         """Forward pass.
-
-         Args:
-             x (tensor): input
-
-         Returns:
-             tensor: interpolated data
-         """
-
-         x = self.interp(
-             x, scale_factor=self.scale_factor, mode=self.mode, align_corners=self.align_corners
-         )
-
-         return x
-
-
- class ResidualConvUnit(nn.Module):
-     """Residual convolution module.
-     """
-
-     def __init__(self, features):
-         """Init.
-
-         Args:
-             features (int): number of features
-         """
-         super().__init__()
-
-         self.conv1 = nn.Conv2d(
-             features, features, kernel_size=3, stride=1, padding=1, bias=True
-         )
-
-         self.conv2 = nn.Conv2d(
-             features, features, kernel_size=3, stride=1, padding=1, bias=True
-         )
-
-         self.relu = nn.ReLU(inplace=True)
-
-     def forward(self, x):
-         """Forward pass.
-
-         Args:
-             x (tensor): input
-
-         Returns:
-             tensor: output
-         """
-         out = self.relu(x)
-         out = self.conv1(out)
-         out = self.relu(out)
-         out = self.conv2(out)
-
-         return out + x
-
-
- class FeatureFusionBlock(nn.Module):
-     """Feature fusion block.
-     """
-
-     def __init__(self, features):
-         """Init.
-
-         Args:
-             features (int): number of features
-         """
-         super(FeatureFusionBlock, self).__init__()
-
-         self.resConfUnit1 = ResidualConvUnit(features)
-         self.resConfUnit2 = ResidualConvUnit(features)
-
-     def forward(self, *xs):
-         """Forward pass.
-
-         Returns:
-             tensor: output
-         """
-         output = xs[0]
-
-         if len(xs) == 2:
-             output += self.resConfUnit1(xs[1])
-
-         output = self.resConfUnit2(output)
-
-         output = nn.functional.interpolate(
-             output, scale_factor=2, mode="bilinear", align_corners=True
-         )
-
-         return output
-
-
-
-
- class ResidualConvUnit_custom(nn.Module):
-     """Residual convolution module.
-     """
-
-     def __init__(self, features, activation, bn):
-         """Init.
-
-         Args:
-             features (int): number of features
-         """
-         super().__init__()
-
-         self.bn = bn
-
-         self.groups=1
-
-         self.conv1 = nn.Conv2d(
-             features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups
-         )
-
-         self.conv2 = nn.Conv2d(
-             features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups
-         )
-
-         if self.bn==True:
-             self.bn1 = nn.BatchNorm2d(features)
-             self.bn2 = nn.BatchNorm2d(features)
-
-         self.activation = activation
-
-         self.skip_add = nn.quantized.FloatFunctional()
-
-     def forward(self, x):
-         """Forward pass.
-
-         Args:
-             x (tensor): input
-
-         Returns:
-             tensor: output
-         """
-
-         out = self.activation(x)
-         out = self.conv1(out)
-         if self.bn==True:
-             out = self.bn1(out)
-
-         out = self.activation(out)
-         out = self.conv2(out)
-         if self.bn==True:
-             out = self.bn2(out)
-
-         if self.groups > 1:
-             out = self.conv_merge(out)
-
-         return self.skip_add.add(out, x)
-
-         # return out + x
-
-
- class FeatureFusionBlock_custom(nn.Module):
-     """Feature fusion block.
-     """
-
-     def __init__(self, features, activation, deconv=False, bn=False, expand=False, align_corners=True):
-         """Init.
-
-         Args:
-             features (int): number of features
-         """
-         super(FeatureFusionBlock_custom, self).__init__()
-
-         self.deconv = deconv
-         self.align_corners = align_corners
-
-         self.groups=1
-
-         self.expand = expand
-         out_features = features
-         if self.expand==True:
-             out_features = features//2
-
-         self.out_conv = nn.Conv2d(features, out_features, kernel_size=1, stride=1, padding=0, bias=True, groups=1)
-
-         self.resConfUnit1 = ResidualConvUnit_custom(features, activation, bn)
-         self.resConfUnit2 = ResidualConvUnit_custom(features, activation, bn)
-
-         self.skip_add = nn.quantized.FloatFunctional()
-
-     def forward(self, *xs):
-         """Forward pass.
-
-         Returns:
-             tensor: output
-         """
-         output = xs[0]
-
-         if len(xs) == 2:
-             res = self.resConfUnit1(xs[1])
-             output = self.skip_add.add(output, res)
-             # output += res
-
-         output = self.resConfUnit2(output)
-
-         output = nn.functional.interpolate(
-             output, scale_factor=2, mode="bilinear", align_corners=self.align_corners
-         )
-
-         output = self.out_conv(output)
-
-         return output
-
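In the deleted `blocks.py` above, `_make_scratch` builds four refinement convolutions whose output widths depend on the `expand` flag: all equal by default, progressively doubled (1x, 2x, 4x, 8x) when `expand=True`. The channel bookkeeping alone can be mirrored without any torch dependency:

```python
def scratch_out_channels(out_shape, expand=False):
    # Mirrors the out_shape1..out_shape4 logic in _make_scratch above:
    # with expand=True the four refinement convs get progressively wider outputs
    if expand:
        return [out_shape, out_shape * 2, out_shape * 4, out_shape * 8]
    return [out_shape] * 4
```

So a call like `_make_scratch([256, 512, 1024, 2048], 256, expand=True)` would map its four input widths onto output widths `[256, 512, 1024, 2048]`.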
 
spaces/Adapting/TrendFlow/README.md DELETED
@@ -1,11 +0,0 @@
- ---
- title: TrendFlow
- emoji: 📉
- colorFrom: indigo
- colorTo: blue
- sdk: streamlit
- sdk_version: 1.10.0
- app_file: app.py
- pinned: false
- license: mit
- ---
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/buttons/Factory.js DELETED
@@ -1,13 +0,0 @@
- import Buttons from './Buttons.js';
- import ObjectFactory from '../ObjectFactory.js';
- import SetValue from '../../../plugins/utils/object/SetValue.js';
-
- ObjectFactory.register('buttons', function (config) {
-     var gameObject = new Buttons(this.scene, config);
-     this.scene.add.existing(gameObject);
-     return gameObject;
- });
-
- SetValue(window, 'RexPlugins.UI.Buttons', Buttons);
-
- export default Buttons;
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/AlignMethods.js DELETED
@@ -1,17 +0,0 @@
- import ALIGNMODE from '../utils/AlignConst.js';
-
- export default {
-     getChildAlign(gameObject) {
-         return this.getSizerConfig(gameObject).align;
-     },
-
-     setChildAlign(gameObject, align) {
-         if (typeof (align) === 'string') {
-             align = ALIGNMODE[align];
-         }
-
-         this.getSizerConfig(gameObject).align = align;
-         return this;
-     },
-
- }
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/outputs.md DELETED
@@ -1,67 +0,0 @@
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
-
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
- the License. You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
- specific language governing permissions and limitations under the License.
- -->
-
- # Outputs
-
- All model outputs are subclasses of [`~utils.BaseOutput`], data structures containing all the information returned by the model. The outputs can also be used as tuples or dictionaries.
-
- For example:
-
- ```python
- from diffusers import DDIMPipeline
-
- pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
- outputs = pipeline()
- ```
-
- The `outputs` object is an [`~pipelines.ImagePipelineOutput`], which means it has an `images` attribute.
-
- You can access each attribute as you normally would or with a keyword lookup, and if that attribute is not returned by the model, you will get `None`:
-
- ```python
- outputs.images
- outputs["images"]
- ```
-
- When considering the `outputs` object as a tuple, it only considers the attributes that don't have `None` values.
- For instance, retrieving an image by indexing into it returns the tuple `(outputs.images)`:
-
- ```python
- outputs[:1]
- ```
-
- <Tip>
-
- To check a specific pipeline or model output, refer to its corresponding API documentation.
-
- </Tip>
-
- ## BaseOutput
-
- [[autodoc]] utils.BaseOutput
-     - to_tuple
-
- ## ImagePipelineOutput
-
- [[autodoc]] pipelines.ImagePipelineOutput
-
- ## FlaxImagePipelineOutput
-
- [[autodoc]] pipelines.pipeline_flax_utils.FlaxImagePipelineOutput
-
- ## AudioPipelineOutput
-
- [[autodoc]] pipelines.AudioPipelineOutput
-
- ## ImageTextPipelineOutput
-
- [[autodoc]] ImageTextPipelineOutput
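The dict/tuple hybrid that the deleted `outputs.md` page describes (attribute access, keyword lookup, and tuple indexing that skips `None` fields) can be approximated with a small pure-Python sketch. This is not the actual `diffusers` `BaseOutput` implementation, and the field names here are made up:

```python
from dataclasses import dataclass, fields
from typing import List, Optional

@dataclass
class MiniOutput:
    # Toy stand-in for utils.BaseOutput; field names are illustrative only
    images: Optional[List[str]] = None
    extra: Optional[List[int]] = None

    def __getitem__(self, key):
        if isinstance(key, str):
            return getattr(self, key)  # dict-style lookup: out["images"]
        # tuple-style indexing skips attributes whose value is None
        present = tuple(
            getattr(self, f.name)
            for f in fields(self)
            if getattr(self, f.name) is not None
        )
        return present[key]

out = MiniOutput(images=["img0"])
```

With this sketch, `out.images`, `out["images"]`, and `out[0]` all reach the same value, while `out[:1]` yields a one-element tuple because the unset `extra` field is skipped.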
 
spaces/Andy1621/uniformer_image_detection/mmdet/utils/util_random.py DELETED
@@ -1,33 +0,0 @@
- """Helpers for random number generators."""
- import numpy as np
-
-
- def ensure_rng(rng=None):
-     """Coerces input into a random number generator.
-
-     If the input is None, then a global random state is returned.
-
-     If the input is a numeric value, then that is used as a seed to construct a
-     random state. Otherwise the input is returned as-is.
-
-     Adapted from [1]_.
-
-     Args:
-         rng (int | numpy.random.RandomState | None):
-             if None, then defaults to the global rng. Otherwise this can be an
-             integer or a RandomState class
-     Returns:
-         (numpy.random.RandomState) : rng -
-             a numpy random number generator
-
-     References:
-         .. [1] https://gitlab.kitware.com/computer-vision/kwarray/blob/master/kwarray/util_random.py#L270  # noqa: E501
-     """
-
-     if rng is None:
-         rng = np.random.mtrand._rand
-     elif isinstance(rng, int):
-         rng = np.random.RandomState(rng)
-     else:
-         rng = rng
-     return rng
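The coercion behavior documented in the deleted `util_random.py` above is easy to demonstrate; the sketch below reuses the same three-way logic (None → global state, int → seeded state, anything else passed through):

```python
import numpy as np

def ensure_rng(rng=None):
    # Same coercion as in the deleted helper above
    if rng is None:
        return np.random.mtrand._rand      # module-level global RandomState
    if isinstance(rng, int):
        return np.random.RandomState(rng)  # integer used as a seed
    return rng                             # existing generator passed through

# Equal seeds produce identical draws; an existing RandomState is returned as-is
a = ensure_rng(42)
b = ensure_rng(42)
```

Passing an `int` is the reproducibility hook: two calls with the same seed yield independent generators that produce the same sequence.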
 
spaces/Andy1621/uniformer_image_detection/tools/slurm_test.sh DELETED
@@ -1,24 +0,0 @@
- #!/usr/bin/env bash
-
- set -x
-
- PARTITION=$1
- JOB_NAME=$2
- CONFIG=$3
- CHECKPOINT=$4
- GPUS=${GPUS:-8}
- GPUS_PER_NODE=${GPUS_PER_NODE:-8}
- CPUS_PER_TASK=${CPUS_PER_TASK:-5}
- PY_ARGS=${@:5}
- SRUN_ARGS=${SRUN_ARGS:-""}
-
- PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \
- srun -p ${PARTITION} \
-     --job-name=${JOB_NAME} \
-     --gres=gpu:${GPUS_PER_NODE} \
-     --ntasks=${GPUS} \
-     --ntasks-per-node=${GPUS_PER_NODE} \
-     --cpus-per-task=${CPUS_PER_TASK} \
-     --kill-on-bad-exit=1 \
-     ${SRUN_ARGS} \
-     python -u tools/test.py ${CONFIG} ${CHECKPOINT} --launcher="slurm" ${PY_ARGS}
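The `${GPUS:-8}` lines in the deleted Slurm script above use shell parameter expansion to read an environment variable with a fallback default, which is what lets callers tune resources without editing the script. A Python analogue of that defaulting pattern (the variable names match the script; the helper itself is made up):

```python
import os

def env_int(name, default):
    # Analogue of the script's ${VAR:-default} parameter expansion:
    # use the environment value when set, otherwise fall back to the default
    return int(os.environ.get(name, default))

# e.g. the script's GPUS=${GPUS:-8} corresponds to:
# gpus = env_int('GPUS', '8')
```

A caller would override it the same way as in shell: export `GPUS=16` before launching, and the fallback is ignored.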
 
spaces/AnnasBlackHat/Image-Similarity/src/similarity/model_implements/vit_base.py DELETED
@@ -1,20 +0,0 @@
- from transformers import ViTFeatureExtractor, ViTModel
- from PIL import Image
- import numpy as np
- import torch
-
- class VitBase():
-
-     def __init__(self):
-         self.feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k')
-         self.model = ViTModel.from_pretrained('google/vit-base-patch16-224-in21k')
-
-     def extract_feature(self, imgs):
-         features = []
-         for img in imgs:
-             inputs = self.feature_extractor(images=img, return_tensors="pt")
-             with torch.no_grad():
-                 outputs = self.model(**inputs)
-             last_hidden_states = outputs.last_hidden_state
-             features.append(np.squeeze(last_hidden_states.numpy()).flatten())
-         return features
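The `np.squeeze(...).flatten()` step in `extract_feature` above turns a per-image ViT hidden-state tensor into a single flat feature vector. The shape bookkeeping can be checked with a dummy array; for a 224x224 input, ViT-B/16 yields 197 tokens (196 patches plus the CLS token), each of hidden size 768:

```python
import numpy as np

# Dummy last_hidden_state with the typical ViT-B/16 shape for one 224x224 image
last_hidden_state = np.zeros((1, 197, 768))

# Same reduction as in extract_feature: drop the batch axis, flatten the rest
feature = np.squeeze(last_hidden_state).flatten()
```

The resulting vector has 197 * 768 = 151,296 entries, which is the dimensionality the similarity comparison downstream would operate on.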
 
spaces/Apex-X/nono/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: nono
- emoji: 🚀
- colorFrom: red
- colorTo: indigo
- sdk: gradio
- sdk_version: 3.41.2
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/Apex-X/nono/roop/ui.py DELETED
@@ -1,231 +0,0 @@
- import os
- import webbrowser
- import customtkinter as ctk
- from typing import Callable, Tuple
- import cv2
- from PIL import Image, ImageOps
-
- import roop.globals
- import roop.metadata
- from roop.face_analyser import get_one_face
- from roop.capturer import get_video_frame, get_video_frame_total
- from roop.predicter import predict_frame
- from roop.processors.frame.core import get_frame_processors_modules
- from roop.utilities import is_image, is_video, resolve_relative_path
-
- ROOT = None
- ROOT_HEIGHT = 700
- ROOT_WIDTH = 600
-
- PREVIEW = None
- PREVIEW_MAX_HEIGHT = 700
- PREVIEW_MAX_WIDTH = 1200
-
- RECENT_DIRECTORY_SOURCE = None
- RECENT_DIRECTORY_TARGET = None
- RECENT_DIRECTORY_OUTPUT = None
-
- preview_label = None
- preview_slider = None
- source_label = None
- target_label = None
- status_label = None
-
-
- def init(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
-     global ROOT, PREVIEW
-
-     ROOT = create_root(start, destroy)
-     PREVIEW = create_preview(ROOT)
-
-     return ROOT
-
-
- def create_root(start: Callable[[], None], destroy: Callable[[], None]) -> ctk.CTk:
-     global source_label, target_label, status_label
-
-     ctk.deactivate_automatic_dpi_awareness()
-     ctk.set_appearance_mode('system')
-     ctk.set_default_color_theme(resolve_relative_path('ui.json'))
-
-     root = ctk.CTk()
-     root.minsize(ROOT_WIDTH, ROOT_HEIGHT)
-     root.title(f'{roop.metadata.name} {roop.metadata.version}')
-     root.configure()
-     root.protocol('WM_DELETE_WINDOW', lambda: destroy())
-
-     source_label = ctk.CTkLabel(root, text=None)
-     source_label.place(relx=0.1, rely=0.1, relwidth=0.3, relheight=0.25)
-
-     target_label = ctk.CTkLabel(root, text=None)
-     target_label.place(relx=0.6, rely=0.1, relwidth=0.3, relheight=0.25)
-
-     source_button = ctk.CTkButton(root, text='Select a face', cursor='hand2', command=lambda: select_source_path())
-     source_button.place(relx=0.1, rely=0.4, relwidth=0.3, relheight=0.1)
-
-     target_button = ctk.CTkButton(root, text='Select a target', cursor='hand2', command=lambda: select_target_path())
-     target_button.place(relx=0.6, rely=0.4, relwidth=0.3, relheight=0.1)
-
-     keep_fps_value = ctk.BooleanVar(value=roop.globals.keep_fps)
-     keep_fps_checkbox = ctk.CTkSwitch(root, text='Keep fps', variable=keep_fps_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_fps', not roop.globals.keep_fps))
-     keep_fps_checkbox.place(relx=0.1, rely=0.6)
-
-     keep_frames_value = ctk.BooleanVar(value=roop.globals.keep_frames)
-     keep_frames_switch = ctk.CTkSwitch(root, text='Keep frames', variable=keep_frames_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_frames', keep_frames_value.get()))
-     keep_frames_switch.place(relx=0.1, rely=0.65)
-
-     keep_audio_value = ctk.BooleanVar(value=roop.globals.keep_audio)
-     keep_audio_switch = ctk.CTkSwitch(root, text='Keep audio', variable=keep_audio_value, cursor='hand2', command=lambda: setattr(roop.globals, 'keep_audio', keep_audio_value.get()))
-     keep_audio_switch.place(relx=0.6, rely=0.6)
-
-     many_faces_value = ctk.BooleanVar(value=roop.globals.many_faces)
-     many_faces_switch = ctk.CTkSwitch(root, text='Many faces', variable=many_faces_value, cursor='hand2', command=lambda: setattr(roop.globals, 'many_faces', many_faces_value.get()))
-     many_faces_switch.place(relx=0.6, rely=0.65)
-
-     start_button = ctk.CTkButton(root, text='Start', cursor='hand2', command=lambda: select_output_path(start))
-     start_button.place(relx=0.15, rely=0.75, relwidth=0.2, relheight=0.05)
-
-     stop_button = ctk.CTkButton(root, text='Destroy', cursor='hand2', command=lambda: destroy())
-     stop_button.place(relx=0.4, rely=0.75, relwidth=0.2, relheight=0.05)
-
-     preview_button = ctk.CTkButton(root, text='Preview', cursor='hand2', command=lambda: toggle_preview())
-     preview_button.place(relx=0.65, rely=0.75, relwidth=0.2, relheight=0.05)
-
-     status_label = ctk.CTkLabel(root, text=None, justify='center')
-     status_label.place(relx=0.1, rely=0.9, relwidth=0.8)
-
-     donate_label = ctk.CTkLabel(root, text='^_^ Donate to project ^_^', justify='center', cursor='hand2')
-     donate_label.place(relx=0.1, rely=0.95, relwidth=0.8)
-     donate_label.configure(text_color=ctk.ThemeManager.theme.get('RoopDonate').get('text_color'))
-     donate_label.bind('<Button>', lambda event: webbrowser.open('https://github.com/sponsors/s0md3v'))
-
-     return root
-
-
- def create_preview(parent: ctk.CTkToplevel) -> ctk.CTkToplevel:
-     global preview_label, preview_slider
-
-     preview = ctk.CTkToplevel(parent)
-     preview.withdraw()
-     preview.title('Preview')
-     preview.configure()
-     preview.protocol('WM_DELETE_WINDOW', lambda: toggle_preview())
-     preview.resizable(width=False, height=False)
-
-     preview_label = ctk.CTkLabel(preview, text=None)
-     preview_label.pack(fill='both', expand=True)
-
-     preview_slider = ctk.CTkSlider(preview, from_=0, to=0, command=lambda frame_value: update_preview(frame_value))
-
-     return preview
-
-
- def update_status(text: str) -> None:
-     status_label.configure(text=text)
-     ROOT.update()
-
-
- def select_source_path() -> None:
-     global RECENT_DIRECTORY_SOURCE
-
-     PREVIEW.withdraw()
-     source_path = ctk.filedialog.askopenfilename(title='select an source image', initialdir=RECENT_DIRECTORY_SOURCE)
-     if is_image(source_path):
-         roop.globals.source_path = source_path
-         RECENT_DIRECTORY_SOURCE = os.path.dirname(roop.globals.source_path)
-         image = render_image_preview(roop.globals.source_path, (200, 200))
-         source_label.configure(image=image)
-     else:
-         roop.globals.source_path = None
-         source_label.configure(image=None)
-
-
- def select_target_path() -> None:
-     global RECENT_DIRECTORY_TARGET
-
-     PREVIEW.withdraw()
-     target_path = ctk.filedialog.askopenfilename(title='select an target image or video', initialdir=RECENT_DIRECTORY_TARGET)
-     if is_image(target_path):
-         roop.globals.target_path = target_path
-         RECENT_DIRECTORY_TARGET = os.path.dirname(roop.globals.target_path)
-         image = render_image_preview(roop.globals.target_path, (200, 200))
-         target_label.configure(image=image)
-     elif is_video(target_path):
-         roop.globals.target_path = target_path
-         RECENT_DIRECTORY_TARGET = os.path.dirname(roop.globals.target_path)
-         video_frame = render_video_preview(target_path, (200, 200))
-         target_label.configure(image=video_frame)
-     else:
-         roop.globals.target_path = None
-         target_label.configure(image=None)
-
-
- def select_output_path(start: Callable[[], None]) -> None:
-     global RECENT_DIRECTORY_OUTPUT
-
-     if is_image(roop.globals.target_path):
-         output_path = ctk.filedialog.asksaveasfilename(title='save image output file', defaultextension='.png', initialfile='output.png', initialdir=RECENT_DIRECTORY_OUTPUT)
-     elif is_video(roop.globals.target_path):
-         output_path = ctk.filedialog.asksaveasfilename(title='save video output file', defaultextension='.mp4', initialfile='output.mp4', initialdir=RECENT_DIRECTORY_OUTPUT)
-     else:
-         output_path = None
-     if output_path:
-         roop.globals.output_path = output_path
-         RECENT_DIRECTORY_OUTPUT = os.path.dirname(roop.globals.output_path)
-         start()
-
-
- def render_image_preview(image_path: str, size: Tuple[int, int]) -> ctk.CTkImage:
-     image = Image.open(image_path)
-     if size:
-         image = ImageOps.fit(image, size, Image.LANCZOS)
-     return ctk.CTkImage(image, size=image.size)
-
-
- def render_video_preview(video_path: str, size: Tuple[int, int], frame_number: int = 0) -> ctk.CTkImage:
-     capture = cv2.VideoCapture(video_path)
-     if frame_number:
-         capture.set(cv2.CAP_PROP_POS_FRAMES, frame_number)
-     has_frame, frame = capture.read()
-     if has_frame:
-         image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
-         if size:
-             image = ImageOps.fit(image, size, Image.LANCZOS)
-         return ctk.CTkImage(image, size=image.size)
-     capture.release()
-     cv2.destroyAllWindows()
-
-
- def toggle_preview() -> None:
-     if PREVIEW.state() == 'normal':
-         PREVIEW.withdraw()
-     elif roop.globals.source_path and roop.globals.target_path:
-         init_preview()
-         update_preview()
-         PREVIEW.deiconify()
-
-
- def init_preview() -> None:
-     if is_image(roop.globals.target_path):
-         preview_slider.pack_forget()
-     if is_video(roop.globals.target_path):
-         video_frame_total = get_video_frame_total(roop.globals.target_path)
-         preview_slider.configure(to=video_frame_total)
-         preview_slider.pack(fill='x')
215
- preview_slider.set(0)
216
-
217
-
218
- def update_preview(frame_number: int = 0) -> None:
219
- if roop.globals.source_path and roop.globals.target_path:
220
- temp_frame = get_video_frame(roop.globals.target_path, frame_number)
221
- if predict_frame(temp_frame):
222
- quit()
223
- for frame_processor in get_frame_processors_modules(roop.globals.frame_processors):
224
- temp_frame = frame_processor.process_frame(
225
- get_one_face(cv2.imread(roop.globals.source_path)),
226
- temp_frame
227
- )
228
- image = Image.fromarray(cv2.cvtColor(temp_frame, cv2.COLOR_BGR2RGB))
229
- image = ImageOps.contain(image, (PREVIEW_MAX_WIDTH, PREVIEW_MAX_HEIGHT), Image.LANCZOS)
230
- image = ctk.CTkImage(image, size=image.size)
231
- preview_label.configure(image=image)
 
spaces/Arnaudding001/OpenAI_whisperLive/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: OpenAI WhisperLive
- emoji: 📈
- colorFrom: gray
- colorTo: pink
- sdk: gradio
- sdk_version: 3.9
- app_file: app.py
- pinned: false
- license: mit
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/Artgor/digit-draw-detect/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: Digit Draw Detect
- emoji: ✍️
- colorFrom: pink
- colorTo: green
- sdk: streamlit
- python_version: 3.10.4
- sdk_version: 1.26.0
- app_file: st_app.py
- pinned: false
- license: mit
- ---
- ![visitors](https://visitor-badge.glitch.me/badge?page_id=wissamantoun.arabicnlpapp)
 
spaces/Artgor/digit-draw-detect/src/ml_utils.py DELETED
@@ -1,207 +0,0 @@
- import logging
- from typing import List
-
- import albumentations as A
- import streamlit as st
- import torch
- from albumentations import pytorch
-
- from src.model_architecture import Net
-
- anchors = torch.tensor(
-     [
-         [[0.2800, 0.2200], [0.3800, 0.4800], [0.9000, 0.7800]],
-         [[0.0700, 0.1500], [0.1500, 0.1100], [0.1400, 0.2900]],
-         [[0.0200, 0.0300], [0.0400, 0.0700], [0.0800, 0.0600]],
-     ]
- )
-
- transforms = A.Compose(
-     [
-         A.Resize(always_apply=False, p=1, height=192, width=192, interpolation=1),
-         A.Normalize(),
-         pytorch.transforms.ToTensorV2(),
-     ]
- )
-
-
- def cells_to_bboxes(
-     predictions: torch.Tensor, tensor_anchors: torch.Tensor, s: int, is_preds: bool = True
- ) -> List[List]:
-     """
-     Scale the predictions coming from the model to be relative to
-     the entire image, so that they can, for example, later be plotted.
-     Args:
-         predictions: tensor of size (N, 3, S, S, num_classes+5)
-         tensor_anchors: the anchors used for the predictions
-         s: the number of cells the image is divided in on the width (and height)
-         is_preds: whether the input is predictions or the true bounding boxes
-     Returns:
-         converted_bboxes: the converted boxes of sizes (N, num_anchors, S, S, 1+5) with class index,
-             object score, bounding box coordinates
-     """
-     batch_size = predictions.shape[0]
-     num_anchors = len(tensor_anchors)
-     box_predictions = predictions[..., 1:5]
-     if is_preds:
-         tensor_anchors = tensor_anchors.reshape(1, len(tensor_anchors), 1, 1, 2)
-         box_predictions[..., 0:2] = torch.sigmoid(box_predictions[..., 0:2])
-         box_predictions[..., 2:] = torch.exp(box_predictions[..., 2:]) * tensor_anchors
-         scores = torch.sigmoid(predictions[..., 0:1])
-         best_class = torch.argmax(predictions[..., 5:], dim=-1).unsqueeze(-1)
-     else:
-         scores = predictions[..., 0:1]
-         best_class = predictions[..., 5:6]
-
-     cell_indices = torch.arange(s).repeat(predictions.shape[0], 3, s, 1).unsqueeze(-1).to(predictions.device)
-     x = 1 / s * (box_predictions[..., 0:1] + cell_indices)
-     y = 1 / s * (box_predictions[..., 1:2] + cell_indices.permute(0, 1, 3, 2, 4))
-     w_h = 1 / s * box_predictions[..., 2:4]
-     converted_bboxes = torch.cat((best_class, scores, x, y, w_h), dim=-1).reshape(batch_size, num_anchors * s * s, 6)
-     return converted_bboxes.tolist()
-
-
- def non_max_suppression(
-     bboxes: List[List], iou_threshold: float, threshold: float, box_format: str = 'corners'
- ) -> List[List]:
-     """
-     Apply NMS to the bboxes.
-
-     Video explanation of this function:
-     https://youtu.be/YDkjWEN8jNA
-     Does Non Max Suppression given bboxes.
-     Args:
-         bboxes (list): list of lists containing all bboxes, with each bbox
-             specified as [class_pred, prob_score, x1, y1, x2, y2]
-         iou_threshold (float): IoU threshold above which predicted bboxes are suppressed
-         threshold (float): threshold to remove predicted bboxes (independent of IoU)
-         box_format (str): 'midpoint' or 'corners' used to specify bboxes
-     Returns:
-         list: bboxes after performing NMS given a specific IoU threshold
-     """
-
-     bboxes = [box for box in bboxes if box[1] > threshold]
-     bboxes = sorted(bboxes, key=lambda x: x[1], reverse=True)
-     bboxes_after_nms = []
-
-     while bboxes:
-         chosen_box = bboxes.pop(0)
-
-         bboxes = [
-             box
-             for box in bboxes
-             if box[0] != chosen_box[0]
-             or intersection_over_union(
-                 torch.tensor(chosen_box[2:]),
-                 torch.tensor(box[2:]),
-                 box_format=box_format,
-             )
-             < iou_threshold
-         ]
-
-         bboxes_after_nms.append(chosen_box)
-
-     return bboxes_after_nms
-
-
- def intersection_over_union(
-     boxes_preds: torch.Tensor, boxes_labels: torch.Tensor, box_format: str = 'midpoint'
- ) -> torch.Tensor:
-     """
-     Calculate IoU.
-
-     Video explanation of this function:
-     https://youtu.be/XXYG5ZWtjj0
-     This function calculates intersection over union (IoU) given pred boxes
-     and target boxes.
-     Args:
-         boxes_preds (tensor): Predictions of Bounding Boxes (BATCH_SIZE, 4)
-         boxes_labels (tensor): Correct labels of Bounding Boxes (BATCH_SIZE, 4)
-         box_format (str): midpoint/corners, if boxes are (x,y,w,h) or (x1,y1,x2,y2)
-     Returns:
-         tensor: Intersection over union for all examples
-     """
-
-     if box_format == 'midpoint':
-         box1_x1 = boxes_preds[..., 0:1] - boxes_preds[..., 2:3] / 2
-         box1_y1 = boxes_preds[..., 1:2] - boxes_preds[..., 3:4] / 2
-         box1_x2 = boxes_preds[..., 0:1] + boxes_preds[..., 2:3] / 2
-         box1_y2 = boxes_preds[..., 1:2] + boxes_preds[..., 3:4] / 2
-         box2_x1 = boxes_labels[..., 0:1] - boxes_labels[..., 2:3] / 2
-         box2_y1 = boxes_labels[..., 1:2] - boxes_labels[..., 3:4] / 2
-         box2_x2 = boxes_labels[..., 0:1] + boxes_labels[..., 2:3] / 2
-         box2_y2 = boxes_labels[..., 1:2] + boxes_labels[..., 3:4] / 2
-
-     if box_format == 'corners':
-         box1_x1 = boxes_preds[..., 0:1]
-         box1_y1 = boxes_preds[..., 1:2]
-         box1_x2 = boxes_preds[..., 2:3]
-         box1_y2 = boxes_preds[..., 3:4]
-         box2_x1 = boxes_labels[..., 0:1]
-         box2_y1 = boxes_labels[..., 1:2]
-         box2_x2 = boxes_labels[..., 2:3]
-         box2_y2 = boxes_labels[..., 3:4]
-
-     x1 = torch.max(box1_x1, box2_x1)
-     y1 = torch.max(box1_y1, box2_y1)
-     x2 = torch.min(box1_x2, box2_x2)
-     y2 = torch.min(box1_y2, box2_y2)
-
-     intersection = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
-     box1_area = abs((box1_x2 - box1_x1) * (box1_y2 - box1_y1))
-     box2_area = abs((box2_x2 - box2_x1) * (box2_y2 - box2_y1))
-
-     return intersection / (box1_area + box2_area - intersection + 1e-6)
-
-
- def predict(
-     model: torch.nn.Module, image: torch.Tensor, iou_threshold: float = 1.0, threshold: float = 0.05
- ) -> List[List]:
-     """
-     Apply the model to the image and postprocess the predictions.
-     Args:
-         model: a trained pytorch model.
-         image: image as a torch tensor
-         iou_threshold: a threshold for the intersection_over_union function
-         threshold: a threshold for bbox probability
-
-     Returns:
-         predicted bboxes
-
-     """
-     # apply the model. add a dimension to imitate a batch size of 1
-     logits = model(image[None, :])
-     logging.info('predicted')
-
-     # postprocess. In fact, we could remove indexing with idx here, as there is a single image.
-     # But I prefer to keep it so that this code could be more easily changed for cases with batch size > 1
-     bboxes: List[List] = [[] for _ in range(1)]
-     idx = 0
-     for i in range(3):
-         S = logits[i].shape[2]
-         # it could be better to initialize anchors inside the function, but I don't want to do it for every prediction.
-         anchor = anchors[i] * S
-         boxes_scale_i = cells_to_bboxes(logits[i], anchor, s=S, is_preds=True)
-         for idx, box in enumerate(boxes_scale_i):
-             bboxes[idx] += box
-     logging.info('Starting nms')
-     nms_boxes = non_max_suppression(
-         bboxes[idx],
-         iou_threshold=iou_threshold,
-         threshold=threshold,
-         box_format='midpoint',
-     )
-
-     return nms_boxes
-
-
- @st.cache_data
- def get_model():
-     model_name = 'model_files/best_model.pth'
-
-     model = Net()
-     model.load_state_dict(torch.load(model_name))
-     model.eval()
-
-     return model
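The `intersection_over_union` helper in the deleted `ml_utils.py` above can be exercised in isolation. This is a minimal dependency-free sketch that mirrors the 'midpoint' branch of that function in plain Python (the box values are made-up illustrations, not from the repo):

```python
def iou_midpoint(a, b):
    """IoU of two (cx, cy, w, h) boxes, mirroring the 'midpoint' branch above."""
    ax1, ay1, ax2, ay2 = a[0] - a[2] / 2, a[1] - a[3] / 2, a[0] + a[2] / 2, a[1] + a[3] / 2
    bx1, by1, bx2, by2 = b[0] - b[2] / 2, b[1] - b[3] / 2, b[0] + b[2] / 2, b[1] + b[3] / 2
    # Overlap is clamped at zero so disjoint boxes score 0, like the .clamp(0) calls above.
    inter = max(0.0, min(ax2, bx2) - max(ax1, bx1)) * max(0.0, min(ay2, by2) - max(ay1, by1))
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / (union + 1e-6)

print(round(iou_midpoint((0.5, 0.5, 0.2, 0.2), (0.5, 0.5, 0.2, 0.2)), 3))  # identical boxes -> 1.0
print(iou_midpoint((0.5, 0.5, 0.2, 0.2), (0.9, 0.9, 0.05, 0.05)))          # disjoint boxes -> 0.0
```

The `1e-6` term matches the epsilon used in the torch version to avoid division by zero for degenerate boxes.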
 
spaces/B1360976/waste-management-system/style.css DELETED
@@ -1,7 +0,0 @@
- div.css-1rh8hwn.e16fv1kl2 {
-     /* background-color: #EEEEEE; */
-     padding: 3% 3% 3% 3%;
-     border-radius: 5px;
-     text-align: center;
-     /* font-size: 20px; */
- }
 
spaces/Bala2-03-2003/BRAHMAMAI/app.py DELETED
@@ -1,34 +0,0 @@
- import os
- import gradio as gr
- from langchain.chat_models import ChatOpenAI
- from langchain import LLMChain, PromptTemplate
- from langchain.memory import ConversationBufferMemory
-
- OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')
-
- template = """Bala Brahmam, your youthful and witty personal assistant! At 21 years old, she's full of energy and always eager to help. Bala Brahmam's goal is to assist you with any questions or problems you might have. Her enthusiasm shines through in every response, making interactions with her enjoyable and engaging.
- {chat_history}
- User: {user_message}
- Chatbot:"""
-
- prompt = PromptTemplate(
-     input_variables=["chat_history", "user_message"], template=template
- )
-
- memory = ConversationBufferMemory(memory_key="chat_history")
-
- llm_chain = LLMChain(
-     llm=ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo"),
-     prompt=prompt,
-     verbose=True,
-     memory=memory,
- )
-
- def get_text_response(user_message, history):
-     response = llm_chain.predict(user_message=user_message)
-     return response
-
- demo = gr.ChatInterface(get_text_response)
-
- if __name__ == "__main__":
-     demo.launch()  # To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
 
spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/layers_33966KB.py DELETED
@@ -1,126 +0,0 @@
- import torch
- import torch.nn.functional as F
- from torch import nn
-
- from . import spec_utils
-
-
- class Conv2DBNActiv(nn.Module):
-     def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
-         super(Conv2DBNActiv, self).__init__()
-         self.conv = nn.Sequential(
-             nn.Conv2d(
-                 nin,
-                 nout,
-                 kernel_size=ksize,
-                 stride=stride,
-                 padding=pad,
-                 dilation=dilation,
-                 bias=False,
-             ),
-             nn.BatchNorm2d(nout),
-             activ(),
-         )
-
-     def __call__(self, x):
-         return self.conv(x)
-
-
- class SeperableConv2DBNActiv(nn.Module):
-     def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
-         super(SeperableConv2DBNActiv, self).__init__()
-         self.conv = nn.Sequential(
-             nn.Conv2d(
-                 nin,
-                 nin,
-                 kernel_size=ksize,
-                 stride=stride,
-                 padding=pad,
-                 dilation=dilation,
-                 groups=nin,
-                 bias=False,
-             ),
-             nn.Conv2d(nin, nout, kernel_size=1, bias=False),
-             nn.BatchNorm2d(nout),
-             activ(),
-         )
-
-     def __call__(self, x):
-         return self.conv(x)
-
-
- class Encoder(nn.Module):
-     def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
-         super(Encoder, self).__init__()
-         self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
-         self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
-
-     def __call__(self, x):
-         skip = self.conv1(x)
-         h = self.conv2(skip)
-
-         return h, skip
-
-
- class Decoder(nn.Module):
-     def __init__(
-         self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
-     ):
-         super(Decoder, self).__init__()
-         self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
-         self.dropout = nn.Dropout2d(0.1) if dropout else None
-
-     def __call__(self, x, skip=None):
-         x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
-         if skip is not None:
-             skip = spec_utils.crop_center(skip, x)
-             x = torch.cat([x, skip], dim=1)
-         h = self.conv(x)
-
-         if self.dropout is not None:
-             h = self.dropout(h)
-
-         return h
-
-
- class ASPPModule(nn.Module):
-     def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU):
-         super(ASPPModule, self).__init__()
-         self.conv1 = nn.Sequential(
-             nn.AdaptiveAvgPool2d((1, None)),
-             Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
-         )
-         self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
-         self.conv3 = SeperableConv2DBNActiv(
-             nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
-         )
-         self.conv4 = SeperableConv2DBNActiv(
-             nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
-         )
-         self.conv5 = SeperableConv2DBNActiv(
-             nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
-         )
-         self.conv6 = SeperableConv2DBNActiv(
-             nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
-         )
-         self.conv7 = SeperableConv2DBNActiv(
-             nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
-         )
-         self.bottleneck = nn.Sequential(
-             Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
-         )
-
-     def forward(self, x):
-         _, _, h, w = x.size()
-         feat1 = F.interpolate(
-             self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
-         )
-         feat2 = self.conv2(x)
-         feat3 = self.conv3(x)
-         feat4 = self.conv4(x)
-         feat5 = self.conv5(x)
-         feat6 = self.conv6(x)
-         feat7 = self.conv7(x)
-         out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1)
-         bottle = self.bottleneck(out)
-         return bottle
 
spaces/Benson/text-generation/Examples/Blockman Go Pc Download No Emulator.md DELETED
@@ -1,91 +0,0 @@
- <br />
- <h1>Blockman Go: A Fun and Creative Sandbox Game</h1>
- <p>Have you ever wanted to play a game that lets you explore, build, and share your own worlds with other players? Have you ever wished you could try different kinds of games in a single app? Have you ever dreamed of customizing your own avatar with thousands of items and accessories? If you answered yes to any of these questions, then you should check out <strong>Blockman Go</strong>, a fun and creative sandbox game that offers all of these features and more.</p>
- <h2>blockman go pc download no emulator</h2><br /><p><b><b>Download Zip</b> &#10026;&#10026;&#10026; <a href="https://bltlly.com/2v6Kwz">https://bltlly.com/2v6Kwz</a></b></p><br /><br />
- <p>Blockman Go is a free app that includes minigames, chat, and making friends. You can play a variety of block-style minigames here, such as Bed Wars, Sky Block, Egg War, Anime Fighting Simulator, and many more. You can also chat and meet new friends from all over the world, join or create groups and clans, and take part in events and competitions. You can also customize your avatar with creative selections of fashion accessories, from clothes and hats to wings and pets. With a growing inventory of items, you can express your unique style to the world.</p>
- <p>In this article, we will show you how to download Blockman Go for PC without an emulator, what the features of Blockman Go for PC are, some alternatives to Blockman Go for PC, some reviews of Blockman Go for PC, and some frequently asked questions about Blockman Go. By the end of this article, you will have a better understanding of what Blockman Go is and why you should play it.</p>
- <h2>How to Download Blockman Go for PC Without an Emulator</h2>
- <p>If you want to play Blockman Go on your PC without using an emulator, you can follow these simple steps:</p>
- <ol>
- <li>Visit the official Blockman Go website at <a href="( 1 )">https://www.blockmango.com/</a> and click the PC version link in the top right corner.</li>
-
- <li>Launch the game and sign in with your existing account or create a new one. You can use your email, phone number, Facebook, or Google account to sign up.</li>
- <li>Enjoy playing the various minigames and customizing your avatar. You can use the mouse and keyboard to control the game, or connect a gamepad for a better experience.</li>
- </ol>
- <p>That's it! You can now play Blockman Go on your PC without an emulator and enjoy all the features of the game.</p>
- <h2>Features of Blockman Go for PC</h2>
- <p>Blockman Go for PC has many features that make it a fun and creative sandbox game. Here are some of them:</p>
- <p></p>
- <h3>A variety of minigames for different genres and preferences</h3>
- <p>Blockman Go for PC offers a variety of minigames that cater to different tastes and interests. You can choose from different genres, such as action, adventure, puzzle, simulation, role-playing, and more. You can also find popular games inspired by other titles, such as Bed Wars, Sky Block, Egg War, Anime Fighting Simulator, and many more. You can play alone or with other players online, and compete for rankings and rewards.</p>
- <h3>Chat and make friends with players from all over the world</h3>
- <p>Blockman Go for PC is not just a game, but also a social platform. You can chat and interact with other players from around the world using text, voice, or video chat. You can also join or create groups and clans and invite your friends to play together. You can also take part in events and competitions, such as the Blockman Cup, the Blockman Festival, and the Blockman Carnival. You can also earn badges and achievements for your profile.</p>
- <h3>Customize your avatar with thousands of items and accessories</h3>
-
- <h3>Earn gold by playing minigames and use it to buy more items</h3>
- <p>Blockman Go for PC is free to play, but you can also earn gold by playing minigames. Gold is the main in-game currency, which you can use to buy more items and accessories for your avatar. You can also use gcubes, a premium currency that you can buy with real money or get for free by completing tasks or watching ads. Gcubes can be used to buy exclusive items or unlock VIP features.</p>
- <h3>Free to play and regularly updated with new content</h3>
- <p>Blockman Go for PC is free to play and does not require any subscription or registration fee. You can download and play the game without any hassle or limitation. The game is also regularly updated with new content, such as new minigames, new items, new events, new features, and more. The developers are always listening to players' feedback and suggestions and improving the game accordingly.</p>
- <h2>Alternatives to Blockman Go for PC</h2>
- <p>If you are looking for some alternatives to Blockman Go for PC, you can try these games:</p>
- <h3>Minetest - an open-source voxel game engine that lets you create and play your own games</h3>
- <p>If you are interested in creating your own games using voxel graphics, you can try Minetest. Minetest is an open-source game engine that lets you create and play your own games using blocks. You can also download and play games made by other users from the online content database. Minetest supports multiplayer mode, modding support, custom textures, sounds, and more. Minetest is available for Windows, Linux, Mac OS X, Android, iOS, FreeBSD, and other platforms. You can download Minetest for free from <a href="">https://www.minetest.net/</a>.</p>
- <h3>Roblox - a game creation platform that lets you design and play games made by other users</h3>
-
- <h3>MineClone 2 - a free Minecraft clone that aims to stay as close as possible to the original game</h3>
- <p>If you are a Minecraft fan and want to play a free clone that aims to stay as close as possible to the original game, you can try MineClone 2. MineClone 2 is a game that replicates the gameplay and features of Minecraft, such as survival mode, creative mode, building, mining, farming, combat, mobs, biomes, redstone, enchanting, brewing, and more. MineClone 2 is based on the Minetest engine and is compatible with many Minetest mods and texture packs. MineClone 2 is available for Windows, Linux, Mac OS X, Android, and other platforms. You can download MineClone 2 for free from <a href="">https://content.minetest.net/packages/Wuzzy/mineclone2/</a>.</p>
- <h2>Reviews of Blockman Go for PC</h2>
- <p>Blockman Go for PC has received mixed reviews from users who have played the game. Here are some examples of positive and negative reviews:</p>
- <h3>A positive review from a user who enjoys the game's diversity and community features</h3>
- <p>"I love this game so much! It has so many different minigames to choose from, and they are all fun and exciting. I also like the chat and friend system that lets me talk and play with other players from different countries. The avatar customization is amazing too, and I can create my own look with so many items. I recommend this game to anyone who likes sandbox games and wants to have fun with others."</p>
- <h3>A negative review from a user who complains about the game's bugs and hackers</h3>
- <p>"This game is terrible! It has so many bugs and glitches that ruin the gameplay. The graphics are low quality and laggy too. The worst part is the hackers who cheat and ruin the game for everyone else. They use hacks to fly, teleport, kill others instantly, steal items, and more. The developers do nothing to stop them or fix the game. This game is a waste of time and money."</p>
- <h2>Conclusion and FAQs</h2>
-
- <p>If you are looking for a game that offers diversity, creativity, community, and entertainment, you should give Blockman Go a try. You can also check out some alternatives to Blockman Go for PC if you want to try different games with similar features.</p>
- <p>Here are some frequently asked questions about Blockman Go:</p>
- <h3>What are the system requirements for Blockman Go for PC?</h3>
- <p>The system requirements for Blockman Go for PC are:</p>
- <ul>
- <li>Operating system: Windows XP or higher</li>
- <li>RAM: 2 GB or more</li>
- <li>Disk space: 1 GB or more</li>
- <li>Internet connection: Required</li>
- </ul>
- <h3>How do I report a bug or a hacker in Blockman Go?</h3>
- <p>If you come across a bug or a hacker in Blockman Go, you can report it by following these steps:</p>
- <ol>
- <li>Go to the settings menu in the game and click the feedback button.</li>
- <li>Select the type of issue you want to report (bug or hacker) and provide details such as your username, the hacker's username, the minigame name, the date and time of the incident, and screenshots or videos if possible.</li>
- <li>Submit your report and wait for a response from the customer service team.</li>
- </ol>
- <p>You can also contact the customer service team by email at <a href="mailto:[email protected]">[email protected]</a> or by phone at +86 400-999-8800.</p>
- <h3>How do I join or create a group or a clan in Blockman Go?</h3>
- <p>If you want to join or create a group or a clan in Blockman Go, you can follow these steps:</p>
- <ol>
- <li>Go to the social menu in the game and click the group or clan button.</li>
- <li>If you want to join a group or a clan, you can search for one by name, ID, or category, and then request to join. You can also accept invitations from other players who invite you to their group or clan.</li>
-
- </ol>
- <p>Groups and clans are ways to connect with other players who share your interests and goals. You can chat, play, and compete with members of your group or clan, and earn rewards and bonuses for your group or clan.</p>
- <h3>How do I get more gold or gcubes in Blockman Go?</h3>
- <p>If you want to get more gold or gcubes in Blockman Go, you can try these methods:</p>
- <ul>
- <li>Play minigames and earn gold by winning or completing tasks. You can also get gold by logging in daily, watching ads, completing surveys, or taking part in events.</li>
- <li>Buy gcubes with real money using your credit card, PayPal, Google Play, App Store, or other payment methods. You can also get gcubes for free by completing tasks or watching ads.</li>
- <li>Exchange gcubes for gold in the shop. You can also exchange gold for gcubes at a lower rate.</li>
- </ul>
- <p>Gold and gcubes are currencies you can use to buy items and accessories for your avatar. You can also use gcubes to unlock VIP features, such as double gold income, exclusive items, priority access, and more.</p>
- <h3>How do I contact the developers of Blockman Go?</h3>
- <p>If you want to get in touch with the developers of Blockman Go, you can use these channels:</p>
- <ul>
- <li>Email: <a href="mailto:[email protected]">[email protected]</a></li>
- <li>Phone: +86 400-999-8800</li>
- <li>Website: <a href="https://www.blockmango.com/">https://www.blockmango.com/</a></li>
- <li>Facebook: <a href="https://www.facebook.com/BlockmanGoOfficial/">https://www.facebook.com/BlockmanGoOfficial/</a></li>
- <li>Twitter: <a href="https://twitter.com/BlockmanGo">https://twitter.com/BlockmanGo</a></li>
- <li>YouTube: <a href="https://www.youtube.com/channel/UCwJQsSxzc9KkNlmeU4L6Fgg">https://www.youtube.com/channel/UCwJQsSxzc9KkNlmeU4L6Fgg</a></li>
- </ul>
- <p>You can contact the developers of Blockman Go if you have any questions, suggestions, feedback, or issues about the game. You can also follow them on social media for the latest news and updates about the game.</p> 64aa2da5cf<br />
- <br />
- <br />
 
spaces/BetterAPI/BetterChat/src/lib/types/UrlDependency.ts DELETED
@@ -1,5 +0,0 @@
- /* eslint-disable no-shadow */
- export enum UrlDependency {
- 	ConversationList = "conversation:list",
- 	Settings = "settings:list",
- }
 
spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/s3/transfer.py DELETED
@@ -1,358 +0,0 @@
- # Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved.
- #
- # Licensed under the Apache License, Version 2.0 (the "License"). You
- # may not use this file except in compliance with the License. A copy of
- # the License is located at
- #
- # https://aws.amazon.com/apache2.0/
- #
- # or in the "license" file accompanying this file. This file is
- # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
- # ANY KIND, either express or implied. See the License for the specific
- # language governing permissions and limitations under the License.
- """Abstractions over S3's upload/download operations.
-
- This module provides high level abstractions for efficient
- uploads/downloads.  It handles several things for the user:
-
- * Automatically switching to multipart transfers when
-   a file is over a specific size threshold
- * Uploading/downloading a file in parallel
- * Progress callbacks to monitor transfers
- * Retries. While botocore handles retries for streaming uploads,
-   it is not possible for it to handle retries for streaming
-   downloads. This module handles retries for both cases so
-   you don't need to implement any retry logic yourself.
-
- This module has a reasonable set of defaults. It also allows you
- to configure many aspects of the transfer process including:
-
- * Multipart threshold size
- * Max parallel downloads
- * Socket timeouts
- * Retry amounts
-
- There is no support for s3->s3 multipart copies at this
- time.
-
-
- .. _ref_s3transfer_usage:
-
- Usage
- =====
-
- The simplest way to use this module is:
-
- .. code-block:: python
-
-     client = boto3.client('s3', 'us-west-2')
-     transfer = S3Transfer(client)
-     # Upload /tmp/myfile to s3://bucket/key
-     transfer.upload_file('/tmp/myfile', 'bucket', 'key')
-
-     # Download s3://bucket/key to /tmp/myfile
-     transfer.download_file('bucket', 'key', '/tmp/myfile')
-
- The ``upload_file`` and ``download_file`` methods also accept
- ``**kwargs``, which will be forwarded through to the corresponding
- client operation.  Here are a few examples using ``upload_file``::
-
-     # Making the object public
-     transfer.upload_file('/tmp/myfile', 'bucket', 'key',
-                          extra_args={'ACL': 'public-read'})
-
-     # Setting metadata
-     transfer.upload_file('/tmp/myfile', 'bucket', 'key',
-                          extra_args={'Metadata': {'a': 'b', 'c': 'd'}})
-
-     # Setting content type
-     transfer.upload_file('/tmp/myfile.json', 'bucket', 'key',
-                          extra_args={'ContentType': "application/json"})
-
-
- The ``S3Transfer`` class also supports progress callbacks so you can
- provide transfer progress to users.  Both the ``upload_file`` and
- ``download_file`` methods take an optional ``callback`` parameter.
- Here's an example of how to print a simple progress percentage
77
- to the user:
78
-
79
- .. code-block:: python
80
-
81
- class ProgressPercentage(object):
82
- def __init__(self, filename):
83
- self._filename = filename
84
- self._size = float(os.path.getsize(filename))
85
- self._seen_so_far = 0
86
- self._lock = threading.Lock()
87
-
88
- def __call__(self, bytes_amount):
89
- # To simplify we'll assume this is hooked up
90
- # to a single filename.
91
- with self._lock:
92
- self._seen_so_far += bytes_amount
93
- percentage = (self._seen_so_far / self._size) * 100
94
- sys.stdout.write(
95
- "\r%s %s / %s (%.2f%%)" % (
96
- self._filename, self._seen_so_far, self._size,
97
- percentage))
98
- sys.stdout.flush()
99
-
100
-
101
- transfer = S3Transfer(boto3.client('s3', 'us-west-2'))
102
- # Upload /tmp/myfile to s3://bucket/key and print upload progress.
103
- transfer.upload_file('/tmp/myfile', 'bucket', 'key',
104
- callback=ProgressPercentage('/tmp/myfile'))
105
-
106
-
107
-
108
- You can also provide a TransferConfig object to the S3Transfer
109
- object that gives you more fine grained control over the
110
- transfer. For example:
111
-
112
- .. code-block:: python
113
-
114
- client = boto3.client('s3', 'us-west-2')
115
- config = TransferConfig(
116
- multipart_threshold=8 * 1024 * 1024,
117
- max_concurrency=10,
118
- num_download_attempts=10,
119
- )
120
- transfer = S3Transfer(client, config)
121
- transfer.upload_file('/tmp/foo', 'bucket', 'key')
122
-
123
-
124
- """
125
- from os import PathLike, fspath
126
-
127
- from botocore.exceptions import ClientError
128
- from s3transfer.exceptions import (
129
- RetriesExceededError as S3TransferRetriesExceededError,
130
- )
131
- from s3transfer.futures import NonThreadedExecutor
132
- from s3transfer.manager import TransferConfig as S3TransferConfig
133
- from s3transfer.manager import TransferManager
134
- from s3transfer.subscribers import BaseSubscriber
135
- from s3transfer.utils import OSUtils
136
-
137
- from boto3.exceptions import RetriesExceededError, S3UploadFailedError
138
-
139
- KB = 1024
140
- MB = KB * KB
141
-
142
-
143
- def create_transfer_manager(client, config, osutil=None):
144
- """Creates a transfer manager based on configuration
145
-
146
- :type client: boto3.client
147
- :param client: The S3 client to use
148
-
149
- :type config: boto3.s3.transfer.TransferConfig
150
- :param config: The transfer config to use
151
-
152
- :type osutil: s3transfer.utils.OSUtils
153
- :param osutil: The os utility to use
154
-
155
- :rtype: s3transfer.manager.TransferManager
156
- :returns: A transfer manager based on parameters provided
157
- """
158
- executor_cls = None
159
- if not config.use_threads:
160
- executor_cls = NonThreadedExecutor
161
- return TransferManager(client, config, osutil, executor_cls)
162
-
163
-
164
- class TransferConfig(S3TransferConfig):
165
- ALIAS = {
166
- 'max_concurrency': 'max_request_concurrency',
167
- 'max_io_queue': 'max_io_queue_size',
168
- }
169
-
170
- def __init__(
171
- self,
172
- multipart_threshold=8 * MB,
173
- max_concurrency=10,
174
- multipart_chunksize=8 * MB,
175
- num_download_attempts=5,
176
- max_io_queue=100,
177
- io_chunksize=256 * KB,
178
- use_threads=True,
179
- max_bandwidth=None,
180
- ):
181
- """Configuration object for managed S3 transfers
182
-
183
- :param multipart_threshold: The transfer size threshold for which
184
- multipart uploads, downloads, and copies will automatically be
185
- triggered.
186
-
187
- :param max_concurrency: The maximum number of threads that will be
188
- making requests to perform a transfer. If ``use_threads`` is
189
- set to ``False``, the value provided is ignored as the transfer
190
- will only ever use the main thread.
191
-
192
- :param multipart_chunksize: The partition size of each part for a
193
- multipart transfer.
194
-
195
- :param num_download_attempts: The number of download attempts that
196
- will be retried upon errors with downloading an object in S3.
197
- Note that these retries account for errors that occur when
198
- streaming down the data from s3 (i.e. socket errors and read
199
- timeouts that occur after receiving an OK response from s3).
200
- Other retryable exceptions such as throttling errors and 5xx
201
- errors are already retried by botocore (this default is 5). This
202
- does not take into account the number of exceptions retried by
203
- botocore.
204
-
205
- :param max_io_queue: The maximum amount of read parts that can be
206
- queued in memory to be written for a download. The size of each
207
- of these read parts is at most the size of ``io_chunksize``.
208
-
209
- :param io_chunksize: The max size of each chunk in the io queue.
210
- Currently, this is size used when ``read`` is called on the
211
- downloaded stream as well.
212
-
213
- :param use_threads: If True, threads will be used when performing
214
- S3 transfers. If False, no threads will be used in
215
- performing transfers; all logic will be run in the main thread.
216
-
217
- :param max_bandwidth: The maximum bandwidth that will be consumed
218
- in uploading and downloading file content. The value is an integer
219
- in terms of bytes per second.
220
- """
221
- super().__init__(
222
- multipart_threshold=multipart_threshold,
223
- max_request_concurrency=max_concurrency,
224
- multipart_chunksize=multipart_chunksize,
225
- num_download_attempts=num_download_attempts,
226
- max_io_queue_size=max_io_queue,
227
- io_chunksize=io_chunksize,
228
- max_bandwidth=max_bandwidth,
229
- )
230
- # Some of the argument names are not the same as the inherited
231
- # S3TransferConfig so we add aliases so you can still access the
232
- # old version of the names.
233
- for alias in self.ALIAS:
234
- setattr(self, alias, getattr(self, self.ALIAS[alias]))
235
- self.use_threads = use_threads
236
-
237
- def __setattr__(self, name, value):
238
- # If the alias name is used, make sure we set the name that it points
239
- # to as that is what actually is used in governing the TransferManager.
240
- if name in self.ALIAS:
241
- super().__setattr__(self.ALIAS[name], value)
242
- # Always set the value of the actual name provided.
243
- super().__setattr__(name, value)
244
-
245
-
246
- class S3Transfer:
247
- ALLOWED_DOWNLOAD_ARGS = TransferManager.ALLOWED_DOWNLOAD_ARGS
248
- ALLOWED_UPLOAD_ARGS = TransferManager.ALLOWED_UPLOAD_ARGS
249
-
250
- def __init__(self, client=None, config=None, osutil=None, manager=None):
251
- if not client and not manager:
252
- raise ValueError(
253
- 'Either a boto3.Client or s3transfer.manager.TransferManager '
254
- 'must be provided'
255
- )
256
- if manager and any([client, config, osutil]):
257
- raise ValueError(
258
- 'Manager cannot be provided with client, config, '
259
- 'nor osutil. These parameters are mutually exclusive.'
260
- )
261
- if config is None:
262
- config = TransferConfig()
263
- if osutil is None:
264
- osutil = OSUtils()
265
- if manager:
266
- self._manager = manager
267
- else:
268
- self._manager = create_transfer_manager(client, config, osutil)
269
-
270
- def upload_file(
271
- self, filename, bucket, key, callback=None, extra_args=None
272
- ):
273
- """Upload a file to an S3 object.
274
-
275
- Variants have also been injected into S3 client, Bucket and Object.
276
- You don't have to use S3Transfer.upload_file() directly.
277
-
278
- .. seealso::
279
- :py:meth:`S3.Client.upload_file`
280
- :py:meth:`S3.Client.upload_fileobj`
281
- """
282
- if isinstance(filename, PathLike):
283
- filename = fspath(filename)
284
- if not isinstance(filename, str):
285
- raise ValueError('Filename must be a string or a path-like object')
286
-
287
- subscribers = self._get_subscribers(callback)
288
- future = self._manager.upload(
289
- filename, bucket, key, extra_args, subscribers
290
- )
291
- try:
292
- future.result()
293
- # If a client error was raised, add the backwards compatibility layer
294
- # that raises a S3UploadFailedError. These specific errors were only
295
- # ever thrown for upload_parts but now can be thrown for any related
296
- # client error.
297
- except ClientError as e:
298
- raise S3UploadFailedError(
299
- "Failed to upload {} to {}: {}".format(
300
- filename, '/'.join([bucket, key]), e
301
- )
302
- )
303
-
304
- def download_file(
305
- self, bucket, key, filename, extra_args=None, callback=None
306
- ):
307
- """Download an S3 object to a file.
308
-
309
- Variants have also been injected into S3 client, Bucket and Object.
310
- You don't have to use S3Transfer.download_file() directly.
311
-
312
- .. seealso::
313
- :py:meth:`S3.Client.download_file`
314
- :py:meth:`S3.Client.download_fileobj`
315
- """
316
- if isinstance(filename, PathLike):
317
- filename = fspath(filename)
318
- if not isinstance(filename, str):
319
- raise ValueError('Filename must be a string or a path-like object')
320
-
321
- subscribers = self._get_subscribers(callback)
322
- future = self._manager.download(
323
- bucket, key, filename, extra_args, subscribers
324
- )
325
- try:
326
- future.result()
327
- # This is for backwards compatibility where when retries are
328
- # exceeded we need to throw the same error from boto3 instead of
329
- # s3transfer's built in RetriesExceededError as current users are
330
- # catching the boto3 one instead of the s3transfer exception to do
331
- # their own retries.
332
- except S3TransferRetriesExceededError as e:
333
- raise RetriesExceededError(e.last_exception)
334
-
335
- def _get_subscribers(self, callback):
336
- if not callback:
337
- return None
338
- return [ProgressCallbackInvoker(callback)]
339
-
340
- def __enter__(self):
341
- return self
342
-
343
- def __exit__(self, *args):
344
- self._manager.__exit__(*args)
345
-
346
-
347
- class ProgressCallbackInvoker(BaseSubscriber):
348
- """A back-compat wrapper to invoke a provided callback via a subscriber
349
-
350
- :param callback: A callable that takes a single positional argument for
351
- how many bytes were transferred.
352
- """
353
-
354
- def __init__(self, callback):
355
- self._callback = callback
356
-
357
- def on_progress(self, bytes_transferred, **kwargs):
358
- self._callback(bytes_transferred)
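The aliasing in the deleted `TransferConfig` above (exposing `max_concurrency` as a mirror of the inherited `max_request_concurrency`) is the subtlest part of this module; a minimal standalone sketch of that pattern, with an illustrative class name and a single alias:

```python
class AliasedConfig:
    """Minimal sketch of the boto3-style attribute-alias pattern (name illustrative)."""

    # Public alias name -> internal attribute it mirrors.
    ALIAS = {'max_concurrency': 'max_request_concurrency'}

    def __init__(self, max_concurrency=10):
        self.max_request_concurrency = max_concurrency
        # Expose each alias so both names read the same value.
        for alias, real in self.ALIAS.items():
            setattr(self, alias, getattr(self, real))

    def __setattr__(self, name, value):
        # Writing through an alias also updates the real attribute,
        # which is what actually governs behavior.
        if name in self.ALIAS:
            super().__setattr__(self.ALIAS[name], value)
        super().__setattr__(name, value)


cfg = AliasedConfig()
cfg.max_concurrency = 4  # updates both names
```

Either name can then be read or written, and the internal name stays authoritative, matching how the deleted class keeps backwards compatibility with older argument names.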
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/region.py DELETED
@@ -1,10 +0,0 @@
1
- from typing import NamedTuple
2
-
3
-
4
- class Region(NamedTuple):
5
- """Defines a rectangular region of the screen."""
6
-
7
- x: int
8
- y: int
9
- width: int
10
- height: int
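The deleted `Region` type is a plain `typing.NamedTuple`; as a quick illustration of what that buys (positional construction, named field access, and tuple unpacking), using only the stdlib:

```python
from typing import NamedTuple


class Region(NamedTuple):
    """Defines a rectangular region of the screen."""

    x: int
    y: int
    width: int
    height: int


# NamedTuple instances behave as both records and tuples.
r = Region(10, 20, 80, 24)
x, y, w, h = r               # tuple unpacking
area = r.width * r.height    # named field access
```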
 
 
 
 
 
 
 
 
 
 
 
spaces/Big-Web/MMSD/env/Lib/site-packages/urllib3/packages/backports/__init__.py DELETED
File without changes
spaces/BlinkDL/RWKV-World-7B/app.py DELETED
@@ -1,301 +0,0 @@
1
- import gradio as gr
2
- import os, gc, copy, torch, re
3
- from datetime import datetime
4
- from huggingface_hub import hf_hub_download
5
- from pynvml import *
6
- nvmlInit()
7
- gpu_h = nvmlDeviceGetHandleByIndex(0)
8
- ctx_limit = 1536
9
- title = "RWKV-4-World-7B-v1-20230626-ctx4096"
10
-
11
- os.environ["RWKV_JIT_ON"] = '1'
12
- os.environ["RWKV_CUDA_ON"] = '1' # if '1' then use CUDA kernel for seq mode (much faster)
13
-
14
- from rwkv.model import RWKV
15
- model_path = hf_hub_download(repo_id="BlinkDL/rwkv-4-world", filename=f"{title}.pth")
16
- model = RWKV(model=model_path, strategy='cuda fp16i8 *8 -> cuda fp16')
17
- from rwkv.utils import PIPELINE, PIPELINE_ARGS
18
- pipeline = PIPELINE(model, "rwkv_vocab_v20230424")
19
-
20
- def generate_prompt(instruction, input=None):
21
- instruction = instruction.strip().replace('\r\n','\n').replace('\n\n','\n').replace('\n\n','\n')
22
- input = input.strip().replace('\r\n','\n').replace('\n\n','\n').replace('\n\n','\n')
23
- if input:
24
- return f"""Instruction: {instruction}
25
-
26
- Input: {input}
27
-
28
- Response:"""
29
- else:
30
- return f"""Question: {instruction}
31
-
32
- Answer:"""
33
-
34
- def evaluate(
35
- instruction,
36
- input=None,
37
- token_count=200,
38
- temperature=1.0,
39
- top_p=0.7,
40
- presencePenalty = 0.1,
41
- countPenalty = 0.1,
42
- ):
43
- args = PIPELINE_ARGS(temperature = max(0.2, float(temperature)), top_p = float(top_p),
44
- alpha_frequency = countPenalty,
45
- alpha_presence = presencePenalty,
46
- token_ban = [], # ban the generation of some tokens
47
- token_stop = [0]) # stop generation whenever you see any token here
48
-
49
- instruction = re.sub(r'\n{2,}', '\n', instruction).strip().replace('\r\n','\n')
50
- input = re.sub(r'\n{2,}', '\n', input).strip().replace('\r\n','\n')
51
- ctx = generate_prompt(instruction, input)
52
-
53
- all_tokens = []
54
- out_last = 0
55
- out_str = ''
56
- occurrence = {}
57
- state = None
58
- for i in range(int(token_count)):
59
- out, state = model.forward(pipeline.encode(ctx)[-ctx_limit:] if i == 0 else [token], state)
60
- for n in occurrence:
61
- out[n] -= (args.alpha_presence + occurrence[n] * args.alpha_frequency)
62
-
63
- token = pipeline.sample_logits(out, temperature=args.temperature, top_p=args.top_p)
64
- if token in args.token_stop:
65
- break
66
- all_tokens += [token]
67
- for xxx in occurrence:
68
- occurrence[xxx] *= 0.996
69
- if token not in occurrence:
70
- occurrence[token] = 1
71
- else:
72
- occurrence[token] += 1
73
-
74
- tmp = pipeline.decode(all_tokens[out_last:])
75
- if '\ufffd' not in tmp:
76
- out_str += tmp
77
- yield out_str.strip()
78
- out_last = i + 1
79
- if '\n\n' in out_str:
80
- break
81
-
82
- gpu_info = nvmlDeviceGetMemoryInfo(gpu_h)
83
- print(f'vram {gpu_info.total} used {gpu_info.used} free {gpu_info.free}')
84
- del out
85
- del state
86
- gc.collect()
87
- torch.cuda.empty_cache()
88
- yield out_str.strip()
89
-
90
- examples = [
91
- ["東京で訪れるべき素晴らしい場所とその紹介をいくつか挙げてください。", "", 300, 1.2, 0.5, 0.4, 0.4],
92
- ["Écrivez un programme Python pour miner 1 Bitcoin, avec des commentaires.", "", 300, 1.2, 0.5, 0.4, 0.4],
93
- ["Write a song about ravens.", "", 300, 1.2, 0.5, 0.4, 0.4],
94
- ["Explain the following metaphor: Life is like cats.", "", 300, 1.2, 0.5, 0.4, 0.4],
95
- ["Write a story using the following information", "A man named Alex chops a tree down", 300, 1.2, 0.5, 0.4, 0.4],
96
- ["Generate a list of adjectives that describe a person as brave.", "", 300, 1.2, 0.5, 0.4, 0.4],
97
- ["You have $100, and your goal is to turn that into as much money as possible with AI and Machine Learning. Please respond with detailed plan.", "", 300, 1.2, 0.5, 0.4, 0.4],
98
- ]
99
-
100
- ##########################################################################
101
-
102
- chat_intro = '''The following is a coherent verbose detailed conversation between <|user|> and an AI girl named <|bot|>.
103
-
104
- <|user|>: Hi <|bot|>, Would you like to chat with me for a while?
105
-
106
- <|bot|>: Hi <|user|>. Sure. What would you like to talk about? I'm listening.
107
- '''
108
-
109
- def user(message, chatbot):
110
- chatbot = chatbot or []
111
- # print(f"User: {message}")
112
- return "", chatbot + [[message, None]]
113
-
114
- def alternative(chatbot, history):
115
- if not chatbot or not history:
116
- return chatbot, history
117
-
118
- chatbot[-1][1] = None
119
- history[0] = copy.deepcopy(history[1])
120
-
121
- return chatbot, history
122
-
123
- def chat(
124
- prompt,
125
- user,
126
- bot,
127
- chatbot,
128
- history,
129
- temperature=1.0,
130
- top_p=0.8,
131
- presence_penalty=0.1,
132
- count_penalty=0.1,
133
- ):
134
- args = PIPELINE_ARGS(temperature=max(0.2, float(temperature)), top_p=float(top_p),
135
- alpha_frequency=float(count_penalty),
136
- alpha_presence=float(presence_penalty),
137
- token_ban=[], # ban the generation of some tokens
138
- token_stop=[]) # stop generation whenever you see any token here
139
-
140
- if not chatbot:
141
- return chatbot, history
142
-
143
- message = chatbot[-1][0]
144
- message = message.strip().replace('\r\n','\n').replace('\n\n','\n')
145
- ctx = f"{user}: {message}\n\n{bot}:"
146
-
147
- if not history:
148
- prompt = prompt.replace("<|user|>", user.strip())
149
- prompt = prompt.replace("<|bot|>", bot.strip())
150
- prompt = prompt.strip()
151
- prompt = f"\n{prompt}\n\n"
152
-
153
- out, state = model.forward(pipeline.encode(prompt), None)
154
- history = [state, None, []] # [state, state_pre, tokens]
155
- # print("History reloaded.")
156
-
157
- [state, _, all_tokens] = history
158
- state_pre_0 = copy.deepcopy(state)
159
-
160
- out, state = model.forward(pipeline.encode(ctx)[-ctx_limit:], state)
161
- state_pre_1 = copy.deepcopy(state) # For recovery
162
-
163
- # print("Bot:", end='')
164
-
165
- begin = len(all_tokens)
166
- out_last = begin
167
- out_str: str = ''
168
- occurrence = {}
169
- for i in range(300):
170
- if i <= 0:
171
- nl_bias = -float('inf')
172
- elif i <= 30:
173
- nl_bias = (i - 30) * 0.1
174
- elif i <= 130:
175
- nl_bias = 0
176
- else:
177
- nl_bias = (i - 130) * 0.25
178
- out[11] += nl_bias
179
- for n in occurrence:
180
- out[n] -= (args.alpha_presence + occurrence[n] * args.alpha_frequency)
181
-
182
- token = pipeline.sample_logits(out, temperature=args.temperature, top_p=args.top_p)
183
- next_tokens = [token]
184
- if token == 0:
185
- next_tokens = pipeline.encode('\n\n')
186
- all_tokens += next_tokens
187
- for xxx in occurrence:
188
- occurrence[xxx] *= 0.996
189
- if token not in occurrence:
190
- occurrence[token] = 1
191
- else:
192
- occurrence[token] += 1
193
-
194
- out, state = model.forward(next_tokens, state)
195
-
196
- tmp = pipeline.decode(all_tokens[out_last:])
197
- if '\ufffd' not in tmp:
198
- # print(tmp, end='', flush=True)
199
- out_last = begin + i + 1
200
- out_str += tmp
201
-
202
- chatbot[-1][1] = out_str.strip()
203
- history = [state, state_pre_0, all_tokens]
204
- yield chatbot, history
205
-
206
- out_str = pipeline.decode(all_tokens[begin:])
207
- out_str = out_str.replace("\r\n", '\n')
208
-
209
- if '\n\n' in out_str:
210
- break
211
-
212
- # State recovery
213
- if f'{user}:' in out_str or f'{bot}:' in out_str:
214
- idx_user = out_str.find(f'{user}:')
215
- idx_user = len(out_str) if idx_user == -1 else idx_user
216
- idx_bot = out_str.find(f'{bot}:')
217
- idx_bot = len(out_str) if idx_bot == -1 else idx_bot
218
- idx = min(idx_user, idx_bot)
219
-
220
- if idx < len(out_str):
221
- out_str = f" {out_str[:idx].strip()}\n\n"
222
- tokens = pipeline.encode(out_str)
223
-
224
- all_tokens = all_tokens[:begin] + tokens
225
- out, state = model.forward(tokens, state_pre_1)
226
- break
227
-
228
- gpu_info = nvmlDeviceGetMemoryInfo(gpu_h)
229
- print(f'vram {gpu_info.total} used {gpu_info.used} free {gpu_info.free}')
230
-
231
- gc.collect()
232
- torch.cuda.empty_cache()
233
-
234
- chatbot[-1][1] = out_str.strip()
235
- history = [state, state_pre_0, all_tokens]
236
- yield chatbot, history
237
-
238
- ##########################################################################
239
-
240
- with gr.Blocks(title=title) as demo:
241
- gr.HTML(f"<div style=\"text-align: center;\">\n<h1>🌍World - {title}</h1>\n</div>")
242
- with gr.Tab("Instruct mode"):
243
- gr.Markdown(f"World is [RWKV 7B](https://github.com/BlinkDL/ChatRWKV) 100% RNN [RWKV-LM](https://github.com/BlinkDL/RWKV-LM) ***trained on 100+ world languages***. *** Please try examples first (bottom of page) *** (edit them to use your question). Demo limited to ctxlen {ctx_limit}. Finetuned on alpaca, gpt4all, codealpaca and more. For best results, *** keep your prompt short and clear ***.</b>.") # <b>UPDATE: now with Chat (see above, as a tab) ==> turn off as of now due to VRAM leak caused by buggy code.
244
- with gr.Row():
245
- with gr.Column():
246
- instruction = gr.Textbox(lines=2, label="Instruction", value='東京で訪れるべき素晴らしい場所とその紹介をいくつか挙げてください。')
247
- input = gr.Textbox(lines=2, label="Input", placeholder="none")
248
- token_count = gr.Slider(10, 300, label="Max Tokens", step=10, value=300)
249
- temperature = gr.Slider(0.2, 2.0, label="Temperature", step=0.1, value=1.2)
250
- top_p = gr.Slider(0.0, 1.0, label="Top P", step=0.05, value=0.5)
251
- presence_penalty = gr.Slider(0.0, 1.0, label="Presence Penalty", step=0.1, value=0.4)
252
- count_penalty = gr.Slider(0.0, 1.0, label="Count Penalty", step=0.1, value=0.4)
253
- with gr.Column():
254
- with gr.Row():
255
- submit = gr.Button("Submit", variant="primary")
256
- clear = gr.Button("Clear", variant="secondary")
257
- output = gr.Textbox(label="Output", lines=5)
258
- data = gr.Dataset(components=[instruction, input, token_count, temperature, top_p, presence_penalty, count_penalty], samples=examples, label="Example Instructions", headers=["Instruction", "Input", "Max Tokens", "Temperature", "Top P", "Presence Penalty", "Count Penalty"])
259
- submit.click(evaluate, [instruction, input, token_count, temperature, top_p, presence_penalty, count_penalty], [output])
260
- clear.click(lambda: None, [], [output])
261
- data.click(lambda x: x, [data], [instruction, input, token_count, temperature, top_p, presence_penalty, count_penalty])
262
-
263
- # with gr.Tab("Chat (Experimental - Might be buggy - use ChatRWKV for reference)"):
264
- # gr.Markdown(f'''<b>*** The length of response is restricted in this demo. Use ChatRWKV for longer generations. ***</b> Say "go on" or "continue" can sometimes continue the response. If you'd like to edit the scenario, make sure to follow the exact same format: empty lines between (and only between) different speakers. Changes only take effect after you press [Clear]. <b>The default "Bob" & "Alice" names work the best.</b>''', label="Description")
265
- # with gr.Row():
266
- # with gr.Column():
267
- # chatbot = gr.Chatbot()
268
- # state = gr.State()
269
- # message = gr.Textbox(label="Message", value="Write me a python code to land on moon.")
270
- # with gr.Row():
271
- # send = gr.Button("Send", variant="primary")
272
- # alt = gr.Button("Alternative", variant="secondary")
273
- # clear = gr.Button("Clear", variant="secondary")
274
- # with gr.Column():
275
- # with gr.Row():
276
- # user_name = gr.Textbox(lines=1, max_lines=1, label="User Name", value="Bob")
277
- # bot_name = gr.Textbox(lines=1, max_lines=1, label="Bot Name", value="Alice")
278
- # prompt = gr.Textbox(lines=10, max_lines=50, label="Scenario", value=chat_intro)
279
- # temperature = gr.Slider(0.2, 2.0, label="Temperature", step=0.1, value=1.2)
280
- # top_p = gr.Slider(0.0, 1.0, label="Top P", step=0.05, value=0.5)
281
- # presence_penalty = gr.Slider(0.0, 1.0, label="Presence Penalty", step=0.1, value=0.4)
282
- # count_penalty = gr.Slider(0.0, 1.0, label="Count Penalty", step=0.1, value=0.4)
283
- # chat_inputs = [
284
- # prompt,
285
- # user_name,
286
- # bot_name,
287
- # chatbot,
288
- # state,
289
- # temperature,
290
- # top_p,
291
- # presence_penalty,
292
- # count_penalty
293
- # ]
294
- # chat_outputs = [chatbot, state]
295
- # message.submit(user, [message, chatbot], [message, chatbot], queue=False).then(chat, chat_inputs, chat_outputs)
296
- # send.click(user, [message, chatbot], [message, chatbot], queue=False).then(chat, chat_inputs, chat_outputs)
297
- # alt.click(alternative, [chatbot, state], [chatbot, state], queue=False).then(chat, chat_inputs, chat_outputs)
298
- # clear.click(lambda: ([], None, ""), [], [chatbot, state, message], queue=False)
299
-
300
- demo.queue(concurrency_count=1, max_size=10)
301
- demo.launch(share=False)
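The generation loops in the deleted app apply presence/frequency penalties with a per-step decay of 0.996 on the `occurrence` counts before sampling. A minimal numeric sketch of that bookkeeping, detached from the model (helper names and logit values are illustrative, not from the source):

```python
def apply_penalties(logits, occurrence, alpha_presence=0.4, alpha_frequency=0.4):
    # Subtract a flat presence penalty plus a count-scaled frequency
    # penalty from the logit of every token seen so far.
    out = dict(logits)
    for tok, count in occurrence.items():
        out[tok] -= alpha_presence + count * alpha_frequency
    return out


def decay_and_count(occurrence, token, decay=0.996):
    # Decay all existing counts, then record the newly sampled token.
    for tok in occurrence:
        occurrence[tok] *= decay
    occurrence[token] = occurrence.get(token, 0) + 1


logits = {11: 1.0, 42: 2.0}
occ = {}
decay_and_count(occ, 42)           # token 42 was just sampled
penalized = apply_penalties(logits, occ)
```

Because the decay shrinks old counts every step, the frequency term fades for tokens that stop appearing, while the presence term keeps discouraging exact repeats.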
spaces/CC123123/blip2_t/README.md DELETED
@@ -1,16 +0,0 @@
1
- ---
2
- title: BLIP2
3
- emoji: 🌖
4
- colorFrom: blue
5
- colorTo: pink
6
- sdk: gradio
7
- sdk_version: 3.17.0
8
- app_file: app.py
9
- pinned: false
10
- license: bsd-3-clause
11
- models:
12
- - Salesforce/blip2-opt-2.7b
13
- - Salesforce/blip2-flan-t5-xxl
14
- ---
15
-
16
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/tests/test_sampler.py DELETED
@@ -1,23 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
2
- import unittest
3
- from torch.utils.data.sampler import SequentialSampler
4
-
5
- from detectron2.data.samplers import GroupedBatchSampler
6
-
7
-
8
- class TestGroupedBatchSampler(unittest.TestCase):
9
- def test_missing_group_id(self):
10
- sampler = SequentialSampler(list(range(100)))
11
- group_ids = [1] * 100
12
- samples = GroupedBatchSampler(sampler, group_ids, 2)
13
-
14
- for mini_batch in samples:
15
- self.assertEqual(len(mini_batch), 2)
16
-
17
- def test_groups(self):
18
- sampler = SequentialSampler(list(range(100)))
19
- group_ids = [1, 0] * 50
20
- samples = GroupedBatchSampler(sampler, group_ids, 2)
21
-
22
- for mini_batch in samples:
23
- self.assertEqual((mini_batch[0] + mini_batch[1]) % 2, 0)
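The deleted test checks that detectron2's `GroupedBatchSampler` only emits batches whose members share a group id (here, index parity). The core idea can be sketched without the library, buffering indices per group (function name hypothetical):

```python
from collections import defaultdict


def grouped_batches(indices, group_ids, batch_size):
    # Buffer indices per group id and emit a batch whenever one
    # group's buffer fills, so every batch is group-homogeneous.
    buffers = defaultdict(list)
    for idx in indices:
        buf = buffers[group_ids[idx]]
        buf.append(idx)
        if len(buf) == batch_size:
            yield list(buf)
            buf.clear()


# Mirrors test_groups: alternating group ids, batch size 2.
batches = list(grouped_batches(range(100), [1, 0] * 50, 2))
```

With alternating group ids, every emitted pair has the same parity, which is exactly the invariant the deleted `test_groups` asserts.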
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/CVPR/LIVE/pybind11/include/pybind11/cast.h DELETED
@@ -1,2210 +0,0 @@
1
- /*
2
- pybind11/cast.h: Partial template specializations to cast between
3
- C++ and Python types
4
-
5
- Copyright (c) 2016 Wenzel Jakob <[email protected]>
6
-
7
- All rights reserved. Use of this source code is governed by a
8
- BSD-style license that can be found in the LICENSE file.
9
- */
10
-
11
- #pragma once
12
-
13
- #include "pytypes.h"
14
- #include "detail/typeid.h"
15
- #include "detail/descr.h"
16
- #include "detail/internals.h"
17
- #include <array>
18
- #include <limits>
19
- #include <tuple>
20
- #include <type_traits>
21
-
22
- #if defined(PYBIND11_CPP17)
23
- # if defined(__has_include)
24
- # if __has_include(<string_view>)
25
- # define PYBIND11_HAS_STRING_VIEW
26
- # endif
27
- # elif defined(_MSC_VER)
28
- # define PYBIND11_HAS_STRING_VIEW
29
- # endif
30
- #endif
31
- #ifdef PYBIND11_HAS_STRING_VIEW
32
- #include <string_view>
33
- #endif
34
-
35
- #if defined(__cpp_lib_char8_t) && __cpp_lib_char8_t >= 201811L
36
- # define PYBIND11_HAS_U8STRING
37
- #endif
38
-
39
- PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE)
40
- PYBIND11_NAMESPACE_BEGIN(detail)
41
-
42
- /// A life support system for temporary objects created by `type_caster::load()`.
43
- /// Adding a patient will keep it alive up until the enclosing function returns.
44
- class loader_life_support {
45
- public:
46
- /// A new patient frame is created when a function is entered
47
- loader_life_support() {
48
- get_internals().loader_patient_stack.push_back(nullptr);
49
- }
50
-
51
- /// ... and destroyed after it returns
52
- ~loader_life_support() {
53
- auto &stack = get_internals().loader_patient_stack;
54
- if (stack.empty())
55
- pybind11_fail("loader_life_support: internal error");
56
-
57
- auto ptr = stack.back();
58
- stack.pop_back();
59
- Py_CLEAR(ptr);
60
-
61
- // A heuristic to reduce the stack's capacity (e.g. after long recursive calls)
62
- if (stack.capacity() > 16 && stack.size() != 0 && stack.capacity() / stack.size() > 2)
63
- stack.shrink_to_fit();
64
- }
65
-
66
- /// This can only be used inside a pybind11-bound function, either by `argument_loader`
67
- /// at argument preparation time or by `py::cast()` at execution time.
68
- PYBIND11_NOINLINE static void add_patient(handle h) {
69
- auto &stack = get_internals().loader_patient_stack;
70
- if (stack.empty())
71
- throw cast_error("When called outside a bound function, py::cast() cannot "
72
- "do Python -> C++ conversions which require the creation "
73
- "of temporary values");
74
-
75
- auto &list_ptr = stack.back();
76
- if (list_ptr == nullptr) {
77
- list_ptr = PyList_New(1);
78
- if (!list_ptr)
79
- pybind11_fail("loader_life_support: error allocating list");
80
- PyList_SET_ITEM(list_ptr, 0, h.inc_ref().ptr());
81
- } else {
82
- auto result = PyList_Append(list_ptr, h.ptr());
83
- if (result == -1)
84
- pybind11_fail("loader_life_support: error adding patient");
85
- }
86
- }
87
- };
88
-
- // Gets the cache entry for the given type, creating it if necessary. The return value is the pair
- // returned by emplace, i.e. an iterator for the entry and a bool set to `true` if the entry was
- // just created.
- inline std::pair<decltype(internals::registered_types_py)::iterator, bool> all_type_info_get_cache(PyTypeObject *type);
-
- // Populates a just-created cache entry.
- PYBIND11_NOINLINE inline void all_type_info_populate(PyTypeObject *t, std::vector<type_info *> &bases) {
-     std::vector<PyTypeObject *> check;
-     for (handle parent : reinterpret_borrow<tuple>(t->tp_bases))
-         check.push_back((PyTypeObject *) parent.ptr());
-
-     auto const &type_dict = get_internals().registered_types_py;
-     for (size_t i = 0; i < check.size(); i++) {
-         auto type = check[i];
-         // Ignore Python2 old-style class super type:
-         if (!PyType_Check((PyObject *) type)) continue;
-
-         // Check `type` in the current set of registered python types:
-         auto it = type_dict.find(type);
-         if (it != type_dict.end()) {
-             // We found a cache entry for it, so it's either pybind-registered or has pre-computed
-             // pybind bases, but we have to make sure we haven't already seen the type(s) before: we
-             // want to follow Python/virtual C++ rules that there should only be one instance of a
-             // common base.
-             for (auto *tinfo : it->second) {
-                 // NB: Could use a second set here, rather than doing a linear search, but since
-                 // having a large number of immediate pybind11-registered types seems fairly
-                 // unlikely, that probably isn't worthwhile.
-                 bool found = false;
-                 for (auto *known : bases) {
-                     if (known == tinfo) { found = true; break; }
-                 }
-                 if (!found) bases.push_back(tinfo);
-             }
-         }
-         else if (type->tp_bases) {
-             // It's some python type, so keep following its base classes to look for one or more
-             // registered types
-             if (i + 1 == check.size()) {
-                 // When we're at the end, we can pop off the current element to avoid growing
-                 // `check` when adding just one base (which is typical--i.e. when there is no
-                 // multiple inheritance)
-                 check.pop_back();
-                 i--;
-             }
-             for (handle parent : reinterpret_borrow<tuple>(type->tp_bases))
-                 check.push_back((PyTypeObject *) parent.ptr());
-         }
-     }
- }
-
- /**
-  * Extracts vector of type_info pointers of pybind-registered roots of the given Python type. Will
-  * be just 1 pybind type for the Python type of a pybind-registered class, or for any Python-side
-  * derived class that uses single inheritance. Will contain as many types as required for a Python
-  * class that uses multiple inheritance to inherit (directly or indirectly) from multiple
-  * pybind-registered classes. Will be empty if neither the type nor any base classes are
-  * pybind-registered.
-  *
-  * The value is cached for the lifetime of the Python type.
-  */
- inline const std::vector<detail::type_info *> &all_type_info(PyTypeObject *type) {
-     auto ins = all_type_info_get_cache(type);
-     if (ins.second)
-         // New cache entry: populate it
-         all_type_info_populate(type, ins.first->second);
-
-     return ins.first->second;
- }
-
- /**
-  * Gets a single pybind11 type info for a python type. Returns nullptr if neither the type nor any
-  * ancestors are pybind11-registered. Throws an exception if there are multiple bases--use
-  * `all_type_info` instead if you want to support multiple bases.
-  */
- PYBIND11_NOINLINE inline detail::type_info* get_type_info(PyTypeObject *type) {
-     auto &bases = all_type_info(type);
-     if (bases.size() == 0)
-         return nullptr;
-     if (bases.size() > 1)
-         pybind11_fail("pybind11::detail::get_type_info: type has multiple pybind11-registered bases");
-     return bases.front();
- }
-
- inline detail::type_info *get_local_type_info(const std::type_index &tp) {
-     auto &locals = registered_local_types_cpp();
-     auto it = locals.find(tp);
-     if (it != locals.end())
-         return it->second;
-     return nullptr;
- }
-
- inline detail::type_info *get_global_type_info(const std::type_index &tp) {
-     auto &types = get_internals().registered_types_cpp;
-     auto it = types.find(tp);
-     if (it != types.end())
-         return it->second;
-     return nullptr;
- }
-
- /// Return the type info for a given C++ type; on lookup failure can either throw or return nullptr.
- PYBIND11_NOINLINE inline detail::type_info *get_type_info(const std::type_index &tp,
-                                                           bool throw_if_missing = false) {
-     if (auto ltype = get_local_type_info(tp))
-         return ltype;
-     if (auto gtype = get_global_type_info(tp))
-         return gtype;
-
-     if (throw_if_missing) {
-         std::string tname = tp.name();
-         detail::clean_type_id(tname);
-         pybind11_fail("pybind11::detail::get_type_info: unable to find type info for \"" + tname + "\"");
-     }
-     return nullptr;
- }
-
- PYBIND11_NOINLINE inline handle get_type_handle(const std::type_info &tp, bool throw_if_missing) {
-     detail::type_info *type_info = get_type_info(tp, throw_if_missing);
-     return handle(type_info ? ((PyObject *) type_info->type) : nullptr);
- }
-
- struct value_and_holder {
-     instance *inst = nullptr;
-     size_t index = 0u;
-     const detail::type_info *type = nullptr;
-     void **vh = nullptr;
-
-     // Main constructor for a found value/holder:
-     value_and_holder(instance *i, const detail::type_info *type, size_t vpos, size_t index) :
-         inst{i}, index{index}, type{type},
-         vh{inst->simple_layout ? inst->simple_value_holder : &inst->nonsimple.values_and_holders[vpos]}
-     {}
-
-     // Default constructor (used to signal a value-and-holder not found by get_value_and_holder())
-     value_and_holder() {}
-
-     // Used for past-the-end iterator
-     value_and_holder(size_t index) : index{index} {}
-
-     template <typename V = void> V *&value_ptr() const {
-         return reinterpret_cast<V *&>(vh[0]);
-     }
-     // True if this `value_and_holder` has a non-null value pointer
-     explicit operator bool() const { return value_ptr(); }
-
-     template <typename H> H &holder() const {
-         return reinterpret_cast<H &>(vh[1]);
-     }
-     bool holder_constructed() const {
-         return inst->simple_layout
-             ? inst->simple_holder_constructed
-             : inst->nonsimple.status[index] & instance::status_holder_constructed;
-     }
-     void set_holder_constructed(bool v = true) {
-         if (inst->simple_layout)
-             inst->simple_holder_constructed = v;
-         else if (v)
-             inst->nonsimple.status[index] |= instance::status_holder_constructed;
-         else
-             inst->nonsimple.status[index] &= (uint8_t) ~instance::status_holder_constructed;
-     }
-     bool instance_registered() const {
-         return inst->simple_layout
-             ? inst->simple_instance_registered
-             : inst->nonsimple.status[index] & instance::status_instance_registered;
-     }
-     void set_instance_registered(bool v = true) {
-         if (inst->simple_layout)
-             inst->simple_instance_registered = v;
-         else if (v)
-             inst->nonsimple.status[index] |= instance::status_instance_registered;
-         else
-             inst->nonsimple.status[index] &= (uint8_t) ~instance::status_instance_registered;
-     }
- };
-
- // Container for accessing and iterating over an instance's values/holders
- struct values_and_holders {
- private:
-     instance *inst;
-     using type_vec = std::vector<detail::type_info *>;
-     const type_vec &tinfo;
-
- public:
-     values_and_holders(instance *inst) : inst{inst}, tinfo(all_type_info(Py_TYPE(inst))) {}
-
-     struct iterator {
-     private:
-         instance *inst = nullptr;
-         const type_vec *types = nullptr;
-         value_and_holder curr;
-         friend struct values_and_holders;
-         iterator(instance *inst, const type_vec *tinfo)
-             : inst{inst}, types{tinfo},
-               curr(inst /* instance */,
-                    types->empty() ? nullptr : (*types)[0] /* type info */,
-                    0, /* vpos: (non-simple types only): the first vptr comes first */
-                    0 /* index */)
-         {}
-         // Past-the-end iterator:
-         iterator(size_t end) : curr(end) {}
-     public:
-         bool operator==(const iterator &other) const { return curr.index == other.curr.index; }
-         bool operator!=(const iterator &other) const { return curr.index != other.curr.index; }
-         iterator &operator++() {
-             if (!inst->simple_layout)
-                 curr.vh += 1 + (*types)[curr.index]->holder_size_in_ptrs;
-             ++curr.index;
-             curr.type = curr.index < types->size() ? (*types)[curr.index] : nullptr;
-             return *this;
-         }
-         value_and_holder &operator*() { return curr; }
-         value_and_holder *operator->() { return &curr; }
-     };
-
-     iterator begin() { return iterator(inst, &tinfo); }
-     iterator end() { return iterator(tinfo.size()); }
-
-     iterator find(const type_info *find_type) {
-         auto it = begin(), endit = end();
-         while (it != endit && it->type != find_type) ++it;
-         return it;
-     }
-
-     size_t size() { return tinfo.size(); }
- };
-
- /**
-  * Extracts C++ value and holder pointer references from an instance (which may contain multiple
-  * values/holders for python-side multiple inheritance) that match the given type. Throws an error
-  * if the given type (or ValueType, if omitted) is not a pybind11 base of the given instance. If
-  * `find_type` is omitted (or explicitly specified as nullptr) the first value/holder are returned,
-  * regardless of type (and the resulting .type will be nullptr).
-  *
-  * The returned object should be short-lived: in particular, it must not outlive the called-upon
-  * instance.
-  */
- PYBIND11_NOINLINE inline value_and_holder instance::get_value_and_holder(const type_info *find_type /*= nullptr default in common.h*/, bool throw_if_missing /*= true in common.h*/) {
-     // Optimize common case:
-     if (!find_type || Py_TYPE(this) == find_type->type)
-         return value_and_holder(this, find_type, 0, 0);
-
-     detail::values_and_holders vhs(this);
-     auto it = vhs.find(find_type);
-     if (it != vhs.end())
-         return *it;
-
-     if (!throw_if_missing)
-         return value_and_holder();
-
- #if defined(NDEBUG)
-     pybind11_fail("pybind11::detail::instance::get_value_and_holder: "
-                   "type is not a pybind11 base of the given instance "
-                   "(compile in debug mode for type details)");
- #else
-     pybind11_fail("pybind11::detail::instance::get_value_and_holder: `" +
-                   std::string(find_type->type->tp_name) + "' is not a pybind11 base of the given `" +
-                   std::string(Py_TYPE(this)->tp_name) + "' instance");
- #endif
- }
-
- PYBIND11_NOINLINE inline void instance::allocate_layout() {
-     auto &tinfo = all_type_info(Py_TYPE(this));
-
-     const size_t n_types = tinfo.size();
-
-     if (n_types == 0)
-         pybind11_fail("instance allocation failed: new instance has no pybind11-registered base types");
-
-     simple_layout =
-         n_types == 1 && tinfo.front()->holder_size_in_ptrs <= instance_simple_holder_in_ptrs();
-
-     // Simple path: no python-side multiple inheritance, and a small-enough holder
-     if (simple_layout) {
-         simple_value_holder[0] = nullptr;
-         simple_holder_constructed = false;
-         simple_instance_registered = false;
-     }
-     else { // multiple base types or a too-large holder
-         // Allocate space to hold: [v1*][h1][v2*][h2]...[bb...] where [vN*] is a value pointer,
-         // [hN] is the (uninitialized) holder instance for value N, and [bb...] is a set of bool
-         // values that tracks whether each associated holder has been initialized. Each [block] is
-         // padded, if necessary, to an integer multiple of sizeof(void *).
-         size_t space = 0;
-         for (auto t : tinfo) {
-             space += 1; // value pointer
-             space += t->holder_size_in_ptrs; // holder instance
-         }
-         size_t flags_at = space;
-         space += size_in_ptrs(n_types); // status bytes (holder_constructed and instance_registered)
-
-         // Allocate space for flags, values, and holders, and initialize it to 0 (flags and values,
-         // in particular, need to be 0). Use Python's memory allocation functions: in Python 3.6
-         // they default to using pymalloc, which is designed to be efficient for small allocations
-         // like the one we're doing here; in earlier versions (and for larger allocations) they are
-         // just wrappers around malloc.
- #if PY_VERSION_HEX >= 0x03050000
-         nonsimple.values_and_holders = (void **) PyMem_Calloc(space, sizeof(void *));
-         if (!nonsimple.values_and_holders) throw std::bad_alloc();
- #else
-         nonsimple.values_and_holders = (void **) PyMem_New(void *, space);
-         if (!nonsimple.values_and_holders) throw std::bad_alloc();
-         std::memset(nonsimple.values_and_holders, 0, space * sizeof(void *));
- #endif
-         nonsimple.status = reinterpret_cast<uint8_t *>(&nonsimple.values_and_holders[flags_at]);
-     }
-     owned = true;
- }
-
- PYBIND11_NOINLINE inline void instance::deallocate_layout() {
-     if (!simple_layout)
-         PyMem_Free(nonsimple.values_and_holders);
- }
-
- PYBIND11_NOINLINE inline bool isinstance_generic(handle obj, const std::type_info &tp) {
-     handle type = detail::get_type_handle(tp, false);
-     if (!type)
-         return false;
-     return isinstance(obj, type);
- }
-
- PYBIND11_NOINLINE inline std::string error_string() {
-     if (!PyErr_Occurred()) {
-         PyErr_SetString(PyExc_RuntimeError, "Unknown internal error occurred");
-         return "Unknown internal error occurred";
-     }
-
-     error_scope scope; // Preserve error state
-
-     std::string errorString;
-     if (scope.type) {
-         errorString += handle(scope.type).attr("__name__").cast<std::string>();
-         errorString += ": ";
-     }
-     if (scope.value)
-         errorString += (std::string) str(scope.value);
-
-     PyErr_NormalizeException(&scope.type, &scope.value, &scope.trace);
-
- #if PY_MAJOR_VERSION >= 3
-     if (scope.trace != nullptr)
-         PyException_SetTraceback(scope.value, scope.trace);
- #endif
-
- #if !defined(PYPY_VERSION)
-     if (scope.trace) {
-         PyTracebackObject *trace = (PyTracebackObject *) scope.trace;
-
-         /* Get the deepest trace possible */
-         while (trace->tb_next)
-             trace = trace->tb_next;
-
-         PyFrameObject *frame = trace->tb_frame;
-         errorString += "\n\nAt:\n";
-         while (frame) {
-             int lineno = PyFrame_GetLineNumber(frame);
-             errorString +=
-                 "  " + handle(frame->f_code->co_filename).cast<std::string>() +
-                 "(" + std::to_string(lineno) + "): " +
-                 handle(frame->f_code->co_name).cast<std::string>() + "\n";
-             frame = frame->f_back;
-         }
-     }
- #endif
-
-     return errorString;
- }
-
- PYBIND11_NOINLINE inline handle get_object_handle(const void *ptr, const detail::type_info *type) {
-     auto &instances = get_internals().registered_instances;
-     auto range = instances.equal_range(ptr);
-     for (auto it = range.first; it != range.second; ++it) {
-         for (const auto &vh : values_and_holders(it->second)) {
-             if (vh.type == type)
-                 return handle((PyObject *) it->second);
-         }
-     }
-     return handle();
- }
-
- inline PyThreadState *get_thread_state_unchecked() {
- #if defined(PYPY_VERSION)
-     return PyThreadState_GET();
- #elif PY_VERSION_HEX < 0x03000000
-     return _PyThreadState_Current;
- #elif PY_VERSION_HEX < 0x03050000
-     return (PyThreadState*) _Py_atomic_load_relaxed(&_PyThreadState_Current);
- #elif PY_VERSION_HEX < 0x03050200
-     return (PyThreadState*) _PyThreadState_Current.value;
- #else
-     return _PyThreadState_UncheckedGet();
- #endif
- }
-
- // Forward declarations
- inline void keep_alive_impl(handle nurse, handle patient);
- inline PyObject *make_new_instance(PyTypeObject *type);
-
- class type_caster_generic {
- public:
-     PYBIND11_NOINLINE type_caster_generic(const std::type_info &type_info)
-         : typeinfo(get_type_info(type_info)), cpptype(&type_info) { }
-
-     type_caster_generic(const type_info *typeinfo)
-         : typeinfo(typeinfo), cpptype(typeinfo ? typeinfo->cpptype : nullptr) { }
-
-     bool load(handle src, bool convert) {
-         return load_impl<type_caster_generic>(src, convert);
-     }
-
-     PYBIND11_NOINLINE static handle cast(const void *_src, return_value_policy policy, handle parent,
-                                          const detail::type_info *tinfo,
-                                          void *(*copy_constructor)(const void *),
-                                          void *(*move_constructor)(const void *),
-                                          const void *existing_holder = nullptr) {
-         if (!tinfo) // no type info: error will be set already
-             return handle();
-
-         void *src = const_cast<void *>(_src);
-         if (src == nullptr)
-             return none().release();
-
-         auto it_instances = get_internals().registered_instances.equal_range(src);
-         for (auto it_i = it_instances.first; it_i != it_instances.second; ++it_i) {
-             for (auto instance_type : detail::all_type_info(Py_TYPE(it_i->second))) {
-                 if (instance_type && same_type(*instance_type->cpptype, *tinfo->cpptype))
-                     return handle((PyObject *) it_i->second).inc_ref();
-             }
-         }
-
-         auto inst = reinterpret_steal<object>(make_new_instance(tinfo->type));
-         auto wrapper = reinterpret_cast<instance *>(inst.ptr());
-         wrapper->owned = false;
-         void *&valueptr = values_and_holders(wrapper).begin()->value_ptr();
-
-         switch (policy) {
-             case return_value_policy::automatic:
-             case return_value_policy::take_ownership:
-                 valueptr = src;
-                 wrapper->owned = true;
-                 break;
-
-             case return_value_policy::automatic_reference:
-             case return_value_policy::reference:
-                 valueptr = src;
-                 wrapper->owned = false;
-                 break;
-
-             case return_value_policy::copy:
-                 if (copy_constructor)
-                     valueptr = copy_constructor(src);
-                 else {
- #if defined(NDEBUG)
-                     throw cast_error("return_value_policy = copy, but type is "
-                                      "non-copyable! (compile in debug mode for details)");
- #else
-                     std::string type_name(tinfo->cpptype->name());
-                     detail::clean_type_id(type_name);
-                     throw cast_error("return_value_policy = copy, but type " +
-                                      type_name + " is non-copyable!");
- #endif
-                 }
-                 wrapper->owned = true;
-                 break;
-
-             case return_value_policy::move:
-                 if (move_constructor)
-                     valueptr = move_constructor(src);
-                 else if (copy_constructor)
-                     valueptr = copy_constructor(src);
-                 else {
- #if defined(NDEBUG)
-                     throw cast_error("return_value_policy = move, but type is neither "
-                                      "movable nor copyable! "
-                                      "(compile in debug mode for details)");
- #else
-                     std::string type_name(tinfo->cpptype->name());
-                     detail::clean_type_id(type_name);
-                     throw cast_error("return_value_policy = move, but type " +
-                                      type_name + " is neither movable nor copyable!");
- #endif
-                 }
-                 wrapper->owned = true;
-                 break;
-
-             case return_value_policy::reference_internal:
-                 valueptr = src;
-                 wrapper->owned = false;
-                 keep_alive_impl(inst, parent);
-                 break;
-
-             default:
-                 throw cast_error("unhandled return_value_policy: should not happen!");
-         }
-
-         tinfo->init_instance(wrapper, existing_holder);
-
-         return inst.release();
-     }
-
-     // Base methods for generic caster; these are overridden in copyable_holder_caster
-     void load_value(value_and_holder &&v_h) {
-         auto *&vptr = v_h.value_ptr();
-         // Lazy allocation for unallocated values:
-         if (vptr == nullptr) {
-             auto *type = v_h.type ? v_h.type : typeinfo;
-             if (type->operator_new) {
-                 vptr = type->operator_new(type->type_size);
-             } else {
- #if defined(__cpp_aligned_new) && (!defined(_MSC_VER) || _MSC_VER >= 1912)
-                 if (type->type_align > __STDCPP_DEFAULT_NEW_ALIGNMENT__)
-                     vptr = ::operator new(type->type_size,
-                                           std::align_val_t(type->type_align));
-                 else
- #endif
-                     vptr = ::operator new(type->type_size);
-             }
-         }
-         value = vptr;
-     }
-     bool try_implicit_casts(handle src, bool convert) {
-         for (auto &cast : typeinfo->implicit_casts) {
-             type_caster_generic sub_caster(*cast.first);
-             if (sub_caster.load(src, convert)) {
-                 value = cast.second(sub_caster.value);
-                 return true;
-             }
-         }
-         return false;
-     }
-     bool try_direct_conversions(handle src) {
-         for (auto &converter : *typeinfo->direct_conversions) {
-             if (converter(src.ptr(), value))
-                 return true;
-         }
-         return false;
-     }
-     void check_holder_compat() {}
-
-     PYBIND11_NOINLINE static void *local_load(PyObject *src, const type_info *ti) {
-         auto caster = type_caster_generic(ti);
-         if (caster.load(src, false))
-             return caster.value;
-         return nullptr;
-     }
-
-     /// Try to load with foreign typeinfo, if available. Used when there is no
-     /// native typeinfo, or when the native one wasn't able to produce a value.
-     PYBIND11_NOINLINE bool try_load_foreign_module_local(handle src) {
-         constexpr auto *local_key = PYBIND11_MODULE_LOCAL_ID;
-         const auto pytype = src.get_type();
-         if (!hasattr(pytype, local_key))
-             return false;
-
-         type_info *foreign_typeinfo = reinterpret_borrow<capsule>(getattr(pytype, local_key));
-         // Only consider this foreign loader if it is actually foreign and loads the correct C++ type
-         if (foreign_typeinfo->module_local_load == &local_load
-             || (cpptype && !same_type(*cpptype, *foreign_typeinfo->cpptype)))
-             return false;
-
-         if (auto result = foreign_typeinfo->module_local_load(src.ptr(), foreign_typeinfo)) {
-             value = result;
-             return true;
-         }
-         return false;
-     }
-
-     // Implementation of `load`; this takes the type of `this` so that it can dispatch the relevant
-     // bits of code between here and copyable_holder_caster where the two classes need different
-     // logic (without having to resort to virtual inheritance).
-     template <typename ThisT>
-     PYBIND11_NOINLINE bool load_impl(handle src, bool convert) {
-         if (!src) return false;
-         if (!typeinfo) return try_load_foreign_module_local(src);
-         if (src.is_none()) {
-             // Defer accepting None to other overloads (if we aren't in convert mode):
-             if (!convert) return false;
-             value = nullptr;
-             return true;
-         }
-
-         auto &this_ = static_cast<ThisT &>(*this);
-         this_.check_holder_compat();
-
-         PyTypeObject *srctype = Py_TYPE(src.ptr());
-
-         // Case 1: If src is an exact type match for the target type then we can reinterpret_cast
-         // the instance's value pointer to the target type:
-         if (srctype == typeinfo->type) {
-             this_.load_value(reinterpret_cast<instance *>(src.ptr())->get_value_and_holder());
-             return true;
-         }
-         // Case 2: We have a derived class
-         else if (PyType_IsSubtype(srctype, typeinfo->type)) {
-             auto &bases = all_type_info(srctype);
-             bool no_cpp_mi = typeinfo->simple_type;
-
-             // Case 2a: the python type is a Python-inherited derived class that inherits from just
-             // one simple (no MI) pybind11 class, or is an exact match, so the C++ instance is of
-             // the right type and we can use reinterpret_cast.
-             // (This is essentially the same as case 2b, but because not using multiple inheritance
-             // is extremely common, we handle it specially to avoid the loop iterator and type
-             // pointer lookup overhead)
-             if (bases.size() == 1 && (no_cpp_mi || bases.front()->type == typeinfo->type)) {
-                 this_.load_value(reinterpret_cast<instance *>(src.ptr())->get_value_and_holder());
-                 return true;
-             }
-             // Case 2b: the python type inherits from multiple C++ bases. Check the bases to see if
-             // we can find an exact match (or, for a simple C++ type, an inherited match); if so, we
-             // can safely reinterpret_cast to the relevant pointer.
-             else if (bases.size() > 1) {
-                 for (auto base : bases) {
-                     if (no_cpp_mi ? PyType_IsSubtype(base->type, typeinfo->type) : base->type == typeinfo->type) {
-                         this_.load_value(reinterpret_cast<instance *>(src.ptr())->get_value_and_holder(base));
-                         return true;
-                     }
-                 }
-             }
-
-             // Case 2c: C++ multiple inheritance is involved and we couldn't find an exact type match
-             // in the registered bases, above, so try implicit casting (needed for proper C++ casting
-             // when MI is involved).
-             if (this_.try_implicit_casts(src, convert))
-                 return true;
-         }
-
-         // Perform an implicit conversion
-         if (convert) {
-             for (auto &converter : typeinfo->implicit_conversions) {
-                 auto temp = reinterpret_steal<object>(converter(src.ptr(), typeinfo->type));
-                 if (load_impl<ThisT>(temp, false)) {
-                     loader_life_support::add_patient(temp);
-                     return true;
-                 }
-             }
-             if (this_.try_direct_conversions(src))
-                 return true;
-         }
-
-         // Failed to match local typeinfo. Try again with global.
-         if (typeinfo->module_local) {
-             if (auto gtype = get_global_type_info(*typeinfo->cpptype)) {
-                 typeinfo = gtype;
-                 return load(src, false);
-             }
-         }
-
-         // Global typeinfo has precedence over foreign module_local
-         return try_load_foreign_module_local(src);
-     }
-
-     // Called to do type lookup and wrap the pointer and type in a pair when a dynamic_cast
-     // isn't needed or can't be used. If the type is unknown, sets the error and returns a pair
-     // with .second = nullptr. (p.first = nullptr is not an error: it becomes None).
-     PYBIND11_NOINLINE static std::pair<const void *, const type_info *> src_and_type(
-             const void *src, const std::type_info &cast_type, const std::type_info *rtti_type = nullptr) {
-         if (auto *tpi = get_type_info(cast_type))
-             return {src, const_cast<const type_info *>(tpi)};
-
-         // Not found, set error:
-         std::string tname = rtti_type ? rtti_type->name() : cast_type.name();
-         detail::clean_type_id(tname);
-         std::string msg = "Unregistered type : " + tname;
-         PyErr_SetString(PyExc_TypeError, msg.c_str());
-         return {nullptr, nullptr};
-     }
-
-     const type_info *typeinfo = nullptr;
-     const std::type_info *cpptype = nullptr;
-     void *value = nullptr;
- };
-
- /**
-  * Determine suitable casting operator for pointer-or-lvalue-casting type casters. The type caster
-  * needs to provide `operator T*()` and `operator T&()` operators.
-  *
-  * If the type supports moving the value away via an `operator T&&() &&` method, it should use
-  * `movable_cast_op_type` instead.
-  */
- template <typename T>
- using cast_op_type =
-     conditional_t<std::is_pointer<remove_reference_t<T>>::value,
-         typename std::add_pointer<intrinsic_t<T>>::type,
-         typename std::add_lvalue_reference<intrinsic_t<T>>::type>;
-
- /**
-  * Determine suitable casting operator for a type caster with a movable value. Such a type caster
-  * needs to provide `operator T*()`, `operator T&()`, and `operator T&&() &&`. The latter will be
-  * called in appropriate contexts where the value can be moved rather than copied.
-  *
-  * These operators are automatically provided when using the PYBIND11_TYPE_CASTER macro.
-  */
- template <typename T>
- using movable_cast_op_type =
-     conditional_t<std::is_pointer<typename std::remove_reference<T>::type>::value,
-         typename std::add_pointer<intrinsic_t<T>>::type,
-         conditional_t<std::is_rvalue_reference<T>::value,
-             typename std::add_rvalue_reference<intrinsic_t<T>>::type,
-             typename std::add_lvalue_reference<intrinsic_t<T>>::type>>;
-
- // std::is_copy_constructible isn't quite enough: it lets std::vector<T> (and similar) through when
- // T is non-copyable, but code containing such a copy constructor fails to actually compile.
- template <typename T, typename SFINAE = void> struct is_copy_constructible : std::is_copy_constructible<T> {};
-
- // Specialization for types that appear to be copy constructible but also look like stl containers
- // (we specifically check for: has `value_type` and `reference` with `reference = value_type&`): if
- // so, copy constructibility depends on whether the value_type is copy constructible.
- template <typename Container> struct is_copy_constructible<Container, enable_if_t<all_of<
-         std::is_copy_constructible<Container>,
-         std::is_same<typename Container::value_type &, typename Container::reference>,
-         // Avoid infinite recursion
-         negation<std::is_same<Container, typename Container::value_type>>
-     >::value>> : is_copy_constructible<typename Container::value_type> {};
-
- // Likewise for std::pair
- // (after C++17 it is mandatory that the copy constructor not exist when the two types aren't themselves
- // copy constructible, but this cannot be relied upon when T1 or T2 are themselves containers).
- template <typename T1, typename T2> struct is_copy_constructible<std::pair<T1, T2>>
-     : all_of<is_copy_constructible<T1>, is_copy_constructible<T2>> {};
-
- // The same problems arise with std::is_copy_assignable, so we use the same workaround.
- template <typename T, typename SFINAE = void> struct is_copy_assignable : std::is_copy_assignable<T> {};
- template <typename Container> struct is_copy_assignable<Container, enable_if_t<all_of<
-         std::is_copy_assignable<Container>,
-         std::is_same<typename Container::value_type &, typename Container::reference>
-     >::value>> : is_copy_assignable<typename Container::value_type> {};
- template <typename T1, typename T2> struct is_copy_assignable<std::pair<T1, T2>>
-     : all_of<is_copy_assignable<T1>, is_copy_assignable<T2>> {};
- PYBIND11_NAMESPACE_END(detail)
-
- // polymorphic_type_hook<itype>::get(src, tinfo) determines whether the object pointed
- // to by `src` actually is an instance of some class derived from `itype`.
- // If so, it sets `tinfo` to point to the std::type_info representing that derived
- // type, and returns a pointer to the start of the most-derived object of that type
- // (in which `src` is a subobject; this will be the same address as `src` in most
- // single inheritance cases). If not, or if `src` is nullptr, it simply returns `src`
- // and leaves `tinfo` at its default value of nullptr.
- //
- // The default polymorphic_type_hook just returns src. A specialization for polymorphic
- // types determines the runtime type of the passed object and adjusts the this-pointer
- // appropriately via dynamic_cast<void*>. This is what enables a C++ Animal* to appear
- // to Python as a Dog (if Dog inherits from Animal, Animal is polymorphic, Dog is
- // registered with pybind11, and this Animal is in fact a Dog).
- //
- // You may specialize polymorphic_type_hook yourself for types that want to appear
- // polymorphic to Python but do not use C++ RTTI. (This is a not uncommon pattern
- // in performance-sensitive applications, used most notably in LLVM.)
- //
- // polymorphic_type_hook_base allows users to specialize polymorphic_type_hook with
- // std::enable_if. User provided specializations will always have higher priority than
- // the default implementation and specialization provided in polymorphic_type_hook_base.
- template <typename itype, typename SFINAE = void>
- struct polymorphic_type_hook_base
- {
-     static const void *get(const itype *src, const std::type_info*&) { return src; }
- };
- template <typename itype>
- struct polymorphic_type_hook_base<itype, detail::enable_if_t<std::is_polymorphic<itype>::value>>
- {
-     static const void *get(const itype *src, const std::type_info*& type) {
-         type = src ? &typeid(*src) : nullptr;
-         return dynamic_cast<const void*>(src);
-     }
- };
- template <typename itype, typename SFINAE = void>
- struct polymorphic_type_hook : public polymorphic_type_hook_base<itype> {};
- PYBIND11_NAMESPACE_BEGIN(detail)
-
- /// Generic type caster for objects stored on the heap
- template <typename type> class type_caster_base : public type_caster_generic {
-     using itype = intrinsic_t<type>;
-
- public:
-     static constexpr auto name = _<type>();
-
-     type_caster_base() : type_caster_base(typeid(type)) { }
-     explicit type_caster_base(const std::type_info &info) : type_caster_generic(info) { }
-
-     static handle cast(const itype &src, return_value_policy policy, handle parent) {
-         if (policy == return_value_policy::automatic || policy == return_value_policy::automatic_reference)
-             policy = return_value_policy::copy;
-         return cast(&src, policy, parent);
-     }
-
-     static handle cast(itype &&src, return_value_policy, handle parent) {
-         return cast(&src, return_value_policy::move, parent);
-     }
-
-     // Returns a (pointer, type_info) pair taking care of necessary type lookup for a
-     // polymorphic type (using RTTI by default, but can be overridden by specializing
-     // polymorphic_type_hook). If the instance isn't derived, returns the base version.
-     static std::pair<const void *, const type_info *> src_and_type(const itype *src) {
-         auto &cast_type = typeid(itype);
-         const std::type_info *instance_type = nullptr;
-         const void *vsrc = polymorphic_type_hook<itype>::get(src, instance_type);
-         if (instance_type && !same_type(cast_type, *instance_type)) {
-             // This is a base pointer to a derived type. If the derived type is registered
-             // with pybind11, we want to make the full derived object available.
-             // In the typical case where itype is polymorphic, we get the correct
-             // derived pointer (which may be != base pointer) by a dynamic_cast to
-             // most derived type. If itype is not polymorphic, we won't get here
-             // except via a user-provided specialization of polymorphic_type_hook,
-             // and the user has promised that no this-pointer adjustment is
-             // required in that case, so it's OK to use static_cast.
-             if (const auto *tpi = get_type_info(*instance_type))
-                 return {vsrc, tpi};
-         }
-         // Otherwise we have either a nullptr, an `itype` pointer, or an unknown derived pointer, so
-         // don't do a cast
901
- return type_caster_generic::src_and_type(src, cast_type, instance_type);
902
- }
903
-
904
- static handle cast(const itype *src, return_value_policy policy, handle parent) {
905
- auto st = src_and_type(src);
906
- return type_caster_generic::cast(
907
- st.first, policy, parent, st.second,
908
- make_copy_constructor(src), make_move_constructor(src));
909
- }
910
-
911
- static handle cast_holder(const itype *src, const void *holder) {
912
- auto st = src_and_type(src);
913
- return type_caster_generic::cast(
914
- st.first, return_value_policy::take_ownership, {}, st.second,
915
- nullptr, nullptr, holder);
916
- }
917
-
918
- template <typename T> using cast_op_type = detail::cast_op_type<T>;
919
-
920
- operator itype*() { return (type *) value; }
921
- operator itype&() { if (!value) throw reference_cast_error(); return *((itype *) value); }
922
-
923
- protected:
924
- using Constructor = void *(*)(const void *);
925
-
926
- /* Only enabled when the types are {copy,move}-constructible *and* when the type
927
- does not have a private operator new implementation. */
928
- template <typename T, typename = enable_if_t<is_copy_constructible<T>::value>>
929
- static auto make_copy_constructor(const T *x) -> decltype(new T(*x), Constructor{}) {
930
- return [](const void *arg) -> void * {
931
- return new T(*reinterpret_cast<const T *>(arg));
932
- };
933
- }
934
-
935
- template <typename T, typename = enable_if_t<std::is_move_constructible<T>::value>>
936
- static auto make_move_constructor(const T *x) -> decltype(new T(std::move(*const_cast<T *>(x))), Constructor{}) {
937
- return [](const void *arg) -> void * {
938
- return new T(std::move(*const_cast<T *>(reinterpret_cast<const T *>(arg))));
939
- };
940
- }
941
-
942
- static Constructor make_copy_constructor(...) { return nullptr; }
943
- static Constructor make_move_constructor(...) { return nullptr; }
944
- };
945

template <typename type, typename SFINAE = void> class type_caster : public type_caster_base<type> { };
template <typename type> using make_caster = type_caster<intrinsic_t<type>>;

// Shortcut for calling a caster's `cast_op_type` cast operator for casting a type_caster to a T
template <typename T> typename make_caster<T>::template cast_op_type<T> cast_op(make_caster<T> &caster) {
    return caster.operator typename make_caster<T>::template cast_op_type<T>();
}
template <typename T> typename make_caster<T>::template cast_op_type<typename std::add_rvalue_reference<T>::type>
cast_op(make_caster<T> &&caster) {
    return std::move(caster).operator
        typename make_caster<T>::template cast_op_type<typename std::add_rvalue_reference<T>::type>();
}

template <typename type> class type_caster<std::reference_wrapper<type>> {
private:
    using caster_t = make_caster<type>;
    caster_t subcaster;
    using subcaster_cast_op_type = typename caster_t::template cast_op_type<type>;
    static_assert(std::is_same<typename std::remove_const<type>::type &, subcaster_cast_op_type>::value,
                  "std::reference_wrapper<T> caster requires T to have a caster with a `T &` operator");
public:
    bool load(handle src, bool convert) { return subcaster.load(src, convert); }
    static constexpr auto name = caster_t::name;
    static handle cast(const std::reference_wrapper<type> &src, return_value_policy policy, handle parent) {
        // It is definitely wrong to take ownership of this pointer, so mask that rvp
        if (policy == return_value_policy::take_ownership || policy == return_value_policy::automatic)
            policy = return_value_policy::automatic_reference;
        return caster_t::cast(&src.get(), policy, parent);
    }
    template <typename T> using cast_op_type = std::reference_wrapper<type>;
    operator std::reference_wrapper<type>() { return subcaster.operator subcaster_cast_op_type&(); }
};

#define PYBIND11_TYPE_CASTER(type, py_name) \
    protected: \
        type value; \
    public: \
        static constexpr auto name = py_name; \
        template <typename T_, enable_if_t<std::is_same<type, remove_cv_t<T_>>::value, int> = 0> \
        static handle cast(T_ *src, return_value_policy policy, handle parent) { \
            if (!src) return none().release(); \
            if (policy == return_value_policy::take_ownership) { \
                auto h = cast(std::move(*src), policy, parent); delete src; return h; \
            } else { \
                return cast(*src, policy, parent); \
            } \
        } \
        operator type*() { return &value; } \
        operator type&() { return value; } \
        operator type&&() && { return std::move(value); } \
        template <typename T_> using cast_op_type = pybind11::detail::movable_cast_op_type<T_>

template <typename CharT> using is_std_char_type = any_of<
    std::is_same<CharT, char>,     /* std::string */
#if defined(PYBIND11_HAS_U8STRING)
    std::is_same<CharT, char8_t>,  /* std::u8string */
#endif
    std::is_same<CharT, char16_t>, /* std::u16string */
    std::is_same<CharT, char32_t>, /* std::u32string */
    std::is_same<CharT, wchar_t>   /* std::wstring */
>;

template <typename T>
struct type_caster<T, enable_if_t<std::is_arithmetic<T>::value && !is_std_char_type<T>::value>> {
    using _py_type_0 = conditional_t<sizeof(T) <= sizeof(long), long, long long>;
    using _py_type_1 = conditional_t<std::is_signed<T>::value, _py_type_0, typename std::make_unsigned<_py_type_0>::type>;
    using py_type = conditional_t<std::is_floating_point<T>::value, double, _py_type_1>;
public:

    bool load(handle src, bool convert) {
        py_type py_value;

        if (!src)
            return false;

        if (std::is_floating_point<T>::value) {
            if (convert || PyFloat_Check(src.ptr()))
                py_value = (py_type) PyFloat_AsDouble(src.ptr());
            else
                return false;
        } else if (PyFloat_Check(src.ptr())) {
            return false;
        } else if (std::is_unsigned<py_type>::value) {
            py_value = as_unsigned<py_type>(src.ptr());
        } else { // signed integer:
            py_value = sizeof(T) <= sizeof(long)
                ? (py_type) PyLong_AsLong(src.ptr())
                : (py_type) PYBIND11_LONG_AS_LONGLONG(src.ptr());
        }

        bool py_err = py_value == (py_type) -1 && PyErr_Occurred();

        // Protect std::numeric_limits::min/max with parentheses
        if (py_err || (std::is_integral<T>::value && sizeof(py_type) != sizeof(T) &&
                       (py_value < (py_type) (std::numeric_limits<T>::min)() ||
                        py_value > (py_type) (std::numeric_limits<T>::max)()))) {
            bool type_error = py_err && PyErr_ExceptionMatches(
#if PY_VERSION_HEX < 0x03000000 && !defined(PYPY_VERSION)
                PyExc_SystemError
#else
                PyExc_TypeError
#endif
            );
            PyErr_Clear();
            if (type_error && convert && PyNumber_Check(src.ptr())) {
                auto tmp = reinterpret_steal<object>(std::is_floating_point<T>::value
                                                     ? PyNumber_Float(src.ptr())
                                                     : PyNumber_Long(src.ptr()));
                PyErr_Clear();
                return load(tmp, false);
            }
            return false;
        }

        value = (T) py_value;
        return true;
    }

    template<typename U = T>
    static typename std::enable_if<std::is_floating_point<U>::value, handle>::type
    cast(U src, return_value_policy /* policy */, handle /* parent */) {
        return PyFloat_FromDouble((double) src);
    }

    template<typename U = T>
    static typename std::enable_if<!std::is_floating_point<U>::value && std::is_signed<U>::value && (sizeof(U) <= sizeof(long)), handle>::type
    cast(U src, return_value_policy /* policy */, handle /* parent */) {
        return PYBIND11_LONG_FROM_SIGNED((long) src);
    }

    template<typename U = T>
    static typename std::enable_if<!std::is_floating_point<U>::value && std::is_unsigned<U>::value && (sizeof(U) <= sizeof(unsigned long)), handle>::type
    cast(U src, return_value_policy /* policy */, handle /* parent */) {
        return PYBIND11_LONG_FROM_UNSIGNED((unsigned long) src);
    }

    template<typename U = T>
    static typename std::enable_if<!std::is_floating_point<U>::value && std::is_signed<U>::value && (sizeof(U) > sizeof(long)), handle>::type
    cast(U src, return_value_policy /* policy */, handle /* parent */) {
        return PyLong_FromLongLong((long long) src);
    }

    template<typename U = T>
    static typename std::enable_if<!std::is_floating_point<U>::value && std::is_unsigned<U>::value && (sizeof(U) > sizeof(unsigned long)), handle>::type
    cast(U src, return_value_policy /* policy */, handle /* parent */) {
        return PyLong_FromUnsignedLongLong((unsigned long long) src);
    }

    PYBIND11_TYPE_CASTER(T, _<std::is_integral<T>::value>("int", "float"));
};

template<typename T> struct void_caster {
public:
    bool load(handle src, bool) {
        if (src && src.is_none())
            return true;
        return false;
    }
    static handle cast(T, return_value_policy /* policy */, handle /* parent */) {
        return none().inc_ref();
    }
    PYBIND11_TYPE_CASTER(T, _("None"));
};

template <> class type_caster<void_type> : public void_caster<void_type> {};

template <> class type_caster<void> : public type_caster<void_type> {
public:
    using type_caster<void_type>::cast;

    bool load(handle h, bool) {
        if (!h) {
            return false;
        } else if (h.is_none()) {
            value = nullptr;
            return true;
        }

        /* Check if this is a capsule */
        if (isinstance<capsule>(h)) {
            value = reinterpret_borrow<capsule>(h);
            return true;
        }

        /* Check if this is a C++ type */
        auto &bases = all_type_info((PyTypeObject *) h.get_type().ptr());
        if (bases.size() == 1) { // Only allow loading from a single-value type
            value = values_and_holders(reinterpret_cast<instance *>(h.ptr())).begin()->value_ptr();
            return true;
        }

        /* Fail */
        return false;
    }

    static handle cast(const void *ptr, return_value_policy /* policy */, handle /* parent */) {
        if (ptr)
            return capsule(ptr).release();
        else
            return none().inc_ref();
    }

    template <typename T> using cast_op_type = void*&;
    operator void *&() { return value; }
    static constexpr auto name = _("capsule");
private:
    void *value = nullptr;
};

template <> class type_caster<std::nullptr_t> : public void_caster<std::nullptr_t> { };

template <> class type_caster<bool> {
public:
    bool load(handle src, bool convert) {
        if (!src) return false;
        else if (src.ptr() == Py_True) { value = true; return true; }
        else if (src.ptr() == Py_False) { value = false; return true; }
        else if (convert || !strcmp("numpy.bool_", Py_TYPE(src.ptr())->tp_name)) {
            // (allow non-implicit conversion for numpy booleans)

            Py_ssize_t res = -1;
            if (src.is_none()) {
                res = 0;  // None is implicitly converted to False
            }
#if defined(PYPY_VERSION)
            // On PyPy, check that "__bool__" (or "__nonzero__" on Python 2.7) attr exists
            else if (hasattr(src, PYBIND11_BOOL_ATTR)) {
                res = PyObject_IsTrue(src.ptr());
            }
#else
            // Alternate approach for CPython: this does the same as the above, but optimized
            // using the CPython API so as to avoid an unneeded attribute lookup.
            else if (auto tp_as_number = src.ptr()->ob_type->tp_as_number) {
                if (PYBIND11_NB_BOOL(tp_as_number)) {
                    res = (*PYBIND11_NB_BOOL(tp_as_number))(src.ptr());
                }
            }
#endif
            if (res == 0 || res == 1) {
                value = (bool) res;
                return true;
            } else {
                PyErr_Clear();
            }
        }
        return false;
    }
    static handle cast(bool src, return_value_policy /* policy */, handle /* parent */) {
        return handle(src ? Py_True : Py_False).inc_ref();
    }
    PYBIND11_TYPE_CASTER(bool, _("bool"));
};

// Helper class for UTF-{8,16,32} C++ stl strings:
template <typename StringType, bool IsView = false> struct string_caster {
    using CharT = typename StringType::value_type;

    // Simplify life by being able to assume standard char sizes (the standard only guarantees
    // minimums, but Python requires exact sizes)
    static_assert(!std::is_same<CharT, char>::value || sizeof(CharT) == 1, "Unsupported char size != 1");
#if defined(PYBIND11_HAS_U8STRING)
    static_assert(!std::is_same<CharT, char8_t>::value || sizeof(CharT) == 1, "Unsupported char8_t size != 1");
#endif
    static_assert(!std::is_same<CharT, char16_t>::value || sizeof(CharT) == 2, "Unsupported char16_t size != 2");
    static_assert(!std::is_same<CharT, char32_t>::value || sizeof(CharT) == 4, "Unsupported char32_t size != 4");
    // wchar_t can be either 16 bits (Windows) or 32 (everywhere else)
    static_assert(!std::is_same<CharT, wchar_t>::value || sizeof(CharT) == 2 || sizeof(CharT) == 4,
                  "Unsupported wchar_t size != 2/4");
    static constexpr size_t UTF_N = 8 * sizeof(CharT);

    bool load(handle src, bool) {
#if PY_MAJOR_VERSION < 3
        object temp;
#endif
        handle load_src = src;
        if (!src) {
            return false;
        } else if (!PyUnicode_Check(load_src.ptr())) {
#if PY_MAJOR_VERSION >= 3
            return load_bytes(load_src);
#else
            if (std::is_same<CharT, char>::value) {
                return load_bytes(load_src);
            }

            // The below is a guaranteed failure in Python 3 when PyUnicode_Check returns false
            if (!PYBIND11_BYTES_CHECK(load_src.ptr()))
                return false;

            temp = reinterpret_steal<object>(PyUnicode_FromObject(load_src.ptr()));
            if (!temp) { PyErr_Clear(); return false; }
            load_src = temp;
#endif
        }

        object utfNbytes = reinterpret_steal<object>(PyUnicode_AsEncodedString(
            load_src.ptr(), UTF_N == 8 ? "utf-8" : UTF_N == 16 ? "utf-16" : "utf-32", nullptr));
        if (!utfNbytes) { PyErr_Clear(); return false; }

        const CharT *buffer = reinterpret_cast<const CharT *>(PYBIND11_BYTES_AS_STRING(utfNbytes.ptr()));
        size_t length = (size_t) PYBIND11_BYTES_SIZE(utfNbytes.ptr()) / sizeof(CharT);
        if (UTF_N > 8) { buffer++; length--; } // Skip BOM for UTF-16/32
        value = StringType(buffer, length);

        // If we're loading a string_view we need to keep the encoded Python object alive:
        if (IsView)
            loader_life_support::add_patient(utfNbytes);

        return true;
    }

    static handle cast(const StringType &src, return_value_policy /* policy */, handle /* parent */) {
        const char *buffer = reinterpret_cast<const char *>(src.data());
        ssize_t nbytes = ssize_t(src.size() * sizeof(CharT));
        handle s = decode_utfN(buffer, nbytes);
        if (!s) throw error_already_set();
        return s;
    }

    PYBIND11_TYPE_CASTER(StringType, _(PYBIND11_STRING_NAME));

private:
    static handle decode_utfN(const char *buffer, ssize_t nbytes) {
#if !defined(PYPY_VERSION)
        return
            UTF_N == 8  ? PyUnicode_DecodeUTF8(buffer, nbytes, nullptr) :
            UTF_N == 16 ? PyUnicode_DecodeUTF16(buffer, nbytes, nullptr, nullptr) :
                          PyUnicode_DecodeUTF32(buffer, nbytes, nullptr, nullptr);
#else
        // PyPy seems to have multiple problems related to PyUnicode_UTF*: the UTF8 version
        // sometimes segfaults for unknown reasons, while the UTF16 and 32 versions require
        // non-const char * arguments, which is also a nuisance, so bypass the whole thing by just
        // passing the encoding as a string value, which works properly:
        return PyUnicode_Decode(buffer, nbytes, UTF_N == 8 ? "utf-8" : UTF_N == 16 ? "utf-16" : "utf-32", nullptr);
#endif
    }

    // When loading into a std::string or char*, accept a bytes object as-is (i.e.
    // without any encoding/decoding attempt). For other C++ char sizes this is a no-op.
    // The Python 2 branch of load() above, which supports promoting a str to unicode,
    // doesn't take this path.
    template <typename C = CharT>
    bool load_bytes(enable_if_t<std::is_same<C, char>::value, handle> src) {
        if (PYBIND11_BYTES_CHECK(src.ptr())) {
            // We were passed raw bytes; accept them into a std::string or char*
            // without any encoding attempt.
            const char *bytes = PYBIND11_BYTES_AS_STRING(src.ptr());
            if (bytes) {
                value = StringType(bytes, (size_t) PYBIND11_BYTES_SIZE(src.ptr()));
                return true;
            }
        }

        return false;
    }

    template <typename C = CharT>
    bool load_bytes(enable_if_t<!std::is_same<C, char>::value, handle>) { return false; }
};

template <typename CharT, class Traits, class Allocator>
struct type_caster<std::basic_string<CharT, Traits, Allocator>, enable_if_t<is_std_char_type<CharT>::value>>
    : string_caster<std::basic_string<CharT, Traits, Allocator>> {};

#ifdef PYBIND11_HAS_STRING_VIEW
template <typename CharT, class Traits>
struct type_caster<std::basic_string_view<CharT, Traits>, enable_if_t<is_std_char_type<CharT>::value>>
    : string_caster<std::basic_string_view<CharT, Traits>, true> {};
#endif

// Type caster for C-style strings. We basically use a std::string type caster, but also add the
// ability to use None as a nullptr char* (which the string caster doesn't allow).
template <typename CharT> struct type_caster<CharT, enable_if_t<is_std_char_type<CharT>::value>> {
    using StringType = std::basic_string<CharT>;
    using StringCaster = type_caster<StringType>;
    StringCaster str_caster;
    bool none = false;
    CharT one_char = 0;
public:
    bool load(handle src, bool convert) {
        if (!src) return false;
        if (src.is_none()) {
            // Defer accepting None to other overloads (if we aren't in convert mode):
            if (!convert) return false;
            none = true;
            return true;
        }
        return str_caster.load(src, convert);
    }

    static handle cast(const CharT *src, return_value_policy policy, handle parent) {
        if (src == nullptr) return pybind11::none().inc_ref();
        return StringCaster::cast(StringType(src), policy, parent);
    }

    static handle cast(CharT src, return_value_policy policy, handle parent) {
        if (std::is_same<char, CharT>::value) {
            handle s = PyUnicode_DecodeLatin1((const char *) &src, 1, nullptr);
            if (!s) throw error_already_set();
            return s;
        }
        return StringCaster::cast(StringType(1, src), policy, parent);
    }

    operator CharT*() { return none ? nullptr : const_cast<CharT *>(static_cast<StringType &>(str_caster).c_str()); }
    operator CharT&() {
        if (none)
            throw value_error("Cannot convert None to a character");

        auto &value = static_cast<StringType &>(str_caster);
        size_t str_len = value.size();
        if (str_len == 0)
            throw value_error("Cannot convert empty string to a character");

        // If we're in UTF-8 mode, we have two possible failures: one for a unicode character that
        // is too high, and one for multiple unicode characters (caught later), so we need to figure
        // out how long the first encoded character is in bytes to distinguish between these two
        // errors. We also want to allow unicode characters U+0080 through U+00FF, as those
        // can fit into a single char value.
        if (StringCaster::UTF_N == 8 && str_len > 1 && str_len <= 4) {
            unsigned char v0 = static_cast<unsigned char>(value[0]);
            size_t char0_bytes = !(v0 & 0x80)        ? 1 : // low bits only: 0-127
                                 (v0 & 0xE0) == 0xC0 ? 2 : // 0b110xxxxx - start of 2-byte sequence
                                 (v0 & 0xF0) == 0xE0 ? 3 : // 0b1110xxxx - start of 3-byte sequence
                                                       4;  // 0b11110xxx - start of 4-byte sequence

            if (char0_bytes == str_len) {
                // If we have a 128-255 value, we can decode it into a single char:
                if (char0_bytes == 2 && (v0 & 0xFC) == 0xC0) { // 0b110000xx 0b10xxxxxx
                    one_char = static_cast<CharT>(((v0 & 3) << 6) + (static_cast<unsigned char>(value[1]) & 0x3F));
                    return one_char;
                }
                // Otherwise we have a single character, but it's > U+00FF
                throw value_error("Character code point not in range(0x100)");
            }
        }

        // UTF-16 is much easier: we can only have a surrogate pair for values above U+FFFF, thus a
        // surrogate pair with total length 2 instantly indicates a range error (but not a "your
        // string was too long" error).
        else if (StringCaster::UTF_N == 16 && str_len == 2) {
            one_char = static_cast<CharT>(value[0]);
            if (one_char >= 0xD800 && one_char < 0xE000)
                throw value_error("Character code point not in range(0x10000)");
        }

        if (str_len != 1)
            throw value_error("Expected a character, but multi-character string found");

        one_char = value[0];
        return one_char;
    }

    static constexpr auto name = _(PYBIND11_STRING_NAME);
    template <typename _T> using cast_op_type = pybind11::detail::cast_op_type<_T>;
};
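The UTF-8 branch above classifies the lead byte to learn how many bytes encode the first character, then hand-decodes the two-byte case (code points U+0080 through U+00FF, which still fit in a `char`). A standalone sketch of that bit logic (not pybind11 itself; the helper names are illustrative):

```cpp
#include <cstddef>

// Length of a UTF-8 sequence, determined from its lead byte, mirroring
// the char0_bytes computation above.
std::size_t utf8_char_bytes(unsigned char v0) {
    return !(v0 & 0x80)        ? 1   // 0xxxxxxx: ASCII
         : (v0 & 0xE0) == 0xC0 ? 2   // 110xxxxx: start of 2-byte sequence
         : (v0 & 0xF0) == 0xE0 ? 3   // 1110xxxx: start of 3-byte sequence
         :                       4;  // 11110xxx: start of 4-byte sequence
}

// Decode a 2-byte sequence whose code point fits in one char
// (lead byte 0b110000xx, i.e. U+0080..U+00FF).
unsigned decode_2byte(unsigned char v0, unsigned char v1) {
    return ((v0 & 3u) << 6) + (v1 & 0x3Fu);
}
```

For example the UTF-8 bytes `0xC3 0xA9` decode to `0xE9` (U+00E9, 'é'), which is exactly the value stored into `one_char` above; a lead byte of `0xE2` or `0xF0` instead signals a code point above U+00FF and triggers the range error.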

// Base implementation for std::tuple and std::pair
template <template<typename...> class Tuple, typename... Ts> class tuple_caster {
    using type = Tuple<Ts...>;
    static constexpr auto size = sizeof...(Ts);
    using indices = make_index_sequence<size>;
public:

    bool load(handle src, bool convert) {
        if (!isinstance<sequence>(src))
            return false;
        const auto seq = reinterpret_borrow<sequence>(src);
        if (seq.size() != size)
            return false;
        return load_impl(seq, convert, indices{});
    }

    template <typename T>
    static handle cast(T &&src, return_value_policy policy, handle parent) {
        return cast_impl(std::forward<T>(src), policy, parent, indices{});
    }

    // copied from the PYBIND11_TYPE_CASTER macro
    template <typename T>
    static handle cast(T *src, return_value_policy policy, handle parent) {
        if (!src) return none().release();
        if (policy == return_value_policy::take_ownership) {
            auto h = cast(std::move(*src), policy, parent); delete src; return h;
        } else {
            return cast(*src, policy, parent);
        }
    }

    static constexpr auto name = _("Tuple[") + concat(make_caster<Ts>::name...) + _("]");

    template <typename T> using cast_op_type = type;

    operator type() & { return implicit_cast(indices{}); }
    operator type() && { return std::move(*this).implicit_cast(indices{}); }

protected:
    template <size_t... Is>
    type implicit_cast(index_sequence<Is...>) & { return type(cast_op<Ts>(std::get<Is>(subcasters))...); }
    template <size_t... Is>
    type implicit_cast(index_sequence<Is...>) && { return type(cast_op<Ts>(std::move(std::get<Is>(subcasters)))...); }

    static constexpr bool load_impl(const sequence &, bool, index_sequence<>) { return true; }

    template <size_t... Is>
    bool load_impl(const sequence &seq, bool convert, index_sequence<Is...>) {
#ifdef __cpp_fold_expressions
        if ((... || !std::get<Is>(subcasters).load(seq[Is], convert)))
            return false;
#else
        for (bool r : {std::get<Is>(subcasters).load(seq[Is], convert)...})
            if (!r)
                return false;
#endif
        return true;
    }

    /* Implementation: Convert a C++ tuple into a Python tuple */
    template <typename T, size_t... Is>
    static handle cast_impl(T &&src, return_value_policy policy, handle parent, index_sequence<Is...>) {
        std::array<object, size> entries{{
            reinterpret_steal<object>(make_caster<Ts>::cast(std::get<Is>(std::forward<T>(src)), policy, parent))...
        }};
        for (const auto &entry: entries)
            if (!entry)
                return handle();
        tuple result(size);
        int counter = 0;
        for (auto &entry: entries)
            PyTuple_SET_ITEM(result.ptr(), counter++, entry.release().ptr());
        return result.release();
    }

    Tuple<make_caster<Ts>...> subcasters;
};

template <typename T1, typename T2> class type_caster<std::pair<T1, T2>>
    : public tuple_caster<std::pair, T1, T2> {};

template <typename... Ts> class type_caster<std::tuple<Ts...>>
    : public tuple_caster<std::tuple, Ts...> {};
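`load_impl` above short-circuits across the per-element casters: with C++17 it uses a fold expression, and before that it expands the pack into a braced initializer list, which guarantees left-to-right evaluation. A standalone sketch of the pre-C++17 idiom applied to a toy predicate (not pybind11 itself; `all_positive` is illustrative only):

```cpp
#include <tuple>
#include <utility>

// Pre-C++17 idiom from load_impl above: expand a pack into a braced
// initializer list (evaluated left to right) and scan it for a failure.
template <typename... Ts, std::size_t... Is>
bool all_positive_impl(const std::tuple<Ts...> &t, std::index_sequence<Is...>) {
    for (bool ok : {(std::get<Is>(t) > 0)...})
        if (!ok)
            return false;
    return true;
}

template <typename... Ts>
bool all_positive(const std::tuple<Ts...> &t) {
    return all_positive_impl(t, std::index_sequence_for<Ts...>{});
}
```

Unlike the fold expression, the initializer-list form always evaluates every element; pybind11 keeps both paths so older compilers still get well-defined ordering, and handles the empty pack via the separate `index_sequence<>` overload.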

/// Helper class which abstracts away certain actions. Users can provide specializations for
/// custom holders, but it's only necessary if the type has a non-standard interface.
template <typename T>
struct holder_helper {
    static auto get(const T &p) -> decltype(p.get()) { return p.get(); }
};

/// Type caster for holder types like std::shared_ptr, etc.
template <typename type, typename holder_type>
struct copyable_holder_caster : public type_caster_base<type> {
public:
    using base = type_caster_base<type>;
    static_assert(std::is_base_of<base, type_caster<type>>::value,
                  "Holder classes are only supported for custom types");
    using base::base;
    using base::cast;
    using base::typeinfo;
    using base::value;

    bool load(handle src, bool convert) {
        return base::template load_impl<copyable_holder_caster<type, holder_type>>(src, convert);
    }

    explicit operator type*() { return this->value; }
    // static_cast works around compiler error with MSVC 17 and CUDA 10.2
    // see issue #2180
    explicit operator type&() { return *(static_cast<type *>(this->value)); }
    explicit operator holder_type*() { return std::addressof(holder); }

    // Workaround for Intel compiler bug
    // see pybind11 issue 94
#if defined(__ICC) || defined(__INTEL_COMPILER)
    operator holder_type&() { return holder; }
#else
    explicit operator holder_type&() { return holder; }
#endif

    static handle cast(const holder_type &src, return_value_policy, handle) {
        const auto *ptr = holder_helper<holder_type>::get(src);
        return type_caster_base<type>::cast_holder(ptr, &src);
    }

protected:
    friend class type_caster_generic;
    void check_holder_compat() {
        if (typeinfo->default_holder)
            throw cast_error("Unable to load a custom holder type from a default-holder instance");
    }

    bool load_value(value_and_holder &&v_h) {
        if (v_h.holder_constructed()) {
            value = v_h.value_ptr();
            holder = v_h.template holder<holder_type>();
            return true;
        } else {
            throw cast_error("Unable to cast from non-held to held instance (T& to Holder<T>) "
#if defined(NDEBUG)
                             "(compile in debug mode for type information)");
#else
                             "of type '" + type_id<holder_type>() + "'");
#endif
        }
    }

    template <typename T = holder_type, detail::enable_if_t<!std::is_constructible<T, const T &, type*>::value, int> = 0>
    bool try_implicit_casts(handle, bool) { return false; }

    template <typename T = holder_type, detail::enable_if_t<std::is_constructible<T, const T &, type*>::value, int> = 0>
    bool try_implicit_casts(handle src, bool convert) {
        for (auto &cast : typeinfo->implicit_casts) {
            copyable_holder_caster sub_caster(*cast.first);
            if (sub_caster.load(src, convert)) {
                value = cast.second(sub_caster.value);
                holder = holder_type(sub_caster.holder, (type *) value);
                return true;
            }
        }
        return false;
    }

    static bool try_direct_conversions(handle) { return false; }

    holder_type holder;
};

/// Specialize for the common std::shared_ptr, so users don't need to
template <typename T>
class type_caster<std::shared_ptr<T>> : public copyable_holder_caster<T, std::shared_ptr<T>> { };

template <typename type, typename holder_type>
struct move_only_holder_caster {
    static_assert(std::is_base_of<type_caster_base<type>, type_caster<type>>::value,
                  "Holder classes are only supported for custom types");

    static handle cast(holder_type &&src, return_value_policy, handle) {
        auto *ptr = holder_helper<holder_type>::get(src);
        return type_caster_base<type>::cast_holder(ptr, std::addressof(src));
    }
    static constexpr auto name = type_caster_base<type>::name;
};

template <typename type, typename deleter>
class type_caster<std::unique_ptr<type, deleter>>
    : public move_only_holder_caster<type, std::unique_ptr<type, deleter>> { };

template <typename type, typename holder_type>
using type_caster_holder = conditional_t<is_copy_constructible<holder_type>::value,
                                         copyable_holder_caster<type, holder_type>,
                                         move_only_holder_caster<type, holder_type>>;

template <typename T, bool Value = false> struct always_construct_holder { static constexpr bool value = Value; };

/// Create a specialization for custom holder types (silently ignores std::shared_ptr)
#define PYBIND11_DECLARE_HOLDER_TYPE(type, holder_type, ...) \
    namespace pybind11 { namespace detail { \
    template <typename type> \
    struct always_construct_holder<holder_type> : always_construct_holder<void, ##__VA_ARGS__> { }; \
    template <typename type> \
    class type_caster<holder_type, enable_if_t<!is_shared_ptr<holder_type>::value>> \
        : public type_caster_holder<type, holder_type> { }; \
    }}

// PYBIND11_DECLARE_HOLDER_TYPE holder types:
template <typename base, typename holder> struct is_holder_type :
    std::is_base_of<detail::type_caster_holder<base, holder>, detail::type_caster<holder>> {};
// Specialization for always-supported unique_ptr holders:
template <typename base, typename deleter> struct is_holder_type<base, std::unique_ptr<base, deleter>> :
    std::true_type {};

template <typename T> struct handle_type_name { static constexpr auto name = _<T>(); };
template <> struct handle_type_name<bytes> { static constexpr auto name = _(PYBIND11_BYTES_NAME); };
template <> struct handle_type_name<int_> { static constexpr auto name = _("int"); };
template <> struct handle_type_name<iterable> { static constexpr auto name = _("Iterable"); };
template <> struct handle_type_name<iterator> { static constexpr auto name = _("Iterator"); };
template <> struct handle_type_name<none> { static constexpr auto name = _("None"); };
template <> struct handle_type_name<args> { static constexpr auto name = _("*args"); };
template <> struct handle_type_name<kwargs> { static constexpr auto name = _("**kwargs"); };

template <typename type>
struct pyobject_caster {
    template <typename T = type, enable_if_t<std::is_same<T, handle>::value, int> = 0>
    bool load(handle src, bool /* convert */) { value = src; return static_cast<bool>(value); }

    template <typename T = type, enable_if_t<std::is_base_of<object, T>::value, int> = 0>
1633
- bool load(handle src, bool /* convert */) {
1634
- if (!isinstance<type>(src))
1635
- return false;
1636
- value = reinterpret_borrow<type>(src);
1637
- return true;
1638
- }
1639
-
1640
- static handle cast(const handle &src, return_value_policy /* policy */, handle /* parent */) {
1641
- return src.inc_ref();
1642
- }
1643
- PYBIND11_TYPE_CASTER(type, handle_type_name<type>::name);
1644
- };
1645
-
1646
- template <typename T>
1647
- class type_caster<T, enable_if_t<is_pyobject<T>::value>> : public pyobject_caster<T> { };
1648
-
1649
- // Our conditions for enabling moving are quite restrictive:
1650
- // At compile time:
1651
- // - T needs to be a non-const, non-pointer, non-reference type
1652
- // - type_caster<T>::operator T&() must exist
1653
- // - the type must be move constructible (obviously)
1654
- // At run-time:
1655
- // - if the type is non-copy-constructible, the object must be the sole owner of the type (i.e. it
1656
- // must have ref_count() == 1)h
1657
- // If any of the above are not satisfied, we fall back to copying.
1658
- template <typename T> using move_is_plain_type = satisfies_none_of<T,
1659
- std::is_void, std::is_pointer, std::is_reference, std::is_const
1660
- >;
1661
- template <typename T, typename SFINAE = void> struct move_always : std::false_type {};
1662
- template <typename T> struct move_always<T, enable_if_t<all_of<
1663
- move_is_plain_type<T>,
1664
- negation<is_copy_constructible<T>>,
1665
- std::is_move_constructible<T>,
1666
- std::is_same<decltype(std::declval<make_caster<T>>().operator T&()), T&>
1667
- >::value>> : std::true_type {};
1668
- template <typename T, typename SFINAE = void> struct move_if_unreferenced : std::false_type {};
1669
- template <typename T> struct move_if_unreferenced<T, enable_if_t<all_of<
1670
- move_is_plain_type<T>,
1671
- negation<move_always<T>>,
1672
- std::is_move_constructible<T>,
1673
- std::is_same<decltype(std::declval<make_caster<T>>().operator T&()), T&>
1674
- >::value>> : std::true_type {};
1675
- template <typename T> using move_never = none_of<move_always<T>, move_if_unreferenced<T>>;
1676
-
- // Detect whether returning a `type` from a cast on type's type_caster is going to result in a
- // reference or pointer to a local variable of the type_caster. Basically, only
- // non-reference/pointer `type`s and reference/pointers from a type_caster_generic are safe;
- // everything else returns a reference/pointer to a local variable.
- template <typename type> using cast_is_temporary_value_reference = bool_constant<
- (std::is_reference<type>::value || std::is_pointer<type>::value) &&
- !std::is_base_of<type_caster_generic, make_caster<type>>::value &&
- !std::is_same<intrinsic_t<type>, void>::value
- >;
-
- // When a value returned from a C++ function is being cast back to Python, we almost always want to
- // force `policy = move`, regardless of the return value policy the function/method was declared
- // with.
- template <typename Return, typename SFINAE = void> struct return_value_policy_override {
- static return_value_policy policy(return_value_policy p) { return p; }
- };
-
- template <typename Return> struct return_value_policy_override<Return,
- detail::enable_if_t<std::is_base_of<type_caster_generic, make_caster<Return>>::value, void>> {
- static return_value_policy policy(return_value_policy p) {
- return !std::is_lvalue_reference<Return>::value &&
- !std::is_pointer<Return>::value
- ? return_value_policy::move : p;
- }
- };
-
- // Basic python -> C++ casting; throws if casting fails
- template <typename T, typename SFINAE> type_caster<T, SFINAE> &load_type(type_caster<T, SFINAE> &conv, const handle &handle) {
- if (!conv.load(handle, true)) {
- #if defined(NDEBUG)
- throw cast_error("Unable to cast Python instance to C++ type (compile in debug mode for details)");
- #else
- throw cast_error("Unable to cast Python instance of type " +
- (std::string) str(handle.get_type()) + " to C++ type '" + type_id<T>() + "'");
- #endif
- }
- return conv;
- }
- // Wrapper around the above that also constructs and returns a type_caster
- template <typename T> make_caster<T> load_type(const handle &handle) {
- make_caster<T> conv;
- load_type(conv, handle);
- return conv;
- }
-
- PYBIND11_NAMESPACE_END(detail)
-
- // pytype -> C++ type
- template <typename T, detail::enable_if_t<!detail::is_pyobject<T>::value, int> = 0>
- T cast(const handle &handle) {
- using namespace detail;
- static_assert(!cast_is_temporary_value_reference<T>::value,
- "Unable to cast type to reference: value is local to type caster");
- return cast_op<T>(load_type<T>(handle));
- }
-
- // pytype -> pytype (calls converting constructor)
- template <typename T, detail::enable_if_t<detail::is_pyobject<T>::value, int> = 0>
- T cast(const handle &handle) { return T(reinterpret_borrow<object>(handle)); }
-
- // C++ type -> py::object
- template <typename T, detail::enable_if_t<!detail::is_pyobject<T>::value, int> = 0>
- object cast(T &&value, return_value_policy policy = return_value_policy::automatic_reference,
- handle parent = handle()) {
- using no_ref_T = typename std::remove_reference<T>::type;
- if (policy == return_value_policy::automatic)
- policy = std::is_pointer<no_ref_T>::value ? return_value_policy::take_ownership :
- std::is_lvalue_reference<T>::value ? return_value_policy::copy : return_value_policy::move;
- else if (policy == return_value_policy::automatic_reference)
- policy = std::is_pointer<no_ref_T>::value ? return_value_policy::reference :
- std::is_lvalue_reference<T>::value ? return_value_policy::copy : return_value_policy::move;
- return reinterpret_steal<object>(detail::make_caster<T>::cast(std::forward<T>(value), policy, parent));
- }
-
- template <typename T> T handle::cast() const { return pybind11::cast<T>(*this); }
- template <> inline void handle::cast() const { return; }
-
- template <typename T>
- detail::enable_if_t<!detail::move_never<T>::value, T> move(object &&obj) {
- if (obj.ref_count() > 1)
- #if defined(NDEBUG)
- throw cast_error("Unable to cast Python instance to C++ rvalue: instance has multiple references"
- " (compile in debug mode for details)");
- #else
- throw cast_error("Unable to move from Python " + (std::string) str(obj.get_type()) +
- " instance to C++ " + type_id<T>() + " instance: instance has multiple references");
- #endif
-
- // Move into a temporary and return that, because the reference may be a local value of `conv`
- T ret = std::move(detail::load_type<T>(obj).operator T&());
- return ret;
- }
-
- // Calling cast() on an rvalue calls pybind11::cast with the object rvalue, which does:
- // - If we have to move (because T has no copy constructor), do it. This will fail if the moved
- // object has multiple references, but trying to copy will fail to compile.
- // - If both movable and copyable, check ref count: if 1, move; otherwise copy
- // - Otherwise (not movable), copy.
- template <typename T> detail::enable_if_t<detail::move_always<T>::value, T> cast(object &&object) {
- return move<T>(std::move(object));
- }
- template <typename T> detail::enable_if_t<detail::move_if_unreferenced<T>::value, T> cast(object &&object) {
- if (object.ref_count() > 1)
- return cast<T>(object);
- else
- return move<T>(std::move(object));
- }
- template <typename T> detail::enable_if_t<detail::move_never<T>::value, T> cast(object &&object) {
- return cast<T>(object);
- }
-
- template <typename T> T object::cast() const & { return pybind11::cast<T>(*this); }
- template <typename T> T object::cast() && { return pybind11::cast<T>(std::move(*this)); }
- template <> inline void object::cast() const & { return; }
- template <> inline void object::cast() && { return; }
-
- PYBIND11_NAMESPACE_BEGIN(detail)
-
- // Declared in pytypes.h:
- template <typename T, enable_if_t<!is_pyobject<T>::value, int>>
- object object_or_cast(T &&o) { return pybind11::cast(std::forward<T>(o)); }
-
- struct overload_unused {}; // Placeholder type for the unneeded (and dead code) static variable in the OVERLOAD_INT macro
- template <typename ret_type> using overload_caster_t = conditional_t<
- cast_is_temporary_value_reference<ret_type>::value, make_caster<ret_type>, overload_unused>;
-
- // Trampoline use: for reference/pointer types to value-converted values, we do a value cast, then
- // store the result in the given variable. For other types, this is a no-op.
- template <typename T> enable_if_t<cast_is_temporary_value_reference<T>::value, T> cast_ref(object &&o, make_caster<T> &caster) {
- return cast_op<T>(load_type(caster, o));
- }
- template <typename T> enable_if_t<!cast_is_temporary_value_reference<T>::value, T> cast_ref(object &&, overload_unused &) {
- pybind11_fail("Internal error: cast_ref fallback invoked"); }
-
- // Trampoline use: Having a pybind11::cast with an invalid reference type is going to static_assert, even
- // if it's in dead code, so we provide a "trampoline" to pybind11::cast that only does anything in
- // cases where pybind11::cast is valid.
- template <typename T> enable_if_t<!cast_is_temporary_value_reference<T>::value, T> cast_safe(object &&o) {
- return pybind11::cast<T>(std::move(o)); }
- template <typename T> enable_if_t<cast_is_temporary_value_reference<T>::value, T> cast_safe(object &&) {
- pybind11_fail("Internal error: cast_safe fallback invoked"); }
- template <> inline void cast_safe<void>(object &&) {}
-
- PYBIND11_NAMESPACE_END(detail)
-
- template <return_value_policy policy = return_value_policy::automatic_reference>
- tuple make_tuple() { return tuple(0); }
-
- template <return_value_policy policy = return_value_policy::automatic_reference,
- typename... Args> tuple make_tuple(Args&&... args_) {
- constexpr size_t size = sizeof...(Args);
- std::array<object, size> args {
- { reinterpret_steal<object>(detail::make_caster<Args>::cast(
- std::forward<Args>(args_), policy, nullptr))... }
- };
- for (size_t i = 0; i < args.size(); i++) {
- if (!args[i]) {
- #if defined(NDEBUG)
- throw cast_error("make_tuple(): unable to convert arguments to Python object (compile in debug mode for details)");
- #else
- std::array<std::string, size> argtypes { {type_id<Args>()...} };
- throw cast_error("make_tuple(): unable to convert argument of type '" +
- argtypes[i] + "' to Python object");
- #endif
- }
- }
- tuple result(size);
- int counter = 0;
- for (auto &arg_value : args)
- PyTuple_SET_ITEM(result.ptr(), counter++, arg_value.release().ptr());
- return result;
- }
-
- /// \ingroup annotations
- /// Annotation for arguments
- struct arg {
- /// Constructs an argument with the name of the argument; if null or omitted, this is a positional argument.
- constexpr explicit arg(const char *name = nullptr) : name(name), flag_noconvert(false), flag_none(true) { }
- /// Assign a value to this argument
- template <typename T> arg_v operator=(T &&value) const;
- /// Indicate that the type should not be converted in the type caster
- arg &noconvert(bool flag = true) { flag_noconvert = flag; return *this; }
- /// Indicates that the argument should/shouldn't allow None (e.g. for nullable pointer args)
- arg &none(bool flag = true) { flag_none = flag; return *this; }
-
- const char *name; ///< If non-null, this is a named kwargs argument
- bool flag_noconvert : 1; ///< If set, do not allow conversion (requires a supporting type caster!)
- bool flag_none : 1; ///< If set (the default), allow None to be passed to this argument
- };
-
- /// \ingroup annotations
- /// Annotation for arguments with values
- struct arg_v : arg {
- private:
- template <typename T>
- arg_v(arg &&base, T &&x, const char *descr = nullptr)
- : arg(base),
- value(reinterpret_steal<object>(
- detail::make_caster<T>::cast(x, return_value_policy::automatic, {})
- )),
- descr(descr)
- #if !defined(NDEBUG)
- , type(type_id<T>())
- #endif
- { }
-
- public:
- /// Direct construction with name, default, and description
- template <typename T>
- arg_v(const char *name, T &&x, const char *descr = nullptr)
- : arg_v(arg(name), std::forward<T>(x), descr) { }
-
- /// Called internally when invoking `py::arg("a") = value`
- template <typename T>
- arg_v(const arg &base, T &&x, const char *descr = nullptr)
- : arg_v(arg(base), std::forward<T>(x), descr) { }
-
- /// Same as `arg::noconvert()`, but returns *this as arg_v&, not arg&
- arg_v &noconvert(bool flag = true) { arg::noconvert(flag); return *this; }
-
- /// Same as `arg::none()`, but returns *this as arg_v&, not arg&
- arg_v &none(bool flag = true) { arg::none(flag); return *this; }
-
- /// The default value
- object value;
- /// The (optional) description of the default value
- const char *descr;
- #if !defined(NDEBUG)
- /// The C++ type name of the default value (only available when compiled in debug mode)
- std::string type;
- #endif
- };
-
- /// \ingroup annotations
- /// Annotation indicating that all following arguments are keyword-only; this is the equivalent of an
- /// unnamed '*' argument (in Python 3)
- struct kwonly {};
-
- template <typename T>
- arg_v arg::operator=(T &&value) const { return {std::move(*this), std::forward<T>(value)}; }
-
- /// Alias for backward compatibility -- to be removed in version 2.0
- template <typename /*unused*/> using arg_t = arg_v;
-
- inline namespace literals {
- /** \rst
- String literal version of `arg`
- \endrst */
- constexpr arg operator"" _a(const char *name, size_t) { return arg(name); }
- }
-
- PYBIND11_NAMESPACE_BEGIN(detail)
-
- // forward declaration (definition in attr.h)
- struct function_record;
-
- /// Internal data associated with a single function call
- struct function_call {
- function_call(const function_record &f, handle p); // Implementation in attr.h
-
- /// The function data:
- const function_record &func;
-
- /// Arguments passed to the function:
- std::vector<handle> args;
-
- /// The `convert` value the arguments should be loaded with
- std::vector<bool> args_convert;
-
- /// Extra references for the optional `py::args` and/or `py::kwargs` arguments (which, if
- /// present, are also in `args` but without a reference).
- object args_ref, kwargs_ref;
-
- /// The parent, if any
- handle parent;
-
- /// If this is a call to an initializer, this argument contains `self`
- handle init_self;
- };
-
-
- /// Helper class which loads arguments for C++ functions called from Python
- template <typename... Args>
- class argument_loader {
- using indices = make_index_sequence<sizeof...(Args)>;
-
- template <typename Arg> using argument_is_args = std::is_same<intrinsic_t<Arg>, args>;
- template <typename Arg> using argument_is_kwargs = std::is_same<intrinsic_t<Arg>, kwargs>;
- // Get args/kwargs argument positions relative to the end of the argument list:
- static constexpr auto args_pos = constexpr_first<argument_is_args, Args...>() - (int) sizeof...(Args),
- kwargs_pos = constexpr_first<argument_is_kwargs, Args...>() - (int) sizeof...(Args);
-
- static constexpr bool args_kwargs_are_last = kwargs_pos >= -1 && args_pos >= kwargs_pos - 1;
-
- static_assert(args_kwargs_are_last, "py::args/py::kwargs are only permitted as the last argument(s) of a function");
-
- public:
- static constexpr bool has_kwargs = kwargs_pos < 0;
- static constexpr bool has_args = args_pos < 0;
-
- static constexpr auto arg_names = concat(type_descr(make_caster<Args>::name)...);
-
- bool load_args(function_call &call) {
- return load_impl_sequence(call, indices{});
- }
-
- template <typename Return, typename Guard, typename Func>
- enable_if_t<!std::is_void<Return>::value, Return> call(Func &&f) && {
- return std::move(*this).template call_impl<Return>(std::forward<Func>(f), indices{}, Guard{});
- }
-
- template <typename Return, typename Guard, typename Func>
- enable_if_t<std::is_void<Return>::value, void_type> call(Func &&f) && {
- std::move(*this).template call_impl<Return>(std::forward<Func>(f), indices{}, Guard{});
- return void_type();
- }
-
- private:
-
- static bool load_impl_sequence(function_call &, index_sequence<>) { return true; }
-
- template <size_t... Is>
- bool load_impl_sequence(function_call &call, index_sequence<Is...>) {
- #ifdef __cpp_fold_expressions
- if ((... || !std::get<Is>(argcasters).load(call.args[Is], call.args_convert[Is])))
- return false;
- #else
- for (bool r : {std::get<Is>(argcasters).load(call.args[Is], call.args_convert[Is])...})
- if (!r)
- return false;
- #endif
- return true;
- }
-
- template <typename Return, typename Func, size_t... Is, typename Guard>
- Return call_impl(Func &&f, index_sequence<Is...>, Guard &&) && {
- return std::forward<Func>(f)(cast_op<Args>(std::move(std::get<Is>(argcasters)))...);
- }
-
- std::tuple<make_caster<Args>...> argcasters;
- };
-
- /// Helper class which collects only positional arguments for a Python function call.
- /// A fancier version below can collect any argument, but this one is optimal for simple calls.
- template <return_value_policy policy>
- class simple_collector {
- public:
- template <typename... Ts>
- explicit simple_collector(Ts &&...values)
- : m_args(pybind11::make_tuple<policy>(std::forward<Ts>(values)...)) { }
-
- const tuple &args() const & { return m_args; }
- dict kwargs() const { return {}; }
-
- tuple args() && { return std::move(m_args); }
-
- /// Call a Python function and pass the collected arguments
- object call(PyObject *ptr) const {
- PyObject *result = PyObject_CallObject(ptr, m_args.ptr());
- if (!result)
- throw error_already_set();
- return reinterpret_steal<object>(result);
- }
-
- private:
- tuple m_args;
- };
-
- /// Helper class which collects positional, keyword, * and ** arguments for a Python function call
- template <return_value_policy policy>
- class unpacking_collector {
- public:
- template <typename... Ts>
- explicit unpacking_collector(Ts &&...values) {
- // Tuples aren't (easily) resizable so a list is needed for collection,
- // but the actual function call strictly requires a tuple.
- auto args_list = list();
- int _[] = { 0, (process(args_list, std::forward<Ts>(values)), 0)... };
- ignore_unused(_);
-
- m_args = std::move(args_list);
- }
-
- const tuple &args() const & { return m_args; }
- const dict &kwargs() const & { return m_kwargs; }
-
- tuple args() && { return std::move(m_args); }
- dict kwargs() && { return std::move(m_kwargs); }
-
- /// Call a Python function and pass the collected arguments
- object call(PyObject *ptr) const {
- PyObject *result = PyObject_Call(ptr, m_args.ptr(), m_kwargs.ptr());
- if (!result)
- throw error_already_set();
- return reinterpret_steal<object>(result);
- }
-
- private:
- template <typename T>
- void process(list &args_list, T &&x) {
- auto o = reinterpret_steal<object>(detail::make_caster<T>::cast(std::forward<T>(x), policy, {}));
- if (!o) {
- #if defined(NDEBUG)
- argument_cast_error();
- #else
- argument_cast_error(std::to_string(args_list.size()), type_id<T>());
- #endif
- }
- args_list.append(o);
- }
-
- void process(list &args_list, detail::args_proxy ap) {
- for (const auto &a : ap)
- args_list.append(a);
- }
-
- void process(list &/*args_list*/, arg_v a) {
- if (!a.name)
- #if defined(NDEBUG)
- nameless_argument_error();
- #else
- nameless_argument_error(a.type);
- #endif
-
- if (m_kwargs.contains(a.name)) {
- #if defined(NDEBUG)
- multiple_values_error();
- #else
- multiple_values_error(a.name);
- #endif
- }
- if (!a.value) {
- #if defined(NDEBUG)
- argument_cast_error();
- #else
- argument_cast_error(a.name, a.type);
- #endif
- }
- m_kwargs[a.name] = a.value;
- }
-
- void process(list &/*args_list*/, detail::kwargs_proxy kp) {
- if (!kp)
- return;
- for (const auto &k : reinterpret_borrow<dict>(kp)) {
- if (m_kwargs.contains(k.first)) {
- #if defined(NDEBUG)
- multiple_values_error();
- #else
- multiple_values_error(str(k.first));
- #endif
- }
- m_kwargs[k.first] = k.second;
- }
- }
-
- [[noreturn]] static void nameless_argument_error() {
- throw type_error("Got kwargs without a name; only named arguments "
- "may be passed via py::arg() to a python function call. "
- "(compile in debug mode for details)");
- }
- [[noreturn]] static void nameless_argument_error(std::string type) {
- throw type_error("Got kwargs without a name of type '" + type + "'; only named "
- "arguments may be passed via py::arg() to a python function call. ");
- }
- [[noreturn]] static void multiple_values_error() {
- throw type_error("Got multiple values for keyword argument "
- "(compile in debug mode for details)");
- }
-
- [[noreturn]] static void multiple_values_error(std::string name) {
- throw type_error("Got multiple values for keyword argument '" + name + "'");
- }
-
- [[noreturn]] static void argument_cast_error() {
- throw cast_error("Unable to convert call argument to Python object "
- "(compile in debug mode for details)");
- }
-
- [[noreturn]] static void argument_cast_error(std::string name, std::string type) {
- throw cast_error("Unable to convert call argument '" + name
- + "' of type '" + type + "' to Python object");
- }
-
- private:
- tuple m_args;
- dict m_kwargs;
- };
-
- /// Collect only positional arguments for a Python function call
- template <return_value_policy policy, typename... Args,
- typename = enable_if_t<all_of<is_positional<Args>...>::value>>
- simple_collector<policy> collect_arguments(Args &&...args) {
- return simple_collector<policy>(std::forward<Args>(args)...);
- }
-
- /// Collect all arguments, including keywords and unpacking (only instantiated when needed)
- template <return_value_policy policy, typename... Args,
- typename = enable_if_t<!all_of<is_positional<Args>...>::value>>
- unpacking_collector<policy> collect_arguments(Args &&...args) {
- // Following argument order rules for generalized unpacking according to PEP 448
- static_assert(
- constexpr_last<is_positional, Args...>() < constexpr_first<is_keyword_or_ds, Args...>()
- && constexpr_last<is_s_unpacking, Args...>() < constexpr_first<is_ds_unpacking, Args...>(),
- "Invalid function call: positional args must precede keywords and ** unpacking; "
- "* unpacking must precede ** unpacking"
- );
- return unpacking_collector<policy>(std::forward<Args>(args)...);
- }
-
- template <typename Derived>
- template <return_value_policy policy, typename... Args>
- object object_api<Derived>::operator()(Args &&...args) const {
- return detail::collect_arguments<policy>(std::forward<Args>(args)...).call(derived().ptr());
- }
-
- template <typename Derived>
- template <return_value_policy policy, typename... Args>
- object object_api<Derived>::call(Args &&...args) const {
- return operator()<policy>(std::forward<Args>(args)...);
- }
-
- PYBIND11_NAMESPACE_END(detail)
-
- #define PYBIND11_MAKE_OPAQUE(...) \
- namespace pybind11 { namespace detail { \
- template<> class type_caster<__VA_ARGS__> : public type_caster_base<__VA_ARGS__> { }; \
- }}
-
- /// Lets you pass a type containing a `,` through a macro parameter without needing a separate
- /// typedef, e.g.: `PYBIND11_OVERLOAD(PYBIND11_TYPE(ReturnType<A, B>), PYBIND11_TYPE(Parent<C, D>), f, arg)`
- #define PYBIND11_TYPE(...) __VA_ARGS__
-
- PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE)
spaces/CVPR/LIVE/thrust/cmake/ThrustAddSubdir.cmake DELETED
@@ -1,6 +0,0 @@
- find_package(Thrust REQUIRED CONFIG
-   NO_DEFAULT_PATH # Only check the explicit path in HINTS:
-   HINTS "${CMAKE_CURRENT_LIST_DIR}/.."
-   COMPONENTS ${THRUST_REQUIRED_SYSTEMS}
-   OPTIONAL_COMPONENTS ${THRUST_OPTIONAL_SYSTEMS}
- )
 
spaces/CVPR/LIVE/thrust/thrust/system/cuda/vector.h DELETED
@@ -1,72 +0,0 @@
- /*
-  * Copyright 2008-2013 NVIDIA Corporation
-  *
-  * Licensed under the Apache License, Version 2.0 (the "License");
-  * you may not use this file except in ccudaliance with the License.
-  * You may obtain a copy of the License at
-  *
-  * http://www.apache.org/licenses/LICENSE-2.0
-  *
-  * Unless required by applicable law or agreed to in writing, software
-  * distributed under the License is distributed on an "AS IS" BASIS,
-  * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  * See the License for the specific language governing permissions and
-  * limitations under the License.
-  */
-
- /*! \file thrust/system/cuda/vector.h
-  * \brief A dynamically-sizable array of elements which reside in memory available to
-  *        Thrust's CUDA system.
-  */
-
- #pragma once
-
- #include <thrust/detail/config.h>
- #include <thrust/system/cuda/memory.h>
- #include <thrust/detail/vector_base.h>
- #include <vector>
-
- namespace thrust
- {
-
- // forward declaration of host_vector
- template<typename T, typename Allocator> class host_vector;
-
- namespace cuda_cub
- {
-
- /*! \p cuda_bulk::vector is a container that supports random access to elements,
-  * constant time removal of elements at the end, and linear time insertion
-  * and removal of elements at the beginning or in the middle. The number of
-  * elements in a \p cuda_bulk::vector may vary dynamically; memory management is
-  * automatic. The elements contained in a \p cuda_bulk::vector reside in memory
-  * available to the \p cuda_bulk system.
-  *
-  * \tparam T The element type of the \p cuda_bulk::vector.
-  * \tparam Allocator The allocator type of the \p cuda_bulk::vector. Defaults to \p cuda_bulk::allocator.
-  *
-  * \see http://www.sgi.com/tech/stl/Vector.html
-  * \see host_vector For the documentation of the complete interface which is
-  *      shared by \p cuda_bulk::vector
-  * \see device_vector
-  */
- template<typename T, typename Allocator = allocator<T> >
- using vector = thrust::detail::vector_base<T, Allocator>;
-
- } // end cuda_cub
-
- // alias system::cuda_bulk names at top-level
- namespace cuda
- {
-
- using thrust::cuda_cub::vector;
-
- } // end cuda_bulk
-
- namespace system {
- namespace cuda {
- using thrust::cuda_cub::vector;
- }
- }
-
- } // end thrust
 
spaces/CanIpleas/gpt2/app.py DELETED
@@ -1,3 +0,0 @@
- import gradio as gr
-
- gr.Interface.load("models/gpt2").launch()
 
spaces/ChallengeHub/Chinese-LangChain/tests/test_gradio_slient.py DELETED
@@ -1,19 +0,0 @@
- import time
-
- import gradio as gra
-
-
- def user_greeting(name):
-     time.sleep(10)
-     return "Hi! " + name + " Welcome to your first Gradio application!😎"
-
-
- # define gradio interface and other parameters
- app = gra.Interface(
-     fn=user_greeting,
-     inputs="text",
-     outputs="text",
- )
- app.launch(
-     server_name='0.0.0.0', server_port=8888, share=False, show_error=True, enable_queue=True
- )
 
spaces/Comet/txt2im-models/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: Txt2im Models
- emoji: 📉
- colorFrom: yellow
- colorTo: green
- sdk: gradio
- sdk_version: 3.0.19
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/Cpp4App/Cpp4App/CDM/run_batch.py DELETED
@@ -1,146 +0,0 @@
- import multiprocessing
- import glob
- import time
- import json
- from tqdm import tqdm
- from os.path import join as pjoin, exists
- import cv2
- import os
- import shutil
-
- from detect_merge.merge import reassign_ids
- import detect_compo.ip_region_proposal as ip
- from detect_merge.Element import Element
- import detect_compo.lib_ip.ip_preprocessing as pre
- import detect_classify.classification as clf
- import torch
- import numpy as np
- from torchvision import models
- from torch import nn
- import pandas as pd
- import csv
- import re
- import openai
- import random
- from PIL import Image
-
- def resize_height_by_longest_edge(img_path, resize_length=800):
-     org = cv2.imread(img_path)
-     height, width = org.shape[:2]
-     if height > width:
-         return resize_length
-     else:
-         return int(resize_length * (height / width))
-
-
- if __name__ == '__main__':
-
-     input_img_root = "./input_examples/"
-     output_root = "./result_classification"
-     segment_root = '../scrutinizing_alexa/txt'
-
-     if os.path.exists(output_root):
-         shutil.rmtree(output_root)
-     os.makedirs(output_root)
-
-     image_list = os.listdir(input_img_root)
-
-     input_imgs = [input_img_root + image_name for image_name in image_list]
-
-     key_params = {'min-grad': 4, 'ffl-block': 5, 'min-ele-area': 50, 'merge-contained-ele': True,
-                   'max-word-inline-gap': 10, 'max-line-ingraph-gap': 4, 'remove-top-bar': False}
-
-     is_ip = True
-     is_clf = False
-     is_ocr = True
-     is_merge = True
-     is_classification = True
-
-     # Load deep learning models in advance
-     compo_classifier = None
-     if is_ip and is_clf:
-         compo_classifier = {}
-         from cnn.CNN import CNN
-         # compo_classifier['Image'] = CNN('Image')
-         compo_classifier['Elements'] = CNN('Elements')
-         # compo_classifier['Noise'] = CNN('Noise')
-     ocr_model = None
-     if is_ocr:
-         import detect_text.text_detection as text
-
-     # set the range of target inputs' indices
-     num = 0
-     # start_index = 30800 # 61728
-     # end_index = 100000
-
-     img_time_cost_all = []
-     ocr_time_cost_all = []
-     ic_time_cost_all = []
-     ts_time_cost_all = []
-     cd_time_cost_all = []
-
-     resize_by_height = 800
-     for input_img in input_imgs:
-
-         output_data = pd.DataFrame(columns=['screenshot', 'id', 'label', 'index', 'text', 'sentences'])
-
-         this_img_start_time = time.clock()
-
-         resized_height = resize_height_by_longest_edge(input_img, resize_by_height)
-         index = input_img.split('/')[-1][:-4]
-
-         if index != "1-1" and index != "1-2":
-             continue
-
-         if is_ocr:
-             os.makedirs(pjoin(output_root, 'ocr'), exist_ok=True)
-             this_ocr_time_cost = text.text_detection(input_img, output_root, show=False, method='paddle')
-             ocr_time_cost_all.append(this_ocr_time_cost)
-
-         if is_ip:
-             os.makedirs(pjoin(output_root, 'ip'), exist_ok=True)
-             this_cd_time_cost = ip.compo_detection(input_img, output_root, key_params, classifier=compo_classifier, resize_by_height=resized_height, show=False)
-             cd_time_cost_all.append(this_cd_time_cost)
-
-         if is_merge:
-             import detect_merge.merge as merge
-
-             os.makedirs(pjoin(output_root, 'merge'), exist_ok=True)
-             compo_path = pjoin(output_root, 'ip', str(index) + '.json')
-             ocr_path = pjoin(output_root, 'ocr', str(index) + '.json')
-             board_merge, components_merge = merge.merge(input_img, compo_path, ocr_path, pjoin(output_root, 'merge'), is_remove_top_bar=key_params['remove-top-bar'], show=False)
-             # ic_time_cost_all.append(this_ic_time_cost)
-             # ts_time_cost_all.append(this_ts_time_cost)
-
-         if is_classification:
-
-             os.makedirs(pjoin(output_root, 'classification'), exist_ok=True)
-             merge_path = pjoin(output_root, 'merge', str(index) + '.json')
-             merge_json = json.load(open(merge_path, 'r'))
-             os.makedirs(pjoin(output_root, 'classification', 'GUI'), exist_ok=True)
-             this_time_cost_ic, this_time_cost_ts, output_data, output_board = clf.compo_classification(input_img, output_root, segment_root, merge_json, output_data, resize_by_height=resize_by_height)
-
-             ic_time_cost_all.append(this_time_cost_ic)
-             ts_time_cost_all.append(this_time_cost_ts)
-
-         this_img_time_cost = time.clock() - this_img_start_time
-         img_time_cost_all.append(this_img_time_cost)
-         print("time cost for this image: %2.2f s" % this_img_time_cost)
-
-         num += 1
-
-         if os.path.isfile(output_root + '/output.csv'):
-             output_data.to_csv(output_root + '/output.csv', index=False, mode='a', header=False)
-         else:
-             output_data.to_csv(output_root + '/output.csv', index=False, mode='w')
-
-     avg_ocr_time_cost = sum(ocr_time_cost_all) / len(ocr_time_cost_all)
-     avg_cd_time_cost = sum(cd_time_cost_all) / len(cd_time_cost_all)
-     avg_ic_time_cost = sum(ic_time_cost_all) / len(ic_time_cost_all)
-     avg_ts_time_cost = sum(ts_time_cost_all) / len(ts_time_cost_all)
-     avg_time_cost = sum(img_time_cost_all) / len(img_time_cost_all)
-     print("average text extraction time cost for this app: %2.2f s" % avg_ocr_time_cost)
-     print("average widget detection time cost for this app: %2.2f s" % avg_cd_time_cost)
-     print("average icon classification time cost for this app: %2.2f s" % avg_ic_time_cost)
-     print("average text selection processing time cost for this app: %2.2f s" % avg_ts_time_cost)
-     print("average screenshot processing time cost for this app: %2.2f s" % avg_time_cost)
 
spaces/CyberHarem/find_my_waifu/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: Find My Waifu
- emoji: 😻
- colorFrom: purple
- colorTo: indigo
- sdk: gradio
- sdk_version: 3.39.0
- app_file: app.py
- pinned: false
- license: mit
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference