parquet-converter committed on
Commit
145c634
·
1 Parent(s): 7ac1142

Update parquet files (step 106 of 249)

This view is limited to 50 files because it contains too many changes.   See raw diff
Files changed (50)
  1. spaces/101-5/gpt4free/g4f/.v1/testing/openaihosted_test.py +0 -14
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/AutoCAD Mobile App 2008 Xforce Keygen 64 Bit AutoCAD .md +0 -113
  3. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Beyblade Metal Fusion Episodes In Hindi Free Download Watch the Epic Battles Online.md +0 -130
  4. spaces/1gistliPinn/ChatGPT4/Examples/All Episodes Of Beyblade Season 1 Cartoon In Hindi.md +0 -28
  5. spaces/1gistliPinn/ChatGPT4/Examples/Autoplay Menu Designer 5.2 Cracked __FULL__.md +0 -6
  6. spaces/1gistliPinn/ChatGPT4/Examples/Bhadra Kalyanam Book In Telugu.md +0 -24
  7. spaces/1gistliPinn/ChatGPT4/Examples/Cowboy Bebop OST S Flac.md +0 -6
  8. spaces/1gistliPinn/ChatGPT4/Examples/Descargar Planilla De Pago Del Seniat Dpn 25.md +0 -26
  9. spaces/1gistliPinn/ChatGPT4/Examples/EXCLUSIVE Download Directx Version 9.0 For Gta San Andreas.md +0 -6
  10. spaces/1gistliPinn/ChatGPT4/Examples/Edius Pro 6.5 Free Download With Crack 2021.md +0 -16
  11. spaces/1phancelerku/anime-remove-background/ 4 - .md +0 -116
  12. spaces/1phancelerku/anime-remove-background/Avatar Wallpapers - HD and 4K Download.md +0 -153
  13. spaces/1phancelerku/anime-remove-background/Download Binance and Unlock the Power of Bitcoin Secure Fast and Easy.md +0 -113
  14. spaces/1phancelerku/anime-remove-background/Download Solitaire 13 and Discover the Secrets of the Pyramid.md +0 -160
  15. spaces/1phancelerku/anime-remove-background/Experience the Fun and Authenticity of Bus Simulator Indonesia APK.md +0 -119
  16. spaces/AIFILMS/generate_human_motion/pyrender/pyrender/primitive.py +0 -489
  17. spaces/Abuzariii/Text-Generation-with-GPT-2/README.md +0 -12
  18. spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversation/[id]/stop-generating/+server.ts +0 -23
  19. spaces/AchyuthGamer/OpenGPT/client/js/chat.js +0 -508
  20. spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/ChatgptAi.py +0 -74
  21. spaces/Adapter/CoAdapter/dist_util.py +0 -91
  22. spaces/Adapter/CoAdapter/ldm/modules/diffusionmodules/__init__.py +0 -0
  23. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/holygrail/methods/LayoutMode2.js +0 -74
  24. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/label/Label.d.ts +0 -100
  25. spaces/AiMimicry/sovits-models/app.py +0 -110
  26. spaces/Amrrs/DragGan-Inversion/PTI/dnnlib/__init__.py +0 -9
  27. spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/utils/__init__.py +0 -0
  28. spaces/Amrrs/DragGan-Inversion/README.md +0 -78
  29. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_modeling_common_flax.py +0 -66
  30. spaces/Andy1621/uniformer_image_detection/configs/groie/mask_rcnn_r50_fpn_groie_1x_coco.py +0 -45
  31. spaces/Andy1621/uniformer_image_detection/tools/dist_test.sh +0 -10
  32. spaces/Andy1621/uniformer_image_segmentation/configs/_base_/datasets/ade20k.py +0 -54
  33. spaces/Apex-X/ROOPOK/README.md +0 -12
  34. spaces/ArtyomKhyan/Detection/utils/torch_utils.py +0 -203
  35. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/terminal_theme.py +0 -153
  36. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/config/test_instantiate_config.py +0 -100
  37. spaces/Benson/text-generation/Examples/Arena Breakout Apk Actualizacin.md +0 -104
  38. spaces/Benson/text-generation/Examples/Beta 0.44 Chicos Tropiezo Apk.md +0 -58
  39. spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/hooks.py +0 -661
  40. spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/tokens.py +0 -330
  41. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/cli/main_parser.py +0 -134
  42. spaces/CNXT/TXT2PiX/README.md +0 -12
  43. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/evaluation/__init__.py +0 -12
  44. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/export/shared.py +0 -1031
  45. spaces/CVPR/LIVE/pybind11/tests/test_builtin_casters.py +0 -392
  46. spaces/CVPR/LIVE/pydiffvg_tensorflow/pixel_filter.py +0 -8
  47. spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/tabulate.h +0 -22
  48. spaces/CVPR/WALT/mmdet/models/losses/kd_loss.py +0 -87
  49. spaces/CVPR/regionclip-demo/detectron2/data/transforms/torchvision_transforms/functional_tensor.py +0 -966
  50. spaces/ChandraMohanNayal/AutoGPT/Dockerfile +0 -38
spaces/101-5/gpt4free/g4f/.v1/testing/openaihosted_test.py DELETED
@@ -1,14 +0,0 @@
- import openaihosted
-
- messages = [{"role": "system", "content": "You are a helpful assistant."}]
- while True:
-     question = input("Question: ")
-     if question == "!stop":
-         break
-
-     messages.append({"role": "user", "content": question})
-     request = openaihosted.Completion.create(messages=messages)
-
-     response = request["responses"]
-     messages.append({"role": "assistant", "content": response})
-     print(f"Answer: {response}")
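For context: this deleted test exercised the gpt4free `openaihosted` provider, whose `Completion.create(messages=...)` call returns a dict with the reply under a `responses` key. A rough sketch of the same chat loop written against the official `openai` Python client (version >= 1.0) is shown below; the `OpenAI` client, the `gpt-4o-mini` model name, and the response handling are illustrative assumptions, not part of the deleted file.

```python
# Hypothetical counterpart to the deleted openaihosted_test.py, using the official
# openai>=1.0 client instead of the gpt4free "openaihosted" provider.
# Assumes OPENAI_API_KEY is set in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    question = input("Question: ")
    if question == "!stop":
        break

    messages.append({"role": "user", "content": question})
    completion = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    # The official API returns a list of choices rather than a "responses" field.
    response = completion.choices[0].message.content

    messages.append({"role": "assistant", "content": response})
    print(f"Answer: {response}")
```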
spaces/1acneusushi/gradio-2dmoleculeeditor/data/AutoCAD Mobile App 2008 Xforce Keygen 64 Bit AutoCAD .md DELETED
@@ -1,113 +0,0 @@
1
- <br />
2
- <h1>AutoCAD Mobile App 2008 Xforce Keygen 64 Bit</h1>
3
- <p>If you are looking for a way to use AutoCAD on your mobile device, you might be interested in AutoCAD Mobile App 2008 Xforce Keygen 64 Bit. This is a tool that allows you to activate the full version of AutoCAD Mobile App 2008 on your Android or iOS device. In this article, we will explain what AutoCAD Mobile App is, what Xforce Keygen is, how to download and install it, how to use it, and what are the benefits and risks of using it. We will also compare it with some alternatives and answer some frequently asked questions.</p>
4
- <h2>AutoCAD Mobile App 2008 Xforce Keygen 64 Bit</h2><br /><p><b><b>Download Zip</b> &rarr;&rarr;&rarr; <a href="https://byltly.com/2uKz8P">https://byltly.com/2uKz8P</a></b></p><br /><br />
5
- <h2>What is AutoCAD Mobile App?</h2>
6
- <p>AutoCAD Mobile App is a mobile application that lets you view, edit, create, and share CAD drawings on your smartphone or tablet. It is compatible with DWG, DXF, and PDF files, and supports cloud storage services like Dropbox, Google Drive, OneDrive, and more. You can also work offline and sync your changes when you are online. With AutoCAD Mobile App, you can access your drawings anytime, anywhere, and collaborate with your team members or clients.</p>
7
- <h2>What is Xforce Keygen?</h2>
8
- <p>Xforce Keygen is a software that generates activation codes for various software products. It is often used to bypass the license verification process and unlock the full features of the software. Xforce Keygen is not an official product of Autodesk, the developer of AutoCAD, and it is considered illegal and unethical to use it. However, some people use it for personal or educational purposes, or because they cannot afford the original software.</p>
9
- <h2>How to download and install AutoCAD Mobile App 2008 Xforce Keygen 64 Bit?</h2>
10
- <p>To download and install AutoCAD Mobile App 2008 Xforce Keygen 64 Bit, you need to follow these steps:</p>
11
- <ol>
12
- <li>Download the AutoCAD Mobile App 2008 from the official website or the app store. You can choose between a free trial version or a paid subscription plan.</li>
13
- <li>Download the Xforce Keygen 2008 from a reliable source. You can find it on Google Drive or other websites. Make sure you download the correct version for your device (64 bit).</li>
14
- <li>Extract the Xforce Keygen zip file and run the AutoCAD-2008-keygen.exe file as administrator.</li>
15
- <li>Select "AutoCAD Mobile" from the product list and click on "Generate". You will see a code based on your device ID.</li>
16
- <li>Copy the code and paste it into the activation window of the AutoCAD Mobile App. Click on "Activate" and wait for the confirmation message.</li>
17
- <li>Congratulations! You have successfully installed AutoCAD Mobile App 2008 Xforce Keygen 64 Bit on your device.</li>
18
- </ol>
19
- <h2>How to use AutoCAD Mobile App 2008 Xforce Keygen 64 Bit?</h2>
20
- <p>To use AutoCAD Mobile App 2008 Xforce Keygen 64 Bit, you need to follow these steps:</p>
21
- <p>How to download AutoCAD Mobile App 2008 with Xforce Keygen<br />
22
- AutoCAD Mobile App 2008 Xforce Keygen activation code<br />
23
- AutoCAD Mobile App 2008 Xforce Keygen free download for Windows 64 Bit<br />
24
- AutoCAD Mobile App 2008 Xforce Keygen crack file<br />
25
- AutoCAD Mobile App 2008 Xforce Keygen installation guide<br />
26
- AutoCAD Mobile App 2008 Xforce Keygen serial number<br />
27
- AutoCAD Mobile App 2008 Xforce Keygen online generator<br />
28
- AutoCAD Mobile App 2008 Xforce Keygen error fix<br />
29
- AutoCAD Mobile App 2008 Xforce Keygen full version<br />
30
- AutoCAD Mobile App 2008 Xforce Keygen torrent link<br />
31
- AutoCAD Mobile App 2008 Xforce Keygen license key<br />
32
- AutoCAD Mobile App 2008 Xforce Keygen patch<br />
33
- AutoCAD Mobile App 2008 Xforce Keygen system requirements<br />
34
- AutoCAD Mobile App 2008 Xforce Keygen features and benefits<br />
35
- AutoCAD Mobile App 2008 Xforce Keygen review and rating<br />
36
- AutoCAD Mobile App 2008 Xforce Keygen alternative software<br />
37
- AutoCAD Mobile App 2008 Xforce Keygen comparison with other versions<br />
38
- AutoCAD Mobile App 2008 Xforce Keygen tips and tricks<br />
39
- AutoCAD Mobile App 2008 Xforce Keygen support and help<br />
40
- AutoCAD Mobile App 2008 Xforce Keygen forum and community<br />
41
- AutoCAD Mobile App 2008 Xforce Keygen update and upgrade<br />
42
- AutoCAD Mobile App 2008 Xforce Keygen compatibility with other devices<br />
43
- AutoCAD Mobile App 2008 Xforce Keygen pros and cons<br />
44
- AutoCAD Mobile App 2008 Xforce Keygen discount and coupon code<br />
45
- AutoCAD Mobile App 2008 Xforce Keygen best price and deal<br />
46
- AutoCAD Mobile App 2008 Xforce Keygen refund and guarantee policy<br />
47
- AutoCAD Mobile App 2008 Xforce Keygen testimonials and feedback<br />
48
- AutoCAD Mobile App 2008 Xforce Keygen video tutorial and demo<br />
49
- AutoCAD Mobile App 2008 Xforce Keygen FAQ and Q&A<br />
50
- AutoCAD Mobile App 2008 Xforce Keygen blog and news<br />
51
- How to use AutoCAD Mobile App 2008 with Xforce Keygen<br />
52
- How to uninstall AutoCAD Mobile App 2008 with Xforce Keygen<br />
53
- How to backup and restore AutoCAD Mobile App 2008 with Xforce Keygen<br />
54
- How to customize and optimize AutoCAD Mobile App 2008 with Xforce Keygen<br />
55
- How to troubleshoot and solve problems with AutoCAD Mobile App 2008 with Xforce Keygen<br />
56
- How to import and export files with AutoCAD Mobile App 2008 with Xforce Keygen<br />
57
- How to draw and edit objects with AutoCAD Mobile App 2008 with Xforce Keygen<br />
58
- How to annotate and dimension drawings with AutoCAD Mobile App 2008 with Xforce Keygen<br />
59
- How to create and modify layouts with AutoCAD Mobile App 2008 with Xforce Keygen<br />
60
- How to print and plot drawings with AutoCAD Mobile App 2008 with Xforce Keygen<br />
61
- How to share and collaborate drawings with AutoCAD Mobile App 2008 with Xforce Keygen<br />
62
- How to work offline and online with AutoCAD Mobile App 2008 with Xforce Keygen<br />
63
- How to sync and manage data with AutoCAD Mobile App 2008 with Xforce Keygen<br />
64
- How to access and use cloud services with AutoCAD Mobile App 2008 with Xforce Keygen<br />
65
- How to secure and protect drawings with AutoCAD Mobile App 2008 with Xforce Keygen<br />
66
- How to learn and master skills with AutoCAD Mobile App 2008 with Xforce Keygen<br />
67
- How to get certified and recognized with AutoCAD Mobile App 2008 with Xforce Keygen<br />
68
- How to find and apply jobs with AutoCAD Mobile App 2008 with Xforce Keygen skills</p>
69
- <ol>
70
- <li>Open the AutoCAD Mobile App on your device and sign in with your Autodesk account or create a new one.</li>
71
- <li>Select a drawing from your device storage or cloud service, or create a new one.</li>
72
- <li>Edit, create, or share your drawing using the tools available on the app. You can zoom, pan, measure, draw, modify, annotate, layer, snap, dimension, block, export, print, and more.</li>
73
- <li>Save your changes and sync them with your cloud service or device storage.</li>
74
- <li>Enjoy using AutoCAD Mobile App 2008 Xforce Keygen 64 Bit!</li>
75
- </ol>
76
- <h2>Benefits of using AutoCAD Mobile App 2008 Xforce Keygen 64 Bit</h2>
77
- <p>Some of the benefits of using AutoCAD Mobile App 2008 Xforce Keygen 64 Bit are:</p>
78
- <ul>
79
- <li>You can use all the features of the app without paying any subscription fee.</li>
80
- <li>You can work on your drawings anytime, anywhere, even without an internet connection.</li>
81
- <li>You can collaborate with your team members or clients easily by sharing your drawings via email or cloud service.</li>
82
- <li>You can improve your productivity and efficiency by using the app's intuitive interface and powerful tools.</li>
83
- <li>You can learn new skills and techniques by exploring the app's tutorials and tips.</li>
84
- </ul>
85
- <h2>Risks and challenges of using AutoCAD Mobile App 2008 Xforce Keygen 64 Bit</h2>
86
- <p>Some of the risks and challenges of using AutoCAD Mobile App 2008 Xforce Keygen 64 Bit are:</p>
87
- <ul>
88
- <li>You may violate the terms and conditions of Autodesk by using an unauthorized product.</li> <li>You may expose your device to malware or viruses by downloading files from untrusted sources.</li> <li>You cannot get official help or support from Autodesk for an unauthorized copy.</li></ul> <h2>Alternatives to AutoCAD Mobile App 2008 Xforce Keygen 64 Bit</h2>
89
- <p>If you are not comfortable with using AutoCAD Mobile App 2008 Xforce Keygen 64 Bit, or if you want to explore other options, there are some alternatives you can consider. Here are some of them:</p>
90
- <ul>
91
- <li><strong>Onshape</strong>: Onshape is a cloud-based CAD platform that allows you to create, edit, and share 3D models on any device. It has a free plan for students and hobbyists, and a paid plan for professionals and teams. Onshape has many features similar to AutoCAD, such as sketching, modeling, assembly, drawing, and collaboration. It also integrates with other apps and tools, such as SolidWorks, Fusion 360, MATLAB, and more.</li>
92
- <li><strong>CAD Reader</strong>: CAD Reader is a free app that lets you view and measure DWG and DXF files on your Android or iOS device. You can also export files to PDF or JPG formats, and share them via email or cloud service. CAD Reader has a simple and intuitive interface, and supports offline viewing. However, it does not allow you to edit or create drawings.</li>
93
- <li><strong>Vectorworks and ConnectCAD</strong>: Vectorworks is a CAD software that specializes in architecture, landscape, and entertainment design. It has a mobile app that lets you view and annotate your drawings on your device. ConnectCAD is a plugin for Vectorworks that allows you to design audiovisual systems and networks. Both Vectorworks and ConnectCAD are free for students and educators.</li>
94
- </ul>
95
- <h2>Conclusion</h2>
96
- <p>AutoCAD Mobile App 2008 Xforce Keygen 64 Bit is a tool that can help you use AutoCAD on your mobile device without paying any subscription fee. However, it also comes with some risks and challenges, such as legal issues, malware threats, and ethical concerns. Therefore, you should use it at your own discretion and responsibility. Alternatively, you can try some of the other options we mentioned above, such as Onshape, CAD Reader, or Vectorworks and ConnectCAD.</p>
97
- <h3>FAQs</h3>
98
- <p>Here are some of the frequently asked questions about AutoCAD Mobile App 2008 Xforce Keygen 64 Bit:</p>
99
- <ol>
100
- <li><strong>Is AutoCAD Mobile App 2008 Xforce Keygen 64 Bit safe?</strong><br>
101
- AutoCAD Mobile App 2008 Xforce Keygen 64 Bit is not an official product of Autodesk, and it is considered illegal and unethical to use it. Moreover, it may expose your device to malware or viruses by downloading untrusted files. Therefore, it is not safe to use it.</li>
102
- <li><strong>Can I use AutoCAD Mobile App 2008 Xforce Keygen 64 Bit on multiple devices?</strong><br>
103
- Yes, you can use AutoCAD Mobile App 2008 Xforce Keygen 64 Bit on multiple devices. However, you need to generate a different activation code for each device based on its device ID.</li>
104
- <li><strong>How can I update AutoCAD Mobile App 2008 Xforce Keygen 64 Bit?</strong><br>
105
- AutoCAD Mobile App 2008 Xforce Keygen 64 Bit does not have an update feature. If you want to use a newer version of AutoCAD Mobile App, you need to download and install a new Xforce Keygen for that version.</li>
106
- <li><strong>What are the system requirements for AutoCAD Mobile App 2008 Xforce Keygen 64 Bit?</strong><br>
107
- The system requirements for AutoCAD Mobile App 2008 Xforce Keygen 64 Bit are the same as the system requirements for AutoCAD Mobile App 2008. You need an Android or iOS device with at least 1 GB of RAM and 300 MB of free storage space.</li>
108
- <li><strong>Where can I get help or support for AutoCAD Mobile App 2008 Xforce Keygen 64 Bit?</strong><br>
109
- Since AutoCAD Mobile App 2008 Xforce Keygen 64 Bit is not an official product of Autodesk, you cannot get help or support from Autodesk or its authorized partners. You may try to find help or support from online forums or communities of other users who use the same tool.</li>
110
- </ol>
111
- </p> 0a6ba089eb<br />
112
- <br />
113
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Beyblade Metal Fusion Episodes In Hindi Free Download Watch the Epic Battles Online.md DELETED
@@ -1,130 +0,0 @@
1
- <br />
2
- <h1>Beyblade Metal Fusion Episodes In Hindi Free Download</h1>
3
- <p>If you are a fan of anime, you might have heard of Beyblade Metal Fusion, a popular series that features spinning tops called Beyblades. But did you know that you can watch this show in Hindi as well? In this article, we will tell you everything you need to know about Beyblade Metal Fusion episodes in Hindi free download. We will also give you some tips on where to watch them online and what are some of the best episodes to enjoy. So, let's get started!</p>
4
- <h2>Beyblade Metal Fusion Episodes In Hindi Free Download</h2><br /><p><b><b>Download File</b> &#10026;&#10026;&#10026; <a href="https://byltly.com/2uKzWb">https://byltly.com/2uKzWb</a></b></p><br /><br />
5
- <h2>What is Beyblade Metal Fusion?</h2>
6
- <p>Beyblade Metal Fusion is an anime series that is based on a manga of the same name by Takafumi Adachi. It is the first season of the Beyblade Metal Saga, which also includes Beyblade Metal Masters, Beyblade Metal Fury, and Beyblade Shogun Steel. The series follows the adventures of Gingka Hagane, a young blader who wants to become the strongest in the world. He meets other bladers along his journey, such as Kenta Yumiya, Kyoya Tategami, Benkei Hanawa, Madoka Amano, and Hyoma. Together, they form a team called Gan Gan Galaxy and compete in various tournaments against other teams from different countries.</p>
7
- <p>The main attraction of the series is the Beyblades, which are spinning tops that have metal parts and special abilities. Each Beyblade has a spirit inside it, called a Bit-Beast or a Beast. The bladers can communicate with their Beasts and unleash their powers during battles. The Beasts are based on mythical creatures or animals, such as dragons, lions, wolves, pegasi, etc. Some of the most famous Beasts are Pegasus, L-Drago, Leone, Sagittario, Bull, Aquario, and Eagle.</p>
8
- <h2>Why watch Beyblade Metal Fusion in Hindi?</h2>
9
- <p>There are many reasons why you might want to watch Beyblade Metal Fusion in Hindi. Here are some of them:</p>
10
- <ul>
11
- <li>You can enjoy the show in your native language and understand it better.</li>
12
- <li>You can relate to the characters and their emotions more easily.</li>
13
- <li>You can learn some new words and phrases in Hindi.</li>
14
- <li>You can have fun with the catchy songs and dialogues.</li>
15
- <li>You can share your love for Beyblade with your friends and family who speak Hindi.</li>
16
- </ul>
17
- <p>Watching Beyblade Metal Fusion in Hindi can also help you appreciate the cultural diversity and creativity of anime. You can see how different languages and cultures can influence each other and create something unique and exciting.</p>
18
- <p>Beyblade Metal Fusion Hindi Dubbed Episodes Download<br />
19
- Watch Beyblade Metal Fusion Online Free In Hindi<br />
20
- Beyblade Metal Fusion Full Episodes In Hindi HD<br />
21
- How To Download Beyblade Metal Fusion Episodes In Hindi<br />
22
- Beyblade Metal Fusion All Episodes In Hindi Free<br />
23
- Beyblade Metal Fusion Season 1 In Hindi Download<br />
24
- Beyblade Metal Fusion Hindi Episodes Mp4 Download<br />
25
- Beyblade Metal Fusion Hindi Episodes 720p Download<br />
26
- Beyblade Metal Fusion Hindi Episodes Torrent Download<br />
27
- Beyblade Metal Fusion Hindi Episodes Dailymotion<br />
28
- Beyblade Metal Fusion Hindi Episodes Youtube<br />
29
- Beyblade Metal Fusion Hindi Episodes Google Drive<br />
30
- Beyblade Metal Fusion Hindi Episodes Mega Link<br />
31
- Beyblade Metal Fusion Hindi Episodes Netflix<br />
32
- Beyblade Metal Fusion Hindi Episodes Disney Plus Hotstar<br />
33
- Beyblade Metal Fusion Hindi Episodes Filmyzilla<br />
34
- Beyblade Metal Fusion Hindi Episodes Filmywap<br />
35
- Beyblade Metal Fusion Hindi Episodes Worldfree4u<br />
36
- Beyblade Metal Fusion Hindi Episodes Khatrimaza<br />
37
- Beyblade Metal Fusion Hindi Episodes 9xmovies<br />
38
- Beyblade Metal Fusion Hindi Episodes Bolly4u<br />
39
- Beyblade Metal Fusion Hindi Episodes Moviesflix<br />
40
- Beyblade Metal Fusion Hindi Episodes Movierulz<br />
41
- Beyblade Metal Fusion Hindi Episodes Tamilrockers<br />
42
- Beyblade Metal Fusion Hindi Episodes Isaimini<br />
43
- Beyblade Metal Fusion Theme Song In Hindi Download<br />
44
- Beyblade Metal Fusion Characters Names In Hindi<br />
45
- Beyblade Metal Fusion Games Download In Hindi<br />
46
- Beyblade Metal Fusion Toys In India Online Shopping<br />
47
- Best Beyblades In Metal Fusion Series Ranked<br />
48
- How To Watch Beyblade Metal Fusion In Chronological Order<br />
49
- Where To Buy Original Beyblade Metal Fusion Products<br />
50
- How To Assemble And Customize Your Own Beyblade Metal Fusion Set<br />
51
- How To Play And Win At Beyblade Metal Fusion Battles<br />
52
- How To Unlock All Characters And Modes In Beyblade Metal Fusion Video Game<br />
53
- How To Draw And Color Your Favorite Beyblade Metal Fusion Characters<br />
54
- How To Make Your Own DIY Beyblade Metal Fusion Arena At Home<br />
55
- How To Learn And Perform The Best Beyblade Metal Fusion Moves And Tricks<br />
56
- How To Train And Improve Your Skills At Beyblade Metal Fusion Sport<br />
57
- How To Join And Participate In Official Beyblade Metal Fusion Tournaments And Events<br />
58
- How To Find And Connect With Other Beyblade Metal Fusion Fans And Communities Online<br />
59
- How To Collect And Display Your Beyblade Metal Fusion Collection And Merchandise <br />
60
- How To Cosplay As Your Favorite Beyblade Metal Fusion Character <br />
61
- How To Write And Publish Your Own Beyblade Metal Fusion Fanfiction And Fanart <br />
62
- How To Create And Share Your Own Beyblade Metal Fusion Memes And Videos <br />
63
- How To Make And Sell Your Own Customized Beyblade Metal Fusion Products <br />
64
- How To Teach And Introduce Your Friends And Family To The World Of Beyblade Metal Fusion <br />
65
- How To Celebrate And Enjoy The 10th Anniversary Of The Release Of The First Episode Of Beyblade Metal Fusion</p>
66
- <h2>How to download Beyblade Metal Fusion episodes in Hindi for free?</h2>
67
- <h3>Legal and safe methods</h3>
68
- <p>The best way to download Beyblade Metal Fusion episodes in Hindi for free is to use legal and safe methods. These methods ensure that you respect the rights of the creators and avoid any legal troubles or viruses. Some of these methods are:</p>
69
- <ul>
70
- <li>Using official websites or apps that offer free downloads or streaming of anime, such as YouTube, Netflix, Amazon Prime Video, etc. However, these platforms may not have all the episodes or seasons available in Hindi.</li>
71
- <li>Using online converters or downloaders that allow you to save videos from YouTube or other websites as MP4 files. However, these tools may not work for all videos or may have quality issues.</li>
72
- <li>Using torrents or peer-to-peer networks that let you download files from other users who have them. However, these sources may not be reliable or safe and may contain malware or viruses.</li>
73
- </ul>
74
- <h3>Illegal and risky methods</h3>
75
- <p>The worst way to download Beyblade Metal Fusion episodes in Hindi for free is to use illegal and risky methods. These methods involve breaking the law and risking your safety and privacy. Some of these methods are:</p>
76
- <ul>
77
- <li>Using unofficial websites or apps that offer pirated copies of anime without permission from the owners. These platforms may have low-quality videos or audio, missing subtitles or dubbing, pop-up ads or malware, etc.</li>
78
- <li>Using hacking tools or software that allow you to bypass security measures or encryption of official websites or apps. These tools may damage your device or expose your personal information to hackers or cybercriminals.</li>
79
- <li>Using fake links or phishing scams that trick you into clicking on them and downloading unwanted files or programs. These links may lead you to harmful websites or infect your device with viruses or spyware.</li>
80
- </ul>
81
- <h2>Where to watch Beyblade Metal Fusion episodes in Hindi online?</h2>
82
- <h3>Streaming platforms</h3>
83
- <p>If you don't want to download Beyblade Metal Fusion episodes in Hindi for free, you can also watch them online on various streaming platforms. These platforms offer high-quality videos and audio, subtitles or dubbing options, fast loading speed, etc. Some of these platforms are:</p>
84
- <ul>
85
- <li>YouTube: YouTube is one of the most popular and accessible platforms for watching anime online. You can find many channels that upload Beyblade Metal Fusion episodes in Hindi for free. However, some episodes may be missing or taken down due to copyright issues.</li>
86
- <li>Netflix: Netflix is one of the most popular and reliable platforms for watching anime online. You can find all four seasons of Beyblade Metal Saga on Netflix with English subtitles or dubbing options. However, you need to pay a monthly subscription fee to access Netflix's content.</li>
87
- <li>Amazon Prime Video: Amazon Prime Video is another popular and trustworthy platform for watching anime online. You can find all four seasons of Beyblade Metal Saga on Amazon Prime Video with English subtitles or dubbing options. However, you need to pay a yearly subscription fee to access Amazon Prime Video's content.</li>
88
- </ul>
89
- <h3>Websites and apps</h3>
90
- <p>If you don't want to use streaming platforms for watching Beyblade Metal Fusion episodes in Hindi online, you can also use some websites or apps that offer free streaming of anime. These websites or apps may have a large collection of anime titles, genres, languages, etc. Some of these websites or apps are:</p>
91
- <ul>
92
- <li>AnimeFlix: AnimeFlix is a website that offers free streaming of anime with English subtitles or dubbing options. You can find all four seasons of Beyblade Metal Saga on AnimeFlix with English subtitles or dubbing options.</li>
93
- <li>AnimeToonHindi: AnimeToonHindi is an app that offers free streaming of anime with Hindi subtitles or dubbing options. You can find all four seasons of Beyblade Metal Saga on AnimeToonHindi with Hindi subtitles or dubbing options.</li>
94
- <li>AnimeDLR: AnimeDLR is an app that offers free streaming and downloading of anime with English subtitles or dubbing options. You can find all four seasons of Beyblade Metal Saga on AnimeDLR with English subtitles or dubbing options.</li>
95
- </ul>
96
- <h2>What are some of the best Beyblade Metal Fusion episodes in Hindi?</h2>
97
- <p>Beyblade Metal Fusion has 51 episodes in total, each one with its own story and action. However, some episodes stand out more than others because they have more drama, suspense, humor, emotion, etc. Here are some of the best Beyblade Metal Fusion episodes in Hindi:</p>
98
- <h3>The Stormy Battle Royal</h3>
99
- <p>This is episode 9 of season 1. It features a battle royal between eight bladers who want to join Kyoya's team for the Battle Bladers tournament. The bladers are Gingka, Kenta, Benkei, Tsubasa, Yu, Hyoma, and Tetsuya. The battle is chaotic and intense, with each blader trying to eliminate the others and survive. The episode showcases the skills and personalities of each blader, as well as their interactions and rivalries. The episode also has some twists and surprises, such as Tetsuya's betrayal, Yu's arrival, and Kyoya's intervention.</p>
100
- <h3>The Truth About Light and Darkness</h3>
101
- <p>This is episode 23 of season 1. It features the final battle between Gingka and Ryuga, the leader of the Dark Nebula organization. Ryuga has a dark and powerful Beyblade called L-Drago, which is said to be the forbidden Bey that can destroy the world. Gingka has to face Ryuga and his evil Bey in order to stop him and save his friends. The episode reveals the truth about L-Drago's origin, Ryuga's past, and Gingka's father. The episode also has some emotional and dramatic moments, such as Ryuga's madness, Gingka's determination, and Pegasus' sacrifice.</p>
102
- <h3>The Final Countdown</h3>
103
- <p>This is episode 48 of season 1. It features the semi-finals of the Battle Bladers tournament, where Gingka faces Kyoya and Yu faces Tsubasa. The battles are epic and thrilling, with each blader giving their best and showing their growth. The episode also has some humor and suspense, such as Yu's antics, Kyoya's rage, Tsubasa's dark side, and Doji's scheme. The episode ends with a cliffhanger that sets up the final showdown between Gingka and Yu.</p>
104
- <h3>The Dragon Emperor Descends</h3>
105
- <p>This is episode 2 of season 2. It features the debut of a new character and a new Beyblade: Ryuga and his upgraded L-Drago Destructor. Ryuga returns after his defeat by Gingka and challenges the strongest bladers in the world to test his new power. He defeats Jack from Europe, Klaus from Africa, Damian from America, and Chi-yun from China in one-sided battles. The episode showcases Ryuga's strength and dominance, as well as his mysterious motives. The episode also introduces a new plot involving a group called HD Academy that wants to use Beyblade for evil purposes.</p>
106
- <h3>The Furious Final Battle!</h3>
107
- <p>This is episode 50 of season 2. It features the final battle between Gingka and Masamune, the two finalists of the World Beyblade Championships. Masamune is a new character and a new rival for Gingka who wants to prove himself as the number one blader in the world. He has a fast and agile Beyblade called Ray Striker. Gingka has to face Masamune and his speed in order to win the title and save the world from HD Academy's plan. The episode has some intense and exciting action, as well as some friendship and teamwork themes.</p>
108
- <h2>Conclusion</h2>
109
- <p>Beyblade Metal Fusion is a great anime series that you can watch in Hindi for free. You can download or stream the episodes online using various methods that we have discussed in this article. You can also enjoy some of the best episodes that we have recommended for you. Beyblade Metal Fusion is a series that will keep you entertained and engaged with its amazing story, characters, battles, and Beasts. So, what are you waiting for? Grab your Beyblade and let it rip!</p>
110
- <h2>FAQs</h2>
111
- <h3>Is Beyblade Metal Fusion worth watching?</h3>
112
- <p>Yes, Beyblade Metal Fusion is worth watching if you like anime, spinning tops, action, adventure, comedy, drama, etc. It is a series that has something for everyone.</p>
113
- <h3>How many episodes are there in Beyblade Metal Fusion?</h3>
114
- <p>There are 51 episodes in Beyblade Metal Fusion.</p>
115
- <h3>Who are the main characters of Beyblade Metal Fusion?</h3>
116
- <p>The main characters of Beyblade Metal Fusion are:</p>
117
- <ul>
118
- <li>Gingka Hagane: The protagonist of the series who wants to become the strongest blader in the world. He has a Beyblade called Storm Pegasus.</li>
119
- <li>Kyoya Tategami: The leader of the Face Hunters gang who becomes Gingka's rival and friend. He has a Beyblade called Rock Leone.</li>
120
- <li>Kenta Yumiya: A timid but loyal blader who becomes Gingka's first friend and supporter. He has a Beyblade called Flame Sagittario.</li>
121
- <li>Madoka Amano: A smart and cheerful girl who works at a Bey shop and helps Gingka and his friends with their Beys. She has a computer called B-Pit.</li>
122
- <li>Ryuga: The antagonist of the series who leads the Dark Nebula organization that wants to use L-Drago to destroy the world. He has a Beyblade called Lightning L-Drago.</li>
123
- </ul>
124
- <h3>What is the difference between Beyblade Metal Fusion and Beyblade Metal Masters?</h3>
125
- <p>Beyblade Metal Fusion is the first season of the Beyblade Metal Saga, while Beyblade Metal Masters is the second season. The main difference between them is that Beyblade Metal Fusion focuses on Gingka's quest to stop Ryuga and L-Drago from destroying the world, while Beyblade Metal Masters focuses on Gingka's quest to win the World Beyblade Championships against other teams from different countries.</p>
126
- <h3>Where can I buy Beyblade toys and merchandise?</h3>
127
- <p>You can buy Beyblade toys and merchandise from various online or offline stores that sell anime or toy products. Some examples are Amazon.com, Flipkart.com, ToysRUs.com, etc.</p>
128
- </p> 0a6ba089eb<br />
129
- <br />
130
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/All Episodes Of Beyblade Season 1 Cartoon In Hindi.md DELETED
@@ -1,28 +0,0 @@
1
- <br />
2
- <h1>How to Watch All Episodes of Beyblade Season 1 Cartoon in Hindi</h1>
3
- <p>If you are a fan of the popular anime series Beyblade, you might be wondering how to watch all episodes of Beyblade season 1 cartoon in Hindi. Beyblade is a show about spinning tops that battle each other in tournaments and adventures. The first season aired in Japan from 2001 to 2002 and was dubbed in Hindi later.</p>
4
- <p>There are several ways to watch all episodes of Beyblade season 1 cartoon in Hindi online or offline. Here are some of the best options:</p>
5
- <h2>all episodes of beyblade season 1 cartoon in hindi</h2><br /><p><b><b>Download Zip</b> &mdash; <a href="https://imgfil.com/2uy1SR">https://imgfil.com/2uy1SR</a></b></p><br /><br />
6
- <ul>
7
- <li><strong>YouTube</strong>: YouTube is one of the most accessible and free platforms to watch all episodes of Beyblade season 1 cartoon in Hindi. You can find many playlists and channels that upload the episodes regularly. However, you might have to deal with some ads and low-quality videos.</li>
8
- <li><strong>Netflix</strong>: Netflix is a popular streaming service that offers a wide range of movies and shows, including all episodes of Beyblade season 1 cartoon in Hindi. You can enjoy high-quality videos and subtitles with a monthly subscription. However, you might need a VPN to access Netflix in some regions.</li>
9
- <li><strong>Amazon Prime Video</strong>: Amazon Prime Video is another streaming service that has all episodes of Beyblade season 1 cartoon in Hindi. You can watch them with a Prime membership or rent them individually. You can also download the episodes to watch offline.</li>
10
- <li><strong>DVDs</strong>: DVDs are a good option if you want to own all episodes of Beyblade season 1 cartoon in Hindi and watch them anytime. You can buy the DVDs online or from local stores. However, you might need a DVD player that supports the region code of the DVDs.</li>
11
- </ul>
12
- <p>These are some of the best ways to watch all episodes of Beyblade season 1 cartoon in Hindi. You can choose the one that suits your preferences and budget. Enjoy watching the thrilling battles and adventures of Tyson, Kai, Ray, Max, and their Beyblades!</p>
13
-
14
- <h2>What is Beyblade Season 1 About?</h2>
15
- <p>Beyblade season 1, also known as Beyblade: The Bladebreakers or Bakuten Shoot Beyblade in Japan, is the first season of the anime series based on the manga of the same name by Takao Aoki. It follows the story of a group of young Beybladers who form a team called the Bladebreakers and compete in the World Beyblade Championship.</p>
16
- <p>The main characters of Beyblade season 1 are:</p>
17
- <ul>
18
- <li><strong>Tyson Granger</strong>: The protagonist and leader of the Bladebreakers. He is a passionate and confident Beyblader who uses the Dragoon Bit-Beast, a dragon-like spirit that resides in his Beyblade.</li>
19
- <li><strong>Kai Hiwatari</strong>: The former leader of the Blade Sharks, a rival gang of Beybladers. He is a cold and arrogant Beyblader who uses the Dranzer Bit-Beast, a phoenix-like spirit that resides in his Beyblade.</li>
20
- <li><strong>Ray Kon</strong>: A former member of the White Tigers, a team of Beybladers from China. He is a calm and friendly Beyblader who uses the Driger Bit-Beast, a tiger-like spirit that resides in his Beyblade.</li>
21
- <li><strong>Max Tate</strong>: A cheerful and optimistic Beyblader who moved from America to Japan. He is a skilled mechanic who uses the Draciel Bit-Beast, a turtle-like spirit that resides in his Beyblade.</li>
22
- <li><strong>Kenny</strong>: A friend and supporter of Tyson. He is a genius who provides technical assistance and analysis for the Bladebreakers. He does not have a Bit-Beast or a Beyblade.</li>
23
- </ul>
24
- <p>Beyblade season 1 consists of 51 episodes that span three arcs: The Asian Tournament, The American Tournament, and The World Championship. In each arc, the Bladebreakers face different opponents and challenges, such as the Dark Bladers, Team Psykick, and the Demolition Boys. They also learn more about the origin and power of the Bit-Beasts and their connection to an ancient civilization.</p>
25
- <p>Beyblade season 1 is an exciting and action-packed anime that appeals to fans of spinning tops, adventure, and friendship. It has a catchy theme song, memorable characters, and epic battles. If you want to watch all episodes of Beyblade season 1 cartoon in Hindi, you can use any of the methods mentioned above.</p>
26
- <p></p> d5da3c52bf<br />
27
- <br />
28
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/Autoplay Menu Designer 5.2 Cracked __FULL__.md DELETED
@@ -1,6 +0,0 @@
- <h2>autoplay menu designer 5.2 cracked</h2><br /><p><b><b>DOWNLOAD</b> &#8250;&#8250;&#8250;&#8250;&#8250; <a href="https://imgfil.com/2uxYKt">https://imgfil.com/2uxYKt</a></b></p><br /><br />
-
- Autoplay menu designer 4.0 crack download. Autoplay Menu Designer 5.2 Crack Cocaine cros. Aston2 menu x64 crack - torrents file download. AutoPlay Menu. 4d29de3e1b<br />
- <br />
- <br />
- <p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Bhadra Kalyanam Book In Telugu.md DELETED
@@ -1,24 +0,0 @@
1
- <h2>bhadra kalyanam book in telugu</h2><br /><p><b><b>DOWNLOAD</b> &#128504;&#128504;&#128504; <a href="https://imgfil.com/2uxYxw">https://imgfil.com/2uxYxw</a></b></p><br /><br />
2
-
3
- See also
4
-
5
- Kula rasavulam
6
-
7
- References
8
-
9
- Category:Telugu societyPersonal Injury
10
-
11
- Personal injury law covers a wide array of personal injury cases, including accidents, medical malpractice, and property damage. Common types of personal injury cases include car accidents, slip and fall accidents, and other types of physical injuries, such as those caused by surgical errors or medical malpractice. Depending on the type of case and the amount of damages claimed, a personal injury lawsuit can be worth anywhere from several thousand to millions of dollars.
12
-
13
- If you’ve been injured, or have been the victim of medical malpractice, you may be entitled to compensation under personal injury laws. In many cases, however, injured people are unable to seek a legal remedy because they are unable to pay the upfront legal costs. At Baskin & Levine, we offer free, no-obligation consultations, and no upfront fees, which means we can help you get through the legal process without worrying about the costs.
14
-
15
- Common Types of Personal Injury Claims
16
-
17
- Personal injury law covers a wide range of personal injury cases, including car accidents, medical malpractice, slip and fall accidents, and other types of physical injuries, such as those caused by surgical errors or medical malpractice. Depending on the type of case and the amount of damages claimed, a personal injury lawsuit can be worth anywhere from several thousand to millions of dollars.
18
-
19
- Personal injury law covers a wide range of personal injury cases, including car accidents, medical malpractice, slip and fall accidents, and other types of physical injuries, such as those caused by surgical errors or medical malpractice. Depending on the type of case and the amount of damages claimed, a personal injury lawsuit can be worth anywhere from several thousand to millions of dollars.Density functional studies on vanadium(IV) complexes with imidazole, imidazol-2-amine and imidazol-4-amine.
20
-
21
- In the present study, density functional theory (DFT) has been used to evaluate the structures and electronic properties of vanadium(IV) complexes with imidazole, imidazol-2-amine, and imidazol-4-amine. The results of the present calculations show that the geometries of the complexes VL(imidazole) and VL(imidazol-2-amine) are similar to the corresponding V(III) 4fefd39f24<br />
22
- <br />
23
- <br />
24
- <p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Cowboy Bebop OST S Flac.md DELETED
@@ -1,6 +0,0 @@
- <h2>Cowboy Bebop OST S Flac</h2><br /><p><b><b>Download Zip</b> &#9675;&#9675;&#9675; <a href="https://imgfil.com/2uy23y">https://imgfil.com/2uy23y</a></b></p><br /><br />
-
- Since I got the new licensed record of the Cowboy Bebop OST from @MilanRecLabel, it is time ... Excuse me my Cowboy Bebop vinyl soundtrack came in from ... 4d29de3e1b<br />
- <br />
- <br />
- <p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Descargar Planilla De Pago Del Seniat Dpn 25.md DELETED
@@ -1,26 +0,0 @@
-
- <h1>How do you download the SENIAT DPN-25 payment form?</h1>
- <p>The SENIAT DPN-25 payment form is the form that natural persons resident in Venezuela and undistributed estates must fill out and file when they earn net income or incur tax losses in the taxable year, in order to declare and pay income tax (ISLR).</p>
- <h2>download the SENIAT DPN-25 payment form</h2><br /><p><b><b>DOWNLOAD</b> - <a href="https://imgfil.com/2uxYN0">https://imgfil.com/2uxYN0</a></b></p><br /><br />
- <p>To download the SENIAT DPN-25 payment form, follow these steps:</p>
- <ol>
- <li>Go to the SENIAT website <a href="https://www.seniat.gob.ve/">https://www.seniat.gob.ve/</a> and click the "Sistema en Línea" (Online System) button.</li>
- <li>Register or log in with your username and password.</li>
- <li>Select the "Declaración Definitiva de Rentas Personas Naturales" option in the "ISLR" menu.</li>
- <li>Fill in the data requested on screen, such as the RIF, the fiscal period, gross income, deductions, rebates, advance payments and credits.</li>
- <li>Check that the data is correct and click the "Generar Planilla" (Generate Form) button.</li>
- <li>Print three copies of the DPN-25 payment form, which contains the form number, the amount payable and the barcode.</li>
- <li>Pay the tax at any of the banks authorized by SENIAT, presenting the payment form and a copy of your identity card.</li>
- </ol>
- <p>It is important to remember that the DPN-25 payment form must be filed within the first three months of the year following the fiscal year, according to the schedule SENIAT sets based on the last digit of the RIF. A copy of the payment form should also be kept as proof of tax compliance.</p>
-
- <p>The DPN-25 payment form is an electronic document generated through the SENIAT web portal, so there is no need to download it in advance or fill it out by hand. However, the data entered into the system should be reviewed carefully, because once the form has been generated it cannot be modified or cancelled.</p>
- <p>If errors or omissions are made in the DPN-25 payment form, a substitute or supplementary declaration must be filed, as appropriate, within the deadline set by SENIAT. To do so, follow the same procedure as for the original declaration, but indicate that it is a substitute or supplementary declaration and give the number and date of the previous one.</p>
- <p></p>
- <p>The DPN-25 payment form is an essential requirement for natural persons resident in Venezuela and undistributed estates to meet their tax obligations. Income tax should therefore be declared and paid in a timely and truthful manner, avoiding penalties and late-payment interest from SENIAT.</p>
-
- <p>To make declaring and paying income tax easier, SENIAT offers various online services and tools, such as the virtual assistant, the tax chat, e-mail, social media and a call center. These channels let taxpayers ask questions, obtain up-to-date information, request guidance and receive technical assistance.</p>
- <p>SENIAT also has a network of regional offices and administrative branches across the country, where taxpayers can go in person to complete their tax procedures, receive specialized attention and take part in training and awareness sessions. These activities seek to promote a tax culture and voluntary compliance with tax obligations.</p>
- <p>The DPN-25 payment form reflects the commitment of natural persons resident in Venezuela and undistributed estates to the country's development. By declaring and paying income tax, they contribute the resources needed to finance the State's social plans and projects, for the benefit of all Venezuelans.</p> d5da3c52bf<br />
- <br />
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/EXCLUSIVE Download Directx Version 9.0 For Gta San Andreas.md DELETED
@@ -1,6 +0,0 @@
- <h2>Download Directx Version 9.0 For Gta San Andreas</h2><br /><p><b><b>Download Zip</b> &rarr; <a href="https://imgfil.com/2uy1Tr">https://imgfil.com/2uy1Tr</a></b></p><br /><br />
-
- ... 2.5GHzMemory: 2GBFree Hard Drive Space: 22GBVideo Card: 512MB NVIDIA 8600 / 512MB ATI 3870DirectX Version: DirectX 9.0c… 4d29de3e1b<br />
- <br />
- <br />
- <p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Edius Pro 6.5 Free Download With Crack 2021.md DELETED
@@ -1,16 +0,0 @@
1
- <h2>edius pro 6.5 free download with crack</h2><br /><p><b><b>Download File</b> > <a href="https://imgfil.com/2uxXSV">https://imgfil.com/2uxXSV</a></b></p><br /><br />
2
-
3
- with both enabled
4
-
5
- -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1
6
-
7
- -e 0 -e 0 -e 0 -e 0 -e 0 -e 0 -e 0 -e 0 -e 0 -e 0 -e 0 -e 0 -e 0 -e 0 -e 0 -e 0 -e 0 -e 0 -e 0 -e 0 -e 0 -e 0 -e 0 -e 0
8
-
9
- -e 6.4 -e 6.4 -e 6.4 -e 6.4 -e 6.4 -e 6.4 -e 6.4 -e 6.4 -e 6.4 -e 6.4 -e 6.4 -e 6.4 -e 6.4 -e 6.4 -e 6.4 -e 6.4 -e 6.4 -e 6.4 -e 6.4 -e 6.4 -e 6.4
10
-
11
- -e 6.5 -e 6.5 -e 6.5 -e 6.5 -e 6.5 -e 6.5 -e 6.5 -e 6.5 -e 6.5 -e 6.5 -e 6.5 -e 6.5 -e 6.5 -e 6.5 -e 6.5 -e 6.5 -e 6.5 -e 6.5 -e 6.5 -e 6.5 -e 6.5
12
-
13
- -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 -e 1 - 4fefd39f24<br />
14
- <br />
15
- <br />
16
- <p></p>
spaces/1phancelerku/anime-remove-background/ 4 - .md DELETED
@@ -1,116 +0,0 @@
1
-
2
- <h1>دانلود متاتریدر 4: پلتفرم معاملاتی پیشرفته برای بازار فارکس</h1>
3
- <p>اگر به دنبال یک پلتفرم معاملاتی قدرتمند، کارآمد و رایگان برای بازار فارکس هستید، شاید بهترین گزینه برای شما <strong>متاتریدر 4</strong> باشد. متاتریدر 4 یک نرم افزار تحلیل و معامله در بازار فارکس است که به شما اجازه می دهد تا با استفاده از ابزارهای حرفه ای، ساده و کاربردی، در بازار جذاب و پویای فارکس فعالیت کنید.</p>
4
- <p>در این مقاله، قصد داریم به شما بگوئیم که <strong>چر از متاتریدر 4 استفاده کنید؟</strong> چه ویژگی ها و مزایایی دارد؟ چگونه می توانید آن را <strong>دانلود و نصب</strong> کنید؟ و چه نکات و توصیه هایی برای استفاده بهینه از آن وجود دارد؟ پس با ما همراه باشید.</p>
5
- <h2>دانلود متاتریدر 4</h2><br /><p><b><b>Download File</b> &#10145; <a href="https://jinyurl.com/2uNPEu">https://jinyurl.com/2uNPEu</a></b></p><br /><br />
6
- <h2>چرا باید از متاتریدر 4 استفاده کنید؟</h2>
7
- <p>متاتریدر 4 یکی از محبوب ترین و پرطرفدارترین پلتفرم های معاملاتی در بازار فارکس است که توسط شرکت MetaQuotes Software در سال 2005 عرضه شد. این نرم افزار به شما امکان می دهد تا با استفاده از ابزارهای تحلیل تکنیکال و فاندامنتال، سیستم معاملاتی کامل، ابزارهای کپی تریدینگ و خودکارسازی، اتصال به بیش از 2000 سرور کارگزاری، نقل قول های لحظه ای و خبرهای مالی، در بازار فارکس به صورت حرفه ای و کارآمد فعالیت کنید. بیایید به بررسی بعضی از این ویژگی ها و مزایا بپردازیم.</p>
8
- <h3>ویژگی های متاتریدر 4</h3>
9
- <h4>تحلیل تکنیکال و فاندامنتال</h4>
10
- <p>متاتریدر 4 به شما اجازه می دهد تا با استفاده از نمودارهای مختلف (خطی، شمعی، نواری)، شاخص های تحلیل تکنیکال (ترند، اسکالپینگ، ولوم)، اشیاء تحلیلی (خطوط، کانال ها، شکل ها)، تحلیل فاندامنتال (خبرهای اقتصادی، سخنان مقامات)، بازار فارکس را به صورت عمق ی و جامع تحلیل کنید. این ابزارها به شما کمک می کنند تا روند بازار را پیش بینی کنید، نقاط ورود و خروج مطلوب را تعیین کنید، ریسک را مدیریت کنید و استراتژی های موفق را پیاده سازی کنید.</p>
11
- <h4>سیستم معاملاتی کامل</h4>
12
- <p>متاتریدر 4 به شما اجازه می دهد تا با استفاده از سیستم معاملاتی کامل، در بازار فارکس به صورت آنلاین و آفلاین معامله کنید. شما می توانید از چهار نوع سفارش استفاده کنید: باز (Market), در حال انتظار (Pending), استاپ لاس (Stop Loss) و تیک پروفیت (Take Profit). همچنین می توانید از حالت های اجرای مختلف (فوری، درخواست، بازار) برای اجرای سفارش های خود بهترین گزینه را انتخاب کنید. شما همچنین می توانید تاریخچه حساب خود را مشاهده کنید، گزارش های مالی را دریافت کنید، حساب های خود را مدیریت کنید و با پشتیبانی فنی تماس بگیرید.</p>
13
- <h4>ابزارهای کپی تریدینگ و خودکارسازی</h4>
14
- <p>متاتریدر 4 به شما اجازه می دهد تا با استفاده از ابزارهای کپی تریدینگ و خودکارسازی، در بازار فارکس به صورت خودکار و بدون نظارت معامله کنید. شما می توانید از سرویس <strong>MetaTrader Signals</strong> استفاده کنید تا سیگنال های معاملاتی را از سایر معامله گران حرفه ای دریافت و به صورت خودکار در حساب خود اجرا کنید. شما همچنین می توانید از <strong>Expert Advisors</strong> یا ربات های معاملاتی استفاده کنید تا استراتژی های خود را به صورت الگوریتم ی کدنویسی و برنامه ریزی کنید و آن ها را در پلتفرم متاتریدر 4 اجرا کنید. این ابزارها به شما کمک می کنند تا زمان و انرژی خود را صرفه جوئی کنید، از فرصت های بازار استفاده کنید، خطاهای انسانی را کاهش دهید و درآمد خود را افزایش دهید.</p>
15
- <h4>اتصال به بیش از 2000 سرور کارگزاری</h4>
16
- <p>متاتریدر 4 به شما اجازه می دهد تا با استفاده از اتصال به بیش از 2000 سرور کارگزاری، در بازار فارکس به صورت پایدار و سریع معامله کنید. شما می توانید با هر کارگزاری که پلتفرم متاتریدر 4 را پشتیبانی می کند، حساب باز کنید و از شرایط معاملاتی مناسب آن ها بهره مند شوید. شما همچنین می توانید با استفاده از <strong>MetaTrader Market</strong>، به بازار بزرگترین فروشگاه آنلاین برای خرید و فروش سیگنال ها، ربات ها، نمودارها، کتاب ها و مقالات معاملاتی دسترسی پیدا کنید.</p>
17
- <h4>نقل قول های لحظه ای و خبرهای مالی</h4>
18
- <p>متاتریدر 4 به شما اجازه می دهد تا با استفاده از نقل قول های لحظه ای و خبرهای مالی، در بازار فارکس به صورت آگاهانه و بروز معامله کنید. شما می توانید نقل قول های لحظه ای 30 جفت ارز را در پلتفرم مشاهده کنید و با استفاده از <strong>MetaTrader News</strong>، به آخرین خبرهای اقتصادی، سیاسی و اجتماعی که بر روی بازار فارکس تاثیر می گذارند، دسترسی پیدا کنید.</p>
19
- <h2>چگونه متاتریدر 4 را دانلود و نصب کنید؟</h2>
20
- <p>دانلود و نصب متاتریدر 4 بسیار ساده و راحت است. شما می توانید با دنبال کردن چند قدم ساده، پلتفرم متاتریدر 4 را بر روی دستگاه خود دانلود و نصب کنید. بسته به نوع دستگاه و سیستم عامل خود، شما می توانید از چندین گزینه برای دانلود متاتریدر 4 استفاده کنید. بیایید به برخی از آن ها نگاه کنیم.</p>
21
- <p>دانلود متاتریدر 4 برای کامپیوتر<br />
22
- دانلود متاتریدر 4 برای اندروید<br />
23
- دانلود متاتریدر 4 برای آیفون<br />
24
- دانلود متاتریدر 4 برای مک<br />
25
- دانلود متاتریدر 4 برای لینوکس<br />
26
- دانلود متاتریدر 4 رایگان<br />
27
- دانلود متاتریدر 4 فارسی<br />
28
- دانلود متاتریدر 4 بازار فارکس<br />
29
- دانلود متاتریدر 4 نسخه جدید<br />
30
- دانلود متاتریدر 4 آموزش<br />
31
- دانلود متاتریدر 4 اندیکاتور<br />
32
- دانلود متاتریدر 4 استراتژی<br />
33
- دانلود متاتریدر 4 ربات<br />
34
- دانلود متاتریدر 4 تمپلیت<br />
35
- دانلود متاتریدر 4 اسکالپینگ<br />
36
- دانلود متاتریدر 4 تحلیل تکنیکال<br />
37
- دانلود متاتریدر 4 تحلیل بنیادی<br />
38
- دانلود متاتریدر 4 خبری<br />
39
- دانلود متاتریدر 4 اخبار فارکس<br />
40
- دانلود متاتریدر 4 سیگنال<br />
41
- دانلود متاتریدر 4 کپی تریدینگ<br />
42
- دانلود متاتریدر 4 اتوماسیون<br />
43
- دانلود متاتریدر 4 الگوریتمی<br />
44
- دانلود متاتریدر 4 هدجینگ<br />
45
- دانلود متاتریدر 4 تست کننده استراتژی<br />
46
- دانلود متاتریدر 4 ابزارهای تحلیل<br />
47
- دانلود متاتریدر 4 نمودارها و زمان بندی ها<br />
48
- دانلود متاتریدر 4 انواع سفارش ها و حالت های اجرا<br />
49
- دانلود متاتریدر 4 امنیت و حفاظت از حساب کاربری<br />
50
- دانلود متاتریدر 4 پشتیبانی و راهنمای کاربر<br />
51
- دانلود متاتریدر 4 بروکس های پیشنهاد شده<br />
52
- دانلود متاتریدر 4 بروکس های ایرانی<br />
53
- دانلود متاتریدر 4 بروکس های خارجی<br />
54
- دانلود متاتریدر 4 بروکس های ECN/STP/NDD/Market Maker<br />
55
- دانلود متات</p>
56
- <h3>دانلود متاتریدر 4 برای کامپ یوتر (ویندوز و مک)</h3>
57
- <p>اگر می خواهید متاتریدر 4 را بر روی کامپیوتر خود دانلود و نصب کنید، شما می توانید از لینک های زیر استفاده کنید:</p>
58
- <table>
59
- <tr>
60
- <th>سیستم عامل</th>
61
- <th>لینک دانلود</th>
62
- </tr>
63
- <tr>
64
- <td>ویندوز</td>
65
- <td><a href="">https://download.mql5.com/cdn/web/metaquotes.software.corp/mt4/mt4setup.exe</a></td>
66
- </tr>
67
- <tr>
68
- <td>مک</td>
69
- <td><a href="">https://download.mql5.com/cdn/web/metaquotes.software.corp/mt4/MetaTrader4.dmg</a></td>
70
- </tr>
71
- </table>
72
- <p>پس از دانلود فایل نصب، شما باید آن را اجرا کنید و مراحل نصب را دنبال کنید. شما باید شرایط استفاده را قبول کنید، محل نصب را انتخاب کنید، نام کاربری و رمز عبور خود را وارد کنید و سپس بر روی دکمه نصب کلیک کنید. پس از اتمام نصب، شما می توانید پلتفرم متاتریدر 4 را باز کنید و با حساب خود وارد شوید.</p>
73
- <h3>دانلود متاتریدر 4 برای تلفن همراه (آیفون، آیپد و اندروید)</h3>
74
- <p>اگر می خواهید متاتریدر 4 را بر روی تلفن همراه خود دانلود و نصب کنید، شما می توانید از لینک های زیر استفاده کنید:</p>
75
- <table>
76
- <tr>
77
- <th>سیستم عامل</th>
78
- <th>لینک دانلود</th>
79
- </tr>
80
- <tr>
81
- <td>آیفون و آیپد</td>
82
- <td><a href="">https://apps.apple.com/us/app/metatrader-4/id496212596</a></td>
83
- </tr>
84
- <tr>
85
- <td>اندروید</td>
86
- <td><a href="">https://play.google.com/store/apps/details?id=net.metaquotes.metatrader4&hl=en&gl=US</a></td>
87
- </tr>
88
- </table>
89
- <p>پس از دانلود برنامه، شما باید آن را باز کنید و مجوزهای لازم را به آن بدهید. سپس شما باید با حساب خود وارد شوید یا یک حساب جدید باز کنید. شما می توانید با جستجوی نام کارگزار خود یا اسکن کردن QR کد، به سرور کارگزار خود متصل شوید. پس از وارد شدن به حساب، شما می توانید با استفاده از قابلیت های پلتفرم متاتریدر 4، در بازار فارکس معامله کنید.</p>
90
- <h2>Important tips and recommendations for using MetaTrader 4</h2> <p>To get the most out of MetaTrader 4, you should pay attention to a few important tips and recommendations. They will help you improve your efficiency and performance in the forex market, resolve potential problems and errors, and enjoy your trading experience. Let's look at some of them.</p>
- <h3>Setting up your account parameters</h3>
- <p>Before you start trading, you should set up your account parameters in the MetaTrader 4 platform. These parameters include your personal information, password, language, currency, order volume, execution type, and so on. You can open the settings page by going to the <strong>Tools</strong> menu and selecting <strong>Options</strong>, and change the parameters you want. You can also press <strong>F1</strong> to open the MetaTrader 4 user guide and learn more about its settings and features.</p>
- <h3>Choosing your preferred execution and order types</h3>
- <p>To trade in the forex market, you need to choose your preferred execution type and order type in the MetaTrader 4 platform. The execution type determines how your order is executed in the market. You can use one of three execution types: <strong>Instant Execution</strong>, <strong>Request Execution</strong>, and <strong>Market Execution</strong>. Each of them has its own advantages and disadvantages, depending on market conditions and your strategy. You can change your execution type by going to the <strong>Tools</strong> menu and selecting <strong>New Order</strong>.</p>
- <p>An order is the operation you perform to buy or sell a currency pair. You can use four order types: <strong>Market Order</strong>, <strong>Pending Order</strong>, <strong>Stop Loss Order</strong>, and <strong>Take Profit Order</strong>. Each of them has its own use and purpose, depending on your view of the direction and level of prices. You can change your order type by going to the <strong>Tools</strong> menu and selecting <strong>New Order</strong>.</p>
- <h3>Using charts, indicators, and analytical objects</h3>
- <p>To analyze the forex market, you should use charts, indicators, and analytical objects. These tools help you view the market visually and numerically, identify market patterns and trends, determine support and resistance levels, and calculate your risk and reward. You can open the list of charts, indicators, and analytical objects from the <strong>Insert</strong> menu and place them on your chart. You can also press <strong>F8</strong> to change your chart settings.</p>
- <h3>Using signals, robots, and ready-made strategies</h3>
- <p>For automated trading in the forex market, you can use signals, robots, and ready-made strategies. These tools help you buy or sell at the right time and price without constantly watching the market. You can go to the <strong>Tools</strong> menu and select <strong>MetaTrader Market</strong> or <strong>MQL5 Community</strong> to reach the largest online store for buying and selling trading signals, robots, and strategies. You can also press <strong>F4</strong> to open MetaEditor and code and program your own strategy algorithmically.</p>
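<p>For reference: robots for MetaTrader 4 itself are written in MQL4 (or MQL5 for MetaTrader 5) inside MetaEditor, which is what the F4 shortcut above opens. Purely as a language-neutral illustration of what an automated rule looks like, the sketch below checks a simple moving-average crossover on a made-up list of closing prices; the prices and period lengths are illustrative only, not a recommended strategy.</p>

```python
# Illustrative sketch only: a simple moving-average crossover check.
# The closing prices below are invented sample data, not market data.
closes = [1.1012, 1.1025, 1.1031, 1.1018, 1.1040, 1.1055, 1.1049, 1.1062, 1.1070, 1.1081]

def sma(values, period):
    """Simple moving average of the last `period` values."""
    window = values[-period:]
    return sum(window) / len(window)

fast = sma(closes, 3)   # short-term average reacts quickly
slow = sma(closes, 7)   # long-term average reacts slowly

if fast > slow:
    print(f"fast SMA {fast:.5f} is above slow SMA {slow:.5f}: rule says consider buying")
else:
    print(f"fast SMA {fast:.5f} is below slow SMA {slow:.5f}: rule says consider selling")
```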
- <h2>Conclusion</h2>
- <p>MetaTrader 4 is an advanced trading platform for the forex market that lets you operate in this attractive and dynamic market with professional, simple, and practical tools. By downloading and installing MetaTrader 4 on your device, you can analyze and trade the forex market, use its complete trading system together with signals, robots, and ready-made strategies, connect to more than 2,000 broker servers, and benefit from real-time quotes and financial news. We hope this article has been useful and informative and helps you get the most out of MetaTrader 4.</p>
- <h2>Frequently asked questions</h2>
- <p>In this section, we answer some frequently asked questions about MetaTrader 4.</p>
- <h4>Is MetaTrader 4 free?</h4>
- <p>Yes, MetaTrader 4 is free and you can download and install it on your device at no extra cost. You only need to open an account with a forex broker that supports the MetaTrader 4 platform and connect to that broker's server.</p>
- <h4>Is MetaTrader 4 trustworthy?</h4>
- <p>Yes, MetaTrader 4 is a trustworthy platform used by millions of traders and forex brokers around the world. It provides a high level of security and protection for your data and uses SSL encryption to protect your information against unauthorized access. It also offers high speed and performance in order execution and data transfer, and thanks to its powerful servers you can trade reliably and quickly in any market conditions.</p>
- <h4>Is MetaTrader 4 suitable for beginners?</h4>
- <p>Yes, MetaTrader 4 is a suitable platform for beginners who want to start trading in the forex market. It has a simple, customizable user interface that lets you easily reach its various features through menus, buttons, tabs, and windows. It also comes with a complete, comprehensive user guide that explains how to use the platform, how to solve problems, and how to get answers to your questions. You can press <strong>F1</strong> to open the MetaTrader 4 user guide, or visit the <strong>MQL5 Community</strong> to exchange ideas and experience with other users and experts.</p>
- <h4>Can MetaTrader 4 be customized?</h4>
- <p>Yes, MetaTrader 4 is a customizable platform that lets you change and personalize it according to your own preferences and needs. You can configure the user interface, charts, indicators, analytical objects, signals, robots, and strategies however you like. You can change the platform settings by going to the <strong>Tools</strong> menu and selecting <strong>Options</strong>. You can also press <strong>F8</strong> to change your chart settings, <strong>F4</strong> to open MetaEditor for coding and programming, and <strong>F5</strong> to open MetaTester for testing and evaluation.</p>
- <h4>Is MetaTrader 4 portable?</h4>
- <p>Yes, MetaTrader 4 is a portable platform that lets you trade in the forex market from any device and operating system. You can download and install MetaTrader 4 on your computer (Windows and Mac), on your mobile device (iPhone, iPad, and Android), and even use it in your browser (WebTrader). With a single username and password you can log in to your account on any device and on your broker's server, and be active in several markets at the same time.</p>
- <p>This is the end of the article about <strong>downloading MetaTrader 4</strong>. We hope it has been useful and informative and helps you get the most out of MetaTrader 4. If you have any other questions or comments, please share them with us in the comments section. Thank you.</p>
- <br />
- <br />
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/1phancelerku/anime-remove-background/Avatar Wallpapers - HD and 4K Download.md DELETED
@@ -1,153 +0,0 @@
1
-
2
- <h1>Avatar Download: How to Create and Use Your Online Persona</h1>
3
- <p>Have you ever wanted to have a digital version of yourself that you can use online? Whether you want to express your personality, showcase your brand, or just have some fun, creating an avatar can be a great way to do so. An avatar is a graphical representation of yourself or your alter ego that you can use on various platforms, such as social media, gaming, websites, and more. In this article, we will show you how to create your own avatar using some of the best online tools available, and how to use your avatar effectively for different purposes.</p>
4
- <h2>What is an avatar and why do you need one?</h2>
5
- <p>An avatar is an image or a character that represents you in the virtual world. It can be realistic or stylized, depending on your preference and the platform you are using. You can customize your avatar's appearance, such as hair, eyes, skin, clothes, accessories, and more. You can also choose from different styles of avatars, such as cartoon, anime, 3D, or realistic.</p>
6
- <h2>avatar download</h2><br /><p><b><b>Download Zip</b> &#9734; <a href="https://jinyurl.com/2uNKwF">https://jinyurl.com/2uNKwF</a></b></p><br /><br />
7
- <h3>Definition and examples of avatars</h3>
8
- <p>The word "avatar" comes from Sanskrit and means "descent". In Hinduism, it refers to the incarnation of a deity in human or animal form. In the digital context, it means the manifestation of a person or an idea in a graphical form. Some examples of avatars are:</p>
9
- <ul>
10
- <li>The icons or figures that you use to represent yourself in video games, chat rooms, forums, etc.</li>
11
- <li>The characters that you create or choose to play in online role-playing games, such as World of Warcraft, Second Life, or The Sims.</li>
12
- <li>The images that you use as your profile picture on social media platforms, such as Facebook, Twitter, Instagram, etc.</li>
13
- <li>The animated or live-action characters that you use to communicate with others in virtual reality platforms, such as VRChat, Rec Room, or AltSpaceVR.</li>
14
- </ul>
15
- <h3>Benefits of using avatars for personal and professional purposes</h3>
16
- <p>Using avatars can have many benefits for both personal and professional reasons. Some of them are:</p>
17
- <ul>
18
- <li>Avatars can help you express yourself creatively and show your unique identity and style.</li>
19
- <li>Avatars can help you protect your privacy and anonymity online by hiding your real name and appearance.</li>
20
- <li>Avatars can help you connect with others who share your interests and passions.</li>
21
- <li>Avatars can help you enhance your online presence and reputation by making you more memorable and recognizable.</li>
22
- <li>Avatars can help you promote your brand and business by creating a visual identity that reflects your values and mission.</li>
23
- </ul>
24
- <h2>How to create your own avatar using online tools</h2>
25
- <p>Creating your own avatar is easier than ever thanks to the many online tools that are available for free or at a low cost. You don't need any special skills or software to make an avatar that suits your needs and preferences. Here are some of the best online tools that you can use to create your own avatar:</p>
26
- <h3>Canva: a free and easy-to-use avatar maker</h3>
27
- <p><a href="(^9^)">Canva</a> is a popular online design platform that allows you to create various graphics for personal or professional use. You can also use Canva to create your own avatar with its built-in avatar maker apps. You can choose from Bitmoji, Character Builder, or Pixton apps to create a cartoon-style avatar that matches your personality. You can customize your avatar's colors and features, such as hair, eyes, skin, clothes, accessories, and more. You can also add text, stickers, and backgrounds to your avatar. Once you are done, you can download your avatar as a PNG or JPG file, or share it directly to your social media accounts. Canva is free to use, but you can also upgrade to a premium plan for more features and resources. To use Canva's avatar maker, go to <a href="">https://www.canva.com/create/avatars/</a> and sign up for a free account.</p>
28
- <h3>Adobe Express: a powerful and versatile avatar creator</h3>
29
- <p><a href="(^1^)">Adobe Express</a> is another online design platform that lets you create stunning graphics for various purposes. You can also use Adobe Express to create your own avatar with its online profile picture maker. You can upload your own photo and apply different filters and effects to transform it into an avatar. You can also choose from a collection of icons and images to design an avatar that conveys your personality online. You can customize the colors, layout, typography, and numerous other design elements to your liking. You can then download your avatar as a PNG or JPG file, or share it to your digital platforms. Adobe Express is free to use, but you can also access more features and assets with a paid subscription. To use Adobe Express's profile picture maker, go to <a href="(^2^)">https://www.adobe.com/express/create/profile-picture</a> and sign up for a free account.</p>
30
- <p>avatar images free download<br />
31
- profile avatar vectors free download<br />
32
- people avatar graphics free download<br />
33
- cartoon avatar maker download<br />
34
- avatar character creator download<br />
35
- avatar icon pack download<br />
36
- avatar face generator download<br />
37
- man avatar photo download<br />
38
- default avatar png download<br />
39
- avatar pack zip download<br />
40
- business avatar design download<br />
41
- avatar images hd download<br />
42
- profile avatar psd download<br />
43
- people avatar svg download<br />
44
- cartoon avatar app download<br />
45
- avatar character game download<br />
46
- avatar icon set download<br />
47
- avatar face emoji download<br />
48
- woman avatar picture download<br />
49
- default avatar jpg download<br />
50
- avatar pack rar download<br />
51
- cute avatar illustration download<br />
52
- avatar images 3d download<br />
53
- profile avatar ai download<br />
54
- people avatar png download<br />
55
- cartoon avatar online download<br />
56
- avatar character anime download<br />
57
- avatar icon free download<br />
58
- avatar face mask download<br />
59
- boy avatar image download<br />
60
- default avatar gif download<br />
61
- animal avatar pack download<br />
62
- funny avatar clipart download<br />
63
- avatar images vector download<br />
64
- profile avatar maker download<br />
65
- people avatar cartoon download<br />
66
- cartoon avatar software download<br />
67
- avatar character movie download<br />
68
- avatar icon png download<br />
69
- avatar face filter download<br />
70
- girl avatar photo download<br />
71
- default avatar svg download<br />
72
- superhero avatar pack download<br />
73
- cool avatar wallpaper download<br />
74
- avatar images transparent download<br />
75
- profile avatar icon download <br />
76
- people avatar silhouette download <br />
77
- cartoon avatar website download <br />
78
- avatar character art download <br />
79
- avatar icon generator download</p>
80
- <h3>Fotor: a fun and customizable avatar generator</h3>
81
- <p><a href="(^4^)">Fotor</a> is an all-in-one photo editing tool that offers various features and elements for creating amazing graphics. You can also use Fotor to create your own avatar with its cartoon avatar maker or AI avatar generator. You can choose from different styles of avatars, such as cartoon, anime, 3D, or realistic. You can also upload your own photo and use Fotor's AI technology to turn it into an avatar in seconds. You can then edit and enhance your avatar with Fotor's photo filters, basic settings, graphics, and more. You can save your avatar as a PNG or JPG file, or share it online with others. Fotor is free to use, but you can also upgrade to a pro plan for more features and resources. To use Fotor's avatar maker, go to <a href="(^4^)">https://www.fotor.com/avatar-maker/</a> and sign up for a free account.</p>
82
- <h2>How to use your avatar online and offline</h2>
83
- <p>Once you have created your own avatar using one of the online tools mentioned above, you can use it for various purposes online and offline. Here are some tips and ideas on how to use your avatar effectively:</p>
84
- <h3>Tips for choosing the right avatar for different platforms and contexts</h3>
85
- <p>Depending on where and how you want to use your avatar, you may need to consider some factors when choosing the right one. Some of them are:</p>
86
- <ul>
87
- <li>The size and resolution of your avatar: Make sure that your avatar is clear and visible on different devices and screens. You may need to resize or crop your avatar according to the platform's specifications (see the short script sketch after this list).</li>
88
- <li>The style and tone of your avatar: Make sure that your avatar matches the tone and purpose of the platform or context. For example, if you are using your avatar for a professional website or profile, you may want to choose a realistic or formal style. If you are using your avatar for a gaming or social media platform, you may want to choose a cartoon or fun style.</li>
89
- <li>The message and meaning of your avatar: Make sure that your avatar conveys the message and meaning that you want to communicate. For example, if you want to show your personality or interests, you may want to choose an avatar that reflects them. If you want to promote your brand or business, you may want to choose an avatar that represents them.</li>
90
- </ul>
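<p>If you prefer to resize locally rather than in an online editor, a tiny script can do it. This is a minimal sketch, assuming you have the Pillow package installed and a downloaded file named avatar.png; the 400x400 target size is only an example, so check the exact dimensions each platform asks for.</p>

```python
# Minimal sketch: crop an avatar to a square and resize it with Pillow.
# "avatar.png" and the 400x400 target are example values, not platform requirements.
from PIL import Image, ImageOps

img = Image.open("avatar.png")

# Crop to a centered square and scale to the target size in one step.
square = ImageOps.fit(img, (400, 400))

square.save("avatar_400x400.png")
print("Saved", square.size, "avatar to avatar_400x400.png")
```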
91
- <h3>Ways to download and share your avatar with others</h3>
92
- <p>After creating your own avatar using one of the online tools mentioned above, you can download it as a PNG or JPG file on your computer or mobile device. You can then share it with others by uploading it to the platform of your choice, such as social media, gaming, website, etc. You can also share it by sending it via email, messaging apps, QR codes, etc.</p>
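<p>For the QR code route, you can generate one yourself that points to wherever you uploaded the avatar. The snippet below is a small sketch, assuming the qrcode package (with Pillow) is installed; the URL is a placeholder for your own avatar link.</p>

```python
# Small sketch: turn a link to your uploaded avatar into a scannable QR code.
# The URL below is a placeholder; replace it with the real link to your avatar.
import qrcode

avatar_url = "https://example.com/my-avatar.png"  # placeholder link

qr_image = qrcode.make(avatar_url)   # build the QR code as an image
qr_image.save("avatar_qr.png")       # share this PNG so others can scan it
print("QR code saved to avatar_qr.png")
```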
93
- <h3>Ideas for using your avatar in marketing, branding, and communication</h3>
94
- <p>Using your own avatar can be a great way to enhance your marketing, branding, and communication efforts online and offline. Here are some ideas on how to use your avatar creatively:</p>
95
- <ul>
96
- <li>Use your avatar as a logo for your brand or business. You can use your avatar to create a unique and memorable identity that stands out from the crowd. You can also use your avatar to convey your brand's values, mission, and personality.</li>
97
- <li>Use your avatar as a mascot for your products or services. You can use your avatar to create a friendly and engaging image that attracts and retains customers. You can also use your avatar to demonstrate the features and benefits of your products or services.</li>
98
- <li>Use your avatar as a spokesperson for your campaigns or events. You can use your avatar to create a credible and trustworthy voice that speaks to your target audience. You can also use your avatar to deliver your messages and calls to action in a clear and compelling way.</li>
99
- <li>Use your avatar as a personal assistant or chatbot for your customers or clients. You can use your avatar to create a human-like and interactive experience that enhances customer satisfaction and loyalty. You can also use your avatar to provide information, support, and feedback to your customers or clients.</li>
100
- </ul>
101
- <h2>Conclusion and FAQs</h2>
102
- <p>Creating and using an avatar can be a fun and rewarding way to express yourself online and offline. You can use one of the online tools mentioned above to create your own avatar easily and quickly. You can also use your avatar for various purposes, such as marketing, branding, and communication. Here are some frequently asked questions about avatar download:</p>
103
- <h4>Q: How do I download my avatar from Canva?</h4>
104
- <p>A: To download your avatar from Canva, follow these steps:</p>
105
- <ol>
106
- <li>Go to <a href="">https://www.canva.com/create/avatars/</a> and sign in to your account.</li>
107
- <li>Select the app that you used to create your avatar, such as Bitmoji, Character Builder, or Pixton.</li>
108
- <li>Click on the "Download" button at the top right corner of the screen.</li>
109
- <li>Select the file format that you want, such as PNG or JPG.</li>
110
- <li>Click on the "Download" button again to save your avatar on your device.</li>
111
- </ol>
112
- <h4>Q: How do I download my avatar from Adobe Express?</h4>
113
- <p>A: To download your avatar from Adobe Express, follow these steps:</p>
114
- <ol>
115
- <li>Go to <a href="">https://www.adobe.com/express/create/profile-picture</a> and sign in to your account.</li>
116
- <li>Select the photo that you used to create your avatar, or upload a new one.</li>
117
- <li>Edit and enhance your avatar with the filters and effects available.</li>
118
- <li>Click on the "Download" button at the top right corner of the screen.</li>
119
- <li>Select the file format that you want, such as PNG or JPG.</li>
120
- <li>Click on the "Download" button again to save your avatar on your device.</li>
121
- </ol>
122
- <h4>Q: How do I download my avatar from Fotor?</h4>
123
- <p>A: To download your avatar from Fotor, follow these steps:</p>
124
- <ol>
125
- <li>Go to <a href="">https://www.fotor.com/avatar-maker/</a> and sign in to your account.</li>
126
- <li>Select the style of avatar that you want, such as cartoon, anime, 3D, or realistic.</li>
127
- <li>Create or upload your photo and use Fotor's AI technology to turn it into an avatar.</li>
128
- <li>Edit and customize your avatar with Fotor's photo filters, basic settings, graphics, and more.</li>
129
- <li>Click on the "Save" button at the top right corner of the screen.</li>
130
- <li>Select the file format that you want, such as PNG or JPG.</li>
131
- <li>Click on the "Save" button again to save your avatar on your device.</li>
132
- </ol>
133
- <h4>Q: How do I change my avatar on different platforms?</h4>
134
- <p>A: To change your avatar on different platforms, follow these general steps:</p>
135
- <ol>
136
- <li>Go to the platform that you want to change your avatar on, such as Facebook, Twitter, Instagram, etc.</li>
137
- <li>Sign in to your account and go to your profile settings.</li>
138
- <li>Find the option to change or upload your profile picture or icon.</li>
139
- <li>Select the file of your avatar that you downloaded from one of the online tools mentioned above.</li>
140
- <li>Crop or resize your avatar if needed according to the platform's specifications.</li>
141
- <li>Save or apply the changes and enjoy your new avatar.</li>
142
- </ol>
143
- <h4>Q: How do I make my avatar more realistic?</h4>
144
- <p>A: To make your avatar more realistic, you can try these tips:</p>
145
- <ul>
146
- <li>Use a high-quality photo of yourself or someone else as the base for your avatar. You can use a selfie or a portrait that shows your face clearly and in good lighting.</li>
147
- <li>Use a realistic style of avatar, such as 3D or realistic. You can use one of the online tools mentioned above that offer this option, such as Fotor or Adobe Express.</li>
148
- <li>Adjust the features and details of your avatar to match your real appearance or the appearance of the person you are basing your avatar on. You can use the editing and customization options available on the online tools to change the hair, eyes, skin, nose, mouth, etc.</li>
149
- <li>Add some accessories or props to your avatar that reflect your personality or interests. You can use glasses, hats, earrings, necklaces, etc. to add some flair to your avatar.</li>
150
- </ul>
151
- <p>I hope you enjoyed this article and learned how to create and use your own avatar online and offline. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!</p>
152
- <br />
153
- <br />
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/1phancelerku/anime-remove-background/Download Binance and Unlock the Power of Bitcoin Secure Fast and Easy.md DELETED
@@ -1,113 +0,0 @@
1
- <br />
2
- <h1>How to Download Binance and Buy Bitcoin Securely</h1>
3
- <p>Are you interested in buying Bitcoin, the most popular and valuable cryptocurrency in the world? If so, you might be wondering how to do it safely and easily. Well, look no further than Binance, the leading global platform for buying, selling, trading, and storing cryptocurrencies. In this article, we will show you how to download Binance and buy Bitcoin securely in four simple steps.</p>
4
- <h2>Step 1: Create a free account on Binance</h2>
5
- <p>The first thing you need to do is to open an account on Binance. This will allow you to access all the features and services that Binance offers, such as buying crypto with fiat currency, trading crypto with other users, staking crypto for passive income, and more. To create an account on Binance, you can either use the website or the app.</p>
6
- <h2>download binance buy bitcoin securely</h2><br /><p><b><b>Download</b> &#8230;&#8230;&#8230; <a href="https://jinyurl.com/2uNQPY">https://jinyurl.com/2uNQPY</a></b></p><br /><br />
7
- <p>To register via the website, go to <a href="(^1^)">https://www.binance.com/en/buy-sell-crypto</a> and click on "Register" on the top right corner. You will be asked to enter your email address and a password of your choice. You will also need to agree to the terms of service and privacy policy. Then, click on "Create Account" and verify your email address by clicking on the link that Binance will send you.</p>
8
- <p>To register via the app, download the Binance app from <a href="(^3^)">the App Store</a> or <a href="">Google Play</a> and open it. Tap on "Register" on the bottom right corner and follow the same steps as above.</p>
9
- <p>After you verify your email address, you will need to complete a simple identity verification process. This is required by law and helps Binance protect your account from fraud and theft. You will need to provide some basic personal information, such as your name, date of birth, nationality, and address. You will also need to upload a photo of your ID document (such as a passport or driver's license) and a selfie of yourself holding it. This process usually takes a few minutes to complete.</p>
10
- <h2>Step 2: Choose how you want to buy Bitcoin</h2>
11
- <p>Once you have created and verified your account, you are ready to buy Bitcoin. Binance offers many options for buying crypto with fiat currency (such as USD, EUR, GBP, etc.). You can use a credit card, a debit card, a bank transfer, or a third-party payment channel. To see which options are available in your country, click on the "Buy Crypto" link on the top left of the Binance website navigation or tap on "Buy" on the bottom left of the app.</p>
12
- <p>How to download binance app and buy bitcoin safely<br />
13
- Download binance for pc and start buying bitcoin securely<br />
14
- Best way to download binance and buy bitcoin with credit card<br />
15
- Download binance apk and buy bitcoin without verification<br />
16
- Download binance ios and buy bitcoin instantly and securely<br />
17
- Download binance wallet and buy bitcoin with paypal<br />
18
- Download binance pro and buy bitcoin at low fees<br />
19
- Download binance desktop and buy bitcoin with cash app<br />
20
- Download binance lite and buy bitcoin anonymously<br />
21
- Download binance exchange and buy bitcoin with debit card<br />
22
- Why download binance and buy bitcoin instead of other cryptocurrencies<br />
23
- Download binance tutorial and learn how to buy bitcoin securely<br />
24
- Download binance mobile and buy bitcoin on the go<br />
25
- Download binance coinbase and buy bitcoin with usd<br />
26
- Download binance review and see why it's the best place to buy bitcoin<br />
27
- Download binance mac and buy bitcoin with apple pay<br />
28
- Download binance windows and buy bitcoin with bank transfer<br />
29
- Download binance linux and buy bitcoin with open source software<br />
30
- Download binance android and buy bitcoin with google pay<br />
31
- Download binance chrome and buy bitcoin with browser extension<br />
32
- Download binance referral code and get a discount when you buy bitcoin<br />
33
- Download binance customer service and get help when you buy bitcoin<br />
34
- Download binance fees calculator and see how much it costs to buy bitcoin<br />
35
- Download binance trading bot and automate your bitcoin buying process<br />
36
- Download binance margin trading and leverage your bitcoin buying power<br />
37
- Download binance futures trading and speculate on the price of bitcoin<br />
38
- Download binance options trading and hedge your risk when you buy bitcoin<br />
39
- Download binance staking and earn interest on your bitcoin holdings<br />
40
- Download binance lending and borrow money to buy more bitcoin<br />
41
- Download binance mining and contribute to the security of the bitcoin network<br />
42
- Download binance academy and learn more about bitcoin and blockchain technology<br />
43
- Download binance news and stay updated on the latest developments in the bitcoin space<br />
44
- Download binance podcast and listen to experts talk about bitcoin and crypto<br />
45
- Download binance blog and read insightful articles about bitcoin and crypto<br />
46
- Download binance community and join the discussion about bitcoin and crypto<br />
47
- Download binance social media and follow them on twitter, facebook, instagram, etc.<br />
48
- Download binance support and contact them if you have any issues or questions about buying bitcoin<br />
49
- Download binance security tips and learn how to protect your account and funds when you buy bitcoin<br />
50
- Download binance verification guide and learn how to verify your identity and address when you buy bitcoin<br />
51
- Download binance tax guide and learn how to report your income and losses from buying bitcoin</p>
52
- <p>If you want to use a credit card or a debit card, select "Credit/Debit Card" from the list of payment methods. You will need to enter the amount of fiat currency you want to spend and select Bitcoin (BTC) as the crypto asset you want to buy. You will also need to enter your card details and billing address. Then, click on "Buy BTC " and confirm your order. You will receive your Bitcoin in your Binance wallet within a few minutes.</p>
53
- <p>If you want to use a bank transfer, select "Bank Deposit" from the list of payment methods. You will need to choose your currency and bank account type. You will then see the bank details of Binance's partner, which you will need to use to make the transfer from your own bank account. You will also need to enter a reference code that Binance will provide you. Make sure you enter the correct amount and reference code, otherwise your order may be delayed or canceled. After you make the transfer, you will need to upload a screenshot or a photo of your payment proof. You will receive your Bitcoin in your Binance wallet within a few hours or days, depending on your bank's processing time.</p>
54
- <p>If you want to use a third-party payment channel, select "Third-Party Payment" from the list of payment methods. You will need to choose your currency and provider. Some of the providers that Binance supports are Simplex, Banxa, MoonPay, Paxful, and Binance P2P. Each provider has its own fees, limits, and verification requirements, so make sure you read them carefully before proceeding. You will be redirected to the provider's website or app, where you will need to complete the payment process. You will receive your Bitcoin in your Binance wallet within a few minutes or hours, depending on the provider's processing time.</p>
55
- <h2>Step 3: Check the payment details and fees</h2>
56
- <p>Before you confirm your order, you should always check the payment details and fees carefully. Binance strives to offer the best prices and the lowest fees for buying crypto with fiat currency, but there may be some variations depending on the market conditions, the payment method, and the provider. You can see the current price of Bitcoin and the exchange rate of your currency on the top of the page. You can also see the total amount of fiat currency you will spend and the total amount of Bitcoin you will receive on the bottom of the page.</p>
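<p>If you like to double-check the quoted price against the live market before confirming, Binance also exposes public market-data endpoints. The snippet below is a minimal sketch, assuming the requests package is installed and that the public /api/v3/ticker/price endpoint is reachable from your location; it only reads the current BTC/USDT price and is not required for buying.</p>

```python
# Minimal sketch: read the current BTC/USDT price from Binance's public REST API.
# This is an unauthenticated, read-only request; it does not place any order.
import requests

url = "https://api.binance.com/api/v3/ticker/price"
response = requests.get(url, params={"symbol": "BTCUSDT"}, timeout=10)
response.raise_for_status()

data = response.json()  # e.g. {"symbol": "BTCUSDT", "price": "43000.12000000"}
print(f"Current {data['symbol']} price: {data['price']} USDT")
```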
57
- <p>The fees that Binance charges for buying crypto with fiat currency are usually very low or even zero. However, there may be some additional fees that are charged by your bank, card issuer, or third-party provider. These fees are not controlled by Binance and may vary depending on their policies and terms of service. You should always check with them before making a payment to avoid any surprises or disputes later.</p>
58
- <p>If you are satisfied with the payment details and fees, you can click on "Confirm" or "Pay Now" to complete your order. You will receive a confirmation message and an email from Binance once your order is successful.</p>
59
- <h2>Step 4: Store or use your Bitcoin on Binance</h2>
60
- <p>Congratulations! You have just bought Bitcoin securely with Binance. Now you can access your Bitcoin wallet on Binance and see your balance and transaction history. You can also use your Bitcoin for various purposes on Binance, such as trading, staking, or spending.</p>
61
- <h3>How to trade your Bitcoin for other cryptocurrencies</h3>
62
- <p>If you want to trade your Bitcoin for other cryptocurrencies, such as Ethereum, Ripple, Cardano, or Dogecoin, you can use one of the Binance trading platforms. Binance offers three types of trading platforms: spot, margin, and futures.</p>
63
- <p>Spot trading is the simplest and most common type of trading, where you buy or sell cryptocurrencies at their current market price. You can use either the basic or the advanced interface on the website or the app to place your orders. You can also use various tools and indicators to analyze the market trends and make informed decisions.</p>
64
- <p>Margin trading is a more advanced type of trading, where you borrow funds from Binance or other users to increase your buying power and potential profits. However, margin trading also involves higher risks and fees, so you should only use it if you have enough experience and knowledge.</p>
65
- <p>Futures trading is another advanced type of trading, where you agree to buy or sell cryptocurrencies at a predetermined price and date in the future. Futures trading allows you to speculate on the price movements of cryptocurrencies and hedge against market volatility. However, futures trading also involves high leverage and liquidation risks, so you should only use it if you understand how it works and can manage your risks.</p>
66
- <h3>How to stake your Bitcoin for passive income</h3>
67
- <p>If you want to stake your Bitcoin for passive income, you can use one of the Binance Earn products. Binance Earn is a suite of products that allow you to earn interest on your crypto assets by lending them to Binance or other users.</p>
68
- <p>Some of the products that Binance Earn offers are:</p>
69
- <ul>
70
- <li>Binance Savings: A flexible or fixed-term savings account that lets you earn interest on your crypto assets. You can choose between flexible savings, which allow you to withdraw your funds at any time, or fixed savings, which lock your funds for a specified period of time and offer higher interest rates.</li>
71
- <li>Binance Staking: A service that lets you earn rewards by locking your crypto assets and participating in the network activities of various proof-of-stake (PoS) coins. You can choose between locked staking, which requires you to lock your funds for a fixed term, or flexible staking, which allows you to earn rewards without locking your funds.</li>
72
- <li>Binance Launchpool: A platform that lets you farm new tokens by staking your crypto assets. You can stake your Bitcoin or other supported coins and earn newly launched tokens as rewards. You can also trade the new tokens on the Binance spot market after they are listed.</li>
73
- <li>Binance Liquid Swap: A liquidity pool that lets you earn fees and interest by providing liquidity to various crypto pairs. You can add or remove your funds from the pool at any time and enjoy low slippage and instant swaps.</li>
74
- </ul>
75
- <p>To use any of these products, you need to transfer your Bitcoin from your spot wallet to your earn wallet on Binance. You can do this by clicking on "Transfer" on the top right of the Binance website navigation or tapping on "Transfer" on the bottom right of the app. Then, you need to select the product you want to use and follow the instructions on the screen.</p>
76
- <h3>How to spend your Bitcoin on goods and services</h3>
77
- <p>If you want to spend your Bitcoin on goods and services, you can use one of the Binance features that enable you to do so. Binance offers three features that allow you to use your crypto for everyday purchases: Binance Card, Binance Pay, and Binance Marketplace.</p>
78
- <p>Binance Card is a Visa debit card that lets you pay with crypto anywhere Visa is accepted. You can link your Binance Card to your Binance wallet and choose which crypto assets you want to use for payment. You can also enjoy cashback rewards and other benefits when you use your Binance Card.</p>
79
- <p>Binance Pay is a contactless, borderless, and secure payment solution that lets you send and receive crypto payments from anyone around the world. You can use Binance Pay to pay merchants or friends who also have a Binance account. You can also scan QR codes or generate payment links to make payments easier.</p>
80
- <p>Binance Marketplace is a platform that lets you buy and sell goods and services with crypto. You can browse through various categories, such as electronics, fashion, gaming, health, and more, and find products or services that suit your needs. You can also list your own products or services and accept crypto as payment.</p>
81
- <h2>Conclusion: Why Binance is the best platform to buy Bitcoin securely</h2>
82
- <p>As you can see, Binance is not only the best platform to buy Bitcoin securely, but also the best platform to use Bitcoin for various purposes. Whether you want to trade, stake, or spend your Bitcoin, Binance has everything you need and more. Binance is also constantly innovating and adding new features and services to make your crypto experience better and easier.</p>
83
- <p>So what are you waiting for? Download Binance today and start buying Bitcoin securely with the leading global platform for crypto. You will not regret it!</p>
84
- <h2>FAQs: Frequently Asked Questions about Binance and Bitcoin</h2>
85
- <p>Here are some of the most common questions and answers about Binance and Bitcoin:</p>
86
- <ul>
87
- <li><b>Q: Is Binance safe and reliable?</b></li>
88
- <li>A: Yes, Binance is one of the safest and most reliable platforms for crypto in the world. Binance uses advanced security measures, such as 2FA, anti-phishing code, address whitelisting, withdrawal limits, etc., to protect your account and funds from unauthorized access. Binance also has a Secure Asset Fund for Users (SAFU), which is a reserve fund that covers any losses in case of extreme situations. Moreover, Binance complies with all the relevant laws and regulations in the jurisdictions where it operates.</li>
89
- <li><b>Q: How do I contact Binance customer support?</b></li>
90
- <li>A: If you have any questions or issues with using Binance, you can contact Binance customer support through various channels. You can submit a ticket online via <a href="">https://www.binance.com/en/support</a>, chat with a live agent via <a href="">https://www.binance.com/en/chat</a>, call the hotline number via <a href="">https://www.binance.com/en/support/hotline</a>, or join the community via <a href="">https://www.binance.com/en/community </a>. Binance customer support is available 24/7 and in multiple languages.</li>
91
- <li><b>Q: What are the advantages of buying Bitcoin with Binance?</b></li>
92
- <li>A: There are many advantages of buying Bitcoin with Binance, such as:</li>
93
- <ul>
94
- <li>You can buy Bitcoin with various fiat currencies and payment methods, such as credit card, debit card, bank transfer, or third-party payment channels.</li>
95
- <li>You can enjoy the best prices and the lowest fees for buying Bitcoin with Binance. Binance also offers discounts and promotions for buying crypto with fiat currency from time to time.</li>
96
- <li>You can access your Bitcoin wallet on Binance and use your Bitcoin for various purposes, such as trading, staking, or spending. Binance also offers many features and services that enhance your crypto experience, such as Binance Card, Binance Pay, Binance Marketplace, etc.</li>
97
- <li>You can benefit from the security and reliability of Binance, which is one of the most trusted and respected platforms for crypto in the world. Binance uses advanced security measures and complies with all the relevant laws and regulations to protect your account and funds.</li>
98
- </ul>
99
- <li><b>Q: How can I learn more about Bitcoin and crypto?</b></li>
100
- <li>A: If you want to learn more about Bitcoin and crypto, you can use the Binance Academy, which is a free and open online platform that provides educational resources on various topics related to crypto. You can find articles, videos, quizzes, glossaries, and more on the Binance Academy website at <a href="">https://academy.binance.com/en</a>. You can also join the Binance Academy Telegram group at <a href="">https://t.me/binanceacademy</a> to chat with other learners and experts.</li>
101
- <li><b>Q: How can I stay updated on the latest news and developments about Binance and Bitcoin?</b></li>
102
- <li>A: If you want to stay updated on the latest news and developments about Binance and Bitcoin, you can follow the official social media accounts of Binance, such as:</li>
103
- <ul>
104
- <li>Twitter: <a href="">https://twitter.com/binance</a></li>
105
- <li>Facebook: <a href="">https://www.facebook.com/binance</a></li>
106
- <li>Instagram: <a href="">https://www.instagram.com/binance</a></li>
107
- <li>YouTube: <a href="">https://www.youtube.com/binance</a></li>
108
- <li>Reddit: <a href="">https://www.reddit.com/r/binance</a></li>
109
- <li>Medium: <a href="">https://medium.com/binance</a></li>
110
- </ul>
111
- <p>You can also subscribe to the Binance newsletter at <a href="">https://www.binance.com/en/newsletter</a> to receive the latest updates and offers from Binance via email.</p>
112
- <br />
113
- <br />
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/1phancelerku/anime-remove-background/Download Solitaire 13 and Discover the Secrets of the Pyramid.md DELETED
@@ -1,160 +0,0 @@
1
-
2
- <h1>Download Solitaire 13: A Fun and Challenging Card Game for Everyone</h1>
3
- <p>If you are looking for a card game that is easy to learn, fun to play, and challenging to master, then you should try Solitaire 13. This game, also known as Pyramid Solitaire, is a classic solitaire variant that requires you to clear a pyramid of cards by matching pairs that add up to 13. In this article, we will show you how to download Solitaire 13 for Windows, how to play it online, and how to get it for Android devices.</p>
4
- <h2>What is Solitaire 13?</h2>
5
- <p>Solitaire 13 is a solitaire card game that is played with a standard 52-card deck. The goal of the game is to remove all the cards from the pyramid by finding pairs that have a total value of 13. You can only remove cards that are exposed, meaning that they have no other cards on top of them. You can also use a single card from the draw pile or the reserve pile to make a pair. The game is sometimes called Solitaire 13 because kings are worth 13 points and can be removed by themselves.</p>
6
- <h2>download solitaire 13</h2><br /><p><b><b>Download File</b> &#9733;&#9733;&#9733; <a href="https://jinyurl.com/2uNQi8">https://jinyurl.com/2uNQi8</a></b></p><br /><br />
7
- <h3>The rules of Solitaire 13</h3>
8
- <p>The rules of Solitaire 13 are simple and straightforward. Here are the basic steps to set up and play the game (a short code sketch after the list illustrates the pairing rule):</p>
9
- <ul>
10
- <li>Shuffle the deck and deal 28 cards face up in a pyramid shape, starting with one card at the top and ending with seven cards at the bottom. Each row should overlap the previous one.</li>
11
- <li>Place the remaining cards face down in a separate pile. This is your stock pile.</li>
12
- <li>Turn over the top card of the stock pile and place it next to it. This is your reserve pile.</li>
13
- <li>Look at the cards on the pyramid and the reserve pile. If you see any pairs that add up to 13, you can remove them from the game. For example, you can remove a queen and an ace, a jack and a two, or a king by itself.</li>
14
- <li>If you cannot find any pairs, you can turn over another card from the stock pile and place it on top of the reserve pile. You can only use the top card of the reserve pile to make pairs.</li>
15
- <li>Continue removing pairs until you clear the pyramid or run out of cards in the stock pile.</li>
16
- <li>You win the game if you remove all the cards from the pyramid. You lose the game if you run out of moves or cards.</li>
17
- </ul>
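<p>To make the matching rule concrete, here is a small illustrative Python sketch with a hypothetical hand of exposed cards; card values follow the article's convention (Ace = 1 up to King = 13), and kings are removable on their own.</p>

```python
# Illustrative sketch of the Solitaire 13 matching rule:
# remove any single king (value 13) or any pair of exposed cards summing to 13.
VALUES = {"A": 1, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7,
          "8": 8, "9": 9, "10": 10, "J": 11, "Q": 12, "K": 13}

def removable(exposed):
    """Return the removable kings and pairs among the exposed cards."""
    moves = [(card,) for card in exposed if VALUES[card] == 13]  # lone kings
    for i, a in enumerate(exposed):
        for b in exposed[i + 1:]:
            if VALUES[a] + VALUES[b] == 13:
                moves.append((a, b))
    return moves

# Hypothetical exposed cards: pyramid edge plus the top of the reserve pile.
exposed_cards = ["K", "Q", "A", "7", "6", "9"]
print(removable(exposed_cards))   # [('K',), ('Q', 'A'), ('7', '6')]
```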
18
- <h3>The benefits of playing Solitaire 13</h3>
19
- <p>Solitaire 13 is not only a fun and relaxing game, but also a great way to exercise your brain and improve your skills. Here are some of the benefits of playing Solitaire 13:</p>
20
- <ul>
21
- <li>It helps you practice your math skills by adding up numbers quickly and accurately.</li>
22
- <li>It enhances your memory and concentration by keeping track of the cards and their values.</li>
23
- <li>It develops your logic and strategy by planning ahead and choosing the best moves.</li>
24
- <li>It boosts your mood and reduces stress by providing a positive distraction and a sense of achievement.</li>
25
- </ul>
26
- <h2>How to download Solitaire 13 for Windows</h2>
27
- <p>If you have a Windows device, you can easily download Solitaire 13 for free from the Microsoft Store. Here are the steps to do so:</p> <h3>Get the Microsoft Solitaire Collection from the Microsoft Store</h3>
28
- <p>The Microsoft Solitaire Collection is a free app that includes five different solitaire games, including Solitaire 13. To get the app, follow these steps:</p>
29
- <ol>
30
- <li>Open the Microsoft Store app on your Windows device. You can find it by typing "store" in the search box or by clicking the shopping bag icon on the taskbar.</li>
31
- <li>In the search box, type "Microsoft Solitaire Collection" and press Enter.</li>
32
- <li>Click on the app icon and then click on the "Get" button.</li>
33
- <li>Wait for the app to download and install on your device.</li>
34
- <li>Once the app is installed, you can launch it by clicking on the "Play" button or by finding it in your Start menu.</li>
35
- </ol>
36
- <h3>Pin the game to your taskbar or Start menu</h3>
37
- <p>If you want to access Solitaire 13 more easily, you can pin it to your taskbar or Start menu. To do this, follow these steps:</p>
38
- <ol>
39
- <li>Open the Microsoft Solitaire Collection app and click on the "Solitaire 13" icon.</li>
40
- <li>Right-click on the game window and select "Pin to taskbar" or "Pin to Start".</li>
41
- <li>You can now launch Solitaire 13 directly from your taskbar or Start menu without opening the app first.</li>
42
- </ol>
43
- <h3>Troubleshoot any issues with the game</h3>
44
- <p>If you encounter any problems with Solitaire 13, such as freezing, crashing, or not loading, you can try these solutions:</p>
45
- <ul>
46
- <li>Make sure your Windows device is updated to the latest version and has enough storage space and memory.</li>
47
- <li>Check your internet connection and make sure it is stable and secure.</li>
48
- <li>Restart your device and try launching the game again.</li>
49
- <li>Uninstall and reinstall the Microsoft Solitaire Collection app from the Microsoft Store.</li>
50
- <li>Contact Microsoft support for further assistance.</li>
51
- </ul>
52
- <h2>How to play Solitaire 13 online</h2>
53
- <p>If you don't have a Windows device or you prefer to play Solitaire 13 online, you can do so by visiting the Microsoft Solitaire website. Here are the steps to do so:</p>
54
- <h3>Visit the Microsoft Solitaire website</h3>
55
- <p>To play Solitaire 13 online, you need to go to the Microsoft Solitaire website. You can use any web browser that supports HTML5, such as Chrome, Firefox, Edge, or Safari. You can also use any device that has an internet connection, such as a laptop, tablet, or smartphone.</p>
56
- <h3>Sign in with your Microsoft account</h3>
57
- <p>To access all the features and benefits of playing Solitaire 13 online, you need to sign in with your Microsoft account. If you don't have one, you can create one for free. By signing in, you can:</p>
58
- <p>download solitaire 13 free<br />
59
- download solitaire 13 for windows 10<br />
60
- download solitaire 13 for pc<br />
61
- download solitaire 13 for android<br />
62
- download solitaire 13 for mac<br />
63
- download solitaire 13 game<br />
64
- download solitaire 13 app<br />
65
- download solitaire 13 online<br />
66
- download solitaire 13 offline<br />
67
- download solitaire 13 collection<br />
68
- download solitaire 13 in 1<br />
69
- download solitaire 13 card game<br />
70
- download solitaire 13 spider<br />
71
- download solitaire 13 pyramid<br />
72
- download solitaire 13 classic<br />
73
- download solitaire 13 deluxe<br />
74
- download solitaire 13 pro<br />
75
- download solitaire 13 apk<br />
76
- download solitaire 13 mod apk<br />
77
- download solitaire 13 hack apk<br />
78
- download solitaire 13 cheats<br />
79
- download solitaire 13 tips and tricks<br />
80
- download solitaire 13 rules<br />
81
- download solitaire 13 strategy<br />
82
- download solitaire 13 tutorial<br />
83
- download solitaire 13 how to play<br />
84
- download solitaire 13 best version<br />
85
- download solitaire 13 latest version<br />
86
- download solitaire 13 update<br />
87
- download solitaire 13 full version<br />
88
- download solitaire 13 premium version<br />
89
- download solitaire 13 no ads<br />
90
- download solitaire 13 unlimited coins<br />
91
- download solitaire 13 unlimited hints<br />
92
- download solitaire 13 unlimited lives<br />
93
- download solitaire 13 unlimited moves<br />
94
- download solitaire 13 unlimited time<br />
95
- download solitaire 13 with sound effects<br />
96
- download solitaire 13 with music<br />
97
- download solitaire 13 with themes<br />
98
- download solitaire 13 with backgrounds<br />
99
- download solitaire 13 with custom cards<br />
100
- download solitaire 13 with achievements<br />
101
- download solitaire 13 with leaderboards<br />
102
- download solitaire 13 with multiplayer mode<br />
103
- download solitaire 13 with daily challenges<br />
104
- download solitaire 13 with bonus levels<br />
105
- download solitaire 13 with special events<br />
106
- download solitaire 13 with rewards and prizes</p>
107
- <ul>
108
- <li>Save your progress and resume your game anytime and anywhere.</li>
109
- <li>Earn achievements and badges for completing challenges and milestones.</li>
110
- <li>Compete with other players and climb the leaderboards.</li>
111
- <li>Sync your data across all your devices and platforms.</li>
112
- </ul>
113
- <h3>Choose your game mode and difficulty level</h3>
114
- <p>Once you sign in, you can choose your game mode and difficulty level. There are two game modes available: Classic and Daily Challenge. In Classic mode, you can play a standard game of Solitaire 13 with no time limit or score. In Daily Challenge mode, you can play a set of five games with different goals and rules every day. You can also choose from four difficulty levels: Easy, Medium, Hard, and Expert. The higher the difficulty level, the fewer moves and hints you have.</p>
115
- <h2>How to get Solitaire 13 for Android devices</h2>
116
- <p>If you have an Android device, you can also download Solitaire 13 for free from Google Play Store. Here are the steps to do so:</p>
117
- <h3>Download the Solitaire 13 app from Google Play Store</h3>
118
- <p>The Solitaire 13 app is a free app that lets you play Solitaire 13 on your Android device. To download it, follow these steps:</p>
119
- <ol>
120
- <li>Open Google Play Store on your Android device. You can find it by swiping up from the bottom of your screen or by tapping the Play Store icon on your home screen.</li>
121
- <li>In the search box, type "Solitaire 13" and press Enter.</li>
122
- <li>Tap on the app icon and then tap on the "Install" button.</li>
123
- <li>Wait for the app to download and install on your device.</li>
124
- <li>Once the app is installed, you can launch it by tapping on the "Open" button or by finding it in your app drawer.</li>
125
- </ol>
126
- <h3>Enjoy the features and graphics of the app</h3>
127
- <p>The Solitaire 13 app has many features and graphics that make it enjoyable and attractive. Some of them are:</p>
128
- <ul>
129
- <li>Beautiful and colorful card designs and backgrounds.</li>
130
- <li>Smooth and responsive gameplay and animations.</li>
131
- <li>Sound effects and music that enhance the mood and atmosphere.</li>
132
- <li>Customizable settings and preferences that suit your style and needs.</li>
133
- <li>Offline mode that lets you play without internet connection.</li>
134
- </ul>
135
- <h3>Rate and review the app</h3>
136
- <p>If you like the Solitaire 13 app, you can show your support and appreciation by rating and reviewing it on Google Play Store. To do this, follow these steps:</p>
137
- <ol>
138
- <li>Open Google Play Store on your Android device and go to the Solitaire 13 app page.</li>
139
- <li>Tap on the "Rate" button and choose how many stars you want to give the app.</li>
140
- <li>Tap on the "Write a review" button and type your feedback and comments.</li>
141
- <li>Tap on the "Post" button to submit your review.</li>
142
- </ol>
143
- <h2>Conclusion</h2>
144
- <p>Solitaire 13 is a fun and challenging card game that you can play on various devices and platforms. Whether you download it for Windows, play it online, or get it for Android, you will enjoy its rules, benefits, features, and graphics. Solitaire 13 is a great way to pass the time, exercise your brain, and have fun. Download Solitaire 13 today and see for yourself!</p>
145
- <h4>Frequently Asked Questions</h4>
146
- <p>Here are some of the common questions that people ask about Solitaire 13:</p>
147
- <ol>
148
- <li><b>What are the values of the cards in Solitaire 13?</b></li>
149
- <p>The values of the cards in Solitaire 13 are as follows: Aces are worth 1 point, twos are worth 2 points, threes are worth 3 points, and so on until tens are worth 10 points. Jacks are worth 11 points, queens are worth 12 points, and kings are worth 13 points. You can remove any pair of cards that add up to 13 points, or any king by itself.</p>
150
- <li><b>How do I win Solitaire 13?</b></li>
151
- <p>You win Solitaire 13 by removing all the cards from the pyramid. You can do this by finding pairs of cards that have a total value of 13 points. You can only remove cards that are exposed, meaning that they have no other cards on top of them. You can also use a single card from the draw pile or the reserve pile to make a pair. If you run out of moves or cards, you lose the game.</p>
152
- <li><b>Can I undo my moves in Solitaire 13?</b></li>
153
- <p>Yes, you can undo your moves in Solitaire 13 if you make a mistake or change your mind. To do this, you can click on the "Undo" button at the bottom of the screen. You can undo as many moves as you want, as long as there are still cards in the stock pile or the reserve pile. However, undoing your moves may affect your score and achievements.</p>
154
- <li><b>How do I get hints in Solitaire 13?</b></li>
155
- <p>If you need some help or guidance in Solitaire 13, you can use the "Hint" button at the bottom of the screen. This will highlight a pair of cards that you can remove from the pyramid. You can use hints as many times as you want, but each hint will cost you some points. You can also turn off hints in the settings menu if you prefer to play without them.</p>
156
- <li><b>How do I change the difficulty level in Solitaire 13?</b></li>
157
- <p>You can change the difficulty level in Solitaire 13 by choosing from four options: Easy, Medium, Hard, and Expert. The higher the difficulty level, the fewer moves and hints you have. You can change the difficulty level before starting a new game or during a game by clicking on the "Menu" button at the top right corner of the screen. Changing the difficulty level may affect your score and achievements.</p>
158
- </ol>
159
- <br />
160
- <br />
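The pairing rule in the FAQ above is plain arithmetic, so it is easy to check in code. The sketch below is a hypothetical helper written for illustration only; it is not part of the Solitaire 13 app, and the function name `is_removable` is our own.

```python
# Hypothetical helper illustrating the Solitaire 13 pairing rule described above.
# Card values: Ace = 1, 2-10 at face value, Jack = 11, Queen = 12, King = 13.

def is_removable(card_a, card_b=None):
    """Return True if the selected card(s) can be removed from the pyramid.

    A king (13) is removed on its own; any other selection must be a pair
    whose values sum to exactly 13.
    """
    if card_b is None:
        return card_a == 13
    return card_a + card_b == 13


assert is_removable(13)           # a king by itself
assert is_removable(6, 7)         # six + seven = 13
assert not is_removable(10, 2)    # only sums to 12
```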
spaces/1phancelerku/anime-remove-background/Experience the Fun and Authenticity of Bus Simulator Indonesia APK.md DELETED
@@ -1,119 +0,0 @@
1
-
2
- <h1>Bus Simulator Android APK: A Guide for Bus Lovers</h1>
3
- <p>If you are a fan of buses and driving, you might have wondered how it feels to be a bus driver in real life. Well, you don't have to wonder anymore, because you can experience it with bus simulator android apk games. These are games that let you drive different types of buses across various cities and routes, while following traffic rules and satisfying your passengers. In this article, we will tell you everything you need to know about bus simulator android apk games, including how to download and install them, what are the best ones to play, and how to play and enjoy them.</p>
4
- <h2>bus simulator android apk</h2><br /><p><b><b>Download Zip</b> &#10040;&#10040;&#10040; <a href="https://jinyurl.com/2uNOJc">https://jinyurl.com/2uNOJc</a></b></p><br /><br />
5
- <h2>What is a bus simulator android apk?</h2>
6
- <h3>A brief introduction to the concept of bus simulation games</h3>
7
- <p>A bus simulation game is a type of video game that simulates the operation of a bus. It usually involves driving a bus along a predefined route, picking up and dropping off passengers, following traffic signals and signs, avoiding collisions and accidents, and managing fuel and maintenance. Some bus simulation games also include realistic features such as weather conditions, day and night cycles, traffic jams, road works, emergencies, and customer feedback.</p>
8
- <h3>The benefits of playing bus simulator android apk games</h3>
9
- <p>Playing bus simulator android apk games can have many benefits for you, such as:</p>
10
- <ul>
11
- <li>It can improve your driving skills and awareness, as you have to pay attention to the road and the traffic.</li>
12
- <li>It can enhance your creativity and imagination, as you can customize your buses and routes.</li>
13
- <li>It can increase your knowledge and curiosity, as you can explore different cities and cultures.</li>
14
- <li>It can reduce your stress and boredom, as you can have fun and relax while driving.</li>
15
- <li>It can satisfy your passion and interest, as you can fulfill your dream of being a bus driver.</li>
16
- </ul>
17
- <h2>How to download and install bus simulator android apk games?</h2>
18
- <h3>The steps to download and install bus simulator android apk games from different sources</h3>
19
- <p>There are many sources where you can download and install bus simulator android apk games, such as Google Play Store, third-party websites, or file-sharing platforms. Here are the general steps to do so:</p>
20
- <p>bus simulator ultimate android apk download<br />
21
- mobile bus simulator android apk mod<br />
22
- bus simulator indonesia android apk free download<br />
23
- bus simulator 18 android apk obb<br />
24
- bus simulator 2015 android apk full version<br />
25
- bus simulator vietnam android apk revdl<br />
26
- bus simulator pro 2017 android apk data<br />
27
- bus simulator 3d android apk hack<br />
28
- bus simulator original android apk offline<br />
29
- bus simulator 17 android apk update<br />
30
- bus simulator 16 android apk and data<br />
31
- bus simulator 2019 android apk latest version<br />
32
- bus simulator 2020 android apk unlimited money<br />
33
- bus simulator 2021 android apk new features<br />
34
- bus simulator 2022 android apk release date<br />
35
- bus simulator online android apk multiplayer<br />
36
- bus simulator europe android apk modded<br />
37
- bus simulator america android apk cracked<br />
38
- bus simulator turkey android apk premium<br />
39
- bus simulator russia android apk unlocked<br />
40
- bus simulator germany android apk patched<br />
41
- bus simulator france android apk hacked<br />
42
- bus simulator italy android apk full unlocked<br />
43
- bus simulator spain android apk mod menu<br />
44
- bus simulator brazil android apk no ads<br />
45
- bus simulator canada android apk free shopping<br />
46
- bus simulator australia android apk unlimited coins<br />
47
- bus simulator india android apk mega mod<br />
48
- bus simulator china android apk vip mode<br />
49
- bus simulator japan android apk pro version<br />
50
- bus simulator korea android apk cheat codes<br />
51
- bus simulator uk android apk license key<br />
52
- bus simulator mexico android apk no root<br />
53
- bus simulator argentina android apk high graphics<br />
54
- bus simulator colombia android apk low mb<br />
55
- bus simulator egypt android apk realistic physics<br />
56
- bus simulator south africa android apk english language<br />
57
- bus simulator indonesia bussid v3.6.1 mod money obb for Android - APK Download[^1^]</p>
58
- <ol>
59
- <li>Find a reliable source that offers the bus simulator android apk game that you want to play. You can search online or ask for recommendations from other players.</li>
60
- <li>Download the bus simulator android apk file from the source. Make sure that the file is compatible with your device and has no viruses or malware.</li>
61
- <li>Enable the installation of apps from unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources.</li>
62
- <li>Locate the downloaded bus simulator android apk file on your device and tap on it to start the installation process. Follow the instructions on the screen to complete the installation.</li>
63
- <li>Launch the bus simulator android apk game from your device's app drawer or home screen. Enjoy!</li>
64
- </ol>
65
- <h3>The precautions to take before downloading and installing bus simulator android apk games</h3>
66
- <p>While downloading and installing bus simulator android apk games can be easy and convenient, there are also some risks involved. Therefore, you should take some precautions before downloading and installing bus simulator android apk games, such as:</p>
67
- <ul>
68
- <li>Check the ratings and reviews of the source and the game. Look for positive feedback and avoid sources that have low ratings, negative reviews, or complaints of malware or scams.</li>
69
- <li>Compare the size and version of the file with the official one. If the file is too large or too small, or has a different version number, it might be fake or modified.</li>
70
- <li>Scan the file with a reputable antivirus or anti-malware software before opening it. This can help you detect and remove any potential threats or infections.</li>
71
- <li>Backup your device's data and settings before installing the game. This can help you restore your device in case something goes wrong or the game causes any problems.</li>
72
- </ul>
73
- <h2>What are the best bus simulator android apk games to play?</h2>
74
- <h3>A comparison table of the top 5 bus simulator android apk games based on various criteria</h3>
75
- <p>There are many bus simulator android apk games available on the market, but not all of them are worth your time and money. To help you choose the best ones, we have compared the top 5 bus simulator android apk games based on various criteria, such as graphics, gameplay, features, realism, and user ratings. Here is the comparison table:</p>
76
- <table>
77
- <tr>
78
- <th>Game Name</th>
79
- <th>Graphics</th>
80
- <th>Gameplay</th>
81
- <th>Features</th>
82
- <th>Realism</th>
83
- <th>User Ratings</th>
84
- </tr>
85
- <tr>
86
- <td>Bus Simulator: Ultimate</td>
87
- <td>High-quality 3D graphics with realistic details and effects</td>
88
- <td>Smooth and easy controls with realistic physics and sounds</td>
89
- <td>Over 25 buses to drive across 12 countries and 250 routes, with online multiplayer mode, radio system, highway tolls, rest areas, traffic rules, weather conditions, and more</td>
90
- <td>High level of realism with dynamic passengers, customer feedback, company management, bus customization, fuel consumption, and maintenance</td>
91
- <td>4.3 out of 5 stars on Google Play Store with over 1 million reviews</td>
92
- </tr>
93
- <tr>
94
- <td>Coach Bus Simulator</td>
95
- <td>Good 3D graphics with decent details and effects</td>
96
- <td>Simple and intuitive controls with realistic physics and sounds</td>
97
- <td>Over 10 buses to drive across various cities and routes, with online multiplayer mode, radio system, traffic rules, weather conditions, and more</td>
98
- <td>Moderate level of realism with animated passengers, customer feedback, bus customization, fuel consumption, and maintenance</td>
99
- <td>4.1 out of 5 stars on Google Play Store with over 500 thousand reviews</td>
100
- </tr>
101
- <tr>
102
- <td>Heavy Bus Simulator</td>
103
- <td>Average 3D graphics with basic details and effects</td>
104
- <td>Fairly easy controls with realistic physics and sounds</td>
105
- <td>Over 20 buses to drive across Brazil and other countries, with online multiplayer mode, radio system, traffic rules, weather conditions, and more</td>
106
- <td>Moderate level of realism with dynamic passengers, customer feedback, bus customization, fuel consumption, and maintenance</td>
107
- <td>4.0 out of 5 stars on Google Play Store with over 300 thousand reviews</td>
108
- </tr>
109
- <tr>
110
- <td>IDBS Bus Simulator Indonesia</td>
111
- <td>Poor 2D graphics with low details and effects</td>
112
- <td>Hard and confusing controls with unrealistic physics and sounds</td>
113
- <td>A few buses to drive across Indonesia only, with no online multiplayer mode, radio system, traffic rules, weather conditions, or other features</td>
114
- <td>Low level of realism with static passengers, no customer feedback, no bus customization, no fuel consumption, or maintenance</td>
115
- <td>3.9 out of 5 stars on Google Play Store with over 200 thousand reviews</td>
116
- </tr>
117
- </table>
118
- <br />
119
- <br />
spaces/AIFILMS/generate_human_motion/pyrender/pyrender/primitive.py DELETED
@@ -1,489 +0,0 @@
1
- """Primitives, conforming to the glTF 2.0 standards as specified in
2
- https://github.com/KhronosGroup/glTF/tree/master/specification/2.0#reference-primitive
3
-
4
- Author: Matthew Matl
5
- """
6
- import numpy as np
7
-
8
- from OpenGL.GL import *
9
-
10
- from .material import Material, MetallicRoughnessMaterial
11
- from .constants import FLOAT_SZ, UINT_SZ, BufFlags, GLTF
12
- from .utils import format_color_array
13
-
14
-
15
- class Primitive(object):
16
- """A primitive object which can be rendered.
17
-
18
- Parameters
19
- ----------
20
- positions : (n, 3) float
21
- XYZ vertex positions.
22
- normals : (n, 3) float
23
- Normalized XYZ vertex normals.
24
- tangents : (n, 4) float
25
- XYZW vertex tangents where the w component is a sign value
26
- (either +1 or -1) indicating the handedness of the tangent basis.
27
- texcoord_0 : (n, 2) float
28
- The first set of UV texture coordinates.
29
- texcoord_1 : (n, 2) float
30
- The second set of UV texture coordinates.
31
- color_0 : (n, 4) float
32
- RGBA vertex colors.
33
- joints_0 : (n, 4) float
34
- Joint information.
35
- weights_0 : (n, 4) float
36
- Weight information for morphing.
37
- indices : (m, 3) int
38
- Face indices for triangle meshes or fans.
39
- material : :class:`Material`
40
- The material to apply to this primitive when rendering.
41
- mode : int
42
- The type of primitives to render, one of the following:
43
-
44
- - ``0``: POINTS
45
- - ``1``: LINES
46
- - ``2``: LINE_LOOP
47
- - ``3``: LINE_STRIP
48
- - ``4``: TRIANGLES
49
- - ``5``: TRIANGLES_STRIP
50
- - ``6``: TRIANGLES_FAN
51
- targets : (k,) int
52
- Morph target indices.
53
- poses : (x,4,4), float
54
- Array of 4x4 transformation matrices for instancing this object.
55
- """
56
-
57
- def __init__(self,
58
- positions,
59
- normals=None,
60
- tangents=None,
61
- texcoord_0=None,
62
- texcoord_1=None,
63
- color_0=None,
64
- joints_0=None,
65
- weights_0=None,
66
- indices=None,
67
- material=None,
68
- mode=None,
69
- targets=None,
70
- poses=None):
71
-
72
- if mode is None:
73
- mode = GLTF.TRIANGLES
74
-
75
- self.positions = positions
76
- self.normals = normals
77
- self.tangents = tangents
78
- self.texcoord_0 = texcoord_0
79
- self.texcoord_1 = texcoord_1
80
- self.color_0 = color_0
81
- self.joints_0 = joints_0
82
- self.weights_0 = weights_0
83
- self.indices = indices
84
- self.material = material
85
- self.mode = mode
86
- self.targets = targets
87
- self.poses = poses
88
-
89
- self._bounds = None
90
- self._vaid = None
91
- self._buffers = []
92
- self._is_transparent = None
93
- self._buf_flags = None
94
-
95
- @property
96
- def positions(self):
97
- """(n,3) float : XYZ vertex positions.
98
- """
99
- return self._positions
100
-
101
- @positions.setter
102
- def positions(self, value):
103
- value = np.asanyarray(value, dtype=np.float32)
104
- self._positions = np.ascontiguousarray(value)
105
- self._bounds = None
106
-
107
- @property
108
- def normals(self):
109
- """(n,3) float : Normalized XYZ vertex normals.
110
- """
111
- return self._normals
112
-
113
- @normals.setter
114
- def normals(self, value):
115
- if value is not None:
116
- value = np.asanyarray(value, dtype=np.float32)
117
- value = np.ascontiguousarray(value)
118
- if value.shape != self.positions.shape:
119
- raise ValueError('Incorrect normals shape')
120
- self._normals = value
121
-
122
- @property
123
- def tangents(self):
124
- """(n,4) float : XYZW vertex tangents.
125
- """
126
- return self._tangents
127
-
128
- @tangents.setter
129
- def tangents(self, value):
130
- if value is not None:
131
- value = np.asanyarray(value, dtype=np.float32)
132
- value = np.ascontiguousarray(value)
133
- if value.shape != (self.positions.shape[0], 4):
134
- raise ValueError('Incorrect tangent shape')
135
- self._tangents = value
136
-
137
- @property
138
- def texcoord_0(self):
139
- """(n,2) float : The first set of UV texture coordinates.
140
- """
141
- return self._texcoord_0
142
-
143
- @texcoord_0.setter
144
- def texcoord_0(self, value):
145
- if value is not None:
146
- value = np.asanyarray(value, dtype=np.float32)
147
- value = np.ascontiguousarray(value)
148
- if (value.ndim != 2 or value.shape[0] != self.positions.shape[0] or
149
- value.shape[1] < 2):
150
- raise ValueError('Incorrect texture coordinate shape')
151
- if value.shape[1] > 2:
152
- value = value[:,:2]
153
- self._texcoord_0 = value
154
-
155
- @property
156
- def texcoord_1(self):
157
- """(n,2) float : The second set of UV texture coordinates.
158
- """
159
- return self._texcoord_1
160
-
161
- @texcoord_1.setter
162
- def texcoord_1(self, value):
163
- if value is not None:
164
- value = np.asanyarray(value, dtype=np.float32)
165
- value = np.ascontiguousarray(value)
166
- if (value.ndim != 2 or value.shape[0] != self.positions.shape[0] or
167
- value.shape[1] != 2):
168
- raise ValueError('Incorrect texture coordinate shape')
169
- self._texcoord_1 = value
170
-
171
- @property
172
- def color_0(self):
173
- """(n,4) float : RGBA vertex colors.
174
- """
175
- return self._color_0
176
-
177
- @color_0.setter
178
- def color_0(self, value):
179
- if value is not None:
180
- value = np.ascontiguousarray(
181
- format_color_array(value, shape=(len(self.positions), 4))
182
- )
183
- self._is_transparent = None
184
- self._color_0 = value
185
-
186
- @property
187
- def joints_0(self):
188
- """(n,4) float : Joint information.
189
- """
190
- return self._joints_0
191
-
192
- @joints_0.setter
193
- def joints_0(self, value):
194
- self._joints_0 = value
195
-
196
- @property
197
- def weights_0(self):
198
- """(n,4) float : Weight information for morphing.
199
- """
200
- return self._weights_0
201
-
202
- @weights_0.setter
203
- def weights_0(self, value):
204
- self._weights_0 = value
205
-
206
- @property
207
- def indices(self):
208
- """(m,3) int : Face indices for triangle meshes or fans.
209
- """
210
- return self._indices
211
-
212
- @indices.setter
213
- def indices(self, value):
214
- if value is not None:
215
- value = np.asanyarray(value, dtype=np.float32)
216
- value = np.ascontiguousarray(value)
217
- self._indices = value
218
-
219
- @property
220
- def material(self):
221
- """:class:`Material` : The material for this primitive.
222
- """
223
- return self._material
224
-
225
- @material.setter
226
- def material(self, value):
227
- # Create default material
228
- if value is None:
229
- value = MetallicRoughnessMaterial()
230
- else:
231
- if not isinstance(value, Material):
232
- raise TypeError('Object material must be of type Material')
233
- self._material = value
234
-
235
- @property
236
- def mode(self):
237
- """int : The type of primitive to render.
238
- """
239
- return self._mode
240
-
241
- @mode.setter
242
- def mode(self, value):
243
- value = int(value)
244
- if value < GLTF.POINTS or value > GLTF.TRIANGLE_FAN:
245
- raise ValueError('Invalid mode')
246
- self._mode = value
247
-
248
- @property
249
- def targets(self):
250
- """(k,) int : Morph target indices.
251
- """
252
- return self._targets
253
-
254
- @targets.setter
255
- def targets(self, value):
256
- self._targets = value
257
-
258
- @property
259
- def poses(self):
260
- """(x,4,4) float : Homogeneous transforms for instancing this primitive.
261
- """
262
- return self._poses
263
-
264
- @poses.setter
265
- def poses(self, value):
266
- if value is not None:
267
- value = np.asanyarray(value, dtype=np.float32)
268
- value = np.ascontiguousarray(value)
269
- if value.ndim == 2:
270
- value = value[np.newaxis,:,:]
271
- if value.shape[1] != 4 or value.shape[2] != 4:
272
- raise ValueError('Pose matrices must be of shape (n,4,4), '
273
- 'got {}'.format(value.shape))
274
- self._poses = value
275
- self._bounds = None
276
-
277
- @property
278
- def bounds(self):
279
- if self._bounds is None:
280
- self._bounds = self._compute_bounds()
281
- return self._bounds
282
-
283
- @property
284
- def centroid(self):
285
- """(3,) float : The centroid of the primitive's AABB.
286
- """
287
- return np.mean(self.bounds, axis=0)
288
-
289
- @property
290
- def extents(self):
291
- """(3,) float : The lengths of the axes of the primitive's AABB.
292
- """
293
- return np.diff(self.bounds, axis=0).reshape(-1)
294
-
295
- @property
296
- def scale(self):
297
- """(3,) float : The length of the diagonal of the primitive's AABB.
298
- """
299
- return np.linalg.norm(self.extents)
300
-
301
- @property
302
- def buf_flags(self):
303
- """int : The flags for the render buffer.
304
- """
305
- if self._buf_flags is None:
306
- self._buf_flags = self._compute_buf_flags()
307
- return self._buf_flags
308
-
309
- def delete(self):
310
- self._unbind()
311
- self._remove_from_context()
312
-
313
- @property
314
- def is_transparent(self):
315
- """bool : If True, the mesh is partially-transparent.
316
- """
317
- return self._compute_transparency()
318
-
319
- def _add_to_context(self):
320
- if self._vaid is not None:
321
- raise ValueError('Mesh is already bound to a context')
322
-
323
- # Generate and bind VAO
324
- self._vaid = glGenVertexArrays(1)
325
- glBindVertexArray(self._vaid)
326
-
327
- #######################################################################
328
- # Fill vertex buffer
329
- #######################################################################
330
-
331
- # Generate and bind vertex buffer
332
- vertexbuffer = glGenBuffers(1)
333
- self._buffers.append(vertexbuffer)
334
- glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer)
335
-
336
- # positions
337
- vertex_data = self.positions
338
- attr_sizes = [3]
339
-
340
- # Normals
341
- if self.normals is not None:
342
- vertex_data = np.hstack((vertex_data, self.normals))
343
- attr_sizes.append(3)
344
-
345
- # Tangents
346
- if self.tangents is not None:
347
- vertex_data = np.hstack((vertex_data, self.tangents))
348
- attr_sizes.append(4)
349
-
350
- # Texture Coordinates
351
- if self.texcoord_0 is not None:
352
- vertex_data = np.hstack((vertex_data, self.texcoord_0))
353
- attr_sizes.append(2)
354
- if self.texcoord_1 is not None:
355
- vertex_data = np.hstack((vertex_data, self.texcoord_1))
356
- attr_sizes.append(2)
357
-
358
- # Color
359
- if self.color_0 is not None:
360
- vertex_data = np.hstack((vertex_data, self.color_0))
361
- attr_sizes.append(4)
362
-
363
- # TODO JOINTS AND WEIGHTS
364
- # PASS
365
-
366
- # Copy data to buffer
367
- vertex_data = np.ascontiguousarray(
368
- vertex_data.flatten().astype(np.float32)
369
- )
370
- glBufferData(
371
- GL_ARRAY_BUFFER, FLOAT_SZ * len(vertex_data),
372
- vertex_data, GL_STATIC_DRAW
373
- )
374
- total_sz = sum(attr_sizes)
375
- offset = 0
376
- for i, sz in enumerate(attr_sizes):
377
- glVertexAttribPointer(
378
- i, sz, GL_FLOAT, GL_FALSE, FLOAT_SZ * total_sz,
379
- ctypes.c_void_p(FLOAT_SZ * offset)
380
- )
381
- glEnableVertexAttribArray(i)
382
- offset += sz
383
-
384
- #######################################################################
385
- # Fill model matrix buffer
386
- #######################################################################
387
-
388
- if self.poses is not None:
389
- pose_data = np.ascontiguousarray(
390
- np.transpose(self.poses, [0,2,1]).flatten().astype(np.float32)
391
- )
392
- else:
393
- pose_data = np.ascontiguousarray(
394
- np.eye(4).flatten().astype(np.float32)
395
- )
396
-
397
- modelbuffer = glGenBuffers(1)
398
- self._buffers.append(modelbuffer)
399
- glBindBuffer(GL_ARRAY_BUFFER, modelbuffer)
400
- glBufferData(
401
- GL_ARRAY_BUFFER, FLOAT_SZ * len(pose_data),
402
- pose_data, GL_STATIC_DRAW
403
- )
404
-
405
- for i in range(0, 4):
406
- idx = i + len(attr_sizes)
407
- glEnableVertexAttribArray(idx)
408
- glVertexAttribPointer(
409
- idx, 4, GL_FLOAT, GL_FALSE, FLOAT_SZ * 4 * 4,
410
- ctypes.c_void_p(4 * FLOAT_SZ * i)
411
- )
412
- glVertexAttribDivisor(idx, 1)
413
-
414
- #######################################################################
415
- # Fill element buffer
416
- #######################################################################
417
- if self.indices is not None:
418
- elementbuffer = glGenBuffers(1)
419
- self._buffers.append(elementbuffer)
420
- glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementbuffer)
421
- glBufferData(GL_ELEMENT_ARRAY_BUFFER, UINT_SZ * self.indices.size,
422
- self.indices.flatten().astype(np.uint32),
423
- GL_STATIC_DRAW)
424
-
425
- glBindVertexArray(0)
426
-
427
- def _remove_from_context(self):
428
- if self._vaid is not None:
429
- glDeleteVertexArrays(1, [self._vaid])
430
- glDeleteBuffers(len(self._buffers), self._buffers)
431
- self._vaid = None
432
- self._buffers = []
433
-
434
- def _in_context(self):
435
- return self._vaid is not None
436
-
437
- def _bind(self):
438
- if self._vaid is None:
439
- raise ValueError('Cannot bind a Mesh that has not been added '
440
- 'to a context')
441
- glBindVertexArray(self._vaid)
442
-
443
- def _unbind(self):
444
- glBindVertexArray(0)
445
-
446
- def _compute_bounds(self):
447
- """Compute the bounds of this object.
448
- """
449
- # Compute bounds of this object
450
- bounds = np.array([np.min(self.positions, axis=0),
451
- np.max(self.positions, axis=0)])
452
-
453
- # If instanced, compute translations for approximate bounds
454
- if self.poses is not None:
455
- bounds += np.array([np.min(self.poses[:,:3,3], axis=0),
456
- np.max(self.poses[:,:3,3], axis=0)])
457
- return bounds
458
-
459
- def _compute_transparency(self):
460
- """Compute whether or not this object is transparent.
461
- """
462
- if self.material.is_transparent:
463
- return True
464
- if self._is_transparent is None:
465
- self._is_transparent = False
466
- if self.color_0 is not None:
467
- if np.any(self._color_0[:,3] != 1.0):
468
- self._is_transparent = True
469
- return self._is_transparent
470
-
471
- def _compute_buf_flags(self):
472
- buf_flags = BufFlags.POSITION
473
-
474
- if self.normals is not None:
475
- buf_flags |= BufFlags.NORMAL
476
- if self.tangents is not None:
477
- buf_flags |= BufFlags.TANGENT
478
- if self.texcoord_0 is not None:
479
- buf_flags |= BufFlags.TEXCOORD_0
480
- if self.texcoord_1 is not None:
481
- buf_flags |= BufFlags.TEXCOORD_1
482
- if self.color_0 is not None:
483
- buf_flags |= BufFlags.COLOR_0
484
- if self.joints_0 is not None:
485
- buf_flags |= BufFlags.JOINTS_0
486
- if self.weights_0 is not None:
487
- buf_flags |= BufFlags.WEIGHTS_0
488
-
489
- return buf_flags
 
spaces/Abuzariii/Text-Generation-with-GPT-2/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: Text Generation With GPT 2
3
- emoji: 📊
4
- colorFrom: yellow
5
- colorTo: pink
6
- sdk: gradio
7
- sdk_version: 3.4.1
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversation/[id]/stop-generating/+server.ts DELETED
@@ -1,23 +0,0 @@
1
- import { authCondition } from "$lib/server/auth";
2
- import { collections } from "$lib/server/database";
3
- import { error } from "@sveltejs/kit";
4
-
5
- /**
6
- * Ideally, we'd be able to detect the client-side abort, see https://github.com/huggingface/chat-ui/pull/88#issuecomment-1523173850
7
- */
8
- export async function POST({ params, locals }) {
9
- /*const conversationId = new ObjectId(params.id);
10
-
11
- const conversation = await collections.conversations.findOne({
12
- _id: conversationId,
13
- ...authCondition(locals),
14
- });
15
-
16
- await collections.abortedGenerations.updateOne(
17
- { conversationId },
18
- { $set: { updatedAt: new Date() }, $setOnInsert: { createdAt: new Date() } },
19
- { upsert: true }
20
- );*/
21
-
22
- return new Response();
23
- }
spaces/AchyuthGamer/OpenGPT/client/js/chat.js DELETED
@@ -1,508 +0,0 @@
1
- const query = (obj) =>
2
- Object.keys(obj)
3
- .map((k) => encodeURIComponent(k) + "=" + encodeURIComponent(obj[k]))
4
- .join("&");
5
- const url_prefix = document.querySelector("body").getAttribute("data-urlprefix");
6
- const markdown = window.markdownit();
7
- const message_box = document.getElementById(`messages`);
8
- const message_input = document.getElementById(`message-input`);
9
- const box_conversations = document.querySelector(`.top`);
10
- const spinner = box_conversations.querySelector(".spinner");
11
- const stop_generating = document.querySelector(`.stop-generating`);
12
- const send_button = document.querySelector(`#send-button`);
13
- const user_image = `<img src="${url_prefix}/assets/img/user.png" alt="User Avatar">`;
14
- const gpt_image = `<img src="${url_prefix}/assets/img/gpt.png" alt="GPT Avatar">`;
15
- let prompt_lock = false;
16
-
17
- hljs.addPlugin(new CopyButtonPlugin());
18
-
19
- message_input.addEventListener("blur", () => {
20
- window.scrollTo(0, 0);
21
- });
22
-
23
- message_input.addEventListener("focus", () => {
24
- document.documentElement.scrollTop = document.documentElement.scrollHeight;
25
- });
26
-
27
- const delete_conversations = async () => {
28
- localStorage.clear();
29
- await new_conversation();
30
- };
31
-
32
- const handle_ask = async () => {
33
- message_input.style.height = `80px`;
34
- window.scrollTo(0, 0);
35
- let message = message_input.value;
36
-
37
- if (message.length > 0) {
38
- message_input.value = ``;
39
- message_input.dispatchEvent(new Event("input"));
40
- await ask_gpt(message);
41
- }
42
- };
43
-
44
- const remove_cancel_button = async () => {
45
- stop_generating.classList.add(`stop-generating-hiding`);
46
-
47
- setTimeout(() => {
48
- stop_generating.classList.remove(`stop-generating-hiding`);
49
- stop_generating.classList.add(`stop-generating-hidden`);
50
- }, 300);
51
- };
52
-
53
- const ask_gpt = async (message) => {
54
- try {
55
- message_input.value = ``;
56
- message_input.innerHTML = ``;
57
- message_input.innerText = ``;
58
-
59
- add_conversation(window.conversation_id, message.substr(0, 16));
60
- window.scrollTo(0, 0);
61
- window.controller = new AbortController();
62
-
63
- jailbreak = document.getElementById("jailbreak");
64
- model = document.getElementById("model");
65
- prompt_lock = true;
66
- window.text = ``;
67
- window.token = message_id();
68
-
69
- stop_generating.classList.remove(`stop-generating-hidden`);
70
-
71
- add_user_message_box(message);
72
-
73
- message_box.scrollTop = message_box.scrollHeight;
74
- window.scrollTo(0, 0);
75
- await new Promise((r) => setTimeout(r, 500));
76
- window.scrollTo(0, 0);
77
-
78
- message_box.innerHTML += `
79
- <div class="message">
80
- <div class="avatar-container">
81
- ${gpt_image}
82
- </div>
83
- <div class="content" id="gpt_${window.token}">
84
- <div id="cursor"></div>
85
- </div>
86
- </div>
87
- `;
88
-
89
- message_box.scrollTop = message_box.scrollHeight;
90
- window.scrollTo(0, 0);
91
- await new Promise((r) => setTimeout(r, 1000));
92
- window.scrollTo(0, 0);
93
-
94
- const response = await fetch(`${url_prefix}/backend-api/v2/conversation`, {
95
- method: `POST`,
96
- signal: window.controller.signal,
97
- headers: {
98
- "content-type": `application/json`,
99
- accept: `text/event-stream`,
100
- },
101
- body: JSON.stringify({
102
- conversation_id: window.conversation_id,
103
- action: `_ask`,
104
- model: model.options[model.selectedIndex].value,
105
- jailbreak: jailbreak.options[jailbreak.selectedIndex].value,
106
- meta: {
107
- id: window.token,
108
- content: {
109
- conversation: await get_conversation(window.conversation_id),
110
- internet_access: document.getElementById("switch").checked,
111
- content_type: "text",
112
- parts: [
113
- {
114
- content: message,
115
- role: "user",
116
- },
117
- ],
118
- },
119
- },
120
- }),
121
- });
122
-
123
- const reader = response.body.getReader();
124
-
125
- while (true) {
126
- const { value, done } = await reader.read();
127
- if (done) break;
128
-
129
- chunk = decodeUnicode(new TextDecoder().decode(value));
130
-
131
- if (
132
- chunk.includes(`<form id="challenge-form" action="${url_prefix}/backend-api/v2/conversation?`)
133
- ) {
134
- chunk = `cloudflare token expired, please refresh the page.`;
135
- }
136
-
137
- text += chunk;
138
-
139
- document.getElementById(`gpt_${window.token}`).innerHTML = markdown.render(text);
140
- document.querySelectorAll(`code`).forEach((el) => {
141
- hljs.highlightElement(el);
142
- });
143
-
144
- window.scrollTo(0, 0);
145
- message_box.scrollTo({ top: message_box.scrollHeight, behavior: "auto" });
146
- }
147
-
148
- // if text contains :
149
- if (text.includes(`instead. Maintaining this website and API costs a lot of money`)) {
150
- document.getElementById(`gpt_${window.token}`).innerHTML =
151
- "An error occurred, please reload / refresh cache and try again.";
152
- }
153
-
154
- add_message(window.conversation_id, "user", message);
155
- add_message(window.conversation_id, "assistant", text);
156
-
157
- message_box.scrollTop = message_box.scrollHeight;
158
- await remove_cancel_button();
159
- prompt_lock = false;
160
-
161
- await load_conversations(20, 0);
162
- window.scrollTo(0, 0);
163
- } catch (e) {
164
- add_message(window.conversation_id, "user", message);
165
-
166
- message_box.scrollTop = message_box.scrollHeight;
167
- await remove_cancel_button();
168
- prompt_lock = false;
169
-
170
- await load_conversations(20, 0);
171
-
172
- console.log(e);
173
-
174
- let cursorDiv = document.getElementById(`cursor`);
175
- if (cursorDiv) cursorDiv.parentNode.removeChild(cursorDiv);
176
-
177
- if (e.name != `AbortError`) {
178
- let error_message = `oops ! something went wrong, please try again / reload. [stacktrace in console]`;
179
-
180
- document.getElementById(`gpt_${window.token}`).innerHTML = error_message;
181
- add_message(window.conversation_id, "assistant", error_message);
182
- } else {
183
- document.getElementById(`gpt_${window.token}`).innerHTML += ` [aborted]`;
184
- add_message(window.conversation_id, "assistant", text + ` [aborted]`);
185
- }
186
-
187
- window.scrollTo(0, 0);
188
- }
189
- };
190
-
191
- const add_user_message_box = (message) => {
192
- const messageDiv = createElement("div", { classNames: ["message"] });
193
- const avatarContainer = createElement("div", { classNames: ["avatar-container"], innerHTML: user_image });
194
- const contentDiv = createElement("div", {
195
- classNames: ["content"],
196
- id: `user_${token}`,
197
- textContent: message,
198
- });
199
-
200
- messageDiv.append(avatarContainer, contentDiv);
201
- message_box.appendChild(messageDiv);
202
- };
203
-
204
- const decodeUnicode = (str) => {
205
- return str.replace(/\\u([a-fA-F0-9]{4})/g, function (match, grp) {
206
- return String.fromCharCode(parseInt(grp, 16));
207
- });
208
- };
209
-
210
- const clear_conversations = async () => {
211
- const elements = box_conversations.childNodes;
212
- let index = elements.length;
213
-
214
- if (index > 0) {
215
- while (index--) {
216
- const element = elements[index];
217
- if (element.nodeType === Node.ELEMENT_NODE && element.tagName.toLowerCase() !== `button`) {
218
- box_conversations.removeChild(element);
219
- }
220
- }
221
- }
222
- };
223
-
224
- const clear_conversation = async () => {
225
- let messages = message_box.getElementsByTagName(`div`);
226
-
227
- while (messages.length > 0) {
228
- message_box.removeChild(messages[0]);
229
- }
230
- };
231
-
232
- const delete_conversation = async (conversation_id) => {
233
- localStorage.removeItem(`conversation:${conversation_id}`);
234
-
235
- if (window.conversation_id == conversation_id) {
236
- await new_conversation();
237
- }
238
-
239
- await load_conversations(20, 0, true);
240
- };
241
-
242
- const set_conversation = async (conversation_id) => {
243
- history.pushState({}, null, `${url_prefix}/chat/${conversation_id}`);
244
- window.conversation_id = conversation_id;
245
-
246
- await clear_conversation();
247
- await load_conversation(conversation_id);
248
- await load_conversations(20, 0, true);
249
- };
250
-
251
- const new_conversation = async () => {
252
- history.pushState({}, null, `${url_prefix}/chat/`);
253
- window.conversation_id = uuid();
254
-
255
- await clear_conversation();
256
- await load_conversations(20, 0, true);
257
- };
258
-
259
- const load_conversation = async (conversation_id) => {
260
- let conversation = await JSON.parse(localStorage.getItem(`conversation:${conversation_id}`));
261
- console.log(conversation, conversation_id);
262
-
263
- for (item of conversation.items) {
264
- if (is_assistant(item.role)) {
265
- message_box.innerHTML += load_gpt_message_box(item.content);
266
- } else {
267
- message_box.innerHTML += load_user_message_box(item.content);
268
- }
269
- }
270
-
271
- document.querySelectorAll(`code`).forEach((el) => {
272
- hljs.highlightElement(el);
273
- });
274
-
275
- message_box.scrollTo({ top: message_box.scrollHeight, behavior: "smooth" });
276
-
277
- setTimeout(() => {
278
- message_box.scrollTop = message_box.scrollHeight;
279
- }, 500);
280
- };
281
-
282
- const load_user_message_box = (content) => {
283
- const messageDiv = createElement("div", { classNames: ["message"] });
284
- const avatarContainer = createElement("div", { classNames: ["avatar-container"], innerHTML: user_image });
285
- const contentDiv = createElement("div", { classNames: ["content"] });
286
- const preElement = document.createElement("pre");
287
- preElement.textContent = content;
288
- contentDiv.appendChild(preElement);
289
-
290
- messageDiv.append(avatarContainer, contentDiv);
291
-
292
- return messageDiv.outerHTML;
293
- };
294
-
295
- const load_gpt_message_box = (content) => {
296
- return `
297
- <div class="message">
298
- <div class="avatar-container">
299
- ${gpt_image}
300
- </div>
301
- <div class="content">
302
- ${markdown.render(content)}
303
- </div>
304
- </div>
305
- `;
306
- };
307
-
308
- const is_assistant = (role) => {
309
- return role == "assistant";
310
- };
311
-
312
- const get_conversation = async (conversation_id) => {
313
- let conversation = await JSON.parse(localStorage.getItem(`conversation:${conversation_id}`));
314
- return conversation.items;
315
- };
316
-
317
- const add_conversation = async (conversation_id, title) => {
318
- if (localStorage.getItem(`conversation:${conversation_id}`) == null) {
319
- localStorage.setItem(
320
- `conversation:${conversation_id}`,
321
- JSON.stringify({
322
- id: conversation_id,
323
- title: title,
324
- items: [],
325
- })
326
- );
327
- }
328
- };
329
-
330
- const add_message = async (conversation_id, role, content) => {
331
- before_adding = JSON.parse(localStorage.getItem(`conversation:${conversation_id}`));
332
-
333
- before_adding.items.push({
334
- role: role,
335
- content: content,
336
- });
337
-
338
- localStorage.setItem(`conversation:${conversation_id}`, JSON.stringify(before_adding)); // update conversation
339
- };
340
-
341
- const load_conversations = async (limit, offset, loader) => {
342
- //console.log(loader);
343
- //if (loader === undefined) box_conversations.appendChild(spinner);
344
-
345
- let conversations = [];
346
- for (let i = 0; i < localStorage.length; i++) {
347
- if (localStorage.key(i).startsWith("conversation:")) {
348
- let conversation = localStorage.getItem(localStorage.key(i));
349
- conversations.push(JSON.parse(conversation));
350
- }
351
- }
352
-
353
- //if (loader === undefined) spinner.parentNode.removeChild(spinner)
354
- await clear_conversations();
355
-
356
- for (conversation of conversations) {
357
- box_conversations.innerHTML += `
358
- <div class="conversation-sidebar">
359
- <div class="left" onclick="set_conversation('${conversation.id}')">
360
- <i class="fa-regular fa-comments"></i>
361
- <span class="conversation-title">${conversation.title}</span>
362
- </div>
363
- <i onclick="delete_conversation('${conversation.id}')" class="fa-regular fa-trash"></i>
364
- </div>
365
- `;
366
- }
367
-
368
- document.querySelectorAll(`code`).forEach((el) => {
369
- hljs.highlightElement(el);
370
- });
371
- };
372
-
373
- document.getElementById(`cancelButton`).addEventListener(`click`, async () => {
374
- window.controller.abort();
375
- console.log(`aborted ${window.conversation_id}`);
376
- });
377
-
378
- function h2a(str1) {
379
- var hex = str1.toString();
380
- var str = "";
381
-
382
- for (var n = 0; n < hex.length; n += 2) {
383
- str += String.fromCharCode(parseInt(hex.substr(n, 2), 16));
384
- }
385
-
386
- return str;
387
- }
388
-
389
- const uuid = () => {
390
- return `xxxxxxxx-xxxx-4xxx-yxxx-${Date.now().toString(16)}`.replace(/[xy]/g, function (c) {
391
- var r = (Math.random() * 16) | 0,
392
- v = c == "x" ? r : (r & 0x3) | 0x8;
393
- return v.toString(16);
394
- });
395
- };
396
-
397
- const message_id = () => {
398
- random_bytes = (Math.floor(Math.random() * 1338377565) + 2956589730).toString(2);
399
- unix = Math.floor(Date.now() / 1000).toString(2);
400
-
401
- return BigInt(`0b${unix}${random_bytes}`).toString();
402
- };
403
-
404
- window.onload = async () => {
405
- load_settings_localstorage();
406
-
407
- conversations = 0;
408
- for (let i = 0; i < localStorage.length; i++) {
409
- if (localStorage.key(i).startsWith("conversation:")) {
410
- conversations += 1;
411
- }
412
- }
413
-
414
- if (conversations == 0) localStorage.clear();
415
-
416
- await setTimeout(() => {
417
- load_conversations(20, 0);
418
- }, 1);
419
-
420
- if (!window.location.href.endsWith(`#`)) {
421
- if (/\/chat\/.+/.test(window.location.href.slice(url_prefix.length))) {
422
- await load_conversation(window.conversation_id);
423
- }
424
- }
425
-
426
- message_input.addEventListener("keydown", async (evt) => {
427
- if (prompt_lock) return;
428
-
429
- if (evt.key === "Enter" && !evt.shiftKey) {
430
- evt.preventDefault();
431
- await handle_ask();
432
- }
433
- });
434
-
435
- send_button.addEventListener("click", async (event) => {
436
- event.preventDefault();
437
- if (prompt_lock) return;
438
- message_input.blur();
439
- await handle_ask();
440
- });
441
-
442
- register_settings_localstorage();
443
- };
444
-
445
- const register_settings_localstorage = async () => {
446
- settings_ids = ["switch", "model", "jailbreak"];
447
- settings_elements = settings_ids.map((id) => document.getElementById(id));
448
- settings_elements.map((element) =>
449
- element.addEventListener(`change`, async (event) => {
450
- switch (event.target.type) {
451
- case "checkbox":
452
- localStorage.setItem(event.target.id, event.target.checked);
453
- break;
454
- case "select-one":
455
- localStorage.setItem(event.target.id, event.target.selectedIndex);
456
- break;
457
- default:
458
- console.warn("Unresolved element type");
459
- }
460
- })
461
- );
462
- };
463
-
464
- const load_settings_localstorage = async () => {
465
- settings_ids = ["switch", "model", "jailbreak"];
466
- settings_elements = settings_ids.map((id) => document.getElementById(id));
467
- settings_elements.map((element) => {
468
- if (localStorage.getItem(element.id)) {
469
- switch (element.type) {
470
- case "checkbox":
471
- element.checked = localStorage.getItem(element.id) === "true";
472
- break;
473
- case "select-one":
474
- element.selectedIndex = parseInt(localStorage.getItem(element.id));
475
- break;
476
- default:
477
- console.warn("Unresolved element type");
478
- }
479
- }
480
- });
481
- };
482
-
483
- function clearTextarea(textarea) {
484
- textarea.style.removeProperty("height");
485
- textarea.style.height = `${textarea.scrollHeight + 4}px`;
486
- if (textarea.value.trim() === "" && textarea.value.includes("\n")) {
487
- textarea.value = "";
488
- }
489
- }
490
-
491
- function createElement(tag, { classNames, id, innerHTML, textContent } = {}) {
492
- const el = document.createElement(tag);
493
- if (classNames) {
494
- el.classList.add(...classNames);
495
- }
496
- if (id) {
497
- el.id = id;
498
- }
499
- if (innerHTML) {
500
- el.innerHTML = innerHTML;
501
- }
502
- if (textContent) {
503
- const preElement = document.createElement("pre");
504
- preElement.textContent = textContent;
505
- el.appendChild(preElement);
506
- }
507
- return el;
508
- }
spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/ChatgptAi.py DELETED
@@ -1,74 +0,0 @@
1
- from __future__ import annotations
2
-
3
- import re
4
- from aiohttp import ClientSession
5
-
6
- from .base_provider import AsyncProvider, format_prompt
7
-
8
-
9
- class ChatgptAi(AsyncProvider):
10
- url: str = "https://chatgpt.ai/"
11
- working = True
12
- supports_gpt_35_turbo = True
13
- _nonce = None
14
- _post_id = None
15
- _bot_id = None
16
-
17
- @classmethod
18
- async def create_async(
19
- cls,
20
- model: str,
21
- messages: list[dict[str, str]],
22
- proxy: str = None,
23
- **kwargs
24
- ) -> str:
25
- headers = {
26
- "authority" : "chatgpt.ai",
27
- "accept" : "*/*",
28
- "accept-language" : "en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3",
29
- "cache-control" : "no-cache",
30
- "origin" : "https://chatgpt.ai",
31
- "pragma" : "no-cache",
32
- "referer" : cls.url,
33
- "sec-ch-ua" : '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"',
34
- "sec-ch-ua-mobile" : "?0",
35
- "sec-ch-ua-platform" : '"Windows"',
36
- "sec-fetch-dest" : "empty",
37
- "sec-fetch-mode" : "cors",
38
- "sec-fetch-site" : "same-origin",
39
- "user-agent" : "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36",
40
- }
41
- async with ClientSession(
42
- headers=headers
43
- ) as session:
44
- if not cls._nonce:
45
- async with session.get(cls.url, proxy=proxy) as response:
46
- response.raise_for_status()
47
- text = await response.text()
48
- result = re.search(r'data-nonce="(.*?)"', text)
49
- if result:
50
- cls._nonce = result.group(1)
51
- result = re.search(r'data-post-id="(.*?)"', text)
52
- if result:
53
- cls._post_id = result.group(1)
54
- result = re.search(r'data-bot-id="(.*?)"', text)
55
- if result:
56
- cls._bot_id = result.group(1)
57
- if not cls._nonce or not cls._post_id or not cls._bot_id:
58
- raise RuntimeError("Nonce, post-id or bot-id not found")
59
-
60
- data = {
61
- "_wpnonce": cls._nonce,
62
- "post_id": cls._post_id,
63
- "url": "https://chatgpt.ai",
64
- "action": "wpaicg_chat_shortcode_message",
65
- "message": format_prompt(messages),
66
- "bot_id": cls._bot_id
67
- }
68
- async with session.post(
69
- "https://chatgpt.ai/wp-admin/admin-ajax.php",
70
- proxy=proxy,
71
- data=data
72
- ) as response:
73
- response.raise_for_status()
74
- return (await response.json())["data"]
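For readers unfamiliar with how these g4f providers are invoked, a call might look like the sketch below. It assumes the `g4f` package from this repository is importable and that chatgpt.ai still exposes the WordPress AJAX endpoint the class scrapes; neither is guaranteed, so treat this as illustration rather than a supported API.

```python
# Illustrative only: drive the ChatgptAi provider defined above.
import asyncio
from g4f.Provider.Providers.ChatgptAi import ChatgptAi

async def main():
    reply = await ChatgptAi.create_async(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(reply)

asyncio.run(main())
```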
spaces/Adapter/CoAdapter/dist_util.py DELETED
@@ -1,91 +0,0 @@
1
- # Modified from https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/dist_utils.py # noqa: E501
2
- import functools
3
- import os
4
- import subprocess
5
- import torch
6
- import torch.distributed as dist
7
- import torch.multiprocessing as mp
8
- from torch.nn.parallel import DataParallel, DistributedDataParallel
9
-
10
-
11
- def init_dist(launcher, backend='nccl', **kwargs):
12
- if mp.get_start_method(allow_none=True) is None:
13
- mp.set_start_method('spawn')
14
- if launcher == 'pytorch':
15
- _init_dist_pytorch(backend, **kwargs)
16
- elif launcher == 'slurm':
17
- _init_dist_slurm(backend, **kwargs)
18
- else:
19
- raise ValueError(f'Invalid launcher type: {launcher}')
20
-
21
-
22
- def _init_dist_pytorch(backend, **kwargs):
23
- rank = int(os.environ['RANK'])
24
- num_gpus = torch.cuda.device_count()
25
- torch.cuda.set_device(rank % num_gpus)
26
- dist.init_process_group(backend=backend, **kwargs)
27
-
28
-
29
- def _init_dist_slurm(backend, port=None):
30
- """Initialize slurm distributed training environment.
31
-
32
- If argument ``port`` is not specified, then the master port will be system
33
- environment variable ``MASTER_PORT``. If ``MASTER_PORT`` is not in system
34
- environment variable, then a default port ``29500`` will be used.
35
-
36
- Args:
37
- backend (str): Backend of torch.distributed.
38
- port (int, optional): Master port. Defaults to None.
39
- """
40
- proc_id = int(os.environ['SLURM_PROCID'])
41
- ntasks = int(os.environ['SLURM_NTASKS'])
42
- node_list = os.environ['SLURM_NODELIST']
43
- num_gpus = torch.cuda.device_count()
44
- torch.cuda.set_device(proc_id % num_gpus)
45
- addr = subprocess.getoutput(f'scontrol show hostname {node_list} | head -n1')
46
- # specify master port
47
- if port is not None:
48
- os.environ['MASTER_PORT'] = str(port)
49
- elif 'MASTER_PORT' in os.environ:
50
- pass # use MASTER_PORT in the environment variable
51
- else:
52
- # 29500 is torch.distributed default port
53
- os.environ['MASTER_PORT'] = '29500'
54
- os.environ['MASTER_ADDR'] = addr
55
- os.environ['WORLD_SIZE'] = str(ntasks)
56
- os.environ['LOCAL_RANK'] = str(proc_id % num_gpus)
57
- os.environ['RANK'] = str(proc_id)
58
- dist.init_process_group(backend=backend)
59
-
60
-
61
- def get_dist_info():
62
- if dist.is_available():
63
- initialized = dist.is_initialized()
64
- else:
65
- initialized = False
66
- if initialized:
67
- rank = dist.get_rank()
68
- world_size = dist.get_world_size()
69
- else:
70
- rank = 0
71
- world_size = 1
72
- return rank, world_size
73
-
74
-
75
- def master_only(func):
76
-
77
- @functools.wraps(func)
78
- def wrapper(*args, **kwargs):
79
- rank, _ = get_dist_info()
80
- if rank == 0:
81
- return func(*args, **kwargs)
82
-
83
- return wrapper
84
-
85
- def get_bare_model(net):
86
- """Get bare model, especially under wrapping with
87
- DistributedDataParallel or DataParallel.
88
- """
89
- if isinstance(net, (DataParallel, DistributedDataParallel)):
90
- net = net.module
91
- return net
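The helpers above are only useful once the process group is up, which normally happens in the training entry point. The sketch below shows one plausible wiring; it assumes the script is launched with `torchrun` (which sets `RANK`, `WORLD_SIZE`, and the `MASTER_*` variables), that CUDA GPUs are available for the default NCCL backend, and that this file is importable as `dist_util`.

```python
# Sketch of a training entry point using the helpers above.
# Launch with, e.g.:  torchrun --nproc_per_node=4 train.py
import torch
from dist_util import init_dist, get_dist_info, master_only

@master_only
def log(msg):
    print(msg)  # executed on rank 0 only

def main():
    init_dist('pytorch')               # or 'slurm' under a SLURM scheduler
    rank, world_size = get_dist_info()
    log(f'initialized {world_size} processes')

    device = torch.device('cuda', rank % torch.cuda.device_count())
    # ... build the model, wrap it in DistributedDataParallel, run training ...

if __name__ == '__main__':
    main()
```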
spaces/Adapter/CoAdapter/ldm/modules/diffusionmodules/__init__.py DELETED
File without changes
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/holygrail/methods/LayoutMode2.js DELETED
@@ -1,74 +0,0 @@
1
- /*
2
- Elements:
3
- ```
4
- HHH
5
- LCR
6
- FFR
7
- ```
8
- */
9
-
10
- import {
11
- GetAddHeaderConfig,
12
- GetAddLeftSideConfig, GetAddContentConfig, GetAddRightSideConfig,
13
- GetAddFooterConfig,
14
- GetAddContainerConfig
15
- } from './GetAddChildConfig.js';
16
- import CreatExpandContainer from './CreatExpandContainer.js';
17
-
18
- var LayoutMode2 = function (config) {
19
- var scene = this.scene;
20
-
21
- // Add Header
22
- var header = config.header;
23
- if (header) {
24
- this.add(header, GetAddHeaderConfig(config));
25
- }
26
-
27
- /*
28
- LC R
29
- FF R
30
- */
31
- var bodySizer0 = CreatExpandContainer(scene, 0);
32
- this.add(bodySizer0, GetAddContainerConfig(config));
33
-
34
- /*
35
- LC
36
-
37
- FF
38
- */
39
- var bodySizer1 = CreatExpandContainer(scene, 1);
40
- bodySizer0.add(bodySizer1, GetAddContainerConfig(config));
41
-
42
- /*
43
- L C
44
- */
45
- var bodySizer2 = CreatExpandContainer(scene, 0);
46
- bodySizer1.add(bodySizer2, GetAddContainerConfig(config));
47
-
48
- // Add Left-side
49
- var leftSide = config.leftSide;
50
- if (leftSide) {
51
- bodySizer2.add(leftSide, GetAddLeftSideConfig(config));
52
- }
53
-
54
- // Add content
55
- var content = config.content;
56
- if (content) {
57
- bodySizer2.add(content, GetAddContentConfig(config));
58
- }
59
-
60
- // Add Footer
61
- var footer = config.footer;
62
- if (footer) {
63
- bodySizer1.add(footer, GetAddFooterConfig(config));
64
- }
65
-
66
- // Add Right-side
67
- var rightSide = config.rightSide;
68
- if (rightSide) {
69
- bodySizer0.add(rightSide, GetAddRightSideConfig(config));
70
- }
71
-
72
- }
73
-
74
- export default LayoutMode2;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/label/Label.d.ts DELETED
@@ -1,100 +0,0 @@
1
- // import * as Phaser from 'phaser';
2
- import Sizer from '../sizer/Sizer';
3
-
4
- export default Label;
5
-
6
- declare namespace Label {
7
-
8
- type AlignTypes = 'left' | 'top' | 'right' | 'bottom' | 'center';
9
-
10
- interface IConfig extends Sizer.IConfig {
11
- space?: {
12
- left?: number, right?: number, top?: number, bottom?: number,
13
-
14
- icon?: number,
15
- text?: number,
16
- },
17
-
18
- background?: Phaser.GameObjects.GameObject,
19
-
20
- icon?: Phaser.GameObjects.GameObject,
21
- iconMask?: boolean,
22
- squareFitIcon?: boolean,
23
- iconSize?: number, iconWidth?: number, iconHeight?: number,
24
-
25
- text?: Phaser.GameObjects.GameObject,
26
- expandTextWidth?: boolean,
27
- expandTextHeight?: boolean,
28
-
29
- action?: Phaser.GameObjects.GameObject,
30
- squareFitAction?: boolean,
31
- actionMask?: boolean,
32
- actionSize?: number, actionWidth?: number, actionHeight?: number,
33
-
34
- align?: AlignTypes,
35
- }
36
-
37
- interface IResetDisplayContentConfig {
38
- text?: string,
39
-
40
- icon?: string | Phaser.Textures.Texture,
41
- iconFrame?: string | number,
42
- iconSize?: number,
43
-
44
- action?: string | Phaser.Textures.Texture,
45
- actionFrame?: string | number,
46
- actionSize?: number,
47
- }
48
- }
49
-
50
- declare class Label extends Sizer {
51
- constructor(
52
- scene: Phaser.Scene,
53
- config?: Label.IConfig
54
- );
55
-
56
- text: string;
57
- setText(text: string): this;
58
- appendText(
59
- text: string | number | string[],
60
- addCR?: boolean
61
- ): this;
62
-
63
- setTexture(
64
- key: string | Phaser.Textures.Texture,
65
- frame?: string | number
66
- ): this;
67
- readonly texture: Phaser.Textures.Texture | Phaser.Textures.CanvasTexture;
68
- readonly frame: Phaser.Textures.Frame;
69
-
70
- setIconTexture(
71
- key: string | Phaser.Textures.Texture,
72
- frame?: string | number
73
- ): this;
74
-
75
- setIconSize(
76
- width?: number,
77
- height?: number
78
- ): this;
79
- iconWidth: number;
80
- iconHeight: number;
81
-
82
- setActionTexture(
83
- key: string | Phaser.Textures.Texture,
84
- frame?: string | number
85
- ): this;
86
- readonly actionTexture: Phaser.Textures.Texture | Phaser.Textures.CanvasTexture;
87
- readonly actionFrame: Phaser.Textures.Frame;
88
-
89
- setActionSize(
90
- width?: number,
91
- height?: number
92
- ): this;
93
- actionWidth: number;
94
- actionHeight: number;
95
-
96
- resetDisplayContent(
97
- config?: string | Label.IResetDisplayContentConfig
98
- ): this;
99
-
100
- }
spaces/AiMimicry/sovits-models/app.py DELETED
@@ -1,110 +0,0 @@
1
- import os
2
- import io
3
- import gradio as gr
4
- import librosa
5
- import numpy as np
6
- import utils
7
- from inference.infer_tool import Svc
8
- import logging
9
- import soundfile
10
- import asyncio
11
- import argparse
12
- import gradio.processing_utils as gr_processing_utils
13
- logging.getLogger('numba').setLevel(logging.WARNING)
14
- logging.getLogger('markdown_it').setLevel(logging.WARNING)
15
- logging.getLogger('urllib3').setLevel(logging.WARNING)
16
- logging.getLogger('matplotlib').setLevel(logging.WARNING)
17
-
18
- limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces
19
-
20
- audio_postprocess_ori = gr.Audio.postprocess
21
-
22
- def audio_postprocess(self, y):
23
- data = audio_postprocess_ori(self, y)
24
- if data is None:
25
- return None
26
- return gr_processing_utils.encode_url_or_file_to_base64(data["name"])
27
-
28
-
29
- gr.Audio.postprocess = audio_postprocess
30
- def create_vc_fn(model, sid):
31
- def vc_fn(input_audio, vc_transform, auto_f0, fmp):
32
- if input_audio is None:
33
- return "You need to upload an audio", None
34
- sampling_rate, audio = input_audio
35
- duration = audio.shape[0] / sampling_rate
36
- if duration > 20 and limitation:
37
- return "Please upload an audio file that is less than 20 seconds. If you need to generate a longer audio file, please use Colab.", None
38
- audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
39
- if len(audio.shape) > 1:
40
- audio = librosa.to_mono(audio.transpose(1, 0))
41
- if sampling_rate != 16000:
42
- audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
43
- raw_path = io.BytesIO()
44
- soundfile.write(raw_path, audio, 16000, format="wav")
45
- raw_path.seek(0)
46
- out_audio, out_sr = model.infer(sid, vc_transform, raw_path,
47
- auto_predict_f0=auto_f0, F0_mean_pooling=fmp
48
- )
49
- return "Success", (44100, out_audio.cpu().numpy())
50
- return vc_fn
51
-
52
- if __name__ == '__main__':
53
- parser = argparse.ArgumentParser()
54
- parser.add_argument('--device', type=str, default='cpu')
55
- parser.add_argument('--api', action="store_true", default=False)
56
- parser.add_argument("--share", action="store_true", default=False, help="share gradio app")
57
- args = parser.parse_args()
58
- hubert_model = utils.get_hubert_model().to(args.device)
59
- models = []
60
- voices = []
61
- for f in os.listdir("models"):
62
- name = f
63
- model = Svc(fr"models/{f}/{f}.pth", f"models/{f}/config.json", device=args.device)
64
- cover = f"models/{f}/cover.jpg" if os.path.exists(f"models/{f}/cover.jpg") else None
65
- models.append((name, cover, create_vc_fn(model, name)))
66
- with gr.Blocks() as app:
67
- gr.Markdown(
68
- "# <center> Sovits Models\n"
69
- "## <center> The input audio should be clean and pure voice without background music.\n"
70
- "[![Original Repo](https://badgen.net/badge/icon/github?icon=github&label=Original%20Repo)](https://github.com/svc-develop-team/so-vits-svc)"
71
-
72
- )
73
-
74
- with gr.Tabs():
75
- for (name, cover, vc_fn) in models:
76
- with gr.TabItem(name):
77
- with gr.Row():
78
- gr.Markdown(
79
- '<div align="center">'
80
- + (f'<img style="width:auto;height:300px;" src="file/{cover}">' if cover else "")
81
- + '</div>'
82
- )
83
- with gr.Row():
84
- with gr.Column():
85
- vc_input = gr.Audio(label="Input audio" + (' (less than 20 seconds)' if limitation else ''))
86
- vc_transform = gr.Number(label="vc_transform", value=0)
87
- auto_f0 = gr.Checkbox(label="auto_f0", value=False)
88
- fmp = gr.Checkbox(label="fmp", value=False)
89
- vc_submit = gr.Button("Generate", variant="primary")
90
-
91
- with gr.Column():
92
- vc_output1 = gr.Textbox(label="Output Message")
93
- vc_output2 = gr.Audio(label="Output Audio")
94
- vc_submit.click(vc_fn, [vc_input, vc_transform, auto_f0, fmp], [vc_output1, vc_output2])
95
-
96
- """
97
- for category, link in others.items():
98
- with gr.TabItem(category):
99
- gr.Markdown(
100
- f'''
101
- <center>
102
- <h2>Click to Go</h2>
103
- <a href="{link}">
104
- <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-xl-dark.svg"
105
- </a>
106
- </center>
107
- '''
108
- )
109
- """
110
- app.queue(concurrency_count=1, api_open=args.api).launch(share=args.share)
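The `vc_fn` above converts whatever Gradio delivers (integer PCM at an arbitrary sample rate, possibly stereo) into mono float32 at 16 kHz before calling `Svc.infer`. Below is a minimal sketch of that preprocessing step, assuming librosa, numpy and soundfile are installed; the synthetic input at the end is only for illustration:

```python
import io

import librosa
import numpy as np
import soundfile


def prepare_audio(sampling_rate: int, audio: np.ndarray) -> io.BytesIO:
    """Convert raw Gradio audio (int PCM, any rate, mono or stereo) to a 16 kHz mono WAV buffer."""
    # Scale integer PCM into [-1, 1] float32.
    audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
    # Down-mix stereo to mono; librosa.to_mono wants channels first.
    if audio.ndim > 1:
        audio = librosa.to_mono(audio.transpose(1, 0))
    # Resample to the 16 kHz rate the model expects.
    if sampling_rate != 16000:
        audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
    buf = io.BytesIO()
    soundfile.write(buf, audio, 16000, format="wav")
    buf.seek(0)
    return buf


# Synthetic example: one second of 44.1 kHz stereo int16 noise.
raw = (np.random.randn(44100, 2) * 1000).astype(np.int16)
wav_buffer = prepare_audio(44100, raw)
```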
spaces/Amrrs/DragGan-Inversion/PTI/dnnlib/__init__.py DELETED
@@ -1,9 +0,0 @@
1
- # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2
- #
3
- # NVIDIA CORPORATION and its licensors retain all intellectual property
4
- # and proprietary rights in and to this software, related documentation
5
- # and any modifications thereto. Any use, reproduction, disclosure or
6
- # distribution of this software and related documentation without an express
7
- # license agreement from NVIDIA CORPORATION is strictly prohibited.
8
-
9
- from .util import EasyDict, make_cache_dir_path
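The module only re-exports `EasyDict` and `make_cache_dir_path` from `dnnlib.util`. `EasyDict` is a plain `dict` subclass with attribute-style access; the stand-in below only illustrates that behaviour and is not the actual `dnnlib.util` implementation:

```python
class EasyDict(dict):
    """dict subclass whose keys are also readable and writable as attributes."""

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError as exc:
            raise AttributeError(name) from exc

    def __setattr__(self, name, value):
        self[name] = value

    def __delattr__(self, name):
        del self[name]


cfg = EasyDict(lr=0.002, betas=(0.0, 0.99))
cfg.batch_size = 16                      # attribute writes land in the dict
assert cfg["batch_size"] == 16 and cfg.lr == 0.002
```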
spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/utils/__init__.py DELETED
File without changes
spaces/Amrrs/DragGan-Inversion/README.md DELETED
@@ -1,78 +0,0 @@
1
- ---
2
- title: DragGan - Drag Your GAN - Inversion
3
- emoji: 🔄🐉
4
- colorFrom: purple
5
- colorTo: pink
6
- sdk: gradio
7
- python_version: 3.8.17
8
- sdk_version: 3.36.1
9
- app_file: visualizer_drag_gradio_inversion.py
10
- pinned: false
11
- duplicated_from: DragGan/DragGan-Inversion
12
- ---
13
-
14
-
15
- # Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold
16
-
17
- https://arxiv.org/abs/2305.10973
18
- https://huggingface.co/DragGan/DragGan-Models
19
-
20
- <p align="center">
21
- <img src="DragGAN.gif", width="700">
22
- </p>
23
-
24
- **Figure:** *Drag your GAN.*
25
-
26
- > **Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold** <br>
27
- > Xingang Pan, Ayush Tewari, Thomas Leimkühler, Lingjie Liu, Abhimitra Meka, Christian Theobalt<br>
28
- > *SIGGRAPH 2023 Conference Proceedings*
29
-
30
- ## Requirements
31
-
32
- Please follow the requirements of [https://github.com/NVlabs/stylegan3](https://github.com/NVlabs/stylegan3).
33
-
34
- ## Download pre-trained StyleGAN2 weights
35
-
36
- To download pre-trained weights, simply run:
37
- ```sh
38
- sh scripts/download_model.sh
39
- ```
40
- If you want to try StyleGAN-Human and the Landscapes HQ (LHQ) dataset, please download weights from these links: [StyleGAN-Human](https://drive.google.com/file/d/1dlFEHbu-WzQWJl7nBBZYcTyo000H9hVm/view?usp=sharing), [LHQ](https://drive.google.com/file/d/16twEf0T9QINAEoMsWefoWiyhcTd-aiWc/view?usp=sharing), and put them under `./checkpoints`.
41
-
42
- Feel free to try other pretrained StyleGAN.
43
-
44
- ## Run DragGAN GUI
45
-
46
- To start the DragGAN GUI, simply run:
47
- ```sh
48
- sh scripts/gui.sh
49
- ```
50
-
51
- This GUI supports editing GAN-generated images. To edit a real image, you need to first perform GAN inversion using tools like [PTI](https://github.com/danielroich/PTI). Then load the new latent code and model weights to the GUI.
52
-
53
- You can run DragGAN Gradio demo as well:
54
- ```sh
55
- python visualizer_drag_gradio.py
56
- ```
57
-
58
- ## Acknowledgement
59
-
60
- This code is developed based on [StyleGAN3](https://github.com/NVlabs/stylegan3). Part of the code is borrowed from [StyleGAN-Human](https://github.com/stylegan-human/StyleGAN-Human).
61
-
62
- ## License
63
-
64
- The code related to the DragGAN algorithm is licensed under [CC-BY-NC](https://creativecommons.org/licenses/by-nc/4.0/).
65
- However, most of this project is available under separate license terms: all code used or modified from [StyleGAN3](https://github.com/NVlabs/stylegan3) is under the [Nvidia Source Code License](https://github.com/NVlabs/stylegan3/blob/main/LICENSE.txt).
66
-
67
- Any form of use and derivative of this code must preserve the watermarking functionality.
68
-
69
- ## BibTeX
70
-
71
- ```bibtex
72
- @inproceedings{pan2023draggan,
73
- title={Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold},
74
- author={Pan, Xingang and Tewari, Ayush, and Leimk{\"u}hler, Thomas and Liu, Lingjie and Meka, Abhimitra and Theobalt, Christian},
75
- booktitle = {ACM SIGGRAPH 2023 Conference Proceedings},
76
- year={2023}
77
- }
78
- ```
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/models/test_modeling_common_flax.py DELETED
@@ -1,66 +0,0 @@
1
- import inspect
2
-
3
- from diffusers.utils import is_flax_available
4
- from diffusers.utils.testing_utils import require_flax
5
-
6
-
7
- if is_flax_available():
8
- import jax
9
-
10
-
11
- @require_flax
12
- class FlaxModelTesterMixin:
13
- def test_output(self):
14
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
15
-
16
- model = self.model_class(**init_dict)
17
- variables = model.init(inputs_dict["prng_key"], inputs_dict["sample"])
18
- jax.lax.stop_gradient(variables)
19
-
20
- output = model.apply(variables, inputs_dict["sample"])
21
-
22
- if isinstance(output, dict):
23
- output = output.sample
24
-
25
- self.assertIsNotNone(output)
26
- expected_shape = inputs_dict["sample"].shape
27
- self.assertEqual(output.shape, expected_shape, "Input and output shapes do not match")
28
-
29
- def test_forward_with_norm_groups(self):
30
- init_dict, inputs_dict = self.prepare_init_args_and_inputs_for_common()
31
-
32
- init_dict["norm_num_groups"] = 16
33
- init_dict["block_out_channels"] = (16, 32)
34
-
35
- model = self.model_class(**init_dict)
36
- variables = model.init(inputs_dict["prng_key"], inputs_dict["sample"])
37
- jax.lax.stop_gradient(variables)
38
-
39
- output = model.apply(variables, inputs_dict["sample"])
40
-
41
- if isinstance(output, dict):
42
- output = output.sample
43
-
44
- self.assertIsNotNone(output)
45
- expected_shape = inputs_dict["sample"].shape
46
- self.assertEqual(output.shape, expected_shape, "Input and output shapes do not match")
47
-
48
- def test_deprecated_kwargs(self):
49
- has_kwarg_in_model_class = "kwargs" in inspect.signature(self.model_class.__init__).parameters
50
- has_deprecated_kwarg = len(self.model_class._deprecated_kwargs) > 0
51
-
52
- if has_kwarg_in_model_class and not has_deprecated_kwarg:
53
- raise ValueError(
54
- f"{self.model_class} has `**kwargs` in its __init__ method but has not defined any deprecated kwargs"
55
- " under the `_deprecated_kwargs` class attribute. Make sure to either remove `**kwargs` if there are"
56
- " no deprecated arguments or add the deprecated argument with `_deprecated_kwargs ="
57
- " [<deprecated_argument>]`"
58
- )
59
-
60
- if not has_kwarg_in_model_class and has_deprecated_kwarg:
61
- raise ValueError(
62
- f"{self.model_class} doesn't have `**kwargs` in its __init__ method but has defined deprecated kwargs"
63
- " under the `_deprecated_kwargs` class attribute. Make sure to either add the `**kwargs` argument to"
64
- f" {self.model_class}.__init__ if there are deprecated arguments or remove the deprecated argument"
65
- " from `_deprecated_kwargs = [<deprecated_argument>]`"
66
- )
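`FlaxModelTesterMixin` assumes the concrete test class provides `model_class` and `prepare_init_args_and_inputs_for_common()`, and that the model's output shape matches the `sample` input. The sketch below reproduces the same `init`/`apply` sequence that `test_output` runs, using a toy Flax module rather than an actual diffusers model:

```python
import flax.linen as nn
import jax
import jax.numpy as jnp


class TinyFlaxModel(nn.Module):
    """Toy stand-in for a diffusers Flax model that takes a `sample` array."""
    features: int = 8

    @nn.compact
    def __call__(self, sample):
        # Keep the output shape equal to the input shape, which is what the
        # mixin's assertions check.
        return nn.Dense(sample.shape[-1])(nn.Dense(self.features)(sample))


model = TinyFlaxModel(features=8)
sample = jnp.zeros((1, 16, 16, 4), dtype=jnp.float32)
variables = model.init(jax.random.PRNGKey(0), sample)   # as in test_output
jax.lax.stop_gradient(variables)
output = model.apply(variables, sample)
assert output.shape == sample.shape
```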
spaces/Andy1621/uniformer_image_detection/configs/groie/mask_rcnn_r50_fpn_groie_1x_coco.py DELETED
@@ -1,45 +0,0 @@
1
- _base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py'
2
- # model settings
3
- model = dict(
4
- roi_head=dict(
5
- bbox_roi_extractor=dict(
6
- type='GenericRoIExtractor',
7
- aggregation='sum',
8
- roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=2),
9
- out_channels=256,
10
- featmap_strides=[4, 8, 16, 32],
11
- pre_cfg=dict(
12
- type='ConvModule',
13
- in_channels=256,
14
- out_channels=256,
15
- kernel_size=5,
16
- padding=2,
17
- inplace=False,
18
- ),
19
- post_cfg=dict(
20
- type='GeneralizedAttention',
21
- in_channels=256,
22
- spatial_range=-1,
23
- num_heads=6,
24
- attention_type='0100',
25
- kv_stride=2)),
26
- mask_roi_extractor=dict(
27
- type='GenericRoIExtractor',
28
- roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=2),
29
- out_channels=256,
30
- featmap_strides=[4, 8, 16, 32],
31
- pre_cfg=dict(
32
- type='ConvModule',
33
- in_channels=256,
34
- out_channels=256,
35
- kernel_size=5,
36
- padding=2,
37
- inplace=False,
38
- ),
39
- post_cfg=dict(
40
- type='GeneralizedAttention',
41
- in_channels=256,
42
- spatial_range=-1,
43
- num_heads=6,
44
- attention_type='0100',
45
- kv_stride=2))))
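This file only overrides `roi_head` fields on top of the `_base_` Mask R-CNN config; MMDetection merges the two dicts when the config is loaded. A quick way to inspect the merged result, assuming an MMDetection 2.x checkout (with its mmcv dependency) where this path exists:

```python
from mmcv import Config

# Path is relative to the MMDetection repository root; adjust as needed.
cfg = Config.fromfile('configs/groie/mask_rcnn_r50_fpn_groie_1x_coco.py')

# Overridden fields come from this file...
print(cfg.model.roi_head.bbox_roi_extractor.type)                 # 'GenericRoIExtractor'
print(cfg.model.roi_head.bbox_roi_extractor.pre_cfg.kernel_size)  # 5
# ...while everything else is inherited from the _base_ config.
print(cfg.model.backbone.type)
```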
spaces/Andy1621/uniformer_image_detection/tools/dist_test.sh DELETED
@@ -1,10 +0,0 @@
1
- #!/usr/bin/env bash
2
-
3
- CONFIG=$1
4
- CHECKPOINT=$2
5
- GPUS=$3
6
- PORT=${PORT:-29500}
7
-
8
- PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \
9
- python -m torch.distributed.launch --nproc_per_node=$GPUS --master_port=$PORT \
10
- $(dirname "$0")/test.py $CONFIG $CHECKPOINT --launcher pytorch ${@:4}
spaces/Andy1621/uniformer_image_segmentation/configs/_base_/datasets/ade20k.py DELETED
@@ -1,54 +0,0 @@
1
- # dataset settings
2
- dataset_type = 'ADE20KDataset'
3
- data_root = 'data/ade/ADEChallengeData2016'
4
- img_norm_cfg = dict(
5
- mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
6
- crop_size = (512, 512)
7
- train_pipeline = [
8
- dict(type='LoadImageFromFile'),
9
- dict(type='LoadAnnotations', reduce_zero_label=True),
10
- dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)),
11
- dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
12
- dict(type='RandomFlip', prob=0.5),
13
- dict(type='PhotoMetricDistortion'),
14
- dict(type='Normalize', **img_norm_cfg),
15
- dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
16
- dict(type='DefaultFormatBundle'),
17
- dict(type='Collect', keys=['img', 'gt_semantic_seg']),
18
- ]
19
- test_pipeline = [
20
- dict(type='LoadImageFromFile'),
21
- dict(
22
- type='MultiScaleFlipAug',
23
- img_scale=(2048, 512),
24
- # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
25
- flip=False,
26
- transforms=[
27
- dict(type='Resize', keep_ratio=True),
28
- dict(type='RandomFlip'),
29
- dict(type='Normalize', **img_norm_cfg),
30
- dict(type='ImageToTensor', keys=['img']),
31
- dict(type='Collect', keys=['img']),
32
- ])
33
- ]
34
- data = dict(
35
- samples_per_gpu=4,
36
- workers_per_gpu=4,
37
- train=dict(
38
- type=dataset_type,
39
- data_root=data_root,
40
- img_dir='images/training',
41
- ann_dir='annotations/training',
42
- pipeline=train_pipeline),
43
- val=dict(
44
- type=dataset_type,
45
- data_root=data_root,
46
- img_dir='images/validation',
47
- ann_dir='annotations/validation',
48
- pipeline=test_pipeline),
49
- test=dict(
50
- type=dataset_type,
51
- data_root=data_root,
52
- img_dir='images/validation',
53
- ann_dir='annotations/validation',
54
- pipeline=test_pipeline))
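The `Normalize` step applies the ImageNet mean/std declared in `img_norm_cfg` channel-wise. A small numeric check of what that transform does to one RGB pixel, using plain NumPy with no mmseg dependency:

```python
import numpy as np

mean = np.array([123.675, 116.28, 103.53])
std = np.array([58.395, 57.12, 57.375])

pixel = np.array([128.0, 128.0, 128.0])    # an arbitrary RGB value
normalized = (pixel - mean) / std          # what dict(type='Normalize', **img_norm_cfg) computes
print(normalized)                          # approx. [0.074, 0.205, 0.426]
```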
spaces/Apex-X/ROOPOK/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: ROOPOK
3
- emoji: 📊
4
- colorFrom: indigo
5
- colorTo: gray
6
- sdk: gradio
7
- sdk_version: 3.42.0
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/ArtyomKhyan/Detection/utils/torch_utils.py DELETED
@@ -1,203 +0,0 @@
1
- import math
2
- import os
3
- import time
4
- from copy import deepcopy
5
-
6
- import torch
7
- import torch.backends.cudnn as cudnn
8
- import torch.nn as nn
9
- import torch.nn.functional as F
10
- import torchvision.models as models
11
- def init_seeds(seed=0):
12
- torch.manual_seed(seed)
13
-
14
- # Speed-reproducibility tradeoff https://pytorch.org/docs/stable/notes/randomness.html
15
- if seed == 0: # slower, more reproducible
16
- cudnn.deterministic = True
17
- cudnn.benchmark = False
18
- else: # faster, less reproducible
19
- cudnn.deterministic = False
20
- cudnn.benchmark = True
21
-
22
-
23
- def select_device(device='', apex=False, batch_size=None):
24
- # device = 'cpu' or '0' or '0,1,2,3'
25
- cpu_request = device.lower() == 'cpu'
26
- if device and not cpu_request: # if device requested other than 'cpu'
27
- os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable
28
- assert torch.cuda.is_available(), 'CUDA unavailable, invalid device %s requested' % device # check availability
29
-
30
- cuda = False if cpu_request else torch.cuda.is_available()
31
- if cuda:
32
- c = 1024 ** 2 # bytes to MB
33
- ng = torch.cuda.device_count()
34
- if ng > 1 and batch_size: # check that batch_size is compatible with device_count
35
- assert batch_size % ng == 0, 'batch-size %g not multiple of GPU count %g' % (batch_size, ng)
36
- x = [torch.cuda.get_device_properties(i) for i in range(ng)]
37
- s = 'Using CUDA ' + ('Apex ' if apex else '') # apex for mixed precision https://github.com/NVIDIA/apex
38
- for i in range(0, ng):
39
- if i == 1:
40
- s = ' ' * len(s)
41
- print("%sdevice%g _CudaDeviceProperties(name='%s', total_memory=%dMB)" %
42
- (s, i, x[i].name, x[i].total_memory / c))
43
- else:
44
- print('Using CPU')
45
-
46
- print('') # skip a line
47
- return torch.device('cuda:0' if cuda else 'cpu')
48
-
49
-
50
- def time_synchronized():
51
- torch.cuda.synchronize() if torch.cuda.is_available() else None
52
- return time.time()
53
-
54
-
55
- def is_parallel(model):
56
- # Returns True if the model is wrapped in DataParallel or DistributedDataParallel
57
- return type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel)
58
-
59
-
60
- def initialize_weights(model):
61
- for m in model.modules():
62
- t = type(m)
63
- if t is nn.Conv2d:
64
- pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
65
- elif t is nn.BatchNorm2d:
66
- m.eps = 1e-4
67
- m.momentum = 0.03
68
- elif t in [nn.LeakyReLU, nn.ReLU, nn.ReLU6]:
69
- m.inplace = True
70
-
71
-
72
- def find_modules(model, mclass=nn.Conv2d):
73
- # finds layer indices matching module class 'mclass'
74
- return [i for i, m in enumerate(model.module_list) if isinstance(m, mclass)]
75
-
76
-
77
- def fuse_conv_and_bn(conv, bn):
78
- # https://tehnokv.com/posts/fusing-batchnorm-and-conv/
79
- with torch.no_grad():
80
- # init
81
- fusedconv = torch.nn.Conv2d(conv.in_channels,
82
- conv.out_channels,
83
- kernel_size=conv.kernel_size,
84
- stride=conv.stride,
85
- padding=conv.padding,
86
- bias=True)
87
-
88
- # prepare filters
89
- w_conv = conv.weight.clone().view(conv.out_channels, -1)
90
- w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var)))
91
- fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.size()))
92
-
93
- # prepare spatial bias
94
- if conv.bias is not None:
95
- b_conv = conv.bias
96
- else:
97
- b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device)
98
- b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps))
99
- fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn)
100
-
101
- return fusedconv
102
-
103
-
104
- def model_info(model, verbose=False):
105
- # Plots a line-by-line description of a PyTorch model
106
- n_p = sum(x.numel() for x in model.parameters()) # number parameters
107
- n_g = sum(x.numel() for x in model.parameters() if x.requires_grad) # number gradients
108
- if verbose:
109
- print('%5s %40s %9s %12s %20s %10s %10s' % ('layer', 'name', 'gradient', 'parameters', 'shape', 'mu', 'sigma'))
110
- for i, (name, p) in enumerate(model.named_parameters()):
111
- name = name.replace('module_list.', '')
112
- print('%5g %40s %9s %12g %20s %10.3g %10.3g' %
113
- (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std()))
114
-
115
- try: # FLOPS
116
- from thop import profile
117
- flops = profile(deepcopy(model), inputs=(torch.zeros(1, 3, 64, 64),), verbose=False)[0] / 1E9 * 2
118
- fs = ', %.1f GFLOPS' % (flops * 100) # 640x640 FLOPS
119
- except:
120
- fs = ''
121
-
122
- print('Model Summary: %g layers, %g parameters, %g gradients%s' % (len(list(model.parameters())), n_p, n_g, fs))
123
-
124
-
125
- def load_classifier(name='resnet101', n=2):
126
- # Loads a pretrained model reshaped to n-class output
127
- model = models.__dict__[name](pretrained=True)
128
-
129
- # Display model properties
130
- input_size = [3, 224, 224]
131
- input_space = 'RGB'
132
- input_range = [0, 1]
133
- mean = [0.485, 0.456, 0.406]
134
- std = [0.229, 0.224, 0.225]
135
- for x in ['input_size', 'input_space', 'input_range', 'mean', 'std']:
136
- print(x, '=', eval(x))
137
-
138
- # Reshape output to n classes
139
- filters = model.fc.weight.shape[1]
140
- model.fc.bias = torch.nn.Parameter(torch.zeros(n), requires_grad=True)
141
- model.fc.weight = torch.nn.Parameter(torch.zeros(n, filters), requires_grad=True)
142
- model.fc.out_features = n
143
- return model
144
-
145
-
146
- def scale_img(img, ratio=1.0, same_shape=False): # img(16,3,256,416), r=ratio
147
- # scales img(bs,3,y,x) by ratio
148
- h, w = img.shape[2:]
149
- s = (int(h * ratio), int(w * ratio)) # new size
150
- img = F.interpolate(img, size=s, mode='bilinear', align_corners=False) # resize
151
- if not same_shape: # pad/crop img
152
- gs = 32 # (pixels) grid size
153
- h, w = [math.ceil(x * ratio / gs) * gs for x in (h, w)]
154
- return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447) # value = imagenet mean
155
-
156
-
157
- class ModelEMA:
158
- """ Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models
159
- Keep a moving average of everything in the model state_dict (parameters and buffers).
160
- This is intended to allow functionality like
161
- https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage
162
- A smoothed version of the weights is necessary for some training schemes to perform well.
163
- E.g. Google's hyper-params for training MNASNet, MobileNet-V3, EfficientNet, etc that use
164
- RMSprop with a short 2.4-3 epoch decay period and slow LR decay rate of .96-.99 requires EMA
165
- smoothing of weights to match results. Pay attention to the decay constant you are using
166
- relative to your update count per epoch.
167
- To keep EMA from using GPU resources, set device='cpu'. This will save a bit of memory but
168
- disable validation of the EMA weights. Validation will have to be done manually in a separate
169
- process, or after the training stops converging.
170
- This class is sensitive where it is initialized in the sequence of model init,
171
- GPU assignment and distributed training wrappers.
172
- I've tested with the sequence in my own train.py for torch.DataParallel, apex.DDP, and single-GPU.
173
- """
174
-
175
- def __init__(self, model, decay=0.9999, device=''):
176
- # Create EMA
177
- self.ema = deepcopy(model.module if is_parallel(model) else model) # FP32 EMA
178
- self.ema.eval()
179
- self.updates = 0 # number of EMA updates
180
- self.decay = lambda x: decay * (1 - math.exp(-x / 2000)) # decay exponential ramp (to help early epochs)
181
- self.device = device # perform ema on different device from model if set
182
- if device:
183
- self.ema.to(device)
184
- for p in self.ema.parameters():
185
- p.requires_grad_(False)
186
-
187
- def update(self, model):
188
- # Update EMA parameters
189
- with torch.no_grad():
190
- self.updates += 1
191
- d = self.decay(self.updates)
192
-
193
- msd = model.module.state_dict() if is_parallel(model) else model.state_dict() # model state_dict
194
- for k, v in self.ema.state_dict().items():
195
- if v.dtype.is_floating_point:
196
- v *= d
197
- v += (1. - d) * msd[k].detach()
198
-
199
- def update_attr(self, model):
200
- # Update EMA attributes
201
- for k, v in model.__dict__.items():
202
- if not k.startswith('_') and k not in ["process_group", "reducer"]:
203
- setattr(self.ema, k, v)
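`ModelEMA` above keeps a frozen, exponentially smoothed copy of the weights that is updated once per optimizer step. A minimal training-loop sketch of how it is typically wired up; the model, data and optimizer here are placeholders:

```python
import torch
import torch.nn as nn

# from utils.torch_utils import ModelEMA   # the class defined above

model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
ema = ModelEMA(model)                      # deep-copies the model and freezes the copy

for step in range(100):
    x = torch.randn(8, 10)
    loss = model(x).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema.update(model)                      # blend current weights into the EMA copy

# Evaluate or export the smoothed weights rather than the raw ones.
torch.save(ema.ema.state_dict(), "ema_weights.pt")
```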
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/terminal_theme.py DELETED
@@ -1,153 +0,0 @@
1
- from typing import List, Optional, Tuple
2
-
3
- from .color_triplet import ColorTriplet
4
- from .palette import Palette
5
-
6
- _ColorTuple = Tuple[int, int, int]
7
-
8
-
9
- class TerminalTheme:
10
- """A color theme used when exporting console content.
11
-
12
- Args:
13
- background (Tuple[int, int, int]): The background color.
14
- foreground (Tuple[int, int, int]): The foreground (text) color.
15
- normal (List[Tuple[int, int, int]]): A list of 8 normal intensity colors.
16
- bright (List[Tuple[int, int, int]], optional): A list of 8 bright colors, or None
17
- to repeat normal intensity. Defaults to None.
18
- """
19
-
20
- def __init__(
21
- self,
22
- background: _ColorTuple,
23
- foreground: _ColorTuple,
24
- normal: List[_ColorTuple],
25
- bright: Optional[List[_ColorTuple]] = None,
26
- ) -> None:
27
- self.background_color = ColorTriplet(*background)
28
- self.foreground_color = ColorTriplet(*foreground)
29
- self.ansi_colors = Palette(normal + (bright or normal))
30
-
31
-
32
- DEFAULT_TERMINAL_THEME = TerminalTheme(
33
- (255, 255, 255),
34
- (0, 0, 0),
35
- [
36
- (0, 0, 0),
37
- (128, 0, 0),
38
- (0, 128, 0),
39
- (128, 128, 0),
40
- (0, 0, 128),
41
- (128, 0, 128),
42
- (0, 128, 128),
43
- (192, 192, 192),
44
- ],
45
- [
46
- (128, 128, 128),
47
- (255, 0, 0),
48
- (0, 255, 0),
49
- (255, 255, 0),
50
- (0, 0, 255),
51
- (255, 0, 255),
52
- (0, 255, 255),
53
- (255, 255, 255),
54
- ],
55
- )
56
-
57
- MONOKAI = TerminalTheme(
58
- (12, 12, 12),
59
- (217, 217, 217),
60
- [
61
- (26, 26, 26),
62
- (244, 0, 95),
63
- (152, 224, 36),
64
- (253, 151, 31),
65
- (157, 101, 255),
66
- (244, 0, 95),
67
- (88, 209, 235),
68
- (196, 197, 181),
69
- (98, 94, 76),
70
- ],
71
- [
72
- (244, 0, 95),
73
- (152, 224, 36),
74
- (224, 213, 97),
75
- (157, 101, 255),
76
- (244, 0, 95),
77
- (88, 209, 235),
78
- (246, 246, 239),
79
- ],
80
- )
81
- DIMMED_MONOKAI = TerminalTheme(
82
- (25, 25, 25),
83
- (185, 188, 186),
84
- [
85
- (58, 61, 67),
86
- (190, 63, 72),
87
- (135, 154, 59),
88
- (197, 166, 53),
89
- (79, 118, 161),
90
- (133, 92, 141),
91
- (87, 143, 164),
92
- (185, 188, 186),
93
- (136, 137, 135),
94
- ],
95
- [
96
- (251, 0, 31),
97
- (15, 114, 47),
98
- (196, 112, 51),
99
- (24, 109, 227),
100
- (251, 0, 103),
101
- (46, 112, 109),
102
- (253, 255, 185),
103
- ],
104
- )
105
- NIGHT_OWLISH = TerminalTheme(
106
- (255, 255, 255),
107
- (64, 63, 83),
108
- [
109
- (1, 22, 39),
110
- (211, 66, 62),
111
- (42, 162, 152),
112
- (218, 170, 1),
113
- (72, 118, 214),
114
- (64, 63, 83),
115
- (8, 145, 106),
116
- (122, 129, 129),
117
- (122, 129, 129),
118
- ],
119
- [
120
- (247, 110, 110),
121
- (73, 208, 197),
122
- (218, 194, 107),
123
- (92, 167, 228),
124
- (105, 112, 152),
125
- (0, 201, 144),
126
- (152, 159, 177),
127
- ],
128
- )
129
-
130
- SVG_EXPORT_THEME = TerminalTheme(
131
- (41, 41, 41),
132
- (197, 200, 198),
133
- [
134
- (75, 78, 85),
135
- (204, 85, 90),
136
- (152, 168, 75),
137
- (208, 179, 68),
138
- (96, 138, 177),
139
- (152, 114, 159),
140
- (104, 160, 179),
141
- (197, 200, 198),
142
- (154, 155, 153),
143
- ],
144
- [
145
- (255, 38, 39),
146
- (0, 130, 61),
147
- (208, 132, 66),
148
- (25, 132, 233),
149
- (255, 44, 122),
150
- (57, 130, 128),
151
- (253, 253, 197),
152
- ],
153
- )
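These `TerminalTheme` palettes are consumed when Rich exports recorded console output. A short usage sketch, assuming the standard `rich` API where `Console.save_html` accepts a `theme` keyword:

```python
from rich.console import Console
from rich.terminal_theme import MONOKAI

console = Console(record=True)             # record output so it can be exported
console.print("[bold red]hello[/] [green]world[/]")

# Export the recorded output using one of the palettes defined above.
console.save_html("demo.html", theme=MONOKAI)
```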
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/config/test_instantiate_config.py DELETED
@@ -1,100 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates.
2
-
3
- import os
4
- import tempfile
5
- import unittest
6
- import yaml
7
- from omegaconf import OmegaConf
8
- from omegaconf import __version__ as oc_version
9
- from dataclasses import dataclass
10
-
11
- from detectron2.config import instantiate, LazyCall as L
12
- from detectron2.layers import ShapeSpec
13
-
14
- OC_VERSION = tuple(int(x) for x in oc_version.split(".")[:2])
15
-
16
-
17
- class TestClass:
18
- def __init__(self, int_arg, list_arg=None, dict_arg=None, extra_arg=None):
19
- self.int_arg = int_arg
20
- self.list_arg = list_arg
21
- self.dict_arg = dict_arg
22
- self.extra_arg = extra_arg
23
-
24
- def __call__(self, call_arg):
25
- return call_arg + self.int_arg
26
-
27
-
28
- @dataclass
29
- class TestDataClass:
30
- x: int
31
- y: str
32
-
33
-
34
- @unittest.skipIf(OC_VERSION < (2, 1), "omegaconf version too old")
35
- class TestConstruction(unittest.TestCase):
36
- def test_basic_construct(self):
37
- objconf = L(TestClass)(
38
- int_arg=3,
39
- list_arg=[10],
40
- dict_arg={},
41
- extra_arg=L(TestClass)(int_arg=4, list_arg="${..list_arg}"),
42
- )
43
-
44
- obj = instantiate(objconf)
45
- self.assertIsInstance(obj, TestClass)
46
- self.assertEqual(obj.int_arg, 3)
47
- self.assertEqual(obj.extra_arg.int_arg, 4)
48
- self.assertEqual(obj.extra_arg.list_arg, obj.list_arg)
49
-
50
- objconf.extra_arg.list_arg = [5]
51
- obj = instantiate(objconf)
52
- self.assertIsInstance(obj, TestClass)
53
- self.assertEqual(obj.extra_arg.list_arg, [5])
54
-
55
- def test_instantiate_other_obj(self):
56
- # do nothing for other obj
57
- self.assertEqual(instantiate(5), 5)
58
- x = [3, 4, 5]
59
- self.assertEqual(instantiate(x), x)
60
- x = TestClass(1)
61
- self.assertIs(instantiate(x), x)
62
- x = {"xx": "yy"}
63
- self.assertIs(instantiate(x), x)
64
-
65
- def test_instantiate_lazy_target(self):
66
- # _target_ is result of instantiate
67
- objconf = L(L(len)(int_arg=3))(call_arg=4)
68
- objconf._target_._target_ = TestClass
69
- self.assertEqual(instantiate(objconf), 7)
70
-
71
- def test_instantiate_lst(self):
72
- lst = [1, 2, L(TestClass)(int_arg=1)]
73
- x = L(TestClass)(int_arg=lst) # list as an argument should be recursively instantiated
74
- x = instantiate(x).int_arg
75
- self.assertEqual(x[:2], [1, 2])
76
- self.assertIsInstance(x[2], TestClass)
77
- self.assertEqual(x[2].int_arg, 1)
78
-
79
- def test_instantiate_namedtuple(self):
80
- x = L(TestClass)(int_arg=ShapeSpec(channels=1, width=3))
81
- # test serialization
82
- with tempfile.TemporaryDirectory() as d:
83
- fname = os.path.join(d, "d2_test.yaml")
84
- OmegaConf.save(x, fname)
85
- with open(fname) as f:
86
- x = yaml.unsafe_load(f)
87
-
88
- x = instantiate(x)
89
- self.assertIsInstance(x.int_arg, ShapeSpec)
90
- self.assertEqual(x.int_arg.channels, 1)
91
-
92
- def test_bad_lazycall(self):
93
- with self.assertRaises(Exception):
94
- L(3)
95
-
96
- def test_instantiate_dataclass(self):
97
- a = L(TestDataClass)(x=1, y="s")
98
- a = instantiate(a)
99
- self.assertEqual(a.x, 1)
100
- self.assertEqual(a.y, "s")
spaces/Benson/text-generation/Examples/Arena Breakout Apk Actualizacin.md DELETED
@@ -1,104 +0,0 @@
1
-
2
- <h1>Arena Breakout APK Actualización: ¿Qué hay de nuevo en la próxima generación inmersiva FPS táctico juego</h1>
3
- <h2>Introducción</h2>
4
- <p>Si estás buscando un nuevo y emocionante juego móvil que combine disparos, saqueos y fugas, entonces deberías echar un vistazo a Arena Breakout. Arena Breakout es un juego de FPS táctico inmersivo de próxima generación que empuja los límites de la simulación de guerra en dispositivos móviles. Puedes elegir entre diferentes modos de combate, como el frontón, el sigilo o el desvío. También puedes disparar, saquear y escaparte para ganar, o perderlo todo si no logras escapar. Arena Breakout es un juego que desafía tus habilidades, estrategia y suerte. </p>
5
- <h2>arena breakout apk actualización</h2><br /><p><b><b>Download</b> &#9889; <a href="https://bltlly.com/2v6LkB">https://bltlly.com/2v6LkB</a></b></p><br /><br />
6
- <p>Algunas de las características principales de Arena Breakout son:</p>
7
- <ul>
8
- <li>Gráficos de próxima generación y efectos de sonido que te sumergen en imágenes de calidad de consola y audio en móviles</li>
9
- <li>Gunplay realista que requiere que parchear heridas, recargar revistas, y personalizar sus armas</li>
10
- <li> Último sistema de armero que le permite mezclar y combinar más de 700 piezas de armas en más de 10 ranuras de modificación</li>
11
- <li> Ruptura para el modo de ganancia que le permite escapar del área de combate con vida para tener la oportunidad de hacerse rico</li>
12
- <li>Dispara y el modo de botín que te permite disparar a tus enemigos y reclamar todo el botín para ti mismo</li>
13
- <li> Varios mapas, modos, caracteres y armas para elegir</li>
14
- </ul>
15
- <p>Para descargar e instalar la última versión de Arena Breakout APK en su dispositivo Android, puede seguir estos pasos:</p>
16
- <ol>
17
- <li>Ir a [text]( 1 ) o [text]( 2 ) dependiendo de su región</li>
18
- <li>Haga clic en Descargar APK o descargar XAPK botón</li>
19
- <li>Permitir fuentes desconocidas en la configuración del dispositivo si se le solicita</li>
20
- <li>Instalar el archivo APK o XAPK en su dispositivo</li>
21
- <li>Iniciar el juego y disfrutar</li>
22
- </ol>
23
- <h2>Nuevos personajes y tutoriales actualizados</h2>
24
- <p>La última versión de Arena Breakout APK incluye nuevos personajes y tutoriales actualizados que cubren 15 idiomas. Puedes elegir entre diferentes personajes con habilidades y habilidades únicas, como:</p>
25
- <ul>
26
-
27
- <li>Hunter: Un asalto versátil que puede desplegar drones y trampas</li>
28
- <li>Víbora: Un soporte mortal que puede envenenar a los enemigos y curar a los aliados</li>
29
- <li>Blaze: Un demoledor de fuego que puede incendiar cosas y explotarlas</li>
30
- </ul>
31
- <p>También puedes acceder a los tutoriales actualizados que te enseñarán cómo jugar el juego en diferentes idiomas, como inglés, español, francés, alemán, ruso, chino, japonés, coreano, árabe, portugués, turco, hindi, indonesio, tailandés y vietnamita. Puedes aprender a usar diferentes armas, objetos, habilidades, tácticas y estrategias en varios escenarios. </p>
32
- <p>Si <p>Si te pre-registras para el juego, también puedes desbloquear toneladas de recompensas, como:</p>
33
- <ul>
34
- <li>Diseños y trajes exclusivos de personajes</li>
35
- <li>Armas especiales y accesorios</li>
36
- <li>Objetos y recursos raros</li>
37
- <li>Moneda del juego y cupones</li>
38
- </ul>
39
- <p>Para pre-registrarse para el juego, puede visitar [text] o [text] dependiendo de su región. También puedes seguir las cuentas oficiales de redes sociales del juego para obtener las últimas noticias y actualizaciones. </p>
40
- <p></p>
41
- <h2>Nuevos eventos de hitos y referencias</h2>
42
- <p>La última versión de Arena Breakout APK también presenta un nuevo hito y eventos de referencia que le recompensará por jugar el juego e invitar a sus amigos. Puedes participar en estos eventos completando varias tareas y desafíos, como:</p>
43
- <ul>
44
- <li>Alcanzar ciertos niveles y rangos</li>
45
- <li>Ganar un número de partidos y desgloses</li>
46
- <li>Matar a un número de enemigos y saquear sus artículos</li>
47
- <li>Personalización de armas y personajes</li>
48
- <li>Compartir sus vídeos de juego y capturas de pantalla</li>
49
- </ul>
50
- <p>Algunas de las recompensas únicas para estos eventos son:</p>
51
- <ul>
52
- <li>Pieles de caracteres y trajes de edición limitada</li>
53
- <li>Armas y accesorios legendarios</li>
54
- <li>Objetos y recursos épicos</li>
55
- <li>Moneda VIP y cupones</li>
56
- </ul>
57
-
58
- <h2>Sistema avanzado de armero y Gunplay realista</h2>
59
- <p>Una de las características más impresionantes de Arena Breakout es el avanzado sistema de armería que le permite personalizar su arma de fuego de elección con más de 700 piezas de armas. Puedes usar las 10 ranuras de modificación para cambiar la apariencia, el rendimiento y la funcionalidad de tu arma. También puedes probar tu arma en diferentes modos y entornos antes de usarla en combate. </p>
60
- <p>Algunas de las piezas del arma que puedes usar son:</p>
61
- <ul>
62
- <li>Cañones: Afectan la precisión, el alcance, el retroceso y la velocidad de boca de su arma</li>
63
- <li>Óptica: Afecta el zoom, el aumento, la retícula y el campo de visión de su arma</li>
64
- <li>Stocks: Afecta la estabilidad, movilidad, manejo y tiempo de puntería de su arma</li>
65
- <li>Apretones: Afectan el agarre, la ergonomía, el control y la comodidad de su arma</li>
66
- <li>Bozales: Afectan el sonido, flash, explosión y firma de su arma</li>
67
- <li>Revistas: Afectan la capacidad, la velocidad de recarga, el peso y el calibre de su arma</li>
68
- <li>Disparadores: Afectan el gatillo, la velocidad de disparo, el modo de ráfaga y la sensibilidad de su arma</li>
69
- <li>Miras láser: afectan el color del láser, la intensidad, la visibilidad y la precisión de su arma</li>
70
- <li>Accesorios debajo del cañón: Afectan la funcionalidad debajo del cañón, como lanzadores de granadas, escopetas, bayonetas, etc.</li>
71
- <li>Pieles: Afectan a la apariencia cosmética de su arma, tales como colores, patrones, pegatinas, etc.</li>
72
- </ul>
73
- <p>Arena Breakout también cuenta con un juego de armas realista que requiere que parchear heridas, recargar revistas, y personalizar sus armas. Puedes experimentar efectos realistas de luz, sombra, sonido y física en el juego. También puedes sentir el retroceso, el peso y el equilibrio de tu arma. Puedes usar diferentes tácticas y estrategias dependiendo del tipo de arma y la situación. </p>
74
- <h2>Conclusión</h2>
75
-
76
- <p>Aquí hay algunas preguntas y respuestas comunes sobre Arena Breakout:</p>
77
- <tabla>
78
- <tr>
79
- <th>Pregunta</th>
80
- <th>Respuesta</th>
81
- </tr>
82
- <tr>
83
- <td>¿Cuáles son los requisitos mínimos para jugar Arena Breakout en Android? </td>
84
- <td>Necesitas un dispositivo Android con al menos 4 GB de RAM, 64 GB de almacenamiento y Android 8.0 o superior. </td>
85
- </tr>
86
- <tr>
87
- <td>¿Arena Breakout es libre de jugar? </td>
88
- <td>Sí, Arena Breakout es gratis para jugar. Sin embargo, puedes comprar monedas y objetos con dinero real. </td>
89
- </tr>
90
- <tr>
91
- <td>¿Puedo jugar sin conexión Arena Breakout? </td>
92
- <td>No, Arena Breakout requiere una conexión a Internet para jugar. </td>
93
- </tr>
94
- <tr>
95
- <td>¿Puedo jugar Arena Breakout con mis amigos? </td>
96
- <td>Sí, puedes jugar a Arena Breakout con tus amigos. Puedes crear o unirte a un equipo de hasta cuatro jugadores y cooperar o competir con otros escuadrones. </td>
97
- </tr>
98
- <tr>
99
- <td>¿Cómo puedo contactar a los desarrolladores de Arena Breakout? </td>
100
- <td>Puede ponerse en contacto con los desarrolladores de Arena Breakout enviando un correo electrónico a [text] o rellenando el formulario de comentarios en la configuración del juego. </td>
101
- </tr>
102
- </tabla></p> 64aa2da5cf<br />
103
- <br />
104
- <br />
spaces/Benson/text-generation/Examples/Beta 0.44 Chicos Tropiezo Apk.md DELETED
@@ -1,58 +0,0 @@
1
- <br />
2
- <h1>Beta 0.44 Tropiezo chicos APK: Cómo descargar y jugar el último juego de nocaut</h1>
3
- <p>Stumble Guys es un juego masivo de eliminación de fiesta multijugador que ha tomado el mundo de los juegos móviles por asalto. Inspirado por los populares Fall Guys, Stumble Guys te permite competir con hasta 32 jugadores en línea en una serie de divertidas y caóticas carreras de obstáculos. Tienes que correr, saltar, correr, deslizarte y esquivar tu camino a la línea de meta mientras evitas ser eliminado por tus rivales o el medio ambiente. </p>
4
- <p>Si eres un fan de Stumble Guys, podrías estar interesado en probar la última versión beta del juego, que es la beta 0.44. Esta versión introduce algunas nuevas características y mejoras que hacen que el juego sea aún más divertido y emocionante. Algunas de estas características son:</p>
5
- <h2>beta 0.44 chicos tropiezo apk</h2><br /><p><b><b>Download</b> &#9745; <a href="https://bltlly.com/2v6Mmf">https://bltlly.com/2v6Mmf</a></b></p><br /><br />
6
- <ul>
7
- <li>Un nuevo mapa llamado Camino del Campeón, que es un tributo al legendario boxeador Muhammad Ali.</li>
8
- <li>Un nuevo modo de juego llamado Super Punch, que te permite golpear a otros jugadores con un guante potente. </li>
9
- <li>Un nuevo atuendo llamado Flameo, que le da un aspecto fogoso y un emote especial. </li>
10
- <li>Una nueva función llamada Modo Fiesta, que te permite crear o unirte a una fiesta con tus amigos y jugar juntos. </li>
11
- <li>Varias correcciones de errores y mejoras de rendimiento. </li>
12
- </ul>
13
- <p>En este artículo, le mostraremos cómo descargar e instalar beta 0.44 stumble chicos apk en su dispositivo Android, y cómo jugar como un profesional. ¡Vamos a empezar! </p>
14
- <h2>Cómo descargar e instalar Beta 0.44 Stumble Guys APK</h2>
15
- <p>Para descargar e instalar beta 0.44 chicos de tocón apk en su dispositivo Android, es necesario seguir estos pasos:</p>
16
- <ol>
17
- <li>Ir a una fuente confiable que ofrece el archivo apk para beta 0.44 stumble chicos. Por ejemplo, puedes usar APKCombo, que es un sitio web que proporciona descargas seguras y rápidas para varias aplicaciones y juegos de Android. </li>
18
- <li>Buscar "0.44 stumble guys" en APKCombo y seleccionar el resultado que coincida con las especificaciones del dispositivo. </li>
19
-
20
- <li>Una vez finalizada la descarga, busque el archivo apk en el administrador de archivos de su dispositivo y toque en él para instalarlo. </li>
21
- <li>Si ves un mensaje de advertencia que dice "Instalar bloqueado", ve a la configuración de tu dispositivo y habilita la opción "Fuentes desconocidas" o "Permitir desde esta fuente". </li>
22
- <li>Siga las instrucciones en pantalla para completar el proceso de instalación. </li>
23
- </ol>
24
- <p>Felicidades! Usted ha instalado con éxito beta 0.44 stumble chicos apk en su dispositivo Android. Ahora puedes lanzar el juego y disfrutar de sus nuevas características. </p>
25
- <h2>Cómo jugar Beta 0.44 Stumble chicos APK</h2>
26
- <p>Para jugar beta 0.44 stumble chicos apk, usted necesita saber el juego básico y los controles de Stumble Guys, así como algunos consejos y trucos para ganar sus partidos y divertirse. </p>
27
- <h3>La jugabilidad básica y los controles de Stumble Guys</h3>
28
- <p>El modo de juego básico de Stumble Guys es simple: tienes que sobrevivir a través de diferentes niveles hasta llegar a la ronda final, donde solo un jugador será coronado como ganador. Cada nivel tiene diferentes obstáculos y desafíos que tienes que superar evitando ser eliminado por otros jugadores o caer fuera del mapa. </p>
29
- <p>Los controles de Stumble Guys también son fáciles de aprender: tienes un joystick en el lado izquierdo de la pantalla para mover a tu personaje, y dos botones en el lado derecho para saltar y bucear. También puede deslizar sobre la pantalla para cambiar el ángulo de la cámara y ver su entorno. </p>
30
- <p>Puedes personalizar la apariencia de tu personaje eligiendo diferentes trajes, sombreros, pieles y emotes. Puedes desbloquear más objetos jugando y ganando monedas, o comprándolos con dinero real. </p>
31
- <p></p>
32
- <h3>Los consejos y trucos para ganar sus partidos y divertirse</h3>
33
- <p>Jugar beta 0.44 chicos de tropiezo apk puede ser muy divertido, pero también desafiante. Aquí hay algunos consejos y trucos para ayudarle a ganar sus partidos y pasar un buen rato:</p>
34
- <ul>
35
-
36
- <li>Usa los botones de salto y buceo sabiamente. Saltar puede ayudarte a despejar huecos y obstáculos, pero también puede hacerte perder el equilibrio y caer. Bucear puede ayudarte a deslizarte bajo las barreras y llegar a la meta más rápido, pero también puede hacerte vulnerable a los ataques de otros jugadores. </li>
37
- <li>No tengas miedo de golpear a otros jugadores con el modo Super Punch. Esto puede ayudarte a eliminar a tus rivales y despejar tu camino. Pero ten cuidado de no golpearte a ti o a tus compañeros por error. </li>
38
- <li>Juega con tus amigos en el modo Fiesta. Esto puede hacer que el juego sea más divertido y cooperativo, ya que puedes comunicarte y crear estrategias con tus amigos. También puedes competir con otras partes y ver quién es el mejor equipo. </li>
39
- <li>Diviértete y no te tomes el juego demasiado en serio. Stumble Guys está diseñado para ser un juego casual y divertido, así que no te frustres o te enojes si pierdes o eres eliminado. Solo disfrutar de la experiencia y reír en los momentos divertidos. </li>
40
- </ul>
41
- <h2>Conclusión</h2>
42
- <p>Beta 0.44 stumble guys apk es una actualización fantástica para Stumble Guys, el último juego knockout para dispositivos móviles. Ofrece nuevas características y mejoras que hacen que el juego sea más divertido y emocionante, como un nuevo mapa, un nuevo modo de juego, un nuevo atuendo, una nueva característica y varias correcciones de errores y mejoras de rendimiento. </p>
43
- <p>Si desea probar beta 0.44 chicos de tocón apk, se puede descargar e instalar en su dispositivo Android siguiendo los pasos que le hemos mostrado en este artículo. También puedes aprender a jugar como un profesional siguiendo nuestros consejos y trucos. </p>
44
- <p>Entonces, ¿qué estás esperando? Descargar beta 0.44 chicos de tropiezo apk ahora y unirse a la fiesta de carreras de obstáculos hilarante y caótico. Usted tendrá una explosión! </p>
45
- <h2>Preguntas frecuentes</h2>
46
- <p>Aquí hay algunas preguntas y respuestas comunes sobre beta 0.44 stumble guys apk:</p>
47
- <h3>Q: ¿Es beta 0.44 stumble chicos apk seguro para descargar e instalar? </h3>
48
-
49
- <h3>Q: ¿Tengo que desinstalar la versión anterior de Stumble Guys antes de instalar beta 0.44 stumble guys apk? </h3>
50
- <p>A: No, no es necesario desinstalar la versión anterior de Stumble Guys antes de instalar beta 0.44 stumble guys apk. La nueva versión sobrescribirá la anterior sin ningún problema. </p>
51
- <h3>Q: ¿Puedo jugar beta 0.44 chicos de tropiezo apk con jugadores que tienen la versión regular de Stumble Guys? </h3>
52
- <p>A: Sí, se puede jugar beta 0.44 stumble guys apk con jugadores que tienen la versión regular de Stumble Guys, ya que son compatibles entre sí. </p>
53
- <h3>Q: ¿Cómo puedo dar retroalimentación o reportar errores para beta 0.44 stumble guys apk? </h3>
54
- <p>A: Usted puede dar retroalimentación o reportar errores para beta 0.44 stumble guys apk contactando a los desarrolladores de Stumble Guys a través de sus canales de medios sociales oficiales o dirección de correo electrónico. Ellos apreciarán su entrada y tratar de solucionar cualquier problema tan pronto como sea posible. </p>
55
- <h3>Q: ¿Dónde puedo encontrar más información sobre Stumble Guys? </h3>
56
- <p>A: Puedes encontrar más información sobre Stumble Guys visitando su sitio web oficial, donde puedes aprender más sobre el juego, sus características, sus actualizaciones, su comunidad y más. </p> 64aa2da5cf<br />
57
- <br />
58
- <br />
spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/hooks.py DELETED
@@ -1,661 +0,0 @@
1
- # Copyright 2012-2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2
- #
3
- # Licensed under the Apache License, Version 2.0 (the "License"). You
4
- # may not use this file except in compliance with the License. A copy of
5
- # the License is located at
6
- #
7
- # http://aws.amazon.com/apache2.0/
8
- #
9
- # or in the "license" file accompanying this file. This file is
10
- # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11
- # ANY KIND, either express or implied. See the License for the specific
12
- # language governing permissions and limitations under the License.
13
- import copy
14
- import logging
15
- from collections import deque, namedtuple
16
-
17
- from botocore.compat import accepts_kwargs
18
- from botocore.utils import EVENT_ALIASES
19
-
20
- logger = logging.getLogger(__name__)
21
-
22
-
23
- _NodeList = namedtuple('NodeList', ['first', 'middle', 'last'])
24
- _FIRST = 0
25
- _MIDDLE = 1
26
- _LAST = 2
27
-
28
-
29
- class NodeList(_NodeList):
30
- def __copy__(self):
31
- first_copy = copy.copy(self.first)
32
- middle_copy = copy.copy(self.middle)
33
- last_copy = copy.copy(self.last)
34
- copied = NodeList(first_copy, middle_copy, last_copy)
35
- return copied
36
-
37
-
38
- def first_non_none_response(responses, default=None):
39
- """Find first non None response in a list of tuples.
40
-
41
- This function can be used to find the first non None response from
42
- handlers connected to an event. This is useful if you are interested
43
- in the returned responses from event handlers. Example usage::
44
-
45
- print(first_non_none_response([(func1, None), (func2, 'foo'),
46
- (func3, 'bar')]))
47
- # This will print 'foo'
48
-
49
- :type responses: list of tuples
50
- :param responses: The responses from the ``EventHooks.emit`` method.
51
- This is a list of tuples, and each tuple is
52
- (handler, handler_response).
53
-
54
- :param default: If no non-None responses are found, then this default
55
- value will be returned.
56
-
57
- :return: The first non-None response in the list of tuples.
58
-
59
- """
60
- for response in responses:
61
- if response[1] is not None:
62
- return response[1]
63
- return default
64
-
65
-
66
- class BaseEventHooks:
67
- def emit(self, event_name, **kwargs):
68
- """Call all handlers subscribed to an event.
69
-
70
- :type event_name: str
71
- :param event_name: The name of the event to emit.
72
-
73
- :type **kwargs: dict
74
- :param **kwargs: Arbitrary kwargs to pass through to the
75
- subscribed handlers. The ``event_name`` will be injected
76
- into the kwargs so it's not necessary to add this to **kwargs.
77
-
78
- :rtype: list of tuples
79
- :return: A list of ``(handler_func, handler_func_return_value)``
80
-
81
- """
82
- return []
83
-
84
- def register(
85
- self, event_name, handler, unique_id=None, unique_id_uses_count=False
86
- ):
87
- """Register an event handler for a given event.
88
-
89
- If a ``unique_id`` is given, the handler will not be registered
90
- if a handler with the ``unique_id`` has already been registered.
91
-
92
- Handlers are called in the order they have been registered.
93
- Note handlers can also be registered with ``register_first()``
94
- and ``register_last()``. All handlers registered with
95
- ``register_first()`` are called before handlers registered
96
- with ``register()`` which are called before handlers registered
97
- with ``register_last()``.
98
-
99
- """
100
- self._verify_and_register(
101
- event_name,
102
- handler,
103
- unique_id,
104
- register_method=self._register,
105
- unique_id_uses_count=unique_id_uses_count,
106
- )
107
-
108
- def register_first(
109
- self, event_name, handler, unique_id=None, unique_id_uses_count=False
110
- ):
111
- """Register an event handler to be called first for an event.
112
-
113
- All event handlers registered with ``register_first()`` will
114
- be called before handlers registered with ``register()`` and
115
- ``register_last()``.
116
-
117
- """
118
- self._verify_and_register(
119
- event_name,
120
- handler,
121
- unique_id,
122
- register_method=self._register_first,
123
- unique_id_uses_count=unique_id_uses_count,
124
- )
125
-
126
- def register_last(
127
- self, event_name, handler, unique_id=None, unique_id_uses_count=False
128
- ):
129
- """Register an event handler to be called last for an event.
130
-
131
- All event handlers registered with ``register_last()`` will be called
132
- after handlers registered with ``register_first()`` and ``register()``.
133
-
134
- """
135
- self._verify_and_register(
136
- event_name,
137
- handler,
138
- unique_id,
139
- register_method=self._register_last,
140
- unique_id_uses_count=unique_id_uses_count,
141
- )
142
-
143
- def _verify_and_register(
144
- self,
145
- event_name,
146
- handler,
147
- unique_id,
148
- register_method,
149
- unique_id_uses_count,
150
- ):
151
- self._verify_is_callable(handler)
152
- self._verify_accept_kwargs(handler)
153
- register_method(event_name, handler, unique_id, unique_id_uses_count)
154
-
155
- def unregister(
156
- self,
157
- event_name,
158
- handler=None,
159
- unique_id=None,
160
- unique_id_uses_count=False,
161
- ):
162
- """Unregister an event handler for a given event.
163
-
164
- If no ``unique_id`` was given during registration, then the
165
- first instance of the event handler is removed (if the event
166
- handler has been registered multiple times).
167
-
168
- """
169
- pass
170
-
171
- def _verify_is_callable(self, func):
172
- if not callable(func):
173
- raise ValueError("Event handler %s must be callable." % func)
174
-
175
- def _verify_accept_kwargs(self, func):
176
- """Verifies a callable accepts kwargs
177
-
178
- :type func: callable
179
- :param func: A callable object.
180
-
181
- :returns: True, if ``func`` accepts kwargs, otherwise False.
182
-
183
- """
184
- try:
185
- if not accepts_kwargs(func):
186
- raise ValueError(
187
- f"Event handler {func} must accept keyword "
188
- f"arguments (**kwargs)"
189
- )
190
- except TypeError:
191
- return False
192
-
193
-
194
- class HierarchicalEmitter(BaseEventHooks):
195
- def __init__(self):
196
- # We keep a reference to the handlers for quick
197
- # read only access (we never modify self._handlers).
198
- # A cache of event name to handler list.
199
- self._lookup_cache = {}
200
- self._handlers = _PrefixTrie()
201
- # This is used to ensure that unique_id's are only
202
- # registered once.
203
- self._unique_id_handlers = {}
204
-
205
- def _emit(self, event_name, kwargs, stop_on_response=False):
206
- """
207
- Emit an event with optional keyword arguments.
208
-
209
- :type event_name: string
210
- :param event_name: Name of the event
211
- :type kwargs: dict
212
- :param kwargs: Arguments to be passed to the handler functions.
213
- :type stop_on_response: boolean
214
- :param stop_on_response: Whether to stop on the first non-None
215
- response. If False, then all handlers
216
- will be called. This is especially useful
217
- to handlers which mutate data and then
218
- want to stop propagation of the event.
219
- :rtype: list
220
- :return: List of (handler, response) tuples from all processed
221
- handlers.
222
- """
223
- responses = []
224
- # Invoke the event handlers from most specific
225
- # to least specific, each time stripping off a dot.
226
- handlers_to_call = self._lookup_cache.get(event_name)
227
- if handlers_to_call is None:
228
- handlers_to_call = self._handlers.prefix_search(event_name)
229
- self._lookup_cache[event_name] = handlers_to_call
230
- elif not handlers_to_call:
231
- # Short circuit and return an empty response is we have
232
- # no handlers to call. This is the common case where
233
- # for the majority of signals, nothing is listening.
234
- return []
235
- kwargs['event_name'] = event_name
236
- responses = []
237
- for handler in handlers_to_call:
238
- logger.debug('Event %s: calling handler %s', event_name, handler)
239
- response = handler(**kwargs)
240
- responses.append((handler, response))
241
- if stop_on_response and response is not None:
242
- return responses
243
- return responses
244
-
245
- def emit(self, event_name, **kwargs):
246
- """
247
- Emit an event by name with arguments passed as keyword args.
248
-
249
- >>> responses = emitter.emit(
250
- ... 'my-event.service.operation', arg1='one', arg2='two')
251
-
252
- :rtype: list
253
- :return: List of (handler, response) tuples from all processed
254
- handlers.
255
- """
256
- return self._emit(event_name, kwargs)
257
-
258
- def emit_until_response(self, event_name, **kwargs):
259
- """
260
- Emit an event by name with arguments passed as keyword args,
261
- until the first non-``None`` response is received. This
262
- method prevents subsequent handlers from being invoked.
263
-
264
- >>> handler, response = emitter.emit_until_response(
265
- 'my-event.service.operation', arg1='one', arg2='two')
266
-
267
- :rtype: tuple
268
- :return: The first (handler, response) tuple where the response
269
- is not ``None``, otherwise (``None``, ``None``).
270
- """
271
- responses = self._emit(event_name, kwargs, stop_on_response=True)
272
- if responses:
273
- return responses[-1]
274
- else:
275
- return (None, None)
276
-
277
- def _register(
278
- self, event_name, handler, unique_id=None, unique_id_uses_count=False
279
- ):
280
- self._register_section(
281
- event_name,
282
- handler,
283
- unique_id,
284
- unique_id_uses_count,
285
- section=_MIDDLE,
286
- )
287
-
288
- def _register_first(
289
- self, event_name, handler, unique_id=None, unique_id_uses_count=False
290
- ):
291
- self._register_section(
292
- event_name,
293
- handler,
294
- unique_id,
295
- unique_id_uses_count,
296
- section=_FIRST,
297
- )
298
-
299
- def _register_last(
300
- self, event_name, handler, unique_id, unique_id_uses_count=False
301
- ):
302
- self._register_section(
303
- event_name, handler, unique_id, unique_id_uses_count, section=_LAST
304
- )
305
-
306
- def _register_section(
307
- self, event_name, handler, unique_id, unique_id_uses_count, section
308
- ):
309
- if unique_id is not None:
310
- if unique_id in self._unique_id_handlers:
311
- # We've already registered a handler using this unique_id
312
- # so we don't need to register it again.
313
- count = self._unique_id_handlers[unique_id].get('count', None)
314
- if unique_id_uses_count:
315
- if not count:
316
- raise ValueError(
317
- "Initial registration of unique id %s was "
318
- "specified to use a counter. Subsequent register "
319
- "calls to unique id must specify use of a counter "
320
- "as well." % unique_id
321
- )
322
- else:
323
- self._unique_id_handlers[unique_id]['count'] += 1
324
- else:
325
- if count:
326
- raise ValueError(
327
- "Initial registration of unique id %s was "
328
- "specified to not use a counter. Subsequent "
329
- "register calls to unique id must specify not to "
330
- "use a counter as well." % unique_id
331
- )
332
- return
333
- else:
334
- # Note that the trie knows nothing about the unique
335
- # id. We track uniqueness in this class via the
336
- # _unique_id_handlers.
337
- self._handlers.append_item(
338
- event_name, handler, section=section
339
- )
340
- unique_id_handler_item = {'handler': handler}
341
- if unique_id_uses_count:
342
- unique_id_handler_item['count'] = 1
343
- self._unique_id_handlers[unique_id] = unique_id_handler_item
344
- else:
345
- self._handlers.append_item(event_name, handler, section=section)
346
- # Super simple caching strategy for now, if we change the registrations
347
- # clear the cache. This has the opportunity for smarter invalidations.
348
- self._lookup_cache = {}
349
-
350
- def unregister(
351
- self,
352
- event_name,
353
- handler=None,
354
- unique_id=None,
355
- unique_id_uses_count=False,
356
- ):
357
- if unique_id is not None:
358
- try:
359
- count = self._unique_id_handlers[unique_id].get('count', None)
360
- except KeyError:
361
- # There's no handler matching that unique_id so we have
362
- # nothing to unregister.
363
- return
364
- if unique_id_uses_count:
365
- if count is None:
366
- raise ValueError(
367
- "Initial registration of unique id %s was specified to "
368
- "use a counter. Subsequent unregister calls to unique "
369
- "id must specify use of a counter as well." % unique_id
370
- )
371
- elif count == 1:
372
- handler = self._unique_id_handlers.pop(unique_id)[
373
- 'handler'
374
- ]
375
- else:
376
- self._unique_id_handlers[unique_id]['count'] -= 1
377
- return
378
- else:
379
- if count:
380
- raise ValueError(
381
- "Initial registration of unique id %s was specified "
382
- "to not use a counter. Subsequent unregister calls "
383
- "to unique id must specify not to use a counter as "
384
- "well." % unique_id
385
- )
386
- handler = self._unique_id_handlers.pop(unique_id)['handler']
387
- try:
388
- self._handlers.remove_item(event_name, handler)
389
- self._lookup_cache = {}
390
- except ValueError:
391
- pass
392
-
393
- def __copy__(self):
394
- new_instance = self.__class__()
395
- new_state = self.__dict__.copy()
396
- new_state['_handlers'] = copy.copy(self._handlers)
397
- new_state['_unique_id_handlers'] = copy.copy(self._unique_id_handlers)
398
- new_instance.__dict__ = new_state
399
- return new_instance
400
-
401
-
402
- class EventAliaser(BaseEventHooks):
403
- def __init__(self, event_emitter, event_aliases=None):
404
- self._event_aliases = event_aliases
405
- if event_aliases is None:
406
- self._event_aliases = EVENT_ALIASES
407
- self._alias_name_cache = {}
408
- self._emitter = event_emitter
409
-
410
- def emit(self, event_name, **kwargs):
411
- aliased_event_name = self._alias_event_name(event_name)
412
- return self._emitter.emit(aliased_event_name, **kwargs)
413
-
414
- def emit_until_response(self, event_name, **kwargs):
415
- aliased_event_name = self._alias_event_name(event_name)
416
- return self._emitter.emit_until_response(aliased_event_name, **kwargs)
417
-
418
- def register(
419
- self, event_name, handler, unique_id=None, unique_id_uses_count=False
420
- ):
421
- aliased_event_name = self._alias_event_name(event_name)
422
- return self._emitter.register(
423
- aliased_event_name, handler, unique_id, unique_id_uses_count
424
- )
425
-
426
- def register_first(
427
- self, event_name, handler, unique_id=None, unique_id_uses_count=False
428
- ):
429
- aliased_event_name = self._alias_event_name(event_name)
430
- return self._emitter.register_first(
431
- aliased_event_name, handler, unique_id, unique_id_uses_count
432
- )
433
-
434
- def register_last(
435
- self, event_name, handler, unique_id=None, unique_id_uses_count=False
436
- ):
437
- aliased_event_name = self._alias_event_name(event_name)
438
- return self._emitter.register_last(
439
- aliased_event_name, handler, unique_id, unique_id_uses_count
440
- )
441
-
442
- def unregister(
443
- self,
444
- event_name,
445
- handler=None,
446
- unique_id=None,
447
- unique_id_uses_count=False,
448
- ):
449
- aliased_event_name = self._alias_event_name(event_name)
450
- return self._emitter.unregister(
451
- aliased_event_name, handler, unique_id, unique_id_uses_count
452
- )
453
-
454
- def _alias_event_name(self, event_name):
455
- if event_name in self._alias_name_cache:
456
- return self._alias_name_cache[event_name]
457
-
458
- for old_part, new_part in self._event_aliases.items():
459
-
460
- # We can't simply do a string replace for everything, otherwise we
461
- # might end up translating substrings that we never intended to
462
- # translate. When there aren't any dots in the old event name
463
- # part, then we can quickly replace the item in the list if it's
464
- # there.
465
- event_parts = event_name.split('.')
466
- if '.' not in old_part:
467
- try:
468
- # Theoretically a given event name could have the same part
469
- # repeated, but in practice this doesn't happen
470
- event_parts[event_parts.index(old_part)] = new_part
471
- except ValueError:
472
- continue
473
-
474
- # If there are dots in the name, it gets more complicated. Now we
475
- # have to replace multiple sections of the original event.
476
- elif old_part in event_name:
477
- old_parts = old_part.split('.')
478
- self._replace_subsection(event_parts, old_parts, new_part)
479
- else:
480
- continue
481
-
482
- new_name = '.'.join(event_parts)
483
- logger.debug(
484
- f"Changing event name from {event_name} to {new_name}"
485
- )
486
- self._alias_name_cache[event_name] = new_name
487
- return new_name
488
-
489
- self._alias_name_cache[event_name] = event_name
490
- return event_name
491
-
492
- def _replace_subsection(self, sections, old_parts, new_part):
493
- for i in range(len(sections)):
494
- if (
495
- sections[i] == old_parts[0]
496
- and sections[i : i + len(old_parts)] == old_parts
497
- ):
498
- sections[i : i + len(old_parts)] = [new_part]
499
- return
500
-
501
- def __copy__(self):
502
- return self.__class__(
503
- copy.copy(self._emitter), copy.copy(self._event_aliases)
504
- )
505
-
506
-
507
- class _PrefixTrie:
508
- """Specialized prefix trie that handles wildcards.
509
-
510
- The prefixes in this case are based on dot separated
511
- names so 'foo.bar.baz' is::
512
-
513
- foo -> bar -> baz
514
-
515
- Wildcard support just means that having a key such as 'foo.bar.*.baz' will
516
- be matched with a call to ``get_items(key='foo.bar.ANYTHING.baz')``.
517
-
518
- You can think of this prefix trie as the equivalent of defaultdict(list),
519
- except that it can do prefix searches:
520
-
521
- foo.bar.baz -> A
522
- foo.bar -> B
523
- foo -> C
524
-
525
- Calling ``get_items('foo.bar.baz')`` will return [A + B + C], from
526
- most specific to least specific.
527
-
528
- """
529
-
530
- def __init__(self):
531
- # Each dictionary can be thought of as a node, where a node
532
- # has values associated with the node, and children is a link
533
- # to more nodes. So 'foo.bar' would have a 'foo' node with
534
- # a 'bar' node as a child of foo.
535
- # {'foo': {'children': {'bar': {...}}}}.
536
- self._root = {'chunk': None, 'children': {}, 'values': None}
537
-
538
- def append_item(self, key, value, section=_MIDDLE):
539
- """Add an item to a key.
540
-
541
- If a value is already associated with that key, the new
542
- value is appended to the list for the key.
543
- """
544
- key_parts = key.split('.')
545
- current = self._root
546
- for part in key_parts:
547
- if part not in current['children']:
548
- new_child = {'chunk': part, 'values': None, 'children': {}}
549
- current['children'][part] = new_child
550
- current = new_child
551
- else:
552
- current = current['children'][part]
553
- if current['values'] is None:
554
- current['values'] = NodeList([], [], [])
555
- current['values'][section].append(value)
556
-
557
- def prefix_search(self, key):
558
- """Collect all items that are prefixes of key.
559
-
560
- Prefixes in this case are delineated by '.' characters, so
561
- 'foo.bar.baz' is a 3 chunk sequence of 3 "prefixes" (
562
- "foo", "bar", and "baz").
563
-
564
- """
565
- collected = deque()
566
- key_parts = key.split('.')
567
- current = self._root
568
- self._get_items(current, key_parts, collected, 0)
569
- return collected
570
-
571
- def _get_items(self, starting_node, key_parts, collected, starting_index):
572
- stack = [(starting_node, starting_index)]
573
- key_parts_len = len(key_parts)
574
- # Traverse down the nodes, where at each level we add the
575
- # next part from key_parts as well as the wildcard element '*'.
576
- # This means for each node we see we potentially add two more
577
- # elements to our stack.
578
- while stack:
579
- current_node, index = stack.pop()
580
- if current_node['values']:
581
- # We're using extendleft because we want
582
- # the values associated with the node furthest
583
- # from the root to come before nodes closer
584
- # to the root. extendleft() also adds its items
585
- # in right-left order so .extendleft([1, 2, 3])
586
- # will result in final_list = [3, 2, 1], which is
587
- # why we reverse the lists.
588
- node_list = current_node['values']
589
- complete_order = (
590
- node_list.first + node_list.middle + node_list.last
591
- )
592
- collected.extendleft(reversed(complete_order))
593
- if not index == key_parts_len:
594
- children = current_node['children']
595
- directs = children.get(key_parts[index])
596
- wildcard = children.get('*')
597
- next_index = index + 1
598
- if wildcard is not None:
599
- stack.append((wildcard, next_index))
600
- if directs is not None:
601
- stack.append((directs, next_index))
602
-
603
- def remove_item(self, key, value):
604
- """Remove an item associated with a key.
605
-
606
- If the value is not associated with the key a ``ValueError``
607
- will be raised. If the key does not exist in the trie, a
608
- ``ValueError`` will be raised.
609
-
610
- """
611
- key_parts = key.split('.')
612
- current = self._root
613
- self._remove_item(current, key_parts, value, index=0)
614
-
615
- def _remove_item(self, current_node, key_parts, value, index):
616
- if current_node is None:
617
- return
618
- elif index < len(key_parts):
619
- next_node = current_node['children'].get(key_parts[index])
620
- if next_node is not None:
621
- self._remove_item(next_node, key_parts, value, index + 1)
622
- if index == len(key_parts) - 1:
623
- node_list = next_node['values']
624
- if value in node_list.first:
625
- node_list.first.remove(value)
626
- elif value in node_list.middle:
627
- node_list.middle.remove(value)
628
- elif value in node_list.last:
629
- node_list.last.remove(value)
630
- if not next_node['children'] and not next_node['values']:
631
- # Then this is a leaf node with no values so
632
- # we can just delete this link from the parent node.
633
- # This makes subsequent search faster in the case
634
- # where a key does not exist.
635
- del current_node['children'][key_parts[index]]
636
- else:
637
- raise ValueError(f"key is not in trie: {'.'.join(key_parts)}")
638
-
639
- def __copy__(self):
640
- # The fact that we're using a nested dict under the covers
641
- # is an implementation detail, and the user shouldn't have
642
- # to know that they'd normally need a deepcopy so we expose
643
- # __copy__ instead of __deepcopy__.
644
- new_copy = self.__class__()
645
- copied_attrs = self._recursive_copy(self.__dict__)
646
- new_copy.__dict__ = copied_attrs
647
- return new_copy
648
-
649
- def _recursive_copy(self, node):
650
- # We can't use copy.deepcopy because we actually only want to copy
651
- # the structure of the trie, not the handlers themselves.
652
- # Each node has a chunk, children, and values.
653
- copied_node = {}
654
- for key, value in node.items():
655
- if isinstance(value, NodeList):
656
- copied_node[key] = copy.copy(value)
657
- elif isinstance(value, dict):
658
- copied_node[key] = self._recursive_copy(value)
659
- else:
660
- copied_node[key] = value
661
- return copied_node
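
The _PrefixTrie docstring above describes the lookup rule: handlers registered under dot-separated keys (optionally containing '*' wildcards) are collected from the most specific key to the least specific one. Below is a minimal standalone sketch of that matching rule, assuming only the behaviour stated in the docstring; the matches() helper and the handler names are illustrative and not part of botocore, which uses the trie itself plus the first/middle/last handler sections omitted here.

def matches(registered_key, event_name):
    # True if a registered key such as 'foo.*.baz' matches the emitted
    # event name 'foo.bar.baz' or a dot-separated prefix of it.
    reg_parts = registered_key.split('.')
    event_parts = event_name.split('.')
    if len(reg_parts) > len(event_parts):
        return False
    return all(r == '*' or r == e for r, e in zip(reg_parts, event_parts))

handlers = {
    'foo.bar.baz': 'A',   # most specific
    'foo.bar': 'B',
    'foo': 'C',           # least specific
    'foo.*.baz': 'W',     # wildcard key
}

# Most-specific keys come first, mirroring the ordering prefix_search() returns.
hits = sorted(
    (key for key in handlers if matches(key, 'foo.bar.baz')),
    key=lambda k: -len(k.split('.')),
)
print([handlers[k] for k in hits])  # e.g. ['A', 'W', 'B', 'C']
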
spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/tokens.py DELETED
@@ -1,330 +0,0 @@
1
- # Copyright 2022 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2
- #
3
- # Licensed under the Apache License, Version 2.0 (the "License"). You
4
- # may not use this file except in compliance with the License. A copy of
5
- # the License is located at
6
- #
7
- # http://aws.amazon.com/apache2.0/
8
- #
9
- # or in the "license" file accompanying this file. This file is
10
- # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11
- # ANY KIND, either express or implied. See the License for the specific
12
- # language governing permissions and limitations under the License.
13
- import json
14
- import logging
15
- import os
16
- import threading
17
- from datetime import datetime, timedelta
18
- from typing import NamedTuple, Optional
19
-
20
- import dateutil.parser
21
- from dateutil.tz import tzutc
22
-
23
- from botocore import UNSIGNED
24
- from botocore.compat import total_seconds
25
- from botocore.config import Config
26
- from botocore.exceptions import (
27
- ClientError,
28
- InvalidConfigError,
29
- TokenRetrievalError,
30
- )
31
- from botocore.utils import CachedProperty, JSONFileCache, SSOTokenLoader
32
-
33
- logger = logging.getLogger(__name__)
34
-
35
-
36
- def _utc_now():
37
- return datetime.now(tzutc())
38
-
39
-
40
- def create_token_resolver(session):
41
- providers = [
42
- SSOTokenProvider(session),
43
- ]
44
- return TokenProviderChain(providers=providers)
45
-
46
-
47
- def _serialize_utc_timestamp(obj):
48
- if isinstance(obj, datetime):
49
- return obj.strftime("%Y-%m-%dT%H:%M:%SZ")
50
- return obj
51
-
52
-
53
- def _sso_json_dumps(obj):
54
- return json.dumps(obj, default=_serialize_utc_timestamp)
55
-
56
-
57
- class FrozenAuthToken(NamedTuple):
58
- token: str
59
- expiration: Optional[datetime] = None
60
-
61
-
62
- class DeferredRefreshableToken:
63
- # The time at which we'll attempt to refresh, but not block if someone else
64
- # is refreshing.
65
- _advisory_refresh_timeout = 15 * 60
66
- # The time at which all threads will block waiting for a refreshed token
67
- _mandatory_refresh_timeout = 10 * 60
68
- # Refresh at most once every minute to avoid blocking every request
69
- _attempt_timeout = 60
70
-
71
- def __init__(self, method, refresh_using, time_fetcher=_utc_now):
72
- self._time_fetcher = time_fetcher
73
- self._refresh_using = refresh_using
74
- self.method = method
75
-
76
- # The frozen token is protected by this lock
77
- self._refresh_lock = threading.Lock()
78
- self._frozen_token = None
79
- self._next_refresh = None
80
-
81
- def get_frozen_token(self):
82
- self._refresh()
83
- return self._frozen_token
84
-
85
- def _refresh(self):
86
- # If we don't need to refresh just return
87
- refresh_type = self._should_refresh()
88
- if not refresh_type:
89
- return None
90
-
91
- # Block for refresh if we're in the mandatory refresh window
92
- block_for_refresh = refresh_type == "mandatory"
93
- if self._refresh_lock.acquire(block_for_refresh):
94
- try:
95
- self._protected_refresh()
96
- finally:
97
- self._refresh_lock.release()
98
-
99
- def _protected_refresh(self):
100
- # This should only be called after acquiring the refresh lock
101
- # Another thread may have already refreshed, double check refresh
102
- refresh_type = self._should_refresh()
103
- if not refresh_type:
104
- return None
105
-
106
- try:
107
- now = self._time_fetcher()
108
- self._next_refresh = now + timedelta(seconds=self._attempt_timeout)
109
- self._frozen_token = self._refresh_using()
110
- except Exception:
111
- logger.warning(
112
- "Refreshing token failed during the %s refresh period.",
113
- refresh_type,
114
- exc_info=True,
115
- )
116
- if refresh_type == "mandatory":
117
- # This refresh was mandatory, error must be propagated back
118
- raise
119
-
120
- if self._is_expired():
121
- # Fresh credentials should never be expired
122
- raise TokenRetrievalError(
123
- provider=self.method,
124
- error_msg="Token has expired and refresh failed",
125
- )
126
-
127
- def _is_expired(self):
128
- if self._frozen_token is None:
129
- return False
130
-
131
- expiration = self._frozen_token.expiration
132
- remaining = total_seconds(expiration - self._time_fetcher())
133
- return remaining <= 0
134
-
135
- def _should_refresh(self):
136
- if self._frozen_token is None:
137
- # We don't have a token yet, mandatory refresh
138
- return "mandatory"
139
-
140
- expiration = self._frozen_token.expiration
141
- if expiration is None:
142
- # No expiration, so assume we don't need to refresh.
143
- return None
144
-
145
- now = self._time_fetcher()
146
- if now < self._next_refresh:
147
- return None
148
-
149
- remaining = total_seconds(expiration - now)
150
-
151
- if remaining < self._mandatory_refresh_timeout:
152
- return "mandatory"
153
- elif remaining < self._advisory_refresh_timeout:
154
- return "advisory"
155
-
156
- return None
157
-
158
-
159
- class TokenProviderChain:
160
- def __init__(self, providers=None):
161
- if providers is None:
162
- providers = []
163
- self._providers = providers
164
-
165
- def load_token(self):
166
- for provider in self._providers:
167
- token = provider.load_token()
168
- if token is not None:
169
- return token
170
- return None
171
-
172
-
173
- class SSOTokenProvider:
174
- METHOD = "sso"
175
- _REFRESH_WINDOW = 15 * 60
176
- _SSO_TOKEN_CACHE_DIR = os.path.expanduser(
177
- os.path.join("~", ".aws", "sso", "cache")
178
- )
179
- _SSO_CONFIG_VARS = [
180
- "sso_start_url",
181
- "sso_region",
182
- ]
183
- _GRANT_TYPE = "refresh_token"
184
- DEFAULT_CACHE_CLS = JSONFileCache
185
-
186
- def __init__(
187
- self, session, cache=None, time_fetcher=_utc_now, profile_name=None
188
- ):
189
- self._session = session
190
- if cache is None:
191
- cache = self.DEFAULT_CACHE_CLS(
192
- self._SSO_TOKEN_CACHE_DIR,
193
- dumps_func=_sso_json_dumps,
194
- )
195
- self._now = time_fetcher
196
- self._cache = cache
197
- self._token_loader = SSOTokenLoader(cache=self._cache)
198
- self._profile_name = (
199
- profile_name
200
- or self._session.get_config_variable("profile")
201
- or 'default'
202
- )
203
-
204
- def _load_sso_config(self):
205
- loaded_config = self._session.full_config
206
- profiles = loaded_config.get("profiles", {})
207
- sso_sessions = loaded_config.get("sso_sessions", {})
208
- profile_config = profiles.get(self._profile_name, {})
209
-
210
- if "sso_session" not in profile_config:
211
- return
212
-
213
- sso_session_name = profile_config["sso_session"]
214
- sso_config = sso_sessions.get(sso_session_name, None)
215
-
216
- if not sso_config:
217
- error_msg = (
218
- f'The profile "{self._profile_name}" is configured to use the SSO '
219
- f'token provider but the "{sso_session_name}" sso_session '
220
- f"configuration does not exist."
221
- )
222
- raise InvalidConfigError(error_msg=error_msg)
223
-
224
- missing_configs = []
225
- for var in self._SSO_CONFIG_VARS:
226
- if var not in sso_config:
227
- missing_configs.append(var)
228
-
229
- if missing_configs:
230
- error_msg = (
231
- f'The profile "{self._profile_name}" is configured to use the SSO '
232
- f"token provider but is missing the following configuration: "
233
- f"{missing_configs}."
234
- )
235
- raise InvalidConfigError(error_msg=error_msg)
236
-
237
- return {
238
- "session_name": sso_session_name,
239
- "sso_region": sso_config["sso_region"],
240
- "sso_start_url": sso_config["sso_start_url"],
241
- }
242
-
243
- @CachedProperty
244
- def _sso_config(self):
245
- return self._load_sso_config()
246
-
247
- @CachedProperty
248
- def _client(self):
249
- config = Config(
250
- region_name=self._sso_config["sso_region"],
251
- signature_version=UNSIGNED,
252
- )
253
- return self._session.create_client("sso-oidc", config=config)
254
-
255
- def _attempt_create_token(self, token):
256
- response = self._client.create_token(
257
- grantType=self._GRANT_TYPE,
258
- clientId=token["clientId"],
259
- clientSecret=token["clientSecret"],
260
- refreshToken=token["refreshToken"],
261
- )
262
- expires_in = timedelta(seconds=response["expiresIn"])
263
- new_token = {
264
- "startUrl": self._sso_config["sso_start_url"],
265
- "region": self._sso_config["sso_region"],
266
- "accessToken": response["accessToken"],
267
- "expiresAt": self._now() + expires_in,
268
- # Cache the registration alongside the token
269
- "clientId": token["clientId"],
270
- "clientSecret": token["clientSecret"],
271
- "registrationExpiresAt": token["registrationExpiresAt"],
272
- }
273
- if "refreshToken" in response:
274
- new_token["refreshToken"] = response["refreshToken"]
275
- logger.info("SSO Token refresh succeeded")
276
- return new_token
277
-
278
- def _refresh_access_token(self, token):
279
- keys = (
280
- "refreshToken",
281
- "clientId",
282
- "clientSecret",
283
- "registrationExpiresAt",
284
- )
285
- missing_keys = [k for k in keys if k not in token]
286
- if missing_keys:
287
- msg = f"Unable to refresh SSO token: missing keys: {missing_keys}"
288
- logger.info(msg)
289
- return None
290
-
291
- expiry = dateutil.parser.parse(token["registrationExpiresAt"])
292
- if total_seconds(expiry - self._now()) <= 0:
293
- logger.info(f"SSO token registration expired at {expiry}")
294
- return None
295
-
296
- try:
297
- return self._attempt_create_token(token)
298
- except ClientError:
299
- logger.warning("SSO token refresh attempt failed", exc_info=True)
300
- return None
301
-
302
- def _refresher(self):
303
- start_url = self._sso_config["sso_start_url"]
304
- session_name = self._sso_config["session_name"]
305
- logger.info(f"Loading cached SSO token for {session_name}")
306
- token_dict = self._token_loader(start_url, session_name=session_name)
307
- expiration = dateutil.parser.parse(token_dict["expiresAt"])
308
- logger.debug(f"Cached SSO token expires at {expiration}")
309
-
310
- remaining = total_seconds(expiration - self._now())
311
- if remaining < self._REFRESH_WINDOW:
312
- new_token_dict = self._refresh_access_token(token_dict)
313
- if new_token_dict is not None:
314
- token_dict = new_token_dict
315
- expiration = token_dict["expiresAt"]
316
- self._token_loader.save_token(
317
- start_url, token_dict, session_name=session_name
318
- )
319
-
320
- return FrozenAuthToken(
321
- token_dict["accessToken"], expiration=expiration
322
- )
323
-
324
- def load_token(self):
325
- if self._sso_config is None:
326
- return None
327
-
328
- return DeferredRefreshableToken(
329
- self.METHOD, self._refresher, time_fetcher=self._now
330
- )
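
DeferredRefreshableToken above refreshes opportunistically once a token enters the 15-minute advisory window and makes every caller block once it enters the 10-minute mandatory window. The sketch below isolates just that decision; the thresholds mirror the class attributes, but classify_refresh() and everything else here are illustrative and not part of the botocore API (the once-per-minute attempt throttling via _attempt_timeout is omitted).

from datetime import datetime, timedelta, timezone

ADVISORY_REFRESH = timedelta(minutes=15)   # mirrors _advisory_refresh_timeout
MANDATORY_REFRESH = timedelta(minutes=10)  # mirrors _mandatory_refresh_timeout

def classify_refresh(expiration, now=None):
    # Return 'mandatory', 'advisory', or None for a given expiration time.
    if expiration is None:
        return None  # tokens without an expiration are never refreshed
    now = now or datetime.now(timezone.utc)
    remaining = expiration - now
    if remaining < MANDATORY_REFRESH:
        return "mandatory"  # callers block until a fresh token exists
    if remaining < ADVISORY_REFRESH:
        return "advisory"   # refresh opportunistically without blocking
    return None

# A token expiring in 12 minutes only needs an advisory refresh.
soon = datetime.now(timezone.utc) + timedelta(minutes=12)
print(classify_refresh(soon))  # advisory
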
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/cli/main_parser.py DELETED
@@ -1,134 +0,0 @@
1
- """A single place for constructing and exposing the main parser
2
- """
3
-
4
- import os
5
- import subprocess
6
- import sys
7
- from typing import List, Optional, Tuple
8
-
9
- from pip._internal.build_env import get_runnable_pip
10
- from pip._internal.cli import cmdoptions
11
- from pip._internal.cli.parser import ConfigOptionParser, UpdatingDefaultsHelpFormatter
12
- from pip._internal.commands import commands_dict, get_similar_commands
13
- from pip._internal.exceptions import CommandError
14
- from pip._internal.utils.misc import get_pip_version, get_prog
15
-
16
- __all__ = ["create_main_parser", "parse_command"]
17
-
18
-
19
- def create_main_parser() -> ConfigOptionParser:
20
- """Creates and returns the main parser for pip's CLI"""
21
-
22
- parser = ConfigOptionParser(
23
- usage="\n%prog <command> [options]",
24
- add_help_option=False,
25
- formatter=UpdatingDefaultsHelpFormatter(),
26
- name="global",
27
- prog=get_prog(),
28
- )
29
- parser.disable_interspersed_args()
30
-
31
- parser.version = get_pip_version()
32
-
33
- # add the general options
34
- gen_opts = cmdoptions.make_option_group(cmdoptions.general_group, parser)
35
- parser.add_option_group(gen_opts)
36
-
37
- # so the help formatter knows
38
- parser.main = True # type: ignore
39
-
40
- # create command listing for description
41
- description = [""] + [
42
- f"{name:27} {command_info.summary}"
43
- for name, command_info in commands_dict.items()
44
- ]
45
- parser.description = "\n".join(description)
46
-
47
- return parser
48
-
49
-
50
- def identify_python_interpreter(python: str) -> Optional[str]:
51
- # If the named file exists, use it.
52
- # If it's a directory, assume it's a virtual environment and
53
- # look for the environment's Python executable.
54
- if os.path.exists(python):
55
- if os.path.isdir(python):
56
- # bin/python for Unix, Scripts/python.exe for Windows
57
- # Try both in case of odd cases like cygwin.
58
- for exe in ("bin/python", "Scripts/python.exe"):
59
- py = os.path.join(python, exe)
60
- if os.path.exists(py):
61
- return py
62
- else:
63
- return python
64
-
65
- # Could not find the interpreter specified
66
- return None
67
-
68
-
69
- def parse_command(args: List[str]) -> Tuple[str, List[str]]:
70
- parser = create_main_parser()
71
-
72
- # Note: parser calls disable_interspersed_args(), so the result of this
73
- # call is to split the initial args into the general options before the
74
- # subcommand and everything else.
75
- # For example:
76
- # args: ['--timeout=5', 'install', '--user', 'INITools']
77
- # general_options: ['--timeout=5']
78
- # args_else: ['install', '--user', 'INITools']
79
- general_options, args_else = parser.parse_args(args)
80
-
81
- # --python
82
- if general_options.python and "_PIP_RUNNING_IN_SUBPROCESS" not in os.environ:
83
- # Re-invoke pip using the specified Python interpreter
84
- interpreter = identify_python_interpreter(general_options.python)
85
- if interpreter is None:
86
- raise CommandError(
87
- f"Could not locate Python interpreter {general_options.python}"
88
- )
89
-
90
- pip_cmd = [
91
- interpreter,
92
- get_runnable_pip(),
93
- ]
94
- pip_cmd.extend(args)
95
-
96
- # Set a flag so the child doesn't re-invoke itself, causing
97
- # an infinite loop.
98
- os.environ["_PIP_RUNNING_IN_SUBPROCESS"] = "1"
99
- returncode = 0
100
- try:
101
- proc = subprocess.run(pip_cmd)
102
- returncode = proc.returncode
103
- except (subprocess.SubprocessError, OSError) as exc:
104
- raise CommandError(f"Failed to run pip under {interpreter}: {exc}")
105
- sys.exit(returncode)
106
-
107
- # --version
108
- if general_options.version:
109
- sys.stdout.write(parser.version)
110
- sys.stdout.write(os.linesep)
111
- sys.exit()
112
-
113
- # pip || pip help -> print_help()
114
- if not args_else or (args_else[0] == "help" and len(args_else) == 1):
115
- parser.print_help()
116
- sys.exit()
117
-
118
- # the subcommand name
119
- cmd_name = args_else[0]
120
-
121
- if cmd_name not in commands_dict:
122
- guess = get_similar_commands(cmd_name)
123
-
124
- msg = [f'unknown command "{cmd_name}"']
125
- if guess:
126
- msg.append(f'maybe you meant "{guess}"')
127
-
128
- raise CommandError(" - ".join(msg))
129
-
130
- # all the args without the subcommand
131
- cmd_args = args[:]
132
- cmd_args.remove(cmd_name)
133
-
134
- return cmd_name, cmd_args
spaces/CNXT/TXT2PiX/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: TXT2PiX
3
- emoji: 🏆
4
- colorFrom: gray
5
- colorTo: gray
6
- sdk: gradio
7
- sdk_version: 3.29.0
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/evaluation/__init__.py DELETED
@@ -1,12 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2
- from .cityscapes_evaluation import CityscapesEvaluator
3
- from .coco_evaluation import COCOEvaluator
4
- from .rotated_coco_evaluation import RotatedCOCOEvaluator
5
- from .evaluator import DatasetEvaluator, DatasetEvaluators, inference_context, inference_on_dataset
6
- from .lvis_evaluation import LVISEvaluator
7
- from .panoptic_evaluation import COCOPanopticEvaluator
8
- from .pascal_voc_evaluation import PascalVOCDetectionEvaluator
9
- from .sem_seg_evaluation import SemSegEvaluator
10
- from .testing import print_csv_format, verify_results
11
-
12
- __all__ = [k for k in globals().keys() if not k.startswith("_")]
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/export/shared.py DELETED
@@ -1,1031 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2
-
3
- import collections
4
- import contextlib
5
- import copy
6
- import functools
7
- import logging
8
- import mock
9
- import numpy as np
10
- import os
11
- from typing import Any, Callable, Dict, List, Optional, Tuple, Union
12
- import caffe2.python.utils as putils
13
- import torch
14
- import torch.nn.functional as F
15
- from caffe2.proto import caffe2_pb2
16
- from caffe2.python import core, net_drawer, workspace
17
- from torch.nn.functional import interpolate as interp
18
-
19
- logger = logging.getLogger(__name__)
20
-
21
-
22
- # ==== torch/utils_toffee/cast.py =======================================
23
-
24
-
25
- def to_device(t, device_str):
26
- """
27
- This function is a replacement for .to(another_device) such that it allows the
28
- casting to be traced properly by explicitly calling the underlying copy ops.
29
- It also avoids introducing an unnecessary op when casting to the same device.
30
- """
31
- src = t.device
32
- dst = torch.device(device_str)
33
-
34
- if src == dst:
35
- return t
36
- elif src.type == "cuda" and dst.type == "cpu":
37
- return torch.ops._caffe2.CopyGPUToCPU(t)
38
- elif src.type == "cpu" and dst.type == "cuda":
39
- return torch.ops._caffe2.CopyCPUToGPU(t)
40
- else:
41
- raise RuntimeError("Can't cast tensor from device {} to device {}".format(src, dst))
42
-
43
-
44
- # ==== torch/utils_toffee/interpolate.py =======================================
45
-
46
-
47
- # Note: borrowed from vision/detection/fair/detectron/detectron/modeling/detector.py
48
- def BilinearInterpolation(tensor_in, up_scale):
49
- assert up_scale % 2 == 0, "Scale should be even"
50
-
51
- def upsample_filt(size):
52
- factor = (size + 1) // 2
53
- if size % 2 == 1:
54
- center = factor - 1
55
- else:
56
- center = factor - 0.5
57
-
58
- og = np.ogrid[:size, :size]
59
- return (1 - abs(og[0] - center) / factor) * (1 - abs(og[1] - center) / factor)
60
-
61
- kernel_size = int(up_scale) * 2
62
- bil_filt = upsample_filt(kernel_size)
63
-
64
- dim = int(tensor_in.shape[1])
65
- kernel = np.zeros((dim, dim, kernel_size, kernel_size), dtype=np.float32)
66
- kernel[range(dim), range(dim), :, :] = bil_filt
67
-
68
- tensor_out = F.conv_transpose2d(
69
- tensor_in,
70
- weight=to_device(torch.Tensor(kernel), tensor_in.device),
71
- bias=None,
72
- stride=int(up_scale),
73
- padding=int(up_scale / 2),
74
- )
75
-
76
- return tensor_out
77
-
78
-
79
- # NOTE: ONNX is incompatible with traced torch.nn.functional.interpolate if
80
- # using dynamic `scale_factor` rather than static `size`. (T43166860)
81
- # NOTE: Caffe2 Int8 conversion might not be able to quantize `size` properly.
82
- def onnx_compatibale_interpolate(
83
- input, size=None, scale_factor=None, mode="nearest", align_corners=None
84
- ):
85
- # NOTE: The input dimensions are interpreted in the form:
86
- # `mini-batch x channels x [optional depth] x [optional height] x width`.
87
- if size is None and scale_factor is not None:
88
- if input.dim() == 4:
89
- if isinstance(scale_factor, (int, float)):
90
- height_scale, width_scale = (scale_factor, scale_factor)
91
- else:
92
- assert isinstance(scale_factor, (tuple, list))
93
- assert len(scale_factor) == 2
94
- height_scale, width_scale = scale_factor
95
-
96
- assert not align_corners, "No matching C2 op for align_corners == True"
97
- if mode == "nearest":
98
- return torch.ops._caffe2.ResizeNearest(
99
- input, order="NCHW", width_scale=width_scale, height_scale=height_scale
100
- )
101
- elif mode == "bilinear":
102
- logger.warning(
103
- "Use F.conv_transpose2d for bilinear interpolate"
104
- " because there's no such C2 op, this may cause significant"
105
- " slowdown and the boundary pixels won't be as same as"
106
- " using F.interpolate due to padding."
107
- )
108
- assert height_scale == width_scale
109
- return BilinearInterpolation(input, up_scale=height_scale)
110
- logger.warning("Output size is not static, it might cause ONNX conversion issue")
111
-
112
- return interp(input, size, scale_factor, mode, align_corners)
113
-
114
-
115
- @contextlib.contextmanager
116
- def mock_torch_nn_functional_interpolate():
117
- if torch.onnx.is_in_onnx_export():
118
- with mock.patch(
119
- "torch.nn.functional.interpolate", side_effect=onnx_compatibale_interpolate
120
- ):
121
- yield
122
- else:
123
- yield
124
-
125
-
126
- # ==== torch/utils_caffe2/ws_utils.py ==========================================
127
-
128
-
129
- class ScopedWS(object):
130
- def __init__(self, ws_name, is_reset, is_cleanup=False):
131
- self.ws_name = ws_name
132
- self.is_reset = is_reset
133
- self.is_cleanup = is_cleanup
134
- self.org_ws = ""
135
-
136
- def __enter__(self):
137
- self.org_ws = workspace.CurrentWorkspace()
138
- if self.ws_name is not None:
139
- workspace.SwitchWorkspace(self.ws_name, True)
140
- if self.is_reset:
141
- workspace.ResetWorkspace()
142
-
143
- return workspace
144
-
145
- def __exit__(self, *args):
146
- if self.is_cleanup:
147
- workspace.ResetWorkspace()
148
- if self.ws_name is not None:
149
- workspace.SwitchWorkspace(self.org_ws)
150
-
151
-
152
- def fetch_any_blob(name):
153
- bb = None
154
- try:
155
- bb = workspace.FetchBlob(name)
156
- except TypeError:
157
- bb = workspace.FetchInt8Blob(name)
158
- except Exception as e:
159
- logger.error("Get blob {} error: {}".format(name, e))
160
-
161
- return bb
162
-
163
-
164
- # ==== torch/utils_caffe2/protobuf.py ==========================================
165
-
166
-
167
- def get_pb_arg(pb, arg_name):
168
- for x in pb.arg:
169
- if x.name == arg_name:
170
- return x
171
- return None
172
-
173
-
174
- def get_pb_arg_valf(pb, arg_name, default_val):
175
- arg = get_pb_arg(pb, arg_name)
176
- return arg.f if arg is not None else default_val
177
-
178
-
179
- def get_pb_arg_floats(pb, arg_name, default_val):
180
- arg = get_pb_arg(pb, arg_name)
181
- return list(map(float, arg.floats)) if arg is not None else default_val
182
-
183
-
184
- def get_pb_arg_ints(pb, arg_name, default_val):
185
- arg = get_pb_arg(pb, arg_name)
186
- return list(map(int, arg.ints)) if arg is not None else default_val
187
-
188
-
189
- def get_pb_arg_vali(pb, arg_name, default_val):
190
- arg = get_pb_arg(pb, arg_name)
191
- return arg.i if arg is not None else default_val
192
-
193
-
194
- def get_pb_arg_vals(pb, arg_name, default_val):
195
- arg = get_pb_arg(pb, arg_name)
196
- return arg.s if arg is not None else default_val
197
-
198
-
199
- def get_pb_arg_valstrings(pb, arg_name, default_val):
200
- arg = get_pb_arg(pb, arg_name)
201
- return list(arg.strings) if arg is not None else default_val
202
-
203
-
204
- def check_set_pb_arg(pb, arg_name, arg_attr, arg_value, allow_override=False):
205
- arg = get_pb_arg(pb, arg_name)
206
- if arg is None:
207
- arg = putils.MakeArgument(arg_name, arg_value)
208
- assert hasattr(arg, arg_attr)
209
- pb.arg.extend([arg])
210
- if allow_override and getattr(arg, arg_attr) != arg_value:
211
- logger.warning(
212
- "Override argument {}: {} -> {}".format(arg_name, getattr(arg, arg_attr), arg_value)
213
- )
214
- setattr(arg, arg_attr, arg_value)
215
- else:
216
- assert arg is not None
217
- assert getattr(arg, arg_attr) == arg_value, "Existing value {}, new value {}".format(
218
- getattr(arg, arg_attr), arg_value
219
- )
220
-
221
-
222
- def _create_const_fill_op_from_numpy(name, tensor, device_option=None):
223
- assert type(tensor) == np.ndarray
224
- kTypeNameMapper = {
225
- np.dtype("float32"): "GivenTensorFill",
226
- np.dtype("int32"): "GivenTensorIntFill",
227
- np.dtype("int64"): "GivenTensorInt64Fill",
228
- np.dtype("uint8"): "GivenTensorStringFill",
229
- }
230
-
231
- args_dict = {}
232
- if tensor.dtype == np.dtype("uint8"):
233
- args_dict.update({"values": [str(tensor.data)], "shape": [1]})
234
- else:
235
- args_dict.update({"values": tensor, "shape": tensor.shape})
236
-
237
- if device_option is not None:
238
- args_dict["device_option"] = device_option
239
-
240
- return core.CreateOperator(kTypeNameMapper[tensor.dtype], [], [name], **args_dict)
241
-
242
-
243
- def _create_const_fill_op_from_c2_int8_tensor(name, int8_tensor):
244
- assert type(int8_tensor) == workspace.Int8Tensor
245
- kTypeNameMapper = {
246
- np.dtype("int32"): "Int8GivenIntTensorFill",
247
- np.dtype("uint8"): "Int8GivenTensorFill",
248
- }
249
-
250
- tensor = int8_tensor.data
251
- assert tensor.dtype in [np.dtype("uint8"), np.dtype("int32")]
252
- values = tensor.tobytes() if tensor.dtype == np.dtype("uint8") else tensor
253
-
254
- return core.CreateOperator(
255
- kTypeNameMapper[tensor.dtype],
256
- [],
257
- [name],
258
- values=values,
259
- shape=tensor.shape,
260
- Y_scale=int8_tensor.scale,
261
- Y_zero_point=int8_tensor.zero_point,
262
- )
263
-
264
-
265
- def create_const_fill_op(
266
- name: str,
267
- blob: Union[np.ndarray, workspace.Int8Tensor],
268
- device_option: Optional[caffe2_pb2.DeviceOption] = None,
269
- ) -> caffe2_pb2.OperatorDef:
270
- """
271
- Given a blob object, return the Caffe2 operator that creates this blob
272
- as constant. Currently support NumPy tensor and Caffe2 Int8Tensor.
273
- """
274
-
275
- tensor_type = type(blob)
276
- assert tensor_type in [np.ndarray, workspace.Int8Tensor], (
277
- 'Error when creating const fill op for "{}", unsupported blob type: {}'
278
- ).format(name, type(blob))
279
-
280
- if tensor_type == np.ndarray:
281
- return _create_const_fill_op_from_numpy(name, blob, device_option)
282
- elif tensor_type == workspace.Int8Tensor:
283
- assert device_option is None
284
- return _create_const_fill_op_from_c2_int8_tensor(name, blob)
285
-
286
-
287
- def construct_init_net_from_params(
288
- params: Dict[str, Any], device_options: Optional[Dict[str, caffe2_pb2.DeviceOption]] = None
289
- ) -> caffe2_pb2.NetDef:
290
- """
291
- Construct the init_net from params dictionary
292
- """
293
- init_net = caffe2_pb2.NetDef()
294
- device_options = device_options or {}
295
- for name, blob in params.items():
296
- if isinstance(blob, str):
297
- logger.warning(
298
- (
299
- "Blob {} with type {} is not supported in generating init net,"
300
- " skipped.".format(name, type(blob))
301
- )
302
- )
303
- continue
304
- init_net.op.extend(
305
- [create_const_fill_op(name, blob, device_option=device_options.get(name, None))]
306
- )
307
- init_net.external_output.append(name)
308
- return init_net
309
-
310
-
311
- def get_producer_map(ssa):
312
- """
313
- Return dict from versioned blob to (i, j),
314
- where i is index of producer op, j is the index of output of that op.
315
- """
316
- producer_map = {}
317
- for i in range(len(ssa)):
318
- outputs = ssa[i][1]
319
- for j, outp in enumerate(outputs):
320
- producer_map[outp] = (i, j)
321
- return producer_map
322
-
323
-
324
- def get_consumer_map(ssa):
325
- """
326
- Return dict from versioned blob to list of (i, j),
327
- where i is index of consumer op, j is the index of input of that op.
328
- """
329
- consumer_map = collections.defaultdict(list)
330
- for i in range(len(ssa)):
331
- inputs = ssa[i][0]
332
- for j, inp in enumerate(inputs):
333
- consumer_map[inp].append((i, j))
334
- return consumer_map
335
-
336
-
337
- def get_params_from_init_net(
338
- init_net: caffe2_pb2.NetDef
339
- ) -> [Dict[str, Any], Dict[str, caffe2_pb2.DeviceOption]]:
340
- """
341
- Take the output blobs from init_net by running it.
342
- Outputs:
343
- params: dict from blob name to numpy array
344
- device_options: dict from blob name to the device option of its creating op
345
- """
346
- # NOTE: this assumes that the params is determined by producer op with the
347
- # only exception be CopyGPUToCPU which is CUDA op but returns CPU tensor.
348
- def _get_device_option(producer_op):
349
- if producer_op.type == "CopyGPUToCPU":
350
- return caffe2_pb2.DeviceOption()
351
- else:
352
- return producer_op.device_option
353
-
354
- with ScopedWS("__get_params_from_init_net__", is_reset=True, is_cleanup=True) as ws:
355
- ws.RunNetOnce(init_net)
356
- params = {b: fetch_any_blob(b) for b in init_net.external_output}
357
- ssa, versions = core.get_ssa(init_net)
358
- producer_map = get_producer_map(ssa)
359
- device_options = {
360
- b: _get_device_option(init_net.op[producer_map[(b, versions[b])][0]])
361
- for b in init_net.external_output
362
- }
363
- return params, device_options
364
-
365
-
366
- def _updater_raise(op, input_types, output_types):
367
- raise RuntimeError(
368
- "Failed to apply updater for op {} given input_types {} and"
369
- " output_types {}".format(op, input_types, output_types)
370
- )
371
-
372
-
373
- def _generic_status_identifier(
374
- predict_net: caffe2_pb2.NetDef,
375
- status_updater: Callable,
376
- known_status: Dict[Tuple[str, int], Any],
377
- ) -> Dict[Tuple[str, int], Any]:
378
- """
379
- Statically infer the status of each blob; the status can be, for example, device type
380
- (CPU/GPU), layout (NCHW/NHWC), data type (float32/int8), etc. "Blob" here
381
- is versioned blob (Tuple[str, int]) in the format compatible with ssa.
382
- Inputs:
383
- predict_net: the caffe2 network
384
- status_updater: a callable, given an op and the status of its input/output,
385
- it returns the updated status of input/output. `None` is used for
386
- representing unknown status.
387
- known_status: a dict containing known status, used as initialization.
388
- Outputs:
389
- A dict mapping from versioned blob to its status
390
- """
391
- ssa, versions = core.get_ssa(predict_net)
392
- versioned_ext_input = [(b, 0) for b in predict_net.external_input]
393
- versioned_ext_output = [(b, versions[b]) for b in predict_net.external_output]
394
- all_versioned_blobs = set().union(*[set(x[0] + x[1]) for x in ssa])
395
-
396
- allowed_vbs = all_versioned_blobs.union(versioned_ext_input).union(versioned_ext_output)
397
- assert all(k in allowed_vbs for k in known_status)
398
- assert all(v is not None for v in known_status.values())
399
- _known_status = copy.deepcopy(known_status)
400
-
401
- def _check_and_update(key, value):
402
- assert value is not None
403
- if key in _known_status:
404
- if not _known_status[key] == value:
405
- raise RuntimeError(
406
- "Confilict status for {}, existing status {}, new status {}".format(
407
- key, _known_status[key], value
408
- )
409
- )
410
- _known_status[key] = value
411
-
412
- def _update_i(op, ssa_i):
413
- versioned_inputs = ssa_i[0]
414
- versioned_outputs = ssa_i[1]
415
-
416
- inputs_status = [_known_status.get(b, None) for b in versioned_inputs]
417
- outputs_status = [_known_status.get(b, None) for b in versioned_outputs]
418
-
419
- new_inputs_status, new_outputs_status = status_updater(op, inputs_status, outputs_status)
420
-
421
- for versioned_blob, status in zip(
422
- versioned_inputs + versioned_outputs, new_inputs_status + new_outputs_status
423
- ):
424
- if status is not None:
425
- _check_and_update(versioned_blob, status)
426
-
427
- for op, ssa_i in zip(predict_net.op, ssa):
428
- _update_i(op, ssa_i)
429
- for op, ssa_i in zip(reversed(predict_net.op), reversed(ssa)):
430
- _update_i(op, ssa_i)
431
-
432
- # NOTE: This strictly checks that every blob from predict_net is assigned
433
- # a known status. However sometimes that's impossible (e.g. a dead-end op),
434
- # so we may relax this constraint if needed.
435
- for k in all_versioned_blobs:
436
- if k not in _known_status:
437
- raise NotImplementedError(
438
- "Can not infer the status for {}. Currently only support the case where"
439
- " a single forward and backward pass can identify status for all blobs.".format(k)
440
- )
441
-
442
- return _known_status
443
-
444
-
445
- def infer_device_type(
446
- predict_net: caffe2_pb2.NetDef,
447
- known_status: Dict[Tuple[str, int], Any],
448
- device_name_style: str = "caffe2",
449
- ) -> Dict[Tuple[str, int], str]:
450
- """ Return the device type ("cpu" or "gpu"/"cuda") of each (versioned) blob """
451
-
452
- assert device_name_style in ["caffe2", "pytorch"]
453
- _CPU_STR = "cpu"
454
- _GPU_STR = "gpu" if device_name_style == "caffe2" else "cuda"
455
-
456
- def _copy_cpu_to_gpu_updater(op, input_types, output_types):
457
- if input_types[0] == _GPU_STR or output_types[0] == _CPU_STR:
458
- _updater_raise(op, input_types, output_types)
459
- return ([_CPU_STR], [_GPU_STR])
460
-
461
- def _copy_gpu_to_cpu_updater(op, input_types, output_types):
462
- if input_types[0] == _CPU_STR or output_types[0] == _GPU_STR:
463
- _updater_raise(op, input_types, output_types)
464
- return ([_GPU_STR], [_CPU_STR])
465
-
466
- def _other_ops_updater(op, input_types, output_types):
467
- non_none_types = [x for x in input_types + output_types if x is not None]
468
- if len(non_none_types) > 0:
469
- the_type = non_none_types[0]
470
- if not all(x == the_type for x in non_none_types):
471
- _updater_raise(op, input_types, output_types)
472
- else:
473
- the_type = None
474
- return ([the_type for _ in op.input], [the_type for _ in op.output])
475
-
476
- def _device_updater(op, *args, **kwargs):
477
- return {
478
- "CopyCPUToGPU": _copy_cpu_to_gpu_updater,
479
- "CopyGPUToCPU": _copy_gpu_to_cpu_updater,
480
- }.get(op.type, _other_ops_updater)(op, *args, **kwargs)
481
-
482
- return _generic_status_identifier(predict_net, _device_updater, known_status)
483
-
484
-
485
- # ==== torch/utils_caffe2/vis.py ===============================================
486
-
487
-
488
- def _modify_blob_names(ops, blob_rename_f):
489
- ret = []
490
-
491
- def _replace_list(blob_list, replaced_list):
492
- del blob_list[:]
493
- blob_list.extend(replaced_list)
494
-
495
- for x in ops:
496
- cur = copy.deepcopy(x)
497
- _replace_list(cur.input, list(map(blob_rename_f, cur.input)))
498
- _replace_list(cur.output, list(map(blob_rename_f, cur.output)))
499
- ret.append(cur)
500
-
501
- return ret
502
-
503
-
504
- def _rename_blob(name, blob_sizes, blob_ranges):
505
- def _list_to_str(bsize):
506
- ret = ", ".join([str(x) for x in bsize])
507
- ret = "[" + ret + "]"
508
- return ret
509
-
510
- ret = name
511
- if blob_sizes is not None and name in blob_sizes:
512
- ret += "\n" + _list_to_str(blob_sizes[name])
513
- if blob_ranges is not None and name in blob_ranges:
514
- ret += "\n" + _list_to_str(blob_ranges[name])
515
-
516
- return ret
517
-
518
-
519
- # graph_name must not contain the word 'graph'
520
- def save_graph(net, file_name, graph_name="net", op_only=True, blob_sizes=None, blob_ranges=None):
521
- blob_rename_f = functools.partial(_rename_blob, blob_sizes=blob_sizes, blob_ranges=blob_ranges)
522
- return save_graph_base(net, file_name, graph_name, op_only, blob_rename_f)
523
-
524
-
525
- def save_graph_base(net, file_name, graph_name="net", op_only=True, blob_rename_func=None):
526
- graph = None
527
- ops = net.op
528
- if blob_rename_func is not None:
529
- ops = _modify_blob_names(ops, blob_rename_func)
530
- if not op_only:
531
- graph = net_drawer.GetPydotGraph(ops, graph_name, rankdir="TB")
532
- else:
533
- graph = net_drawer.GetPydotGraphMinimal(
534
- ops, graph_name, rankdir="TB", minimal_dependency=True
535
- )
536
-
537
- try:
538
- par_dir = os.path.dirname(file_name)
539
- if not os.path.exists(par_dir):
540
- os.makedirs(par_dir)
541
-
542
- format = os.path.splitext(os.path.basename(file_name))[-1]
543
- if format == ".png":
544
- graph.write_png(file_name)
545
- elif format == ".pdf":
546
- graph.write_pdf(file_name)
547
- elif format == ".svg":
548
- graph.write_svg(file_name)
549
- else:
550
- print("Incorrect format {}".format(format))
551
- except Exception as e:
552
- print("Error when writing graph to image {}".format(e))
553
-
554
- return graph
555
-
556
-
557
- # ==== torch/utils_toffee/aten_to_caffe2.py ====================================
558
-
559
-
560
- def group_norm_replace_aten_with_caffe2(predict_net: caffe2_pb2.NetDef):
561
- """
562
- For ONNX exported model, GroupNorm will be represented as ATen op,
563
- this can be a drop in replacement from ATen to GroupNorm
564
- """
565
- count = 0
566
- for op in predict_net.op:
567
- if op.type == "ATen":
568
- op_name = get_pb_arg_vals(op, "operator", None) # return byte in py3
569
- if op_name and op_name.decode() == "group_norm":
570
- op.arg.remove(get_pb_arg(op, "operator"))
571
-
572
- if get_pb_arg_vali(op, "cudnn_enabled", None):
573
- op.arg.remove(get_pb_arg(op, "cudnn_enabled"))
574
-
575
- num_groups = get_pb_arg_vali(op, "num_groups", None)
576
- if num_groups is not None:
577
- op.arg.remove(get_pb_arg(op, "num_groups"))
578
- check_set_pb_arg(op, "group", "i", num_groups)
579
-
580
- op.type = "GroupNorm"
581
- count += 1
582
- if count > 1:
583
- logger.info("Replaced {} ATen operator to GroupNormOp".format(count))
584
-
585
-
586
- # ==== torch/utils_toffee/alias.py =============================================
587
-
588
-
589
- def alias(x, name, is_backward=False):
590
- if not torch.onnx.is_in_onnx_export():
591
- return x
592
- assert isinstance(x, torch.Tensor)
593
- return torch.ops._caffe2.AliasWithName(x, name, is_backward=is_backward)
594
-
595
-
596
- def fuse_alias_placeholder(predict_net, init_net):
597
- """ Remove AliasWithName placeholder and rename the input/output of it """
598
- # First we finish all the re-naming
599
- for i, op in enumerate(predict_net.op):
600
- if op.type == "AliasWithName":
601
- assert len(op.input) == 1
602
- assert len(op.output) == 1
603
- name = get_pb_arg_vals(op, "name", None).decode()
604
- is_backward = bool(get_pb_arg_vali(op, "is_backward", 0))
605
- rename_op_input(predict_net, init_net, i, 0, name, from_producer=is_backward)
606
- rename_op_output(predict_net, i, 0, name)
607
-
608
- # Remove AliasWithName, should be very safe since it's a non-op
609
- new_ops = []
610
- for op in predict_net.op:
611
- if op.type != "AliasWithName":
612
- new_ops.append(op)
613
- else:
614
- # safety check
615
- assert op.input == op.output
616
- assert op.input[0] == op.arg[0].s.decode()
617
- del predict_net.op[:]
618
- predict_net.op.extend(new_ops)
619
-
620
-
621
- # ==== torch/utils_caffe2/graph_transform.py ===================================
622
-
623
-
624
- class IllegalGraphTransformError(ValueError):
625
- """ When a graph transform function call can't be executed. """
626
-
627
-
628
- def _rename_versioned_blob_in_proto(
629
- proto: caffe2_pb2.NetDef,
630
- old_name: str,
631
- new_name: str,
632
- version: int,
633
- ssa: List[Tuple[List[Tuple[str, int]], List[Tuple[str, int]]]],
634
- start_versions: Dict[str, int],
635
- end_versions: Dict[str, int],
636
- ):
637
- """ In given proto, rename all blobs with matched version """
638
- # Operater list
639
- for op, i_th_ssa in zip(proto.op, ssa):
640
- versioned_inputs, versioned_outputs = i_th_ssa
641
- for i in range(len(op.input)):
642
- if versioned_inputs[i] == (old_name, version):
643
- op.input[i] = new_name
644
- for i in range(len(op.output)):
645
- if versioned_outputs[i] == (old_name, version):
646
- op.output[i] = new_name
647
- # external_input
648
- if start_versions.get(old_name, 0) == version:
649
- for i in range(len(proto.external_input)):
650
- if proto.external_input[i] == old_name:
651
- proto.external_input[i] = new_name
652
- # external_output
653
- if end_versions.get(old_name, 0) == version:
654
- for i in range(len(proto.external_output)):
655
- if proto.external_output[i] == old_name:
656
- proto.external_output[i] = new_name
657
-
658
-
659
- def rename_op_input(
660
- predict_net: caffe2_pb2.NetDef,
661
- init_net: caffe2_pb2.NetDef,
662
- op_id: int,
663
- input_id: int,
664
- new_name: str,
665
- from_producer: bool = False,
666
- ):
667
- """
668
- Rename the op_id-th operator in predict_net, changing its input_id-th input's
669
- name to the new_name. It also does automatic re-routing and changes
670
- external_input and init_net if necessary.
671
- - It requires that the input is only consumed by this op.
672
- - This function modifies predict_net and init_net in-place.
673
- - When from_producer is enabled, this also updates other operators that consume
674
- the same input. Be cautious because this may trigger unintended behavior.
675
- """
676
- assert isinstance(predict_net, caffe2_pb2.NetDef)
677
- assert isinstance(init_net, caffe2_pb2.NetDef)
678
-
679
- init_net_ssa, init_net_versions = core.get_ssa(init_net)
680
- predict_net_ssa, predict_net_versions = core.get_ssa(
681
- predict_net, copy.deepcopy(init_net_versions)
682
- )
683
-
684
- versioned_inputs, versioned_outputs = predict_net_ssa[op_id]
685
- old_name, version = versioned_inputs[input_id]
686
-
687
- if from_producer:
688
- producer_map = get_producer_map(predict_net_ssa)
689
- if not (old_name, version) in producer_map:
690
- raise NotImplementedError(
691
- "Can't find producer, the input {} is probably from"
692
- " init_net, this is not supported yet.".format(old_name)
693
- )
694
- producer = producer_map[(old_name, version)]
695
- rename_op_output(predict_net, producer[0], producer[1], new_name)
696
- return
697
-
698
- def contain_targets(op_ssa):
699
- return (old_name, version) in op_ssa[0]
700
-
701
- is_consumer = [contain_targets(op_ssa) for op_ssa in predict_net_ssa]
702
- if sum(is_consumer) > 1:
703
- raise IllegalGraphTransformError(
704
- (
705
- "Input '{}' of operator(#{}) are consumed by other ops, please use"
706
- + " rename_op_output on the producer instead. Offending op: \n{}"
707
- ).format(old_name, op_id, predict_net.op[op_id])
708
- )
709
-
710
- # update init_net
711
- _rename_versioned_blob_in_proto(
712
- init_net, old_name, new_name, version, init_net_ssa, {}, init_net_versions
713
- )
714
- # update predict_net
715
- _rename_versioned_blob_in_proto(
716
- predict_net,
717
- old_name,
718
- new_name,
719
- version,
720
- predict_net_ssa,
721
- init_net_versions,
722
- predict_net_versions,
723
- )
724
-
725
-
726
- def rename_op_output(predict_net: caffe2_pb2.NetDef, op_id: int, output_id: int, new_name: str):
727
- """
728
-     Rename the op_id-th operator in predict_net, change its output_id-th output's
729
-     name to the new_name. It also does automatic re-routing and changes
730
-     external_output if necessary.
731
- - It allows multiple consumers of its output.
732
- - This function modifies predict_net in-place, doesn't need init_net.
733
- """
734
- assert isinstance(predict_net, caffe2_pb2.NetDef)
735
-
736
- ssa, blob_versions = core.get_ssa(predict_net)
737
-
738
- versioned_inputs, versioned_outputs = ssa[op_id]
739
- old_name, version = versioned_outputs[output_id]
740
-
741
- # update predict_net
742
- _rename_versioned_blob_in_proto(
743
- predict_net, old_name, new_name, version, ssa, {}, blob_versions
744
- )
745
-
746
-
747
- def get_sub_graph_external_input_output(
748
- predict_net: caffe2_pb2.NetDef, sub_graph_op_indices: List[int]
749
- ) -> Tuple[List[Tuple[str, int]], List[Tuple[str, int]]]:
750
- """
751
- Return the list of external input/output of sub-graph,
752
- each element is tuple of the name and corresponding version in predict_net.
753
-
754
-     External input/output is defined the same way as in caffe2 NetDef.
755
- """
756
- ssa, versions = core.get_ssa(predict_net)
757
-
758
- all_inputs = []
759
- all_outputs = []
760
- for op_id in sub_graph_op_indices:
761
- all_inputs += [inp for inp in ssa[op_id][0] if inp not in all_inputs]
762
- all_outputs += list(ssa[op_id][1]) # ssa output won't repeat
763
-
764
- # for versioned blobs, external inputs are just those blob in all_inputs
765
- # but not in all_outputs
766
- ext_inputs = [inp for inp in all_inputs if inp not in all_outputs]
767
-
768
- # external outputs are essentially outputs of this subgraph that are used
769
- # outside of this sub-graph (including predict_net.external_output)
770
- all_other_inputs = sum(
771
- (ssa[i][0] for i in range(len(ssa)) if i not in sub_graph_op_indices),
772
- [(outp, versions[outp]) for outp in predict_net.external_output],
773
- )
774
- ext_outputs = [outp for outp in all_outputs if outp in set(all_other_inputs)]
775
-
776
- return ext_inputs, ext_outputs
777
-
778
-
779
- class DiGraph:
780
-     """ A DAG representation of a caffe2 graph; each vertex is a versioned blob. """
781
-
782
- def __init__(self):
783
- self.vertices = set()
784
- self.graph = collections.defaultdict(list)
785
-
786
- def add_edge(self, u, v):
787
- self.graph[u].append(v)
788
- self.vertices.add(u)
789
- self.vertices.add(v)
790
-
791
- # grab from https://www.geeksforgeeks.org/find-paths-given-source-destination/
792
- def get_all_paths(self, s, d):
793
- visited = {k: False for k in self.vertices}
794
- path = []
795
- all_paths = []
796
-
797
- def _get_all_paths_util(graph, u, d, visited, path):
798
- visited[u] = True
799
- path.append(u)
800
- if u == d:
801
- all_paths.append(copy.deepcopy(path))
802
- else:
803
- for i in graph[u]:
804
- if not visited[i]:
805
- _get_all_paths_util(graph, i, d, visited, path)
806
- path.pop()
807
- visited[u] = False
808
-
809
- _get_all_paths_util(self.graph, s, d, visited, path)
810
- return all_paths
811
-
812
- @staticmethod
813
- def from_ssa(ssa):
814
- graph = DiGraph()
815
- for op_id in range(len(ssa)):
816
- for inp in ssa[op_id][0]:
817
- for outp in ssa[op_id][1]:
818
- graph.add_edge(inp, outp)
819
- return graph
820
-
821
-
822
- def _get_dependency_chain(ssa, versioned_target, versioned_source):
823
- """
824
-     Return the index list of relevant operators needed to produce the target blob from the source blob,
825
- if there's no dependency, return empty list.
826
- """
827
-
828
- # finding all paths between nodes can be O(N!), thus we can only search
829
- # in the subgraph using the op starting from the first consumer of source blob
830
- # to the producer of the target blob.
831
- consumer_map = get_consumer_map(ssa)
832
- producer_map = get_producer_map(ssa)
833
- start_op = min(x[0] for x in consumer_map[versioned_source]) - 15
834
- end_op = (
835
- producer_map[versioned_target][0] + 15 if versioned_target in producer_map else start_op
836
- )
837
- sub_graph_ssa = ssa[start_op : end_op + 1]
838
- if len(sub_graph_ssa) > 30:
839
- logger.warning(
840
-             "Subgraph between {} and {} is large (from op#{} to op#{}), it"
841
-             " might take non-trivial time to find all paths between them.".format(
842
- versioned_source, versioned_target, start_op, end_op
843
- )
844
- )
845
-
846
- dag = DiGraph.from_ssa(sub_graph_ssa)
847
- paths = dag.get_all_paths(versioned_source, versioned_target) # include two ends
848
- ops_in_paths = [[producer_map[blob][0] for blob in path[1:]] for path in paths]
849
- return sorted(set().union(*[set(ops) for ops in ops_in_paths]))
850
-
851
-
852
- def identify_reshape_sub_graph(predict_net: caffe2_pb2.NetDef,) -> List[List[int]]:
853
- """
854
-     Identify the reshape sub-graph in a protobuf.
855
- The reshape sub-graph is defined as matching the following pattern:
856
-
857
- (input_blob) -> Op_1 -> ... -> Op_N -> (new_shape) -─┐
858
- └-------------------------------------------> Reshape -> (output_blob)
859
-
860
- Return:
861
- List of sub-graphs, each sub-graph is represented as a list of indices
862
-         of the relevant ops, [Op_1, Op_2, ..., Op_N, Reshape]
863
- """
864
-
865
- ssa, _ = core.get_ssa(predict_net)
866
-
867
- ret = []
868
- for i, op in enumerate(predict_net.op):
869
- if op.type == "Reshape":
870
- assert len(op.input) == 2
871
- input_ssa = ssa[i][0]
872
- data_source = input_ssa[0]
873
- shape_source = input_ssa[1]
874
- op_indices = _get_dependency_chain(ssa, shape_source, data_source)
875
- ret.append(op_indices + [i])
876
- return ret
877
-
878
-
879
- def remove_reshape_for_fc(predict_net, params):
880
- """
881
-     In PyTorch, nn.Linear has to take a 2D tensor; this often leads to reshaping
882
- a 4D tensor to 2D by calling .view(). However this (dynamic) reshaping
883
-     doesn't work well with ONNX and Int8 tools, and causes extra
884
-     ops (e.g. ExpandDims) that might not be available on mobile.
885
-     Luckily, Caffe2 supports 4D tensors for FC, so we can remove those reshapes
886
- after exporting ONNX model.
887
- """
888
- from caffe2.python import core
889
-
890
- # find all reshape sub-graph that can be removed, which is now all Reshape
891
- # sub-graph whose output is only consumed by FC.
892
-     # TODO: to make it safer, we may need the actual value to better determine
893
- # if a Reshape before FC is removable.
894
- reshape_sub_graphs = identify_reshape_sub_graph(predict_net)
895
- sub_graphs_to_remove = []
896
- for reshape_sub_graph in reshape_sub_graphs:
897
- reshape_op_id = reshape_sub_graph[-1]
898
- assert predict_net.op[reshape_op_id].type == "Reshape"
899
- ssa, _ = core.get_ssa(predict_net)
900
- reshape_output = ssa[reshape_op_id][1][0]
901
- consumers = [i for i in range(len(ssa)) if reshape_output in ssa[i][0]]
902
- if all(predict_net.op[consumer].type == "FC" for consumer in consumers):
903
- # safety check if the sub-graph is isolated, for this reshape sub-graph,
904
- # it means it has one non-param external input and one external output.
905
- ext_inputs, ext_outputs = get_sub_graph_external_input_output(
906
- predict_net, reshape_sub_graph
907
- )
908
- non_params_ext_inputs = [inp for inp in ext_inputs if inp[1] != 0]
909
- if len(non_params_ext_inputs) == 1 and len(ext_outputs) == 1:
910
- sub_graphs_to_remove.append(reshape_sub_graph)
911
-
912
- # perform removing subgraph by:
913
- # 1: rename the Reshape's output to its input, then the graph can be
914
-     #    seen as an in-place identity, meaning its external input/output are the same.
915
- # 2: simply remove those ops.
916
- remove_op_ids = []
917
- params_to_remove = []
918
- for sub_graph in sub_graphs_to_remove:
919
- logger.info(
920
- "Remove Reshape sub-graph:\n{}".format(
921
- "".join(["(#{:>4})\n{}".format(i, predict_net.op[i]) for i in sub_graph])
922
- )
923
- )
924
- reshape_op_id = sub_graph[-1]
925
- new_reshap_output = predict_net.op[reshape_op_id].input[0]
926
- rename_op_output(predict_net, reshape_op_id, 0, new_reshap_output)
927
- ext_inputs, ext_outputs = get_sub_graph_external_input_output(predict_net, sub_graph)
928
- non_params_ext_inputs = [inp for inp in ext_inputs if inp[1] != 0]
929
- params_ext_inputs = [inp for inp in ext_inputs if inp[1] == 0]
930
- assert len(non_params_ext_inputs) == 1 and len(ext_outputs) == 1
931
- assert ext_outputs[0][0] == non_params_ext_inputs[0][0]
932
- assert ext_outputs[0][1] == non_params_ext_inputs[0][1] + 1
933
- remove_op_ids.extend(sub_graph)
934
- params_to_remove.extend(params_ext_inputs)
935
-
936
- predict_net = copy.deepcopy(predict_net)
937
- new_ops = [op for i, op in enumerate(predict_net.op) if i not in remove_op_ids]
938
- del predict_net.op[:]
939
- predict_net.op.extend(new_ops)
940
- for versioned_params in params_to_remove:
941
- name = versioned_params[0]
942
- logger.info("Remove params: {} from init_net and predict_net.external_input".format(name))
943
- del params[name]
944
- predict_net.external_input.remove(name)
945
-
946
- return predict_net, params
947
-
948
-
949
- def fuse_copy_between_cpu_and_gpu(predict_net: caffe2_pb2.NetDef):
950
- """
951
- In-place fuse extra copy ops between cpu/gpu for the following case:
952
- a -CopyAToB-> b -CopyBToA> c1 -NextOp1-> d1
953
- -CopyBToA> c2 -NextOp2-> d2
954
- The fused network will look like:
955
- a -NextOp1-> d1
956
- -NextOp2-> d2
957
- """
958
-
959
- _COPY_OPS = ["CopyCPUToGPU", "CopyGPUToCPU"]
960
-
961
- def _fuse_once(predict_net):
962
- ssa, blob_versions = core.get_ssa(predict_net)
963
- consumer_map = get_consumer_map(ssa)
964
- versioned_external_output = [
965
- (name, blob_versions[name]) for name in predict_net.external_output
966
- ]
967
-
968
- for op_id, op in enumerate(predict_net.op):
969
- if op.type in _COPY_OPS:
970
- fw_copy_versioned_output = ssa[op_id][1][0]
971
- consumer_ids = [x[0] for x in consumer_map[fw_copy_versioned_output]]
972
- reverse_op_type = _COPY_OPS[1 - _COPY_OPS.index(op.type)]
973
-
974
- is_fusable = (
975
- len(consumer_ids) > 0
976
- and fw_copy_versioned_output not in versioned_external_output
977
- and all(
978
- predict_net.op[_op_id].type == reverse_op_type
979
- and ssa[_op_id][1][0] not in versioned_external_output
980
- for _op_id in consumer_ids
981
- )
982
- )
983
-
984
- if is_fusable:
985
- for rv_copy_op_id in consumer_ids:
986
-                         # make each NextOp use "a" directly and remove the Copy ops
987
- rs_copy_versioned_output = ssa[rv_copy_op_id][1][0]
988
- next_op_id, inp_id = consumer_map[rs_copy_versioned_output][0]
989
- predict_net.op[next_op_id].input[inp_id] = op.input[0]
990
- # remove CopyOps
991
- new_ops = [
992
- op
993
- for i, op in enumerate(predict_net.op)
994
- if i != op_id and i not in consumer_ids
995
- ]
996
- del predict_net.op[:]
997
- predict_net.op.extend(new_ops)
998
- return True
999
-
1000
- return False
1001
-
1002
-     # _fuse_once returns False if nothing can be fused
1003
- while _fuse_once(predict_net):
1004
- pass
1005
-
1006
-
1007
- def remove_dead_end_ops(net_def: caffe2_pb2.NetDef):
1008
-     """ Remove ops whose outputs are neither consumed nor listed in external_output. """
1009
- ssa, versions = core.get_ssa(net_def)
1010
- versioned_external_output = [(name, versions[name]) for name in net_def.external_output]
1011
- consumer_map = get_consumer_map(ssa)
1012
- removed_op_ids = set()
1013
-
1014
- def _is_dead_end(versioned_blob):
1015
- return not (
1016
- versioned_blob in versioned_external_output
1017
- or (
1018
- len(consumer_map[versioned_blob]) > 0
1019
- and all(x[0] not in removed_op_ids for x in consumer_map[versioned_blob])
1020
- )
1021
- )
1022
-
1023
- for i, ssa_i in reversed(list(enumerate(ssa))):
1024
- versioned_outputs = ssa_i[1]
1025
- if all(_is_dead_end(outp) for outp in versioned_outputs):
1026
- removed_op_ids.add(i)
1027
-
1028
-     # simply removing those dead-end ops should have no effect on external_output
1029
- new_ops = [op for i, op in enumerate(net_def.op) if i not in removed_op_ids]
1030
- del net_def.op[:]
1031
- net_def.op.extend(new_ops)
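For context on the graph-transform helpers removed above, here is a minimal usage sketch of `remove_dead_end_ops`. It is not part of this repository; it assumes caffe2 is installed and that the deleted function is importable, and the operator and blob names are illustrative only.

```python
# Hedged sketch: assumes caffe2 is available and that remove_dead_end_ops
# (deleted above) is importable; names are illustrative.
from caffe2.proto import caffe2_pb2
from caffe2.python import core

net = caffe2_pb2.NetDef()
net.op.extend([
    core.CreateOperator("Relu", ["x"], ["y"]),    # produces the external output -> kept
    core.CreateOperator("Relu", ["x"], ["tmp"]),  # output never consumed -> dead end
])
net.external_input.append("x")
net.external_output.append("y")

remove_dead_end_ops(net)  # modifies net in place
assert len(net.op) == 1 and net.op[0].output[0] == "y"
```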
 
spaces/CVPR/LIVE/pybind11/tests/test_builtin_casters.py DELETED
@@ -1,392 +0,0 @@
1
- # -*- coding: utf-8 -*-
2
- import pytest
3
-
4
- import env # noqa: F401
5
-
6
- from pybind11_tests import builtin_casters as m
7
- from pybind11_tests import UserType, IncType
8
-
9
-
10
- def test_simple_string():
11
- assert m.string_roundtrip("const char *") == "const char *"
12
-
13
-
14
- def test_unicode_conversion():
15
- """Tests unicode conversion and error reporting."""
16
- assert m.good_utf8_string() == u"Say utf8‽ 🎂 𝐀"
17
- assert m.good_utf16_string() == u"b‽🎂𝐀z"
18
- assert m.good_utf32_string() == u"a𝐀🎂‽z"
19
- assert m.good_wchar_string() == u"a⸘𝐀z"
20
- if hasattr(m, "has_u8string"):
21
- assert m.good_utf8_u8string() == u"Say utf8‽ 🎂 𝐀"
22
-
23
- with pytest.raises(UnicodeDecodeError):
24
- m.bad_utf8_string()
25
-
26
- with pytest.raises(UnicodeDecodeError):
27
- m.bad_utf16_string()
28
-
29
- # These are provided only if they actually fail (they don't when 32-bit and under Python 2.7)
30
- if hasattr(m, "bad_utf32_string"):
31
- with pytest.raises(UnicodeDecodeError):
32
- m.bad_utf32_string()
33
- if hasattr(m, "bad_wchar_string"):
34
- with pytest.raises(UnicodeDecodeError):
35
- m.bad_wchar_string()
36
- if hasattr(m, "has_u8string"):
37
- with pytest.raises(UnicodeDecodeError):
38
- m.bad_utf8_u8string()
39
-
40
- assert m.u8_Z() == 'Z'
41
- assert m.u8_eacute() == u'é'
42
- assert m.u16_ibang() == u'‽'
43
- assert m.u32_mathbfA() == u'𝐀'
44
- assert m.wchar_heart() == u'♥'
45
- if hasattr(m, "has_u8string"):
46
- assert m.u8_char8_Z() == 'Z'
47
-
48
-
49
- def test_single_char_arguments():
50
- """Tests failures for passing invalid inputs to char-accepting functions"""
51
- def toobig_message(r):
52
- return "Character code point not in range({0:#x})".format(r)
53
- toolong_message = "Expected a character, but multi-character string found"
54
-
55
- assert m.ord_char(u'a') == 0x61 # simple ASCII
56
- assert m.ord_char_lv(u'b') == 0x62
57
- assert m.ord_char(u'é') == 0xE9 # requires 2 bytes in utf-8, but can be stuffed in a char
58
- with pytest.raises(ValueError) as excinfo:
59
- assert m.ord_char(u'Ā') == 0x100 # requires 2 bytes, doesn't fit in a char
60
- assert str(excinfo.value) == toobig_message(0x100)
61
- with pytest.raises(ValueError) as excinfo:
62
- assert m.ord_char(u'ab')
63
- assert str(excinfo.value) == toolong_message
64
-
65
- assert m.ord_char16(u'a') == 0x61
66
- assert m.ord_char16(u'é') == 0xE9
67
- assert m.ord_char16_lv(u'ê') == 0xEA
68
- assert m.ord_char16(u'Ā') == 0x100
69
- assert m.ord_char16(u'‽') == 0x203d
70
- assert m.ord_char16(u'♥') == 0x2665
71
- assert m.ord_char16_lv(u'♡') == 0x2661
72
- with pytest.raises(ValueError) as excinfo:
73
- assert m.ord_char16(u'🎂') == 0x1F382 # requires surrogate pair
74
- assert str(excinfo.value) == toobig_message(0x10000)
75
- with pytest.raises(ValueError) as excinfo:
76
- assert m.ord_char16(u'aa')
77
- assert str(excinfo.value) == toolong_message
78
-
79
- assert m.ord_char32(u'a') == 0x61
80
- assert m.ord_char32(u'é') == 0xE9
81
- assert m.ord_char32(u'Ā') == 0x100
82
- assert m.ord_char32(u'‽') == 0x203d
83
- assert m.ord_char32(u'♥') == 0x2665
84
- assert m.ord_char32(u'🎂') == 0x1F382
85
- with pytest.raises(ValueError) as excinfo:
86
- assert m.ord_char32(u'aa')
87
- assert str(excinfo.value) == toolong_message
88
-
89
- assert m.ord_wchar(u'a') == 0x61
90
- assert m.ord_wchar(u'é') == 0xE9
91
- assert m.ord_wchar(u'Ā') == 0x100
92
- assert m.ord_wchar(u'‽') == 0x203d
93
- assert m.ord_wchar(u'♥') == 0x2665
94
- if m.wchar_size == 2:
95
- with pytest.raises(ValueError) as excinfo:
96
- assert m.ord_wchar(u'🎂') == 0x1F382 # requires surrogate pair
97
- assert str(excinfo.value) == toobig_message(0x10000)
98
- else:
99
- assert m.ord_wchar(u'🎂') == 0x1F382
100
- with pytest.raises(ValueError) as excinfo:
101
- assert m.ord_wchar(u'aa')
102
- assert str(excinfo.value) == toolong_message
103
-
104
- if hasattr(m, "has_u8string"):
105
- assert m.ord_char8(u'a') == 0x61 # simple ASCII
106
- assert m.ord_char8_lv(u'b') == 0x62
107
- assert m.ord_char8(u'é') == 0xE9 # requires 2 bytes in utf-8, but can be stuffed in a char
108
- with pytest.raises(ValueError) as excinfo:
109
- assert m.ord_char8(u'Ā') == 0x100 # requires 2 bytes, doesn't fit in a char
110
- assert str(excinfo.value) == toobig_message(0x100)
111
- with pytest.raises(ValueError) as excinfo:
112
- assert m.ord_char8(u'ab')
113
- assert str(excinfo.value) == toolong_message
114
-
115
-
116
- def test_bytes_to_string():
117
- """Tests the ability to pass bytes to C++ string-accepting functions. Note that this is
118
- one-way: the only way to return bytes to Python is via the pybind11::bytes class."""
119
- # Issue #816
120
-
121
- def to_bytes(s):
122
- b = s if env.PY2 else s.encode("utf8")
123
- assert isinstance(b, bytes)
124
- return b
125
-
126
- assert m.strlen(to_bytes("hi")) == 2
127
- assert m.string_length(to_bytes("world")) == 5
128
- assert m.string_length(to_bytes("a\x00b")) == 3
129
- assert m.strlen(to_bytes("a\x00b")) == 1 # C-string limitation
130
-
131
- # passing in a utf8 encoded string should work
132
- assert m.string_length(u'💩'.encode("utf8")) == 4
133
-
134
-
135
- @pytest.mark.skipif(not hasattr(m, "has_string_view"), reason="no <string_view>")
136
- def test_string_view(capture):
137
- """Tests support for C++17 string_view arguments and return values"""
138
- assert m.string_view_chars("Hi") == [72, 105]
139
- assert m.string_view_chars("Hi 🎂") == [72, 105, 32, 0xf0, 0x9f, 0x8e, 0x82]
140
- assert m.string_view16_chars(u"Hi 🎂") == [72, 105, 32, 0xd83c, 0xdf82]
141
- assert m.string_view32_chars(u"Hi 🎂") == [72, 105, 32, 127874]
142
- if hasattr(m, "has_u8string"):
143
- assert m.string_view8_chars("Hi") == [72, 105]
144
- assert m.string_view8_chars(u"Hi 🎂") == [72, 105, 32, 0xf0, 0x9f, 0x8e, 0x82]
145
-
146
- assert m.string_view_return() == u"utf8 secret 🎂"
147
- assert m.string_view16_return() == u"utf16 secret 🎂"
148
- assert m.string_view32_return() == u"utf32 secret 🎂"
149
- if hasattr(m, "has_u8string"):
150
- assert m.string_view8_return() == u"utf8 secret 🎂"
151
-
152
- with capture:
153
- m.string_view_print("Hi")
154
- m.string_view_print("utf8 🎂")
155
- m.string_view16_print(u"utf16 🎂")
156
- m.string_view32_print(u"utf32 🎂")
157
- assert capture == u"""
158
- Hi 2
159
- utf8 🎂 9
160
- utf16 🎂 8
161
- utf32 🎂 7
162
- """
163
- if hasattr(m, "has_u8string"):
164
- with capture:
165
- m.string_view8_print("Hi")
166
- m.string_view8_print(u"utf8 🎂")
167
- assert capture == u"""
168
- Hi 2
169
- utf8 🎂 9
170
- """
171
-
172
- with capture:
173
- m.string_view_print("Hi, ascii")
174
- m.string_view_print("Hi, utf8 🎂")
175
- m.string_view16_print(u"Hi, utf16 🎂")
176
- m.string_view32_print(u"Hi, utf32 🎂")
177
- assert capture == u"""
178
- Hi, ascii 9
179
- Hi, utf8 🎂 13
180
- Hi, utf16 🎂 12
181
- Hi, utf32 🎂 11
182
- """
183
- if hasattr(m, "has_u8string"):
184
- with capture:
185
- m.string_view8_print("Hi, ascii")
186
- m.string_view8_print(u"Hi, utf8 🎂")
187
- assert capture == u"""
188
- Hi, ascii 9
189
- Hi, utf8 🎂 13
190
- """
191
-
192
-
193
- def test_integer_casting():
194
- """Issue #929 - out-of-range integer values shouldn't be accepted"""
195
- assert m.i32_str(-1) == "-1"
196
- assert m.i64_str(-1) == "-1"
197
- assert m.i32_str(2000000000) == "2000000000"
198
- assert m.u32_str(2000000000) == "2000000000"
199
- if env.PY2:
200
- assert m.i32_str(long(-1)) == "-1" # noqa: F821 undefined name 'long'
201
- assert m.i64_str(long(-1)) == "-1" # noqa: F821 undefined name 'long'
202
- assert m.i64_str(long(-999999999999)) == "-999999999999" # noqa: F821 undefined name
203
- assert m.u64_str(long(999999999999)) == "999999999999" # noqa: F821 undefined name 'long'
204
- else:
205
- assert m.i64_str(-999999999999) == "-999999999999"
206
- assert m.u64_str(999999999999) == "999999999999"
207
-
208
- with pytest.raises(TypeError) as excinfo:
209
- m.u32_str(-1)
210
- assert "incompatible function arguments" in str(excinfo.value)
211
- with pytest.raises(TypeError) as excinfo:
212
- m.u64_str(-1)
213
- assert "incompatible function arguments" in str(excinfo.value)
214
- with pytest.raises(TypeError) as excinfo:
215
- m.i32_str(-3000000000)
216
- assert "incompatible function arguments" in str(excinfo.value)
217
- with pytest.raises(TypeError) as excinfo:
218
- m.i32_str(3000000000)
219
- assert "incompatible function arguments" in str(excinfo.value)
220
-
221
- if env.PY2:
222
- with pytest.raises(TypeError) as excinfo:
223
- m.u32_str(long(-1)) # noqa: F821 undefined name 'long'
224
- assert "incompatible function arguments" in str(excinfo.value)
225
- with pytest.raises(TypeError) as excinfo:
226
- m.u64_str(long(-1)) # noqa: F821 undefined name 'long'
227
- assert "incompatible function arguments" in str(excinfo.value)
228
-
229
-
230
- def test_tuple(doc):
231
- """std::pair <-> tuple & std::tuple <-> tuple"""
232
- assert m.pair_passthrough((True, "test")) == ("test", True)
233
- assert m.tuple_passthrough((True, "test", 5)) == (5, "test", True)
234
- # Any sequence can be cast to a std::pair or std::tuple
235
- assert m.pair_passthrough([True, "test"]) == ("test", True)
236
- assert m.tuple_passthrough([True, "test", 5]) == (5, "test", True)
237
- assert m.empty_tuple() == ()
238
-
239
- assert doc(m.pair_passthrough) == """
240
- pair_passthrough(arg0: Tuple[bool, str]) -> Tuple[str, bool]
241
-
242
- Return a pair in reversed order
243
- """
244
- assert doc(m.tuple_passthrough) == """
245
- tuple_passthrough(arg0: Tuple[bool, str, int]) -> Tuple[int, str, bool]
246
-
247
- Return a triple in reversed order
248
- """
249
-
250
- assert m.rvalue_pair() == ("rvalue", "rvalue")
251
- assert m.lvalue_pair() == ("lvalue", "lvalue")
252
- assert m.rvalue_tuple() == ("rvalue", "rvalue", "rvalue")
253
- assert m.lvalue_tuple() == ("lvalue", "lvalue", "lvalue")
254
- assert m.rvalue_nested() == ("rvalue", ("rvalue", ("rvalue", "rvalue")))
255
- assert m.lvalue_nested() == ("lvalue", ("lvalue", ("lvalue", "lvalue")))
256
-
257
- assert m.int_string_pair() == (2, "items")
258
-
259
-
260
- def test_builtins_cast_return_none():
261
- """Casters produced with PYBIND11_TYPE_CASTER() should convert nullptr to None"""
262
- assert m.return_none_string() is None
263
- assert m.return_none_char() is None
264
- assert m.return_none_bool() is None
265
- assert m.return_none_int() is None
266
- assert m.return_none_float() is None
267
- assert m.return_none_pair() is None
268
-
269
-
270
- def test_none_deferred():
271
- """None passed as various argument types should defer to other overloads"""
272
- assert not m.defer_none_cstring("abc")
273
- assert m.defer_none_cstring(None)
274
- assert not m.defer_none_custom(UserType())
275
- assert m.defer_none_custom(None)
276
- assert m.nodefer_none_void(None)
277
-
278
-
279
- def test_void_caster():
280
- assert m.load_nullptr_t(None) is None
281
- assert m.cast_nullptr_t() is None
282
-
283
-
284
- def test_reference_wrapper():
285
- """std::reference_wrapper for builtin and user types"""
286
- assert m.refwrap_builtin(42) == 420
287
- assert m.refwrap_usertype(UserType(42)) == 42
288
-
289
- with pytest.raises(TypeError) as excinfo:
290
- m.refwrap_builtin(None)
291
- assert "incompatible function arguments" in str(excinfo.value)
292
-
293
- with pytest.raises(TypeError) as excinfo:
294
- m.refwrap_usertype(None)
295
- assert "incompatible function arguments" in str(excinfo.value)
296
-
297
- a1 = m.refwrap_list(copy=True)
298
- a2 = m.refwrap_list(copy=True)
299
- assert [x.value for x in a1] == [2, 3]
300
- assert [x.value for x in a2] == [2, 3]
301
- assert not a1[0] is a2[0] and not a1[1] is a2[1]
302
-
303
- b1 = m.refwrap_list(copy=False)
304
- b2 = m.refwrap_list(copy=False)
305
- assert [x.value for x in b1] == [1, 2]
306
- assert [x.value for x in b2] == [1, 2]
307
- assert b1[0] is b2[0] and b1[1] is b2[1]
308
-
309
- assert m.refwrap_iiw(IncType(5)) == 5
310
- assert m.refwrap_call_iiw(IncType(10), m.refwrap_iiw) == [10, 10, 10, 10]
311
-
312
-
313
- def test_complex_cast():
314
- """std::complex casts"""
315
- assert m.complex_cast(1) == "1.0"
316
- assert m.complex_cast(2j) == "(0.0, 2.0)"
317
-
318
-
319
- def test_bool_caster():
320
- """Test bool caster implicit conversions."""
321
- convert, noconvert = m.bool_passthrough, m.bool_passthrough_noconvert
322
-
323
- def require_implicit(v):
324
- pytest.raises(TypeError, noconvert, v)
325
-
326
- def cant_convert(v):
327
- pytest.raises(TypeError, convert, v)
328
-
329
- # straight up bool
330
- assert convert(True) is True
331
- assert convert(False) is False
332
- assert noconvert(True) is True
333
- assert noconvert(False) is False
334
-
335
- # None requires implicit conversion
336
- require_implicit(None)
337
- assert convert(None) is False
338
-
339
- class A(object):
340
- def __init__(self, x):
341
- self.x = x
342
-
343
- def __nonzero__(self):
344
- return self.x
345
-
346
- def __bool__(self):
347
- return self.x
348
-
349
- class B(object):
350
- pass
351
-
352
- # Arbitrary objects are not accepted
353
- cant_convert(object())
354
- cant_convert(B())
355
-
356
- # Objects with __nonzero__ / __bool__ defined can be converted
357
- require_implicit(A(True))
358
- assert convert(A(True)) is True
359
- assert convert(A(False)) is False
360
-
361
-
362
- def test_numpy_bool():
363
- np = pytest.importorskip("numpy")
364
-
365
- convert, noconvert = m.bool_passthrough, m.bool_passthrough_noconvert
366
-
367
- def cant_convert(v):
368
- pytest.raises(TypeError, convert, v)
369
-
370
- # np.bool_ is not considered implicit
371
- assert convert(np.bool_(True)) is True
372
- assert convert(np.bool_(False)) is False
373
- assert noconvert(np.bool_(True)) is True
374
- assert noconvert(np.bool_(False)) is False
375
- cant_convert(np.zeros(2, dtype='int'))
376
-
377
-
378
- def test_int_long():
379
- """In Python 2, a C++ int should return a Python int rather than long
380
- if possible: longs are not always accepted where ints are used (such
381
- as the argument to sys.exit()). A C++ long long is always a Python
382
- long."""
383
-
384
- import sys
385
- must_be_long = type(getattr(sys, 'maxint', 1) + 1)
386
- assert isinstance(m.int_cast(), int)
387
- assert isinstance(m.long_cast(), int)
388
- assert isinstance(m.longlong_cast(), must_be_long)
389
-
390
-
391
- def test_void_caster_2():
392
- assert m.test_void_caster()
 
spaces/CVPR/LIVE/pydiffvg_tensorflow/pixel_filter.py DELETED
@@ -1,8 +0,0 @@
1
- import tensorflow as tf
2
-
3
- class PixelFilter:
4
- def __init__(self,
5
- type,
6
- radius = tf.constant(0.5)):
7
- self.type = type
8
- self.radius = radius
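The deleted class is only a small value container for the rasterizer's pixel filter; a hedged construction example follows (the filter-type string is a placeholder, not a value taken from pydiffvg):

```python
import tensorflow as tf

# Assumes the PixelFilter class above is importable from pydiffvg_tensorflow.
pf = PixelFilter(type="box", radius=tf.constant(1.0))  # "box" is an illustrative placeholder
print(pf.type, float(pf.radius))
```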
 
spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/tabulate.h DELETED
@@ -1,22 +0,0 @@
1
- /*
2
- * Copyright 2008-2013 NVIDIA Corporation
3
- *
4
- * Licensed under the Apache License, Version 2.0 (the "License");
5
- * you may not use this file except in compliance with the License.
6
- * You may obtain a copy of the License at
7
- *
8
- * http://www.apache.org/licenses/LICENSE-2.0
9
- *
10
- * Unless required by applicable law or agreed to in writing, software
11
- * distributed under the License is distributed on an "AS IS" BASIS,
12
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- * See the License for the specific language governing permissions and
14
- * limitations under the License.
15
- */
16
-
17
- #pragma once
18
-
19
- #include <thrust/detail/config.h>
20
-
21
- // this system has no special tabulate functions
22
-
 
spaces/CVPR/WALT/mmdet/models/losses/kd_loss.py DELETED
@@ -1,87 +0,0 @@
1
- import mmcv
2
- import torch.nn as nn
3
- import torch.nn.functional as F
4
-
5
- from ..builder import LOSSES
6
- from .utils import weighted_loss
7
-
8
-
9
- @mmcv.jit(derivate=True, coderize=True)
10
- @weighted_loss
11
- def knowledge_distillation_kl_div_loss(pred,
12
- soft_label,
13
- T,
14
- detach_target=True):
15
- r"""Loss function for knowledge distilling using KL divergence.
16
-
17
- Args:
18
- pred (Tensor): Predicted logits with shape (N, n + 1).
19
-         soft_label (Tensor): Target logits with shape (N, n + 1).
20
- T (int): Temperature for distillation.
21
- detach_target (bool): Remove soft_label from automatic differentiation
22
-
23
- Returns:
24
- torch.Tensor: Loss tensor with shape (N,).
25
- """
26
- assert pred.size() == soft_label.size()
27
- target = F.softmax(soft_label / T, dim=1)
28
- if detach_target:
29
- target = target.detach()
30
-
31
- kd_loss = F.kl_div(
32
- F.log_softmax(pred / T, dim=1), target, reduction='none').mean(1) * (
33
- T * T)
34
-
35
- return kd_loss
36
-
37
-
38
- @LOSSES.register_module()
39
- class KnowledgeDistillationKLDivLoss(nn.Module):
40
- """Loss function for knowledge distilling using KL divergence.
41
-
42
- Args:
43
- reduction (str): Options are `'none'`, `'mean'` and `'sum'`.
44
- loss_weight (float): Loss weight of current loss.
45
- T (int): Temperature for distillation.
46
- """
47
-
48
- def __init__(self, reduction='mean', loss_weight=1.0, T=10):
49
- super(KnowledgeDistillationKLDivLoss, self).__init__()
50
- assert T >= 1
51
- self.reduction = reduction
52
- self.loss_weight = loss_weight
53
- self.T = T
54
-
55
- def forward(self,
56
- pred,
57
- soft_label,
58
- weight=None,
59
- avg_factor=None,
60
- reduction_override=None):
61
- """Forward function.
62
-
63
- Args:
64
- pred (Tensor): Predicted logits with shape (N, n + 1).
65
-             soft_label (Tensor): Target logits with shape (N, n + 1).
66
- weight (torch.Tensor, optional): The weight of loss for each
67
- prediction. Defaults to None.
68
- avg_factor (int, optional): Average factor that is used to average
69
- the loss. Defaults to None.
70
- reduction_override (str, optional): The reduction method used to
71
- override the original reduction method of the loss.
72
- Defaults to None.
73
- """
74
- assert reduction_override in (None, 'none', 'mean', 'sum')
75
-
76
- reduction = (
77
- reduction_override if reduction_override else self.reduction)
78
-
79
- loss_kd = self.loss_weight * knowledge_distillation_kl_div_loss(
80
- pred,
81
- soft_label,
82
- weight,
83
- reduction=reduction,
84
- avg_factor=avg_factor,
85
- T=self.T)
86
-
87
- return loss_kd
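For reference, the core of the deleted distillation loss can be reproduced with plain PyTorch. This standalone sketch uses random logits and an illustrative temperature; it is not part of mmdet.

```python
import torch
import torch.nn.functional as F

T = 10.0
pred = torch.randn(4, 81)        # student logits (illustrative shape)
soft_label = torch.randn(4, 81)  # teacher logits

# Same computation as knowledge_distillation_kl_div_loss above.
target = F.softmax(soft_label / T, dim=1).detach()
kd_loss = F.kl_div(F.log_softmax(pred / T, dim=1), target,
                   reduction='none').mean(1) * (T * T)
print(kd_loss.shape)  # torch.Size([4]): one value per sample
```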
 
spaces/CVPR/regionclip-demo/detectron2/data/transforms/torchvision_transforms/functional_tensor.py DELETED
@@ -1,966 +0,0 @@
1
- import warnings
2
-
3
- import torch
4
- from torch import Tensor
5
- from torch.nn.functional import grid_sample, conv2d, interpolate, pad as torch_pad
6
- from torch.jit.annotations import BroadcastingList2
7
- from typing import Optional, Tuple, List
8
-
9
-
10
- def _is_tensor_a_torch_image(x: Tensor) -> bool:
11
- return x.ndim >= 2
12
-
13
-
14
- def _assert_image_tensor(img):
15
- if not _is_tensor_a_torch_image(img):
16
- raise TypeError("Tensor is not a torch image.")
17
-
18
-
19
- def _get_image_size(img: Tensor) -> List[int]:
20
- # Returns (w, h) of tensor image
21
- _assert_image_tensor(img)
22
- return [img.shape[-1], img.shape[-2]]
23
-
24
-
25
- def _get_image_num_channels(img: Tensor) -> int:
26
- if img.ndim == 2:
27
- return 1
28
- elif img.ndim > 2:
29
- return img.shape[-3]
30
-
31
- raise TypeError("Input ndim should be 2 or more. Got {}".format(img.ndim))
32
-
33
-
34
- def _max_value(dtype: torch.dtype) -> float:
35
- # TODO: replace this method with torch.iinfo when it gets torchscript support.
36
- # https://github.com/pytorch/pytorch/issues/41492
37
-
38
- a = torch.tensor(2, dtype=dtype)
39
- signed = 1 if torch.tensor(0, dtype=dtype).is_signed() else 0
40
- bits = 1
41
- max_value = torch.tensor(-signed, dtype=torch.long)
42
- while True:
43
- next_value = a.pow(bits - signed).sub(1)
44
- if next_value > max_value:
45
- max_value = next_value
46
- bits *= 2
47
- else:
48
- break
49
- return max_value.item()
50
-
51
-
52
- def _assert_channels(img: Tensor, permitted: List[int]) -> None:
53
- c = _get_image_num_channels(img)
54
- if c not in permitted:
55
- raise TypeError("Input image tensor permitted channel values are {}, but found {}".format(permitted, c))
56
-
57
-
58
- def convert_image_dtype(image: torch.Tensor, dtype: torch.dtype = torch.float) -> torch.Tensor:
59
- if image.dtype == dtype:
60
- return image
61
-
62
- if image.is_floating_point():
63
-
64
- # TODO: replace with dtype.is_floating_point when torchscript supports it
65
- if torch.tensor(0, dtype=dtype).is_floating_point():
66
- return image.to(dtype)
67
-
68
- # float to int
69
- if (image.dtype == torch.float32 and dtype in (torch.int32, torch.int64)) or (
70
- image.dtype == torch.float64 and dtype == torch.int64
71
- ):
72
- msg = f"The cast from {image.dtype} to {dtype} cannot be performed safely."
73
- raise RuntimeError(msg)
74
-
75
- # https://github.com/pytorch/vision/pull/2078#issuecomment-612045321
76
- # For data in the range 0-1, (float * 255).to(uint) is only 255
77
- # when float is exactly 1.0.
78
- # `max + 1 - epsilon` provides more evenly distributed mapping of
79
- # ranges of floats to ints.
80
- eps = 1e-3
81
- max_val = _max_value(dtype)
82
- result = image.mul(max_val + 1.0 - eps)
83
- return result.to(dtype)
84
- else:
85
- input_max = _max_value(image.dtype)
86
-
87
- # int to float
88
- # TODO: replace with dtype.is_floating_point when torchscript supports it
89
- if torch.tensor(0, dtype=dtype).is_floating_point():
90
- image = image.to(dtype)
91
- return image / input_max
92
-
93
- output_max = _max_value(dtype)
94
-
95
- # int to int
96
- if input_max > output_max:
97
- # factor should be forced to int for torch jit script
98
- # otherwise factor is a float and image // factor can produce different results
99
- factor = int((input_max + 1) // (output_max + 1))
100
- image = torch.div(image, factor, rounding_mode='floor')
101
- return image.to(dtype)
102
- else:
103
- # factor should be forced to int for torch jit script
104
- # otherwise factor is a float and image * factor can produce different results
105
- factor = int((output_max + 1) // (input_max + 1))
106
- image = image.to(dtype)
107
- return image * factor
108
-
109
-
110
- def vflip(img: Tensor) -> Tensor:
111
- _assert_image_tensor(img)
112
-
113
- return img.flip(-2)
114
-
115
-
116
- def hflip(img: Tensor) -> Tensor:
117
- _assert_image_tensor(img)
118
-
119
- return img.flip(-1)
120
-
121
-
122
- def crop(img: Tensor, top: int, left: int, height: int, width: int) -> Tensor:
123
- _assert_image_tensor(img)
124
-
125
- w, h = _get_image_size(img)
126
- right = left + width
127
- bottom = top + height
128
-
129
- if left < 0 or top < 0 or right > w or bottom > h:
130
- padding_ltrb = [max(-left, 0), max(-top, 0), max(right - w, 0), max(bottom - h, 0)]
131
- return pad(img[..., max(top, 0):bottom, max(left, 0):right], padding_ltrb, fill=0)
132
- return img[..., top:bottom, left:right]
133
-
134
-
135
- def rgb_to_grayscale(img: Tensor, num_output_channels: int = 1) -> Tensor:
136
- if img.ndim < 3:
137
- raise TypeError("Input image tensor should have at least 3 dimensions, but found {}".format(img.ndim))
138
- _assert_channels(img, [3])
139
-
140
- if num_output_channels not in (1, 3):
141
- raise ValueError('num_output_channels should be either 1 or 3')
142
-
143
- r, g, b = img.unbind(dim=-3)
144
- # This implementation closely follows the TF one:
145
- # https://github.com/tensorflow/tensorflow/blob/v2.3.0/tensorflow/python/ops/image_ops_impl.py#L2105-L2138
146
- l_img = (0.2989 * r + 0.587 * g + 0.114 * b).to(img.dtype)
147
- l_img = l_img.unsqueeze(dim=-3)
148
-
149
- if num_output_channels == 3:
150
- return l_img.expand(img.shape)
151
-
152
- return l_img
153
-
154
-
155
- def adjust_brightness(img: Tensor, brightness_factor: float) -> Tensor:
156
- if brightness_factor < 0:
157
- raise ValueError('brightness_factor ({}) is not non-negative.'.format(brightness_factor))
158
-
159
- _assert_image_tensor(img)
160
-
161
- _assert_channels(img, [1, 3])
162
-
163
- return _blend(img, torch.zeros_like(img), brightness_factor)
164
-
165
-
166
- def adjust_contrast(img: Tensor, contrast_factor: float) -> Tensor:
167
- if contrast_factor < 0:
168
- raise ValueError('contrast_factor ({}) is not non-negative.'.format(contrast_factor))
169
-
170
- _assert_image_tensor(img)
171
-
172
- _assert_channels(img, [3])
173
-
174
- dtype = img.dtype if torch.is_floating_point(img) else torch.float32
175
- mean = torch.mean(rgb_to_grayscale(img).to(dtype), dim=(-3, -2, -1), keepdim=True)
176
-
177
- return _blend(img, mean, contrast_factor)
178
-
179
-
180
- def adjust_hue(img: Tensor, hue_factor: float) -> Tensor:
181
- if not (-0.5 <= hue_factor <= 0.5):
182
- raise ValueError('hue_factor ({}) is not in [-0.5, 0.5].'.format(hue_factor))
183
-
184
- if not (isinstance(img, torch.Tensor)):
185
- raise TypeError('Input img should be Tensor image')
186
-
187
- _assert_image_tensor(img)
188
-
189
- _assert_channels(img, [1, 3])
190
- if _get_image_num_channels(img) == 1: # Match PIL behaviour
191
- return img
192
-
193
- orig_dtype = img.dtype
194
- if img.dtype == torch.uint8:
195
- img = img.to(dtype=torch.float32) / 255.0
196
-
197
- img = _rgb2hsv(img)
198
- h, s, v = img.unbind(dim=-3)
199
- h = (h + hue_factor) % 1.0
200
- img = torch.stack((h, s, v), dim=-3)
201
- img_hue_adj = _hsv2rgb(img)
202
-
203
- if orig_dtype == torch.uint8:
204
- img_hue_adj = (img_hue_adj * 255.0).to(dtype=orig_dtype)
205
-
206
- return img_hue_adj
207
-
208
-
209
- def adjust_saturation(img: Tensor, saturation_factor: float) -> Tensor:
210
- if saturation_factor < 0:
211
- raise ValueError('saturation_factor ({}) is not non-negative.'.format(saturation_factor))
212
-
213
- _assert_image_tensor(img)
214
-
215
- _assert_channels(img, [3])
216
-
217
- return _blend(img, rgb_to_grayscale(img), saturation_factor)
218
-
219
-
220
- def adjust_gamma(img: Tensor, gamma: float, gain: float = 1) -> Tensor:
221
- if not isinstance(img, torch.Tensor):
222
- raise TypeError('Input img should be a Tensor.')
223
-
224
- _assert_channels(img, [1, 3])
225
-
226
- if gamma < 0:
227
- raise ValueError('Gamma should be a non-negative real number')
228
-
229
- result = img
230
- dtype = img.dtype
231
- if not torch.is_floating_point(img):
232
- result = convert_image_dtype(result, torch.float32)
233
-
234
- result = (gain * result ** gamma).clamp(0, 1)
235
-
236
- result = convert_image_dtype(result, dtype)
237
- return result
238
-
239
-
240
- def center_crop(img: Tensor, output_size: BroadcastingList2[int]) -> Tensor:
241
- """DEPRECATED
242
- """
243
- warnings.warn(
244
- "This method is deprecated and will be removed in future releases. "
245
- "Please, use ``F.center_crop`` instead."
246
- )
247
-
248
- _assert_image_tensor(img)
249
-
250
- _, image_width, image_height = img.size()
251
- crop_height, crop_width = output_size
252
- # crop_top = int(round((image_height - crop_height) / 2.))
253
- # Result can be different between python func and scripted func
254
- # Temporary workaround:
255
- crop_top = int((image_height - crop_height + 1) * 0.5)
256
- # crop_left = int(round((image_width - crop_width) / 2.))
257
- # Result can be different between python func and scripted func
258
- # Temporary workaround:
259
- crop_left = int((image_width - crop_width + 1) * 0.5)
260
-
261
- return crop(img, crop_top, crop_left, crop_height, crop_width)
262
-
263
-
264
- def five_crop(img: Tensor, size: BroadcastingList2[int]) -> List[Tensor]:
265
- """DEPRECATED
266
- """
267
- warnings.warn(
268
- "This method is deprecated and will be removed in future releases. "
269
- "Please, use ``F.five_crop`` instead."
270
- )
271
-
272
- _assert_image_tensor(img)
273
-
274
- assert len(size) == 2, "Please provide only two dimensions (h, w) for size."
275
-
276
- _, image_width, image_height = img.size()
277
- crop_height, crop_width = size
278
- if crop_width > image_width or crop_height > image_height:
279
- msg = "Requested crop size {} is bigger than input size {}"
280
- raise ValueError(msg.format(size, (image_height, image_width)))
281
-
282
- tl = crop(img, 0, 0, crop_width, crop_height)
283
- tr = crop(img, image_width - crop_width, 0, image_width, crop_height)
284
- bl = crop(img, 0, image_height - crop_height, crop_width, image_height)
285
- br = crop(img, image_width - crop_width, image_height - crop_height, image_width, image_height)
286
- center = center_crop(img, (crop_height, crop_width))
287
-
288
- return [tl, tr, bl, br, center]
289
-
290
-
291
- def ten_crop(img: Tensor, size: BroadcastingList2[int], vertical_flip: bool = False) -> List[Tensor]:
292
- """DEPRECATED
293
- """
294
- warnings.warn(
295
- "This method is deprecated and will be removed in future releases. "
296
- "Please, use ``F.ten_crop`` instead."
297
- )
298
-
299
- _assert_image_tensor(img)
300
-
301
- assert len(size) == 2, "Please provide only two dimensions (h, w) for size."
302
- first_five = five_crop(img, size)
303
-
304
- if vertical_flip:
305
- img = vflip(img)
306
- else:
307
- img = hflip(img)
308
-
309
- second_five = five_crop(img, size)
310
-
311
- return first_five + second_five
312
-
313
-
314
- def _blend(img1: Tensor, img2: Tensor, ratio: float) -> Tensor:
315
- ratio = float(ratio)
316
- bound = 1.0 if img1.is_floating_point() else 255.0
317
- return (ratio * img1 + (1.0 - ratio) * img2).clamp(0, bound).to(img1.dtype)
318
-
319
-
320
- def _rgb2hsv(img):
321
- r, g, b = img.unbind(dim=-3)
322
-
323
- # Implementation is based on https://github.com/python-pillow/Pillow/blob/4174d4267616897df3746d315d5a2d0f82c656ee/
324
- # src/libImaging/Convert.c#L330
325
- maxc = torch.max(img, dim=-3).values
326
- minc = torch.min(img, dim=-3).values
327
-
328
- # The algorithm erases S and H channel where `maxc = minc`. This avoids NaN
329
- # from happening in the results, because
330
- # + S channel has division by `maxc`, which is zero only if `maxc = minc`
331
- # + H channel has division by `(maxc - minc)`.
332
- #
333
- # Instead of overwriting NaN afterwards, we just prevent it from occuring so
334
-     # Instead of overwriting NaN afterwards, we just prevent it from occurring so
335
- # backprop, if it is ever supported, but it doesn't hurt to do so.
336
- eqc = maxc == minc
337
-
338
- cr = maxc - minc
339
- # Since `eqc => cr = 0`, replacing denominator with 1 when `eqc` is fine.
340
- ones = torch.ones_like(maxc)
341
- s = cr / torch.where(eqc, ones, maxc)
342
- # Note that `eqc => maxc = minc = r = g = b`. So the following calculation
343
- # of `h` would reduce to `bc - gc + 2 + rc - bc + 4 + rc - bc = 6` so it
344
- # would not matter what values `rc`, `gc`, and `bc` have here, and thus
345
- # replacing denominator with 1 when `eqc` is fine.
346
- cr_divisor = torch.where(eqc, ones, cr)
347
- rc = (maxc - r) / cr_divisor
348
- gc = (maxc - g) / cr_divisor
349
- bc = (maxc - b) / cr_divisor
350
-
351
- hr = (maxc == r) * (bc - gc)
352
- hg = ((maxc == g) & (maxc != r)) * (2.0 + rc - bc)
353
- hb = ((maxc != g) & (maxc != r)) * (4.0 + gc - rc)
354
- h = (hr + hg + hb)
355
- h = torch.fmod((h / 6.0 + 1.0), 1.0)
356
- return torch.stack((h, s, maxc), dim=-3)
357
-
358
-
359
- def _hsv2rgb(img):
360
- h, s, v = img.unbind(dim=-3)
361
- i = torch.floor(h * 6.0)
362
- f = (h * 6.0) - i
363
- i = i.to(dtype=torch.int32)
364
-
365
- p = torch.clamp((v * (1.0 - s)), 0.0, 1.0)
366
- q = torch.clamp((v * (1.0 - s * f)), 0.0, 1.0)
367
- t = torch.clamp((v * (1.0 - s * (1.0 - f))), 0.0, 1.0)
368
- i = i % 6
369
-
370
- mask = i.unsqueeze(dim=-3) == torch.arange(6, device=i.device).view(-1, 1, 1)
371
-
372
- a1 = torch.stack((v, q, p, p, t, v), dim=-3)
373
- a2 = torch.stack((t, v, v, q, p, p), dim=-3)
374
- a3 = torch.stack((p, p, t, v, v, q), dim=-3)
375
- a4 = torch.stack((a1, a2, a3), dim=-4)
376
-
377
- return torch.einsum("...ijk, ...xijk -> ...xjk", mask.to(dtype=img.dtype), a4)
378
-
379
-
380
- def _pad_symmetric(img: Tensor, padding: List[int]) -> Tensor:
381
- # padding is left, right, top, bottom
382
-
383
- # crop if needed
384
- if padding[0] < 0 or padding[1] < 0 or padding[2] < 0 or padding[3] < 0:
385
- crop_left, crop_right, crop_top, crop_bottom = [-min(x, 0) for x in padding]
386
- img = img[..., crop_top:img.shape[-2] - crop_bottom, crop_left:img.shape[-1] - crop_right]
387
- padding = [max(x, 0) for x in padding]
388
-
389
- in_sizes = img.size()
390
-
391
- x_indices = [i for i in range(in_sizes[-1])] # [0, 1, 2, 3, ...]
392
- left_indices = [i for i in range(padding[0] - 1, -1, -1)] # e.g. [3, 2, 1, 0]
393
- right_indices = [-(i + 1) for i in range(padding[1])] # e.g. [-1, -2, -3]
394
- x_indices = torch.tensor(left_indices + x_indices + right_indices, device=img.device)
395
-
396
- y_indices = [i for i in range(in_sizes[-2])]
397
- top_indices = [i for i in range(padding[2] - 1, -1, -1)]
398
- bottom_indices = [-(i + 1) for i in range(padding[3])]
399
- y_indices = torch.tensor(top_indices + y_indices + bottom_indices, device=img.device)
400
-
401
- ndim = img.ndim
402
- if ndim == 3:
403
- return img[:, y_indices[:, None], x_indices[None, :]]
404
- elif ndim == 4:
405
- return img[:, :, y_indices[:, None], x_indices[None, :]]
406
- else:
407
- raise RuntimeError("Symmetric padding of N-D tensors are not supported yet")
408
-
409
-
410
- def pad(img: Tensor, padding: List[int], fill: int = 0, padding_mode: str = "constant") -> Tensor:
411
- _assert_image_tensor(img)
412
-
413
- if not isinstance(padding, (int, tuple, list)):
414
- raise TypeError("Got inappropriate padding arg")
415
- if not isinstance(fill, (int, float)):
416
- raise TypeError("Got inappropriate fill arg")
417
- if not isinstance(padding_mode, str):
418
- raise TypeError("Got inappropriate padding_mode arg")
419
-
420
- if isinstance(padding, tuple):
421
- padding = list(padding)
422
-
423
- if isinstance(padding, list) and len(padding) not in [1, 2, 4]:
424
- raise ValueError("Padding must be an int or a 1, 2, or 4 element tuple, not a " +
425
- "{} element tuple".format(len(padding)))
426
-
427
- if padding_mode not in ["constant", "edge", "reflect", "symmetric"]:
428
- raise ValueError("Padding mode should be either constant, edge, reflect or symmetric")
429
-
430
- if isinstance(padding, int):
431
- if torch.jit.is_scripting():
432
-             # This may be unreachable
433
- raise ValueError("padding can't be an int while torchscripting, set it as a list [value, ]")
434
- pad_left = pad_right = pad_top = pad_bottom = padding
435
- elif len(padding) == 1:
436
- pad_left = pad_right = pad_top = pad_bottom = padding[0]
437
- elif len(padding) == 2:
438
- pad_left = pad_right = padding[0]
439
- pad_top = pad_bottom = padding[1]
440
- else:
441
- pad_left = padding[0]
442
- pad_top = padding[1]
443
- pad_right = padding[2]
444
- pad_bottom = padding[3]
445
-
446
- p = [pad_left, pad_right, pad_top, pad_bottom]
447
-
448
- if padding_mode == "edge":
449
- # remap padding_mode str
450
- padding_mode = "replicate"
451
- elif padding_mode == "symmetric":
452
- # route to another implementation
453
- return _pad_symmetric(img, p)
454
-
455
- need_squeeze = False
456
- if img.ndim < 4:
457
- img = img.unsqueeze(dim=0)
458
- need_squeeze = True
459
-
460
- out_dtype = img.dtype
461
- need_cast = False
462
- if (padding_mode != "constant") and img.dtype not in (torch.float32, torch.float64):
463
- # Here we temporary cast input tensor to float
464
- # until pytorch issue is resolved :
465
- # https://github.com/pytorch/pytorch/issues/40763
466
- need_cast = True
467
- img = img.to(torch.float32)
468
-
469
- img = torch_pad(img, p, mode=padding_mode, value=float(fill))
470
-
471
- if need_squeeze:
472
- img = img.squeeze(dim=0)
473
-
474
- if need_cast:
475
- img = img.to(out_dtype)
476
-
477
- return img
478
-
479
-
480
- def resize(
481
- img: Tensor,
482
- size: List[int],
483
- interpolation: str = "bilinear",
484
- max_size: Optional[int] = None,
485
- antialias: Optional[bool] = None
486
- ) -> Tensor:
487
- _assert_image_tensor(img)
488
-
489
- if not isinstance(size, (int, tuple, list)):
490
- raise TypeError("Got inappropriate size arg")
491
- if not isinstance(interpolation, str):
492
- raise TypeError("Got inappropriate interpolation arg")
493
-
494
- if interpolation not in ["nearest", "bilinear", "bicubic"]:
495
- raise ValueError("This interpolation mode is unsupported with Tensor input")
496
-
497
- if isinstance(size, tuple):
498
- size = list(size)
499
-
500
- if isinstance(size, list):
501
- if len(size) not in [1, 2]:
502
- raise ValueError("Size must be an int or a 1 or 2 element tuple/list, not a "
503
- "{} element tuple/list".format(len(size)))
504
- if max_size is not None and len(size) != 1:
505
- raise ValueError(
506
- "max_size should only be passed if size specifies the length of the smaller edge, "
507
- "i.e. size should be an int or a sequence of length 1 in torchscript mode."
508
- )
509
-
510
- if antialias is None:
511
- antialias = False
512
-
513
- if antialias and interpolation not in ["bilinear", "bicubic"]:
514
- raise ValueError("Antialias option is supported for bilinear and bicubic interpolation modes only")
515
-
516
- w, h = _get_image_size(img)
517
-
518
- if isinstance(size, int) or len(size) == 1: # specified size only for the smallest edge
519
- short, long = (w, h) if w <= h else (h, w)
520
- requested_new_short = size if isinstance(size, int) else size[0]
521
-
522
- if short == requested_new_short:
523
- return img
524
-
525
- new_short, new_long = requested_new_short, int(requested_new_short * long / short)
526
-
527
- if max_size is not None:
528
- if max_size <= requested_new_short:
529
- raise ValueError(
530
- f"max_size = {max_size} must be strictly greater than the requested "
531
- f"size for the smaller edge size = {size}"
532
- )
533
- if new_long > max_size:
534
- new_short, new_long = int(max_size * new_short / new_long), max_size
535
-
536
- new_w, new_h = (new_short, new_long) if w <= h else (new_long, new_short)
537
-
538
- else: # specified both h and w
539
- new_w, new_h = size[1], size[0]
540
-
541
- img, need_cast, need_squeeze, out_dtype = _cast_squeeze_in(img, [torch.float32, torch.float64])
542
-
543
- # Define align_corners to avoid warnings
544
- align_corners = False if interpolation in ["bilinear", "bicubic"] else None
545
-
546
- if antialias:
547
- if interpolation == "bilinear":
548
- img = torch.ops.torchvision._interpolate_bilinear2d_aa(img, [new_h, new_w], align_corners=False)
549
- elif interpolation == "bicubic":
550
- img = torch.ops.torchvision._interpolate_bicubic2d_aa(img, [new_h, new_w], align_corners=False)
551
- else:
552
- img = interpolate(img, size=[new_h, new_w], mode=interpolation, align_corners=align_corners)
553
-
554
- if interpolation == "bicubic" and out_dtype == torch.uint8:
555
- img = img.clamp(min=0, max=255)
556
-
557
- img = _cast_squeeze_out(img, need_cast=need_cast, need_squeeze=need_squeeze, out_dtype=out_dtype)
558
-
559
- return img
560
-
561
-
562
- def _assert_grid_transform_inputs(
563
- img: Tensor,
564
- matrix: Optional[List[float]],
565
- interpolation: str,
566
- fill: Optional[List[float]],
567
- supported_interpolation_modes: List[str],
568
- coeffs: Optional[List[float]] = None,
569
- ):
570
-
571
- if not (isinstance(img, torch.Tensor)):
572
- raise TypeError("Input img should be Tensor")
573
-
574
- _assert_image_tensor(img)
575
-
576
- if matrix is not None and not isinstance(matrix, list):
577
- raise TypeError("Argument matrix should be a list")
578
-
579
- if matrix is not None and len(matrix) != 6:
580
- raise ValueError("Argument matrix should have 6 float values")
581
-
582
- if coeffs is not None and len(coeffs) != 8:
583
- raise ValueError("Argument coeffs should have 8 float values")
584
-
585
- if fill is not None and not isinstance(fill, (int, float, tuple, list)):
586
- warnings.warn("Argument fill should be either int, float, tuple or list")
587
-
588
- # Check fill
589
- num_channels = _get_image_num_channels(img)
590
- if isinstance(fill, (tuple, list)) and (len(fill) > 1 and len(fill) != num_channels):
591
- msg = ("The number of elements in 'fill' cannot broadcast to match the number of "
592
- "channels of the image ({} != {})")
593
- raise ValueError(msg.format(len(fill), num_channels))
594
-
595
- if interpolation not in supported_interpolation_modes:
596
- raise ValueError("Interpolation mode '{}' is unsupported with Tensor input".format(interpolation))
597
-
598
-
599
- def _cast_squeeze_in(img: Tensor, req_dtypes: List[torch.dtype]) -> Tuple[Tensor, bool, bool, torch.dtype]:
-     need_squeeze = False
-     # make image NCHW
-     if img.ndim < 4:
-         img = img.unsqueeze(dim=0)
-         need_squeeze = True
-
-     out_dtype = img.dtype
-     need_cast = False
-     if out_dtype not in req_dtypes:
-         need_cast = True
-         req_dtype = req_dtypes[0]
-         img = img.to(req_dtype)
-     return img, need_cast, need_squeeze, out_dtype
-
-
- def _cast_squeeze_out(img: Tensor, need_cast: bool, need_squeeze: bool, out_dtype: torch.dtype):
-     if need_squeeze:
-         img = img.squeeze(dim=0)
-
-     if need_cast:
-         if out_dtype in (torch.uint8, torch.int8, torch.int16, torch.int32, torch.int64):
-             # it is better to round before cast
-             img = torch.round(img)
-         img = img.to(out_dtype)
-
-     return img
-
-
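_cast_squeeze_in and _cast_squeeze_out form a wrap/unwrap pair: the input is promoted to a batched float tensor for ops that require it, then squeezed back and cast to the original dtype, with rounding for integer dtypes. A minimal sketch of the pattern, using a hypothetical float-only op as the wrapped operation:

    import torch

    def float_only_op(x: torch.Tensor) -> torch.Tensor:
        # stand-in for an op that needs a 4D float input (e.g. interpolate or conv2d)
        return x * 0.5

    img = torch.randint(0, 256, (3, 8, 8), dtype=torch.uint8)     # 3D uint8 input
    x, need_cast, need_squeeze, out_dtype = _cast_squeeze_in(img, [torch.float32, torch.float64])
    x = float_only_op(x)                                          # runs on a (1, 3, 8, 8) float32 tensor
    out = _cast_squeeze_out(x, need_cast=need_cast, need_squeeze=need_squeeze, out_dtype=out_dtype)
    # out is 3D uint8 again, with values rounded before the cast back
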
- def _apply_grid_transform(img: Tensor, grid: Tensor, mode: str, fill: Optional[List[float]]) -> Tensor:
-
-     img, need_cast, need_squeeze, out_dtype = _cast_squeeze_in(img, [grid.dtype, ])
-
-     if img.shape[0] > 1:
-         # Apply same grid to a batch of images
-         grid = grid.expand(img.shape[0], grid.shape[1], grid.shape[2], grid.shape[3])
-
-     # Append a dummy mask for customized fill colors, should be faster than grid_sample() twice
-     if fill is not None:
-         dummy = torch.ones((img.shape[0], 1, img.shape[2], img.shape[3]), dtype=img.dtype, device=img.device)
-         img = torch.cat((img, dummy), dim=1)
-
-     img = grid_sample(img, grid, mode=mode, padding_mode="zeros", align_corners=False)
-
-     # Fill with required color
-     if fill is not None:
-         mask = img[:, -1:, :, :]  # N * 1 * H * W
-         img = img[:, :-1, :, :]  # N * C * H * W
-         mask = mask.expand_as(img)
-         len_fill = len(fill) if isinstance(fill, (tuple, list)) else 1
-         fill_img = torch.tensor(fill, dtype=img.dtype, device=img.device).view(1, len_fill, 1, 1).expand_as(img)
-         if mode == 'nearest':
-             mask = mask < 0.5
-             img[mask] = fill_img[mask]
-         else:  # 'bilinear'
-             img = img * mask + (1.0 - mask) * fill_img
-
-     img = _cast_squeeze_out(img, need_cast, need_squeeze, out_dtype)
-     return img
-
-
- def _gen_affine_grid(
-     theta: Tensor, w: int, h: int, ow: int, oh: int,
- ) -> Tensor:
-     # https://github.com/pytorch/pytorch/blob/74b65c32be68b15dc7c9e8bb62459efbfbde33d8/aten/src/ATen/native/
-     # AffineGridGenerator.cpp#L18
-     # Difference with AffineGridGenerator is that:
-     # 1) we normalize grid values after applying theta
-     # 2) we can normalize by other image size, such that it covers "extend" option like in PIL.Image.rotate
-
-     d = 0.5
-     base_grid = torch.empty(1, oh, ow, 3, dtype=theta.dtype, device=theta.device)
-     x_grid = torch.linspace(-ow * 0.5 + d, ow * 0.5 + d - 1, steps=ow, device=theta.device)
-     base_grid[..., 0].copy_(x_grid)
-     y_grid = torch.linspace(-oh * 0.5 + d, oh * 0.5 + d - 1, steps=oh, device=theta.device).unsqueeze_(-1)
-     base_grid[..., 1].copy_(y_grid)
-     base_grid[..., 2].fill_(1)
-
-     rescaled_theta = theta.transpose(1, 2) / torch.tensor([0.5 * w, 0.5 * h], dtype=theta.dtype, device=theta.device)
-     output_grid = base_grid.view(1, oh * ow, 3).bmm(rescaled_theta)
-     return output_grid.view(1, oh, ow, 2)
-
-
- def affine(
-     img: Tensor, matrix: List[float], interpolation: str = "nearest", fill: Optional[List[float]] = None
- ) -> Tensor:
-     _assert_grid_transform_inputs(img, matrix, interpolation, fill, ["nearest", "bilinear"])
-
-     dtype = img.dtype if torch.is_floating_point(img) else torch.float32
-     theta = torch.tensor(matrix, dtype=dtype, device=img.device).reshape(1, 2, 3)
-     shape = img.shape
-     # grid will be generated on the same device as theta and img
-     grid = _gen_affine_grid(theta, w=shape[-1], h=shape[-2], ow=shape[-1], oh=shape[-2])
-     return _apply_grid_transform(img, grid, interpolation, fill=fill)
-
-
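The matrix argument of this low-level affine kernel is the flattened 2x3 inverse affine matrix; in practice it is produced by the higher-level torchvision.transforms.functional.affine, which is the usual entry point. A hedged usage sketch:

    import torch
    from torchvision.transforms.functional import affine, InterpolationMode

    img = torch.rand(3, 224, 224)   # hypothetical float image in [0, 1]
    # rotate 15 degrees, shift by (10, 5) pixels, zoom 1.2x, no shear, bilinear resampling
    out = affine(img, angle=15.0, translate=[10, 5], scale=1.2, shear=[0.0, 0.0],
                 interpolation=InterpolationMode.BILINEAR, fill=[0.0])
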
- def _compute_output_size(matrix: List[float], w: int, h: int) -> Tuple[int, int]:
-
-     # Inspired by the PIL implementation:
-     # https://github.com/python-pillow/Pillow/blob/11de3318867e4398057373ee9f12dcb33db7335c/src/PIL/Image.py#L2054
-
-     # pts are the image corners in centered coordinates: top-left, bottom-left, bottom-right, top-right.
-     pts = torch.tensor([
-         [-0.5 * w, -0.5 * h, 1.0],
-         [-0.5 * w, 0.5 * h, 1.0],
-         [0.5 * w, 0.5 * h, 1.0],
-         [0.5 * w, -0.5 * h, 1.0],
-     ])
-     theta = torch.tensor(matrix, dtype=torch.float).reshape(1, 2, 3)
-     new_pts = pts.view(1, 4, 3).bmm(theta.transpose(1, 2)).view(4, 2)
-     min_vals, _ = new_pts.min(dim=0)
-     max_vals, _ = new_pts.max(dim=0)
-
-     # Truncate precision to 1e-4 to avoid values like 1e-15 being ceiled up to 1.0
-     tol = 1e-4
-     cmax = torch.ceil((max_vals / tol).trunc_() * tol)
-     cmin = torch.floor((min_vals / tol).trunc_() * tol)
-     size = cmax - cmin
-     return int(size[0]), int(size[1])
-
-
- def rotate(
-     img: Tensor, matrix: List[float], interpolation: str = "nearest",
-     expand: bool = False, fill: Optional[List[float]] = None
- ) -> Tensor:
-     _assert_grid_transform_inputs(img, matrix, interpolation, fill, ["nearest", "bilinear"])
-     w, h = img.shape[-1], img.shape[-2]
-     ow, oh = _compute_output_size(matrix, w, h) if expand else (w, h)
-     dtype = img.dtype if torch.is_floating_point(img) else torch.float32
-     theta = torch.tensor(matrix, dtype=dtype, device=img.device).reshape(1, 2, 3)
-     # grid will be generated on the same device as theta and img
-     grid = _gen_affine_grid(theta, w=w, h=h, ow=ow, oh=oh)
-
-     return _apply_grid_transform(img, grid, interpolation, fill=fill)
-
-
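Together with _compute_output_size, expand=True grows the output canvas so the rotated corners are not clipped, mirroring PIL's behaviour. Through the public API this looks roughly like:

    import torch
    import torchvision.transforms.functional as F

    img = torch.rand(3, 100, 200)
    rotated = F.rotate(img, angle=45.0, expand=True, fill=[0.0])
    # rotated.shape[-2:] is larger than (100, 200) because the canvas grew to fit all four corners
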
- def _perspective_grid(coeffs: List[float], ow: int, oh: int, dtype: torch.dtype, device: torch.device):
-     # https://github.com/python-pillow/Pillow/blob/4634eafe3c695a014267eefdce830b4a825beed7/
-     # src/libImaging/Geometry.c#L394
-
-     #
-     # x_out = (coeffs[0] * x + coeffs[1] * y + coeffs[2]) / (coeffs[6] * x + coeffs[7] * y + 1)
-     # y_out = (coeffs[3] * x + coeffs[4] * y + coeffs[5]) / (coeffs[6] * x + coeffs[7] * y + 1)
-     #
-     theta1 = torch.tensor([[
-         [coeffs[0], coeffs[1], coeffs[2]],
-         [coeffs[3], coeffs[4], coeffs[5]]
-     ]], dtype=dtype, device=device)
-     theta2 = torch.tensor([[
-         [coeffs[6], coeffs[7], 1.0],
-         [coeffs[6], coeffs[7], 1.0]
-     ]], dtype=dtype, device=device)
-
-     d = 0.5
-     base_grid = torch.empty(1, oh, ow, 3, dtype=dtype, device=device)
-     x_grid = torch.linspace(d, ow * 1.0 + d - 1.0, steps=ow, device=device)
-     base_grid[..., 0].copy_(x_grid)
-     y_grid = torch.linspace(d, oh * 1.0 + d - 1.0, steps=oh, device=device).unsqueeze_(-1)
-     base_grid[..., 1].copy_(y_grid)
-     base_grid[..., 2].fill_(1)
-
-     rescaled_theta1 = theta1.transpose(1, 2) / torch.tensor([0.5 * ow, 0.5 * oh], dtype=dtype, device=device)
-     output_grid1 = base_grid.view(1, oh * ow, 3).bmm(rescaled_theta1)
-     output_grid2 = base_grid.view(1, oh * ow, 3).bmm(theta2.transpose(1, 2))
-
-     output_grid = output_grid1 / output_grid2 - 1.0
-     return output_grid.view(1, oh, ow, 2)
-
-
- def perspective(
-     img: Tensor, perspective_coeffs: List[float], interpolation: str = "bilinear", fill: Optional[List[float]] = None
- ) -> Tensor:
-     if not (isinstance(img, torch.Tensor)):
-         raise TypeError('Input img should be Tensor.')
-
-     _assert_image_tensor(img)
-
-     _assert_grid_transform_inputs(
-         img,
-         matrix=None,
-         interpolation=interpolation,
-         fill=fill,
-         supported_interpolation_modes=["nearest", "bilinear"],
-         coeffs=perspective_coeffs
-     )
-
-     ow, oh = img.shape[-1], img.shape[-2]
-     dtype = img.dtype if torch.is_floating_point(img) else torch.float32
-     grid = _perspective_grid(perspective_coeffs, ow=ow, oh=oh, dtype=dtype, device=img.device)
-     return _apply_grid_transform(img, grid, interpolation, fill=fill)
-
-
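The eight perspective_coeffs describe a homography; most callers go through torchvision.transforms.functional.perspective, which derives the coefficients from corner correspondences. A sketch with arbitrary corner values:

    import torch
    import torchvision.transforms.functional as F

    img = torch.rand(3, 240, 320)
    startpoints = [[0, 0], [319, 0], [319, 239], [0, 239]]   # original corners as (x, y)
    endpoints = [[10, 5], [300, 20], [310, 230], [5, 220]]   # where those corners should land
    out = F.perspective(img, startpoints, endpoints)
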
- def _get_gaussian_kernel1d(kernel_size: int, sigma: float) -> Tensor:
-     ksize_half = (kernel_size - 1) * 0.5
-
-     x = torch.linspace(-ksize_half, ksize_half, steps=kernel_size)
-     pdf = torch.exp(-0.5 * (x / sigma).pow(2))
-     kernel1d = pdf / pdf.sum()
-
-     return kernel1d
-
-
- def _get_gaussian_kernel2d(
-     kernel_size: List[int], sigma: List[float], dtype: torch.dtype, device: torch.device
- ) -> Tensor:
-     kernel1d_x = _get_gaussian_kernel1d(kernel_size[0], sigma[0]).to(device, dtype=dtype)
-     kernel1d_y = _get_gaussian_kernel1d(kernel_size[1], sigma[1]).to(device, dtype=dtype)
-     kernel2d = torch.mm(kernel1d_y[:, None], kernel1d_x[None, :])
-     return kernel2d
-
-
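As a quick sanity check on the normalization, a 3-tap kernel with sigma = 1 comes out to roughly [0.274, 0.452, 0.274], and the 2D kernel is the outer product of the vertical and horizontal 1D kernels:

    import torch

    k1d = _get_gaussian_kernel1d(kernel_size=3, sigma=1.0)
    print(k1d)         # approximately tensor([0.2741, 0.4519, 0.2741]); sums to 1
    k2d = torch.mm(k1d[:, None], k1d[None, :])
    print(k2d.sum())   # also approximately 1
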
- def gaussian_blur(img: Tensor, kernel_size: List[int], sigma: List[float]) -> Tensor:
-     if not (isinstance(img, torch.Tensor)):
-         raise TypeError('img should be Tensor. Got {}'.format(type(img)))
-
-     _assert_image_tensor(img)
-
-     dtype = img.dtype if torch.is_floating_point(img) else torch.float32
-     kernel = _get_gaussian_kernel2d(kernel_size, sigma, dtype=dtype, device=img.device)
-     kernel = kernel.expand(img.shape[-3], 1, kernel.shape[0], kernel.shape[1])
-
-     img, need_cast, need_squeeze, out_dtype = _cast_squeeze_in(img, [kernel.dtype, ])
-
-     # padding = (left, right, top, bottom)
-     padding = [kernel_size[0] // 2, kernel_size[0] // 2, kernel_size[1] // 2, kernel_size[1] // 2]
-     img = torch_pad(img, padding, mode="reflect")
-     img = conv2d(img, kernel, groups=img.shape[-3])
-
-     img = _cast_squeeze_out(img, need_cast, need_squeeze, out_dtype)
-     return img
-
-
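End to end, the reflect padding plus depthwise conv2d above is what the public gaussian_blur call reduces to. A minimal usage sketch:

    import torch
    import torchvision.transforms.functional as F

    img = torch.rand(3, 128, 128)
    blurred = F.gaussian_blur(img, kernel_size=[5, 5], sigma=[1.5, 1.5])
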
- def invert(img: Tensor) -> Tensor:
-
-     _assert_image_tensor(img)
-
-     if img.ndim < 3:
-         raise TypeError("Input image tensor should have at least 3 dimensions, but found {}".format(img.ndim))
-
-     _assert_channels(img, [1, 3])
-
-     bound = torch.tensor(1 if img.is_floating_point() else 255, dtype=img.dtype, device=img.device)
-     return bound - img
-
-
- def posterize(img: Tensor, bits: int) -> Tensor:
-
-     _assert_image_tensor(img)
-
-     if img.ndim < 3:
-         raise TypeError("Input image tensor should have at least 3 dimensions, but found {}".format(img.ndim))
-     if img.dtype != torch.uint8:
-         raise TypeError("Only torch.uint8 image tensors are supported, but found {}".format(img.dtype))
-
-     _assert_channels(img, [1, 3])
-     mask = -int(2**(8 - bits))  # JIT-friendly for: ~(2 ** (8 - bits) - 1)
-     return img & mask
-
-
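The JIT-friendly mask trick in posterize is easiest to see with concrete numbers: for bits = 3 the mask is -int(2 ** 5) = -32, whose two's-complement bit pattern keeps only the top three bits, so a pixel value of 200 becomes 192:

    bits = 3
    mask = -int(2 ** (8 - bits))   # -32, i.e. bit pattern ...11100000
    print(200 & mask)              # 192: the lowest five bits of 200 (0b11001000) are cleared
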
- def solarize(img: Tensor, threshold: float) -> Tensor:
-
-     _assert_image_tensor(img)
-
-     if img.ndim < 3:
-         raise TypeError("Input image tensor should have at least 3 dimensions, but found {}".format(img.ndim))
-
-     _assert_channels(img, [1, 3])
-
-     inverted_img = invert(img)
-     return torch.where(img >= threshold, inverted_img, img)
-
-
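solarize leaves pixels below the threshold untouched and inverts the rest, so with a threshold of 128 a uint8 value of 200 maps to 55 while 100 stays 100:

    import torch
    import torchvision.transforms.functional as F

    img = torch.tensor([[[100, 200]]], dtype=torch.uint8)   # 1 x 1 x 2 image
    print(F.solarize(img, threshold=128))                   # tensor([[[100,  55]]], dtype=torch.uint8)
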
- def _blurred_degenerate_image(img: Tensor) -> Tensor:
-     dtype = img.dtype if torch.is_floating_point(img) else torch.float32
-
-     kernel = torch.ones((3, 3), dtype=dtype, device=img.device)
-     kernel[1, 1] = 5.0
-     kernel /= kernel.sum()
-     kernel = kernel.expand(img.shape[-3], 1, kernel.shape[0], kernel.shape[1])
-
-     result_tmp, need_cast, need_squeeze, out_dtype = _cast_squeeze_in(img, [kernel.dtype, ])
-     result_tmp = conv2d(result_tmp, kernel, groups=result_tmp.shape[-3])
-     result_tmp = _cast_squeeze_out(result_tmp, need_cast, need_squeeze, out_dtype)
-
-     result = img.clone()
-     result[..., 1:-1, 1:-1] = result_tmp
-
-     return result
-
-
- def adjust_sharpness(img: Tensor, sharpness_factor: float) -> Tensor:
-     if sharpness_factor < 0:
-         raise ValueError('sharpness_factor ({}) is not non-negative.'.format(sharpness_factor))
-
-     _assert_image_tensor(img)
-
-     _assert_channels(img, [1, 3])
-
-     if img.size(-1) <= 2 or img.size(-2) <= 2:
-         return img
-
-     return _blend(img, _blurred_degenerate_image(img), sharpness_factor)
-
-
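_blend, defined earlier in this file, linearly interpolates between its two inputs, so a sharpness_factor of 0 returns the blurred degenerate image, 1 returns the input unchanged, and values above 1 push past the original towards a sharper result. A usage sketch through the public API:

    import torch
    import torchvision.transforms.functional as F

    img = torch.rand(3, 64, 64)
    softer = F.adjust_sharpness(img, sharpness_factor=0.5)   # halfway towards the blurred version
    sharper = F.adjust_sharpness(img, sharpness_factor=2.0)  # pushed past the original
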
- def autocontrast(img: Tensor) -> Tensor:
-
-     _assert_image_tensor(img)
-
-     if img.ndim < 3:
-         raise TypeError("Input image tensor should have at least 3 dimensions, but found {}".format(img.ndim))
-
-     _assert_channels(img, [1, 3])
-
-     bound = 1.0 if img.is_floating_point() else 255.0
-     dtype = img.dtype if torch.is_floating_point(img) else torch.float32
-
-     minimum = img.amin(dim=(-2, -1), keepdim=True).to(dtype)
-     maximum = img.amax(dim=(-2, -1), keepdim=True).to(dtype)
-     eq_idxs = torch.where(minimum == maximum)[0]
-     minimum[eq_idxs] = 0
-     maximum[eq_idxs] = bound
-     scale = bound / (maximum - minimum)
-
-     return ((img - minimum) * scale).clamp(0, bound).to(img.dtype)
-
-
- def _scale_channel(img_chan):
-     # TODO: we should expect bincount to always be faster than histc, but this
-     # isn't always the case. Once
-     # https://github.com/pytorch/pytorch/issues/53194 is fixed, remove the if
-     # block and only use bincount.
-     if img_chan.is_cuda:
-         hist = torch.histc(img_chan.to(torch.float32), bins=256, min=0, max=255)
-     else:
-         hist = torch.bincount(img_chan.view(-1), minlength=256)
-
-     nonzero_hist = hist[hist != 0]
-     step = torch.div(nonzero_hist[:-1].sum(), 255, rounding_mode='floor')
-     if step == 0:
-         return img_chan
-
-     lut = torch.div(
-         torch.cumsum(hist, 0) + torch.div(step, 2, rounding_mode='floor'),
-         step, rounding_mode='floor')
-     lut = torch.nn.functional.pad(lut, [1, 0])[:-1].clamp(0, 255)
-
-     return lut[img_chan.to(torch.int64)].to(torch.uint8)
-
-
- def _equalize_single_image(img: Tensor) -> Tensor:
-     return torch.stack([_scale_channel(img[c]) for c in range(img.size(0))])
-
-
- def equalize(img: Tensor) -> Tensor:
-
-     _assert_image_tensor(img)
-
-     if not (3 <= img.ndim <= 4):
-         raise TypeError("Input image tensor should have 3 or 4 dimensions, but found {}".format(img.ndim))
-     if img.dtype != torch.uint8:
-         raise TypeError("Only torch.uint8 image tensors are supported, but found {}".format(img.dtype))
-
-     _assert_channels(img, [1, 3])
-
-     if img.ndim == 3:
-         return _equalize_single_image(img)
-
-     return torch.stack([_equalize_single_image(x) for x in img])
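equalize only accepts uint8 tensors (3D or batched 4D) because the lookup table is built over the 0-255 histogram. A short usage sketch:

    import torch
    import torchvision.transforms.functional as F

    img = torch.randint(0, 256, (3, 64, 64), dtype=torch.uint8)
    out = F.equalize(img)            # per-channel histogram equalization
    batch = torch.randint(0, 256, (4, 3, 64, 64), dtype=torch.uint8)
    out_batch = F.equalize(batch)    # each image in the batch is equalized independently
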
spaces/ChandraMohanNayal/AutoGPT/Dockerfile DELETED
@@ -1,38 +0,0 @@
- # Use an official Python base image from the Docker Hub
- FROM python:3.10-slim
-
- # Install git
- RUN apt-get -y update
- RUN apt-get -y install git chromium-driver
-
- # Install Xvfb and other dependencies for headless browser testing
- RUN apt-get update \
-     && apt-get install -y wget gnupg2 libgtk-3-0 libdbus-glib-1-2 dbus-x11 xvfb ca-certificates
-
- # Install Firefox / Chromium
- RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
-     && echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list \
-     && apt-get update \
-     && apt-get install -y chromium firefox-esr
-
- # Set environment variables
- ENV PIP_NO_CACHE_DIR=yes \
-     PYTHONUNBUFFERED=1 \
-     PYTHONDONTWRITEBYTECODE=1
-
- # Create a non-root user and set permissions
- RUN useradd --create-home appuser
- WORKDIR /home/appuser
- RUN chown appuser:appuser /home/appuser
- USER appuser
-
- # Copy the requirements.txt file and install the requirements
- COPY --chown=appuser:appuser requirements.txt .
- RUN sed -i '/Items below this point will not be included in the Docker Image/,$d' requirements.txt && \
-     pip install --no-cache-dir --user -r requirements.txt
-
- # Copy the application files
- COPY --chown=appuser:appuser autogpt/ ./autogpt
-
- # Set the entrypoint
- ENTRYPOINT ["python", "-m", "autogpt"]