parquet-converter committed on
Commit 2b5db52 · 1 parent: 2957772

Update parquet files (step 71 of 476)

This view is limited to 50 files because it contains too many changes. See the raw diff for the complete set of changes.
Files changed (50)
  1. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe CS6 Response Code Generator A Guide to Activate Your Software.md +0 -140
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Version of JR Typing Tutor A Risky and Unethical Choice.md +0 -24
  3. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download HD Tune Pro The Ultimate Tool for HDD and SSD Optimization.md +0 -28
  4. spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (kokurikozaka kara 720p or 1080p) la storia romantica ambientata nella Yokohama degli anni 60.md +0 -141
  5. spaces/1gistliPinn/ChatGPT4/Examples/ActivationacronisTIH 6514 6.md +0 -10
  6. spaces/1phancelerku/anime-remove-background/Download Trucker - Overloaded Trucks APK and Haul Ore for Profit.md +0 -132
  7. spaces/1phancelerku/anime-remove-background/Download apk 5play.ru No ads no limits no worries.md +0 -121
  8. spaces/4eJIoBek/Stable_Diffusion_1.4_openvino/README.md +0 -12
  9. spaces/AI-Hobbyist/Hoyo-RVC/go-web.bat +0 -2
  10. spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/image_degradation/bsrgan_light.py +0 -650
  11. spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/transformer.py +0 -747
  12. spaces/ALSv/FSW/roop/metadata.py +0 -2
  13. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/label/Label.js +0 -297
  14. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/Factory.js +0 -13
  15. spaces/AlexWang/lama/saicinpainting/training/modules/multiscale.py +0 -244
  16. spaces/Alpaca233/SadTalker/src/face3d/extract_kp_videos.py +0 -108
  17. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/__init__.py +0 -188
  18. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_torchsde_objects.py +0 -17
  19. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_euler.py +0 -146
  20. spaces/Andy1621/uniformer_image_segmentation/configs/emanet/emanet_r101-d8_769x769_80k_cityscapes.py +0 -2
  21. spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_d6_r50b-d16_512x1024_80k_cityscapes.py +0 -2
  22. spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r101b-d8_769x769_80k_cityscapes.py +0 -4
  23. spaces/Anonymous-sub/Rerender/flow/flow_utils.py +0 -218
  24. spaces/AnthonyTruchetPoC/persistent-docker/scripts/run-coverage.sh +0 -4
  25. spaces/Apex-X/nono/app.py +0 -69
  26. spaces/Aqdas/YouTube_Video_OpenAI_whisper/whisper.py +0 -18
  27. spaces/Artrajz/vits-simple-api/logger.py +0 -40
  28. spaces/AsakuraMizu/moe-tts/text/ngu_dialect.py +0 -30
  29. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/README_D2.md +0 -62
  30. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/bifpn.py +0 -425
  31. spaces/Benson/text-generation/Examples/Asfalto 8 - Juego De Carreras De Coches.md +0 -72
  32. spaces/Benson/text-generation/Examples/Cmo Descargar Messenger En Iphone 5s.md +0 -63
  33. spaces/Benson/text-generation/Examples/Descargar Frag Pro Shooter Mod Apk Desbloquear Todos Los Personajes.md +0 -72
  34. spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/tzwin.py +0 -2
  35. spaces/CVH-vn1210/make_hair/minigpt4/models/Qformer.py +0 -1216
  36. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/matcher.py +0 -135
  37. spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/sort.h +0 -23
  38. spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/scan.h +0 -928
  39. spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/for_each.h +0 -95
  40. spaces/CVPR/MonoScene/monoscene/monoscene.py +0 -125
  41. spaces/CVPR/WALT/mmdet/models/losses/varifocal_loss.py +0 -133
  42. spaces/CVPR/regionclip-demo/detectron2/modeling/proposal_generator/rpn.py +0 -533
  43. spaces/Cletrason/Cletrason-toad-mario-movie/hf_utils.py +0 -39
  44. spaces/CloseEric/CloseEric/Dockerfile +0 -11
  45. spaces/CofAI/tv/public/mpegts.js +0 -0
  46. spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/processors/transforms_video.py +0 -179
  47. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ufoLib/validators.py +0 -1186
  48. spaces/Deep1994/t5-paraphrase/README.md +0 -12
  49. spaces/Dinoking/Guccio-AI-Designer/netdissect/nethook.py +0 -266
  50. spaces/DragGan/DragGan-Inversion/training/augment.py +0 -562
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe CS6 Response Code Generator A Guide to Activate Your Software.md DELETED
@@ -1,140 +0,0 @@
1
-
2
- <h1>Adobe CS6 Response Code Generator: How to Activate Adobe Products Offline</h1>
3
- <p>If you are a creative professional or enthusiast who uses Adobe CS6 products, such as Photoshop, Illustrator, InDesign, and more, you may have encountered situations where you need to activate your software offline. This could be because you are traveling, have internet connection issues, or work in a secure environment where online activation is not possible.</p>
4
- <p>In this article, we will explain what is Adobe CS6 and why do you need a response code generator for offline activation. We will also show you how to generate a response code for Adobe CS6 offline activation using an internet-enabled device and your product's serial number. Finally, we will discuss the benefits and limitations of using a response code generator for Adobe CS6 offline activation.</p>
5
- <h2>adobe cs6 response code generator</h2><br /><p><b><b>Download Zip</b> &bull; <a href="https://byltly.com/2uKxK2">https://byltly.com/2uKxK2</a></b></p><br /><br />
6
- <h2>What is Adobe CS6 and why do you need a response code generator?</h2>
7
- <h3>Adobe CS6 is a suite of creative software products that includes Photoshop, Illustrator, InDesign, and more.</h3>
8
- <p>Adobe CS6 stands for Creative Suite 6, which is a collection of software products that enable you to create, edit, design, and publish various types of digital content. Some of the most popular products in Adobe CS6 are:</p>
9
- <ul>
10
- <li>Photoshop: A powerful image editing and manipulation tool that lets you create stunning graphics, photos, illustrations, and more.</li>
11
- <li>Illustrator: A vector-based drawing and design tool that lets you create logos, icons, typography, and more.</li>
12
- <li>InDesign: A layout and publishing tool that lets you create print and digital documents, such as books, magazines, flyers, and more.</li>
13
- <li>Dreamweaver: A web development tool that lets you create websites and web applications using HTML, CSS, JavaScript, and more.</li>
14
- <li>Premiere Pro: A video editing and production tool that lets you create professional-quality videos and movies.</li>
15
- <li>After Effects: A motion graphics and visual effects tool that lets you create animations, transitions, effects, and more for your videos.</li>
16
- </ul>
17
- <p>These are just some of the products in Adobe CS6. There are many more products that cater to different creative needs and workflows.</p>
18
- <h3>A response code generator is a tool that helps you activate Adobe products offline when you cannot connect to the internet or Adobe servers.</h3>
19
- <p>To use Adobe products, you need to activate them with your Adobe ID and password. This process verifies that you have a valid license for the product and prevents unauthorized use or piracy.</p>
20
- <p>Normally, this process is done online by connecting to the internet and signing in with your Adobe ID and password. However, there may be situations where you cannot connect to the internet or Adobe servers due to various reasons. For example:</p>
21
- <ul>
22
- <li>You are traveling and do not have access to a reliable internet connection.</li>
23
- <li>You have internet connection issues or network problems that prevent you from connecting to Adobe servers.</li>
24
- <li>You work in a secure environment like government, banking, etc. where online activation is not possible due to security policies or restrictions.</li>
25
- </ul>
26
- <p>In these situations, you need to use an alternative method of activation called offline activation. Offline activation allows you to activate your Adobe products without an internet connection or access to Adobe servers.</p>
27
- <p>adobe cs6 activation code generator online<br />
28
- adobe cs6 serial number generator mac<br />
29
- adobe cs6 master collection response code crack<br />
30
- adobe cs6 offline activation code generator<br />
31
- adobe cs6 keygen generator download<br />
32
- adobe cs6 license key generator free<br />
33
- adobe cs6 product code generator<br />
34
- adobe cs6 registration code generator<br />
35
- adobe cs6 authorization code generator windows<br />
36
- adobe cs6 activation code generator for pc<br />
37
- adobe cs6 serial number generator windows 10<br />
38
- adobe cs6 master collection response code keygen<br />
39
- adobe cs6 offline activation code generator mac<br />
40
- adobe cs6 keygen generator online free<br />
41
- adobe cs6 license key generator online<br />
42
- adobe cs6 product code generator mac<br />
43
- adobe cs6 registration code generator online<br />
44
- adobe cs6 authorization code generator mac<br />
45
- adobe cs6 activation code generator for mac<br />
46
- adobe cs6 serial number generator windows 7<br />
47
- adobe cs6 master collection response code generator online<br />
48
- adobe cs6 offline activation code generator windows 10<br />
49
- adobe cs6 keygen generator download free<br />
50
- adobe cs6 license key generator mac<br />
51
- adobe cs6 product code generator online free<br />
52
- adobe cs6 registration code generator mac<br />
53
- adobe cs6 authorization code generator windows 10<br />
54
- adobe cs6 activation code generator for windows 10<br />
55
- adobe cs6 serial number generator windows 8.1<br />
56
- adobe cs6 master collection response code crack download<br />
57
- adobe cs6 offline activation code generator windows 7<br />
58
- adobe cs6 keygen generator online no survey<br />
59
- adobe cs6 license key generator windows 10<br />
60
- adobe cs6 product code generator windows 10<br />
61
- adobe cs6 registration code generator windows 10<br />
62
- adobe cs6 authorization code generator windows 7<br />
63
- adobe cs6 activation code generator for windows 7<br />
64
- adobe cs6 serial number generator windows xp<br />
65
- adobe cs6 master collection response code hack<br />
66
- adobe cs6 offline activation code generator windows 8.1<br />
67
- adobe cs6 keygen generator free download no survey<br />
68
- adobe cs6 license key generator windows 7<br />
69
- adobe cs6 product code generator windows 7<br />
70
- adobe cs6 registration code generator windows 7<br />
71
- adobe cs6 authorization code generator windows 8.1<br />
72
- adobe cs6 activation code generator for windows 8.1<br />
73
- adobe cs6 serial number generator mac os x<br />
74
- adobe cs6 master collection response code bypass<br />
75
- adobe cs6 offline activation code generator mac os x<br />
76
- adobe cs6 keygen generator mac download</p>
77
- <p>To perform offline activation, you need a tool called a response code generator. A response code generator is a web page that helps you generate a unique code called a response code that you can use to activate your Adobe products offline.</p>
78
- <h2>How to generate a response code for Adobe CS6 offline activation?</h2>
79
- <h3>Step 1: Follow the installation or product launch screens until you see a link that says "I cannot connect to the internet" or "Having trouble connecting to the internet". Click the link and follow the instructions to generate a request code.</h3>
80
- <p>The first step of offline activation is to generate a request code on your offline computer where you want to use your Adobe product. A request code is another unique code that identifies your computer and product.</p>
81
- <p>To generate a request code:</p>
82
- <ol>
83
- <li>Install or launch your Adobe product on your offline computer as usual.</li>
84
- <li>Follow the installation or product launch screens until you see a link that says "I cannot connect to the internet" or "Having trouble connecting to the internet". Click the link.</li>
85
- <li>You will see a screen that asks you to enter your product's serial number. Enter it and click Next.</li>
86
- <li>You will see another screen that shows your request code. Write it down or copy it somewhere safe. You will need it later.</li>
87
- </ol>
88
- <h3>Step 2: Use an internet-enabled device to visit https://exception.licenses.adobe.com/aoes/aoes/v1/t1?locale=en and sign in with your Adobe ID and password. Enter the request code and your product's serial number to generate a response code.</h3>
89
- <p>The second step of offline activation is to generate a response code using an internet-enabled device such as another computer, a smartphone, or a tablet. A response code is the final code that you can use to activate your Adobe product offline.</p>
90
- <p>To generate a response code:</p>
91
- <ol>
92
- <li>Use an internet-enabled device to visit https://exception.licenses.adobe.com/aoes/aoes/v1/t1?locale=en</li>
93
- <li>Sign in with your Adobe ID and password. If you do not have an Adobe ID, you can create one for free by clicking Create an account.</li>
94
- <li>Enter the request code that you generated in step 1 and your product's serial number in the corresponding fields. Click Generate Response Code.</li>
95
- <li>You will see your response code on the screen. Write it down or copy it somewhere safe. You will need it later.</li>
96
- </ol>
97
- <h3>Step 3: Enter the response code on the installation or launch product screen of your offline computer when you are prompted to complete the offline activation process.</h3>
98
- <p>The third step of offline activation is to enter the response code on your offline computer where you want to use your Adobe product. This will complete the offline activation process and allow you to use your product normally.</p>
99
- <p>To enter the response code:</p>
100
- <ol>
101
- <li>Go back to your offline computer where you installed or launched your Adobe product in step 1.</li>
102
- <li>You should see a screen that prompts you to enter your response code. Enter it exactly as it appears and click Activate. This will complete the offline activation process and allow you to use your product normally.</li>
103
- </ol>
104
- <h2>What are the benefits of using a response code generator for Adobe CS6 offline activation?</h2>
105
- <h3>You can activate your Adobe products without an internet connection or access to Adobe servers.</h3>
106
- <p>One of the main benefits of using a response code generator for Adobe CS6 offline activation is that you can activate your Adobe products without an internet connection or access to Adobe servers. This means that you can use your products anytime and anywhere, even when you are offline or in a secure environment where online activation is not possible.</p>
107
- <h3>You can use your Adobe products on secure environments like government, banking, etc. where online activation is not possible.</h3>
108
- <p>Another benefit of using a response code generator for Adobe CS6 offline activation is that you can use your Adobe products on secure environments like government, banking, etc. where online activation is not possible due to security policies or restrictions. For example, if you work in a government agency or a bank that does not allow internet access or connection to external servers, you can still use your Adobe products by activating them offline using a response code generator.</p>
109
- <h3>You can avoid activation errors or issues that may occur due to network problems or server outages.</h3>
110
- <p>A third benefit of using a response code generator for Adobe CS6 offline activation is that you can avoid activation errors or issues that may occur due to network problems or server outages. For example, if you have a slow or unstable internet connection that prevents you from connecting to Adobe servers or completing the online activation process, you can still use your Adobe products by activating them offline using a response code generator. Similarly, if Adobe servers are down or undergoing maintenance, you can still use your Adobe products by activating them offline using a response code generator.</p>
111
- <h2>What are the limitations of using a response code generator for Adobe CS6 offline activation?</h2>
112
- <h3>You need an internet-enabled device and your product's serial number to generate a response code.</h3>
113
- <p>One of the limitations of using a response code generator for Adobe CS6 offline activation is that you need an internet-enabled device and your product's serial number to generate a response code. This means that you cannot activate your Adobe products offline without having access to another device that has internet access and your product's serial number. For example, if you lose your product's serial number or do not have another device that has internet access, you cannot generate a response code and activate your Adobe products offline.</p>
114
- <h3>You need to complete the offline activation within 7 days of the first launch of your Adobe product or it will stop working.</h3>
115
- <p>Another limitation of using a response code generator for Adobe CS6 offline activation is that you need to complete the offline activation within 7 days of the first launch of your Adobe product or it will stop working. This means that you cannot use your Adobe products indefinitely without connecting to the internet or Adobe servers at least once every 7 days. For example, if you travel for more than 7 days without internet access or access to Adobe servers, you will not be able to use your Adobe products until you complete the online activation and registration process.</p>
116
- <h3>The request code is machine-specific and valid for 72 hours. If it takes longer than 72 hours to complete the offline activation, you need to generate a new request code.</h3>
117
- <p>A third limitation of using a response code generator for Adobe CS6 offline activation is that the request code is machine-specific and valid for 72 hours. This means that you cannot use the same request code on different computers or after 72 hours have passed since you generated it. For example, if you want to activate your Adobe products on another computer or if it takes longer than 72 hours to generate a response code and enter it on your offline computer, you need to generate a new request code and repeat the offline activation process.</p>
118
- <h2>Conclusion</h2>
119
- <p>In this article, we have explained what is Adobe CS6 and why do you need a response code generator for offline activation. We have also shown you how to generate a response code for Adobe CS6 offline activation using an internet-enabled device and your product's serial number. Finally, we have discussed the benefits and limitations of using a response code generator for Adobe CS6 offline activation.</p>
120
- <p>We hope that this article has helped you understand how to activate your Adobe products offline using a response code generator. If you have any questions or feedback, please feel free to leave a comment below.</p>
121
- <h2>Frequently Asked Questions</h2>
122
- <ol>
123
- <li><b>What is the difference between online and offline activation?</b></li>
124
- <p>Online activation is the process of activating your Adobe products by connecting to the internet and signing in with your Adobe ID and password. Offline activation is the process of activating your Adobe products without an internet connection or access to Adobe servers by using a response code generator.</p>
125
- <li><b>Can I use both online and offline activation for my Adobe products?</b></li>
126
- <p>Yes, you can use both online and offline activation for your Adobe products depending on your situation and preference. However, you cannot use both methods simultaneously for the same product on the same computer.</p>
127
- <li><b>How many times can I use offline activation for my Adobe products?</b></li>
128
- <p>You can use offline activation for your Adobe products as many times as you need as long as you have an internet-enabled device and your product's serial number to generate a response code. However, each time you use offline activation, you need to generate a new request code and enter it on your offline computer within 72 hours.</p>
129
- <li><b>What happens if I lose my product's serial number or my response code?</b></li>
130
- <p>If you lose your product's serial number or your response code, you will not be able to activate your Adobe products offline until you find them again. If you lose your product's serial number, you can try to recover it by contacting Adobe customer support or by checking your email confirmation or receipt when you purchased the product. If you lose your response code, you can try to generate it again by visiting https://exception.licenses.adobe.com/aoes/aoes/v1/t1?locale=en and entering the request code and your product's serial number.</p>
131
- <li><b>What are some alternatives to using a response code generator for Adobe CS6 offline activation?</b></li>
132
- <p>Some alternatives to using a response code generator for Adobe CS6 offline activation are:</p>
133
- <ul>
134
- <li>Using an online connection: If possible, try to connect to the internet and complete the online activation and registration process instead of using the offline method.</li>
135
- <li>Using volume licensing: If you are an enterprise customer who needs to install and activate multiple copies of Adobe products on multiple computers in secure environments where online activation is not possible, consider using volume licensing instead of individual licensing. Volume licensing allows you to activate and manage your Adobe products using a single license key and an offline activation tool. For more information, visit https://www.adobe.com/volume-licensing.html.</li>
136
- <li>Using Creative Cloud: If you are a creative professional or enthusiast who wants to use the latest versions of Adobe products with more features and benefits, consider switching to Creative Cloud instead of using Adobe CS6. Creative Cloud is a subscription-based service that gives you access to all Adobe creative apps and services, as well as cloud storage, collaboration tools, and more. For more information, visit https://www.adobe.com/creativecloud.html.</li>
137
- </ul>
138
- </p> 0a6ba089eb<br />
139
- <br />
140
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Version of JR Typing Tutor A Risky and Unethical Choice.md DELETED
@@ -1,24 +0,0 @@
1
-
2
- <h1>How to Download Crack Version of JR Typing Tutor for Free</h1>
3
- <p>JR Typing Tutor is a software that helps you learn and improve your typing skills in Hindi, English, and other languages. It is specially designed for government typing tests, such as Allahabad High Court RO/ARO, UKPSC RO/ARO, UPPCL, CPCT, M.P. High Court, U.P. Computer Operator, IA, Rajasthan LDC, Tax Assistant, RSMSSB LDC Efficiency & Type Test. It also supports various fonts and keyboard layouts, such as DevLys010, KrutiDev010, Mangal, Raavi, Asees.</p>
4
- <h2>crack version of jr typing tutor</h2><br /><p><b><b>DOWNLOAD</b> &#127383; <a href="https://byltly.com/2uKvsb">https://byltly.com/2uKvsb</a></b></p><br /><br />
5
- <p>If you want to download JR Typing Tutor for free, you may be tempted to look for a crack version of the software. A crack version is a modified version of the software that bypasses the license verification and allows you to use it without paying. However, downloading a crack version of JR Typing Tutor is not a good idea for several reasons.</p>
6
- <h2>Why You Should Avoid Crack Version of JR Typing Tutor</h2>
7
- <p>Here are some of the risks and disadvantages of downloading a crack version of JR Typing Tutor:</p>
8
- <ul>
9
- <li><b>It is illegal.</b> Downloading a crack version of JR Typing Tutor is a violation of the software's terms and conditions and a form of piracy. You may face legal consequences if you are caught using or distributing a crack version of the software.</li>
10
- <li><b>It is unsafe.</b> Downloading a crack version of JR Typing Tutor may expose your computer to viruses, malware, spyware, or ransomware. These malicious programs can damage your system, steal your personal information, or lock your files until you pay a ransom. You may also lose your data or compromise your privacy if you use a crack version of the software.</li>
11
- <li><b>It is unreliable.</b> Downloading a crack version of JR Typing Tutor may not work properly or at all. You may encounter errors, bugs, crashes, or compatibility issues with your operating system or other software. You may also miss out on the latest features, updates, and support from the official website.</li>
12
- <li><b>It is unethical.</b> Downloading a crack version of JR Typing Tutor is unfair to the developers who have invested their time, money, and effort to create the software. You are depriving them of their rightful income and discouraging them from improving the software or creating new products.</li>
13
- </ul>
14
- <h2>How to Download JR Typing Tutor Legally</h2>
15
- <p>If you want to download JR Typing Tutor legally and safely, you have two options:</p>
16
- <ol>
17
- <li><b>Download a free trial.</b> You can download a free 14-day trial of JR Typing Tutor from the official website. This will allow you to test the software and see if it meets your needs and expectations. You can access all the features and functions of the software during the trial period.</li>
18
- <li><b>Buy a license.</b> If you are satisfied with the software and want to continue using it after the trial period, you can buy a license from the official website. The price of the license depends on the duration and number of users. You can choose from 1 month, 3 months, 6 months, 1 year, 2 years, 3 years, or lifetime licenses. You can also choose from single user or multi user licenses. Buying a license will give you unlimited access to the software and its updates and support.</li>
19
- </ol>
20
- <h2>Conclusion</h2>
21
- <p>JR Typing Tutor is a useful software that can help you learn and improve your typing skills in Hindi, English, and other languages. It is specially designed for government typing tests and supports various fonts and keyboard layouts. However, downloading a crack version of JR Typing Tutor is not advisable because it is illegal, unsafe, unreliable, and unethical. Instead, you should download a free trial or buy a license from the official website to enjoy the benefits of the software legally and safely.</p>
22
- <p></p> ddb901b051<br />
23
- <br />
24
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download HD Tune Pro The Ultimate Tool for HDD and SSD Optimization.md DELETED
@@ -1,28 +0,0 @@
1
- <br />
2
- <h1>How to Download HD Tune Pro and Why You Need It</h1>
3
- <p>HD Tune Pro is a powerful tool that can help you monitor, benchmark, and optimize your hard disk drives (HDDs) and solid state drives (SSDs). It can also scan for errors, check the health status (S.M.A.R.T.), securely erase all data, and more. In this article, we will show you how to download HD Tune Pro and what features it offers.</p>
4
- <h2>How to Download HD Tune Pro</h2>
5
- <p>HD Tune Pro is a paid software that costs $34.95 USD or 24.95 EUR for a single user license. You can purchase it from the official website at <a href="http://www.hdtune.com/download.html">http://www.hdtune.com/download.html</a>. After you complete the payment, you will receive a serial number that you can use to activate the software.</p>
6
- <h2>download hd tune pro</h2><br /><p><b><b>Download Zip</b> === <a href="https://byltly.com/2uKA0A">https://byltly.com/2uKA0A</a></b></p><br /><br />
7
- <p>If you want to try out the software before buying it, you can download a 15-day trial version from the same website. The trial version has all the features of the full version, except for the file benchmark and the folder usage view. You can also download an older version of HD Tune (2.55) for free, but it has fewer features and supports fewer operating systems.</p>
8
- <p>To install HD Tune Pro, you need to have Windows 2000, Windows XP, Windows Vista, Windows 7, Windows 8, or Windows 10. You also need to have a hard disk (internal or external), SSD, USB stick, or memory card reader. Note that some drives may not support all functions due to hardware limitations.</p>
9
- <h2>What Features Does HD Tune Pro Offer</h2>
10
- <p>HD Tune Pro offers many features that can help you test and improve the performance of your drives. Here are some of them:</p>
11
- <ul>
12
- <li><b>Benchmark</b>: This feature allows you to measure the read and write speed of your drives under different conditions. You can also compare the results with other drives or view them as graphs.</li>
13
- <li><b>Error Scan</b>: This feature allows you to scan your drives for bad sectors or other errors. You can also view the error log and save it as a text file.</li>
14
- <li><b>Health</b>: This feature allows you to check the health status of your drives using the S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) system. You can also view the log file and run a self-test.</li>
15
- <li><b>Secure Erase</b>: This feature allows you to securely erase all data from your drives using various methods. This can prevent data recovery and protect your privacy.</li>
16
- <li><b>Disk Monitor</b>: This feature allows you to monitor the temperature, transfer rate, and access time of your drives in real-time. You can also set up alerts and warnings for critical values.</li>
17
- <li><b>AAM</b>: This feature allows you to adjust the Automatic Acoustic Management (AAM) setting of your drives. This can reduce the noise level or increase the performance of your drives.</li>
18
- <li><b>Command Line Parameters</b>: This feature allows you to run HD Tune Pro from a command prompt or a batch file with various parameters.</li>
19
- <li><b>File Benchmark</b>: This feature allows you to measure the read and write speed of files on your drives. You can also compare the results with other files or view them as graphs.</li>
20
- <li><b>Folder Usage</b>: This feature allows you to view how much space each folder occupies on your drives. You can also sort the folders by size or name.</li>
21
- <li><b>Extra Tests</b>: This feature allows you to run some extra tests on your drives, such as random access time, burst rate, CPU usage, etc.</li>
22
- <li><b>Cache Test</b>: This feature allows you to test the cache size and performance of your drives.</li>
23
- </ul>
24
- <h2>Conclusion</h2>
25
- <p>HD Tune Pro is a comprehensive and reliable software that can help you monitor, benchmark, and optimize your hard disk drives and solid state drives. It can also scan for errors, check the health status, securely erase all data, and more. If you want to download HD Tune Pro, you can visit the official website at <a href="http://www.hdtune.com/download.html">http</p>
26
- <p></p> ddb901b051<br />
27
- <br />
28
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (kokurikozaka kara 720p or 1080p) la storia romantica ambientata nella Yokohama degli anni 60.md DELETED
@@ -1,141 +0,0 @@
1
- <br />
2
- <h1>HD Online Player (kokurikozaka kara 720p or 1080p)</h1>
3
- <p>If you are a fan of Japanese animation, you might have heard of kokurikozaka kara, or From Up on Poppy Hill, a 2011 film by Studio Ghibli. This film is a beautiful and nostalgic story set in the 1960s, about a group of high school students who try to save their clubhouse from demolition. It is also a touching romance between two young people who discover a surprising connection.</p>
4
- <h2>HD Online Player (kokurikozaka kara 720p or 1080p)</h2><br /><p><b><b>Download File</b> &#10002; <a href="https://byltly.com/2uKvKx">https://byltly.com/2uKvKx</a></b></p><br /><br />
5
- <p>In this article, we will tell you everything you need to know about kokurikozaka kara, and how you can watch it online in high definition. We will also help you decide whether to choose 720p or 1080p resolution for your viewing pleasure. So, let's get started!</p>
6
- <h2>What is kokurikozaka kara?</h2>
7
- <p>Kokurikozaka kara, or From Up on Poppy Hill, is a Japanese animated drama film directed by Gorō Miyazaki, the son of the legendary Hayao Miyazaki. It is based on a manga series of the same name by Tetsurō Sayama and Chizuru Takahashi. It was produced by Studio Ghibli, the renowned animation studio behind classics like Spirited Away, My Neighbor Totoro, and Princess Mononoke.</p>
8
- <h3>A brief summary of the plot and characters</h3>
9
- <p>The film is set in 1963 Yokohama, Japan, a year before the Tokyo Olympics. The main character is Umi Matsuzaki, a 16-year-old girl who lives in a boarding house called Coquelicot Manor with her grandmother and younger siblings. Her father was a sailor who died in the Korean War, and her mother is a medical professor studying in the United States. Every morning, Umi raises a set of signal flags with the message "I pray for safe voyages" in honor of her father.</p>
10
- <p>One day, Umi meets Shun Kazama, a boy who writes for the school newspaper. He is also a member of the "Latin Quarter", an old building that houses various clubs and activities. The Latin Quarter is threatened with demolition by the school board, who wants to build a new modern building instead. Umi and Shun join forces with other students to clean up and renovate the Latin Quarter, hoping to persuade the board to reconsider.</p>
11
- <p>Watch From Up On Poppy Hill online HD free<br />
12
- Kokuriko-zaka Kara full movie download 1080p<br />
13
- HD Online Player for Kokurikozaka Kara 720p<br />
14
- From Up On Poppy Hill 2011 streaming 1080p<br />
15
- Kokuriko-zaka Kara HD online player portable<br />
16
- How to watch From Up On Poppy Hill in HD<br />
17
- Kokurikozaka Kara 720p or 1080p download<br />
18
- From Up On Poppy Hill full movie online HD<br />
19
- Kokuriko-zaka Kara HD online player free<br />
20
- Watch From Up On Poppy Hill 1080p streaming<br />
21
- Kokurikozaka Kara full movie HD online player<br />
22
- From Up On Poppy Hill 720p download free<br />
23
- Kokuriko-zaka Kara HD online player soundcloud<br />
24
- Watch From Up On Poppy Hill online free HD<br />
25
- Kokurikozaka Kara 1080p download link<br />
26
- From Up On Poppy Hill HD online player burgerhouse<br />
27
- Kokuriko-zaka Kara HD online player elinquar<br />
28
- Watch From Up On Poppy Hill 720p or 1080p<br />
29
- Kokurikozaka Kara full movie streaming HD<br />
30
- From Up On Poppy Hill HD online player kindlansuxt<br />
31
- Kokuriko-zaka Kara HD online player upd<br />
32
- Watch From Up On Poppy Hill full movie HD<br />
33
- Kokurikozaka Kara 720p or 1080p streaming<br />
34
- From Up On Poppy Hill HD online player fatalitron<br />
35
- Kokuriko-zaka Kara HD online player black and white<br />
36
- Watch From Up On Poppy Hill in HD quality<br />
37
- Kokurikozaka Kara full movie download free HD<br />
38
- From Up On Poppy Hill HD online player action movies<br />
39
- Kokuriko-zaka Kara HD online player USA<br />
40
- Watch From Up On Poppy Hill 2011 online free<br />
41
- Kokurikozaka Kara 720p or 1080p full movie<br />
42
- From Up On Poppy Hill HD online player best movies 2019<br />
43
- Kokuriko-zaka Kara HD online player beautiful animation<br />
44
- Watch From Up On Poppy Hill in 1080p quality<br />
45
- Kokurikozaka Kara full movie watch online free HD<br />
46
- From Up On Poppy Hill HD online player studio ghibli<br />
47
- Kokuriko-zaka Kara HD online player kokurikozakakara.com<br />
48
- Watch From Up On Poppy Hill with subtitles HD<br />
49
- Kokurikozaka Kara 720p or 1080p watch online free<br />
50
- From Up On Poppy Hill HD online player ghibli fan club</p>
51
- <p>As Umi and Shun work together, they develop feelings for each other. However, they soon discover that they share a shocking secret that could tear them apart. Will they be able to overcome their past and save their future?</p>
52
- <h3>The production and release of the film</h3>
53
- <p>The film was announced by Studio Ghibli in December 2010, as Gorō Miyazaki's second directorial work after Tales from Earthsea (2006). His father, Hayao Miyazaki, co-wrote the screenplay with Keiko Niwa, based on the manga by Sayama and Takahashi. The music was composed by Satoshi Takebe, who also worked on Tales from Earthsea.</p>
54
- <p>The film was released in Japan on July 16, 2011, by Toho. It was a commercial success, grossing over $61 million worldwide. It was also well received by critics, who praised its animation, story, and characters. It won several awards, including the Japan Academy Prize for Animation of the Year, and was nominated for the Asia Pacific Screen Award for Best Animated Feature Film.</p>
55
- <p>The film was dubbed into English by GKIDS, with a voice cast that includes Sarah Bolger as Umi, Anton Yelchin as Shun, Gillian Anderson as Umi's mother Ryoko, Jamie Lee Curtis as Umi's grandmother Hana, Beau Bridges as Shun's father Yūichirō Sawamura , Bruce Dern as Shun's adoptive father Yoshio Onodera , Christina Hendricks as Miki Hokuto , Aubrey Plaza as Sachiko Hirokōji , Chris Noth as Tokumaru , Ron Howard as Akio Kazama , Jeff Dunham as Gen Shiraki , Emily Osment as Nobuko Yokoyama , Charlie Saxton as Shiro Mizunuma , Isabelle Fuhrman as Sora Matsuzaki , Alex Wolff as Riku Matsuzaki , Jake Steinfeld as Oyaji , James Marsden as Mr. Tokumaru , Masami Nagasawa as Umi Matsuzaki (Japanese version), Junichi Okada as Shun Kazama (Japanese version), Keiko Takeshita as Hana Matsuzaki (Japanese version), Yuriko Ishida as Ryoko Matsuzaki (Japanese version), Jun Fubuki as Miki Hokuto (Japanese version), Takashi Naito as Yūichirō Sawamura (Japanese version), Shunsuke Kazama as Shiro Mizunuma (Japanese version), Nao Ōmori as Yoshio Onodera (Japanese version), Teruyuki Kagawa as Tokumaru (Japanese version). It was released in North America on March 15, 2013.</p>
56
- <h3>The reception and awards of the film</h3>
57
- <p>The film received positive reviews from most critics and audiences. It has a rating of 86% on Rotten Tomatoes based on 97 reviews, with an average score of 7/10. The website's critical consensus reads: "Gentle and nostalgic, From Up on Poppy Hill is one of Studio Ghibli's sweeter efforts -- and if it doesn't push the boundaries of the genre, it remains as engagingly lovely as Ghibli fans have come to expect." </p>
58
- <p>On Metacritic , which assigns a normalized rating out of 100 to reviews from mainstream critics, the film has an average score of 71 based on 25 reviews, indicating "generally favorable reviews". </p>
59
- <p>The film won several awards, including: <ul>
60
- <li>The Japan Academy Prize for Animation of the Year in 2012</li>
61
- <li>The Mainichi Film Award for Best Animation Film in 2011</li>
62
- <li>The Tokyo Anime Award for Animation of the Year in 2012</li>
63
- <li>The Asia Pacific Screen Award for Best Animated Feature Film nomination in 2012</li>
64
- <li>The Satellite Award for Best Animated or Mixed Media Feature nomination in 2013</li>
65
- </ul></p>
66
- <h2>Why watch kokurikozaka kara online?</h2>
67
- <p>If you are interested in watching kokurikozaka kara, you might wonder why you should stream it online instead of buying or renting a DVD or Blu-ray disc. Here are some reasons why watching it online is a good idea:</p>
68
- <h3>The benefits of streaming the film online</h3>
69
- <p>Streaming kokurikozaka kara online has many advantages, such as: <ul>
70
- <li>You can watch it anytime and anywhere you want, as long as you have an internet connection and a compatible device.</li>
71
- <li>You can choose between different platforms and services that offer different prices and features.</li>
72
- <li>You can avoid paying extra fees for shipping or late returns.</li>
73
- <li>You can avoid damaging or losing your physical copy of the film.</li>
74
- <li>You can enjoy high-quality video and audio without any scratches or glitches.</li>
75
- <li>You can access bonus features and extras that might not be available on discs.</li>
76
- </ul></p>
77
- <h3>The best platforms and devices to watch the film online</h3>
78
- <p>There are many options for streaming kokurikozaka kara online, but some are better than others. Here are some of the best platforms and devices to watch the film online:</p>
79
- <table><tr><th>Platform</th><th>Device</th><th>Features</th></tr><tr><td>Netflix</td><td>Smart TV, laptop, tablet, smartphone, gaming console, <h3>The themes and messages of the film</h3>
80
- <p>Kokurikozaka kara is not only a romantic and nostalgic film, but also a film that explores various themes and messages that are relevant to today's society. Some of the themes and messages are:</p>
81
- <ul>
82
- <li>The importance of preserving history and culture. The film shows how the students of the Latin Quarter value their old building and its memories, and how they fight to save it from destruction. The film also depicts the contrast between the traditional and the modern, the old and the new, and the rural and the urban in Japan during the 1960s.</li>
83
- <li>The impact of war and loss on families and individuals. The film portrays how Umi and Shun cope with the absence of their fathers, who died in the Korean War. The film also reveals how their fathers' pasts affect their present and future. The film also touches on the issues of identity, belonging, and inheritance.</li>
84
- <li>The power of love and friendship. The film illustrates how Umi and Shun's relationship grows from friendship to romance, and how they support each other through their challenges. The film also shows how their friends and family help them along the way, and how they form a community of solidarity and harmony.</li>
85
- </ul>
86
- <p>The film conveys a message of hope and optimism, despite the difficulties and uncertainties of life. It celebrates the beauty and joy of everyday life, and the potential of young people to make a difference in the world.</p>
87
- <h2>Why watch kokurikozaka kara online?</h2>
88
- <p>If you are interested in watching kokurikozaka kara, you might wonder why you should stream it online instead of buying or renting a DVD or Blu-ray disc. Here are some reasons why watching it online is a good idea:</p>
89
- <h3>The benefits of streaming the film online</h3>
90
- <p>Streaming kokurikozaka kara online has many advantages, such as: <ul>
91
- <li>You can watch it anytime and anywhere you want, as long as you have an internet connection and a compatible device.</li>
92
- <li>You can choose between different platforms and services that offer different prices and features.</li>
93
- <li>You can avoid paying extra fees for shipping or late returns.</li>
94
- <li>You can avoid damaging or losing your physical copy of the film.</li>
95
- <li>You can enjoy high-quality video and audio without any scratches or glitches.</li>
96
- <li>You can access bonus features and extras that might not be available on discs.</li>
97
- </ul></p>
98
- <h3>The best platforms and devices to watch the film online</h3>
99
- <p>There are many options for streaming kokurikozaka kara online, but some are better than others. Here are some of the best platforms and devices to watch the film online:</p>
100
- <table><tr><th>Platform</th><th>Device</th><th>Features</th></tr><tr><td>Netflix</td><td>Smart TV, laptop, tablet, smartphone, gaming console, streaming device</td><td>- Offers a wide range of movies and shows, including kokurikozaka kara<br>- Allows you to download content for offline viewing<br>- Supports HD quality and 5.1 surround sound<br>- Has a user-friendly interface and personalized recommendations<br>- Charges a monthly fee based on your plan<br>- Requires an internet connection of at least 5 Mbps for HD streaming</td></tr><tr><td>Amazon Prime Video</td><td>Smart TV, laptop, tablet, smartphone, gaming console, streaming device</td><td>- Offers a large library of movies and shows, including kokurikozaka kara<br>- Allows you to rent or buy content that is not included in your subscription<br>- Supports HD quality and 5.1 surround sound<br>- Has a simple interface and parental controls<br>- Charges an annual or monthly fee for Prime membership<br>- Requires an internet connection of at least 5 Mbps for HD streaming</td></tr><tr><td>Hulu</td><td>Smart TV, laptop, tablet, smartphone, gaming console, streaming device</td><td>- Offers a variety of movies and shows, including kokurikozaka kara<br>- Allows you to add live TV channels and premium networks to your subscription<br>- Supports HD quality and 5.1 surround sound<br>- Has a sleek interface and multiple profiles<br>- Charges a monthly fee based on your plan<br>- Requires an internet connection of at least 6 Mbps for HD streaming</td></tr></table>
101
- <h3>The tips and tricks to enhance your viewing experience</h3>
102
- <p>To make sure you enjoy watching kokurikozaka kara online, here are some tips and tricks to follow:</p>
103
- <ul>
104
- <li>Choose a platform that suits your preferences and budget.</li>
105
- <li>Check your internet speed and bandwidth before streaming.</li>
106
- <li>Select a device that has a good screen and sound quality.</li>
107
- <li>Adjust your brightness and volume settings according to your environment.</li>
108
- <li>Use headphones or speakers for better audio effects.</li>
109
- <li>Avoid spoilers and distractions while watching.</li>
110
- <li>Watch with friends or family for more fun.</li>
111
- </ul>
112
- <h2>How to choose between 720p and 1080p?</h2>
113
- <p>One of the questions you might have when streaming kokurikozaka kara online is whether to choose 720p or 1080p resolution. What is the difference between them, and which one is better for you? Let's find out!</p>
114
- <h3>The difference between 720p and 1080p resolution</h3>
115
- <p>The resolution of a video refers to the number of pixels that make up its image. The more pixels there are, the sharper and clearer the image will be. The term 720p means that the video has 720 horizontal lines of pixels, while 1080p means that it has 1080 horizontal lines of pixels. Therefore, 1080p has more pixels than 720p, resulting in higher image quality.</p>
116
- <h3>The factors that affect your resolution choice</h3>
117
- <p>However, choosing between 720p and 1080p is not as simple as picking the one with more pixels. There are other factors that affect your resolution choice, such as:</p>
118
- <ul>
119
- <li>Your device's screen size and resolution. If your device has a small screen or a low resolution, you might not notice much difference between 720p and 1080p. On the other hand, if your device has a large screen or a high resolution, you might appreciate the extra details that 1080p offers.</li>
120
- <li>Your internet speed and data usage. Streaming 1080p requires more bandwidth than streaming 720p, which means it will consume more data and load slower if your internet connection is weak or unstable. If you have a fast and reliable internet connection, you can enjoy smooth streaming at 1080p. However, if you have a slow or limited internet connection, you might want to stick with 720p to avoid buffering or extra charges.</li>
121
- <li>Your personal preference and realism, you might prefer 1080p. If you are more concerned about speed and data, you might opt for 720p.</li>
122
- </ul>
123
- <h3>The pros and cons of 720p and 1080p for kokurikozaka kara</h3>
124
- <p>To help you decide between 720p and 1080p for kokurikozaka kara, here are some pros and cons of each resolution:</p>
125
- <table><tr><th>Resolution</th><th>Pros</th><th>Cons</th></tr><tr><td>720p</td><td>- Faster loading and streaming<br>- Less data consumption<br>- Suitable for smaller screens<br>- Good enough for most animated films</td><td>- Lower image quality<br>- Less details and sharpness<br>- Not ideal for larger screens<br>- Might miss some nuances and subtleties of the film</td></tr><tr><td>1080p</td><td>- Higher image quality<br>- More details and sharpness<br>- Ideal for larger screens<br>- Can appreciate the artistry and beauty of the film</td><td>- Slower loading and streaming<br>- More data consumption<br>- Might not be supported by some devices<br>- Might not notice much difference on some animated films</td></tr></table>
126
- <h2>Conclusion</h2>
127
- <p>Kokurikozaka kara, or From Up on Poppy Hill, is a wonderful film that you can enjoy watching online in high definition. It is a film that tells a story of love, friendship, and history, set in the 1960s Japan. It is also a film that showcases the talent and charm of Studio Ghibli and its creators.</p>
128
- <p>If you want to watch kokurikozaka kara online, you have many options to choose from. You can stream it on various platforms and devices, depending on your preferences and budget. You can also choose between 720p and 1080p resolution, depending on your device's screen size and resolution, your internet speed and data usage, and your personal expectations.</p>
129
- <p>No matter what you choose, we hope you have a great time watching kokurikozaka kara online. It is a film that will make you smile, cry, and dream.</p>
130
- <h2>FAQs</h2>
131
- <p>Here are some frequently asked questions about kokurikozaka kara and watching it online:</p>
132
- <ul>
133
- <li>Q: Is kokurikozaka kara based on a true story?<br>A: No, kokurikozaka kara is not based on a true story. It is based on a manga series by Tetsurō Sayama and Chizuru Takahashi. However, it does depict some historical events and aspects of Japan in the 1960s.</li>
134
- <li>Q: Is kokurikozaka kara suitable for children?<br>A: Yes, kokurikozaka kara is suitable for children. It is rated PG by the MPAA for mild thematic elements and some incidental smoking images. It is also rated U by the BBFC for very mild threat. It is a family-friendly film that can be enjoyed by people of all ages.</li>
135
- <li>Q: Where can I find the soundtrack of kokurikozaka kara?<br>A: You can find the soundtrack of kokurikozaka kara on various music platforms and services, such as Spotify , Apple Music , YouTube Music , Amazon Music , etc. You can also buy the CD or digital album from online stores, such as Amazon , iTunes , etc.</li>
136
- <li>Q: Who sings the theme song of kokurikozaka kara?<br>A: The theme song of kokurikozaka kara is called "Summer of Farewells — From Up on Poppy Hill" (「さよならの夏~コクリコ坂から~」, "Sayonara no Natsu ~Kokuriko-zaka kara~"). It is sung by Aoi Teshima , a Japanese singer and voice actress who also voiced Theru in Tales from Earthsea . She also sings another song in the film called "Breakfast Song" (「朝ご飯の歌」, "Asagohan no Uta").</li>
137
- <li>Q: What are some other films by Studio Ghibli that I can watch online?<br>A: There are many other films by Studio Ghibli that you can watch online, such as Spirited Away , My Neighbor Totoro , Princess Mononoke , Howl's Moving Castle , Ponyo , The Wind Rises , etc. You can find them on various streaming platforms and services, such as Netflix , Amazon Prime Video , Hulu , HBO Max , etc.</li>
138
- </ul>
139
- </p> 0a6ba089eb<br />
140
- <br />
141
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/ActivationacronisTIH 6514 6.md DELETED
@@ -1,10 +0,0 @@
1
- <br />
2
- <p>If you've recently installed antivirus programs or adware, then search your computer for 'activationacronistih.exe' to detect them. While activationacronistih.exe activationacronistih.exe viruses, you can delete activationacronistih.exe. Malware may not always change the name, but removing it is always advisable. Find activationacronistih.exe and delete it.</p>
3
- <h2>activationacronisTIH 6514 6</h2><br /><p><b><b>Download</b> &#11088; <a href="https://imgfil.com/2uy1xQ">https://imgfil.com/2uy1xQ</a></b></p><br /><br />
4
- <p>Deleting the files responsible for the problem may help in stopping them from activating. First, follow the instructions in the preceding article to locate the activationacronistih.exe file and delete it. If you are not sure which program is causing the activationacronistih.exe problems, you can use a Web browser to access your control panel, and then locate the window or icon that has activationacronistih.exe on the error screen.</p>
5
- <p>After the restore finishes, then you can restart your computer. If this doesn't resolve your activationacronistih.exe problem, you can proceed with the next step, which is to run a virus scan on your hard drive.</p>
6
- <p>ActivationAcronisTIH is an helpful tool that can be run on the infected PC to help fix. It can fix the activationacronistih.exe errors, optimize the PC's performance, and protect it from other threats. The tool is safe, it does not void your product’s license. The activationacronistih.exe is a part of Microsoft Windows Operating System program developed by not by Acronis.Some activationacronistih.exe errors may have the following reasons:</p>
7
- <p></p>
8
- <p> - Some system files have been incorrectly installed, corrupted or removed. - Some other programs are misbehaving on your system. Fixing activationacronistih.exe may require you to perform different steps, depending on the cause of the error.</p> 899543212b<br />
9
- <br />
10
- <br />
spaces/1phancelerku/anime-remove-background/Download Trucker - Overloaded Trucks APK and Haul Ore for Profit.md DELETED
@@ -1,132 +0,0 @@
1
-
2
- <h1>Trucker - Overloaded Trucks APK: A Fun and Challenging Driving Game</h1>
3
- <p>Do you love driving big trucks and hauling heavy loads? Do you enjoy testing your skills and reflexes on different terrains and weather conditions? If you answered yes, then you should try Trucker - Overloaded Trucks APK, a fun and challenging driving game for Android devices.</p>
4
- <h2>trucker overloaded trucks apk</h2><br /><p><b><b>Download</b> &middot;&middot;&middot;&middot;&middot; <a href="https://jinyurl.com/2uNRL2">https://jinyurl.com/2uNRL2</a></b></p><br /><br />
5
- <h2>What is Trucker - Overloaded Trucks APK?</h2>
6
- <p>Trucker - Overloaded Trucks APK is a game developed by LQG, a studio that specializes in creating realistic and immersive simulation games. In this game, you will take the role of a truck driver who has to deliver various cargoes across different locations. You will have to deal with different obstacles, such as traffic, bridges, tunnels, hills, mud, snow, and more. You will also have to manage your fuel, speed, brakes, and steering to avoid accidents and damages.</p>
7
- <h3>The gameplay of Trucker - Overloaded Trucks APK</h3>
8
- <p>The gameplay of Trucker - Overloaded Trucks APK is simple but addictive. You will start with a basic truck and a simple cargo. You will have to drive from point A to point B without losing your cargo or crashing your truck. You will earn money for each successful delivery. You can use the money to upgrade your truck or buy new trucks with different features and capacities. You can also unlock new cargoes and locations as you progress in the game.</p>
9
- <h3>The features of Trucker - Overloaded Trucks APK</h3>
10
- <p>Trucker - Overloaded Trucks APK has many features that make it an enjoyable and realistic driving game. Some of these features are:</p>
11
- <ul>
12
- <li>High-quality graphics and sound effects that create a realistic atmosphere.</li>
13
- <li>Multiple camera angles that let you view your truck from different perspectives.</li>
14
- <li>A variety of trucks and cargoes that have different characteristics and challenges.</li>
15
- <li>A dynamic weather system that affects the driving conditions and the physics of your truck.</li>
16
- <li>A map that shows your current location, destination, and route.</li>
17
- <li>A leaderboard that ranks your performance against other players around the world.</li>
18
- </ul>
19
- <h2>How to download and install Trucker - Overloaded Trucks APK on your Android device?</h2>
20
- <p>If you want to play Trucker - Overloaded Trucks APK on your Android device, you will need to download and install it from a reliable source. Here are the requirements and steps for doing so:</p>
21
- <h3>The requirements for Trucker - Overloaded Trucks APK</h3>
22
- <p>To play Trucker - Overloaded Trucks APK on your Android device, you will need:</p>
23
- <p>trucker overloaded trucks game download<br />
24
- trucker overloaded trucks simulator apk<br />
25
- trucker overloaded trucks mod apk<br />
26
- trucker overloaded trucks android app<br />
27
- trucker overloaded trucks online emulator<br />
28
- trucker overloaded trucks free apk<br />
29
- trucker overloaded trucks gameplay<br />
30
- trucker overloaded trucks latest version apk<br />
31
- trucker overloaded trucks ore transport<br />
32
- trucker overloaded trucks dump truck driver<br />
33
- trucker overloaded trucks apk for pc<br />
34
- trucker overloaded trucks review<br />
35
- trucker overloaded trucks cheats<br />
36
- trucker overloaded trucks tips and tricks<br />
37
- trucker overloaded trucks best price<br />
38
- trucker overloaded trucks apk mirror<br />
39
- trucker overloaded trucks offline apk<br />
40
- trucker overloaded trucks unlimited money apk<br />
41
- trucker overloaded trucks realistic physics<br />
42
- trucker overloaded trucks graphics quality<br />
43
- trucker overloaded trucks update apk<br />
44
- trucker overloaded trucks trailer<br />
45
- trucker overloaded trucks features<br />
46
- trucker overloaded trucks how to play<br />
47
- trucker overloaded trucks system requirements<br />
48
- trucker overloaded trucks apk pure<br />
49
- trucker overloaded trucks hack apk<br />
50
- trucker overloaded trucks premium apk<br />
51
- trucker overloaded trucks full version apk<br />
52
- trucker overloaded trucks no ads apk<br />
53
- trucker overloaded trucks fun and addictive<br />
54
- trucker overloaded trucks challenges and missions<br />
55
- trucker overloaded trucks buy and sell ore<br />
56
- trucker overloaded trucks earn money and upgrade<br />
57
- trucker overloaded trucks different types of ore<br />
58
- trucker overloaded trucks various locations and routes<br />
59
- trucker overloaded trucks realistic sound effects<br />
60
- trucker overloaded trucks easy controls and interface<br />
61
- trucker overloaded trucks support and feedback<br />
62
- trucker overloaded trucks bug fixes and improvements</p>
63
- <ul>
64
- <li>An Android device that runs on Android 4.4 or higher.</li>
65
- <li>At least 65 MB of free storage space on your device.</li>
66
- <li>A stable internet connection to download the game and access its online features.</li>
67
- </ul>
68
- <h3>The steps to download and install Trucker - Overloaded Trucks APK</h3>
69
- <p>To download and install Trucker - Overloaded Trucks APK on your Android device, follow these steps:</p>
70
- <ol>
71
- <li>Go to the official download page to get the latest version of Trucker - Overloaded Trucks APK.</li>
72
- <li>Once the download is complete, locate the file on your device and tap on it to start the installation process.</li>
73
- <li>If you see a warning message that says "For your security, your phone is not allowed to install unknown apps from this source", go to your device settings and enable the option to install apps from unknown sources.</li>
74
- <li>Follow the on-screen instructions to complete the installation process.</li>
75
- <li>Once the installation is done, you can launch the game from your app drawer or home screen and enjoy playing Trucker - Overloaded Trucks APK.</li>
76
- </ol>
77
- <h2>How to play Trucker - Overloaded Trucks APK?</h2>
78
- <p>Playing Trucker - Overloaded Trucks APK is easy and fun. Here are the controls and tips for playing the game:</p>
79
- <h3>The controls of Trucker - Overloaded Trucks APK</h3>
80
- <p>The controls of Trucker - Overloaded Trucks APK are simple and intuitive. You can use the following buttons on your screen to control your truck:</p>
81
- <ul>
82
- <li>The gas pedal to accelerate your truck.</li>
83
- <li>The brake pedal to slow down or stop your truck.</li>
84
- <li>The steering wheel to turn your truck left or right.</li>
85
- <li>The horn to honk at other vehicles or pedestrians.</li>
86
- <li>The camera button to switch between different camera angles.</li>
87
- <li>The pause button to pause the game or access the settings menu.</li>
88
- </ul>
89
- <h3>The tips and tricks for Trucker - Overloaded Trucks APK</h3>
90
- <p>To play Trucker - Overloaded Trucks APK well, you will need some tips and tricks. Here are some of them:</p>
91
- <ul>
92
- <li>Pay attention to the road signs and traffic rules. They will help you avoid accidents and penalties.</li>
93
- <li>Balance your speed and fuel consumption. Driving too fast will consume more fuel and increase the risk of losing control. Driving too slow will waste time and reduce your earnings.</li>
94
- <li>Choose the right truck and cargo for each mission. Different trucks and cargoes have different advantages and disadvantages. For example, some trucks have more power and speed, but less fuel efficiency and maneuverability. Some cargoes are lighter and easier to transport, but less valuable and rewarding.</li>
95
- <li>Upgrade your truck or buy new trucks as you earn more money. Upgrading your truck will improve its performance and durability. Buying new trucks will give you access to more missions and challenges.</li>
96
- <li>Use the map to plan your route and avoid getting lost. The map will show you your current location, destination, and route. You can also zoom in or out of the map to see more details.</li>
97
- </ul>
98
- <h2>Why should you play Trucker - Overloaded Trucks APK?</h2>
99
- <p>Trucker - Overloaded Trucks APK is a game that will give you a lot of fun and satisfaction. Here are some reasons why you should play it:</p>
100
- <h3>The benefits of playing Trucker - Overloaded Trucks APK</h3>
101
- <p>Playing Trucker - Overloaded Trucks APK will give you many benefits, such as:</p>
102
- <ul>
103
- <li>Improving your driving skills and reflexes. You will learn how to drive a big truck in different situations and environments.</li>
104
- <li>Enhancing your creativity and problem-solving abilities. You will have to find the best way to deliver your cargo safely and efficiently.</li>
105
- <li>Relaxing your mind and relieving your stress. You will enjoy the scenery and the sound of your truck engine as you drive along the road.</li>
106
- <li>Entertaining yourself and killing time. You will never get bored with the variety of missions and challenges that the game offers.</li>
107
- </ul>
108
- <h3>The drawbacks of playing Trucker - Overloaded Trucks APK</h3>
109
- <p>Playing Trucker - Overloaded Trucks APK also has some drawbacks, such as:</p>
110
- <ul>
111
- <li>Taking up some storage space on your device. The game requires at least 65 MB of free storage space on your device, which may be a problem if you have a low-end device or limited storage space.</li>
112
- <li>Consuming some battery power on your device. The game uses high-quality graphics and sound effects, which may drain your battery faster than usual.</li>
113
- <li>Requiring an internet connection to access some features. The game needs an internet connection to download the game, update the game, access the leaderboard, and share your achievements with other players.</li>
114
- </ul>
115
- <h2>Conclusion</h2>
116
- <p>In conclusion, Trucker - Overloaded Trucks APK is a fun and challenging driving game that tests your skills and reflexes as a truck driver. You have to deliver various cargoes across different locations while dealing with obstacles such as traffic, bridges, tunnels, hills, mud, snow, and more, and you have to manage your fuel, speed, brakes, and steering to avoid accidents and damage. You earn money for each successful delivery, which you can use to upgrade your truck or buy new trucks with different features and capacities, and you unlock new cargoes and locations as you progress. The game has high-quality graphics and sound effects that create a realistic atmosphere, multiple camera angles that let you view your truck from different perspectives, a dynamic weather system that affects driving conditions and the physics of your truck, a map that shows your current location, destination, and route, and a leaderboard that ranks your performance against other players around the world. Playing Trucker - Overloaded Trucks APK will improve your driving skills and reflexes, enhance your creativity and problem-solving abilities, relax your mind, and keep you entertained. However, it also has some drawbacks, such as taking up storage space on your device, consuming battery power, and requiring an internet connection for some features. If you are looking for a fun and challenging driving game for your Android device, you should try Trucker - Overloaded Trucks APK.</p>
117
- <h2>FAQs</h2>
118
- <p>Here are some frequently asked questions about Trucker - Overloaded Trucks APK:</p>
119
- <ol>
120
- <li>What is the latest version of Trucker - Overloaded Trucks APK?</li>
121
- <p>The latest version of Trucker - Overloaded Trucks APK is 1.0.3, which was released on June 15, 2023.</p>
122
- <li>How many trucks and cargoes are available in Trucker - Overloaded Trucks APK?</li>
123
- <p>There are 10 trucks and 20 cargoes available in Trucker - Overloaded Trucks APK, each with different characteristics and challenges.</p>
124
- <li>How can I share my achievements with other players in Trucker - Overloaded Trucks APK?</li>
125
- <p>You can share your achievements with other players in Trucker - Overloaded Trucks APK by connecting your game to your Facebook account. You can also invite your friends to play the game with you.</p>
126
- <li>How can I contact the developer of Trucker - Overloaded Trucks APK?</li>
127
- <p>You can contact the developer of Trucker - Overloaded Trucks APK by sending an email to [email protected] or visiting their website at https://lqgstudio.com/.</p>
128
- <li>Is Trucker - Overloaded Trucks APK safe to download and install?</li>
129
- <p>Yes, Trucker - Overloaded Trucks APK is safe to download and install from a reliable source. However, you should always scan the file for viruses before installing it on your device.</p>
130
- </ol></p> 401be4b1e0<br />
131
- <br />
132
- <br />
 
spaces/1phancelerku/anime-remove-background/Download apk 5play.ru No ads no limits no worries.md DELETED
@@ -1,121 +0,0 @@
1
-
2
- <h1>How to Download APK 5play.ru for Android</h1>
3
- <p>If you are a fan of android games, you might have heard of APK 5play.ru. It is a website that offers free downloads of android games, including mods, hacks, and premium versions. In this article, we will tell you everything you need to know about APK 5play.ru, including what it is, why you should download it, how to download it, how to use it, and what are its benefits and drawbacks.</p>
4
- <h2>What is APK 5play.ru?</h2>
5
- <p>APK 5play.ru is a website that provides free android games for download. It has a huge collection of games from various genres and categories, such as action, adventure, arcade, puzzle, racing, simulation, sports, strategy, and more. You can find popular games like PUBG Mobile, Minecraft, GTA San Andreas, Among Us, Genshin Impact, etc., as well as indie games from different developers. You can also download mods and hacks for some games, which give you unlimited resources, features, or cheats.</p>
6
- <h2>download apk 5play.ru</h2><br /><p><b><b>Download</b> &#128504; <a href="https://jinyurl.com/2uNSy8">https://jinyurl.com/2uNSy8</a></b></p><br /><br />
7
- <p>APK 5play.ru is not only a website but also a platform that supports android gamers. It has a user-friendly interface that allows you to easily search, browse, and download games. It also has a community that provides feedback and ratings for each game. You can read the reviews of other users, leave your own comments, and rate the games according to your experience. You can also request new games or mods from the developers or other users.</p>
8
- <h2>Why download APK 5play.ru?</h2>
9
- <p>There are many reasons why you should download APK 5play.ru for your android device. Here are some of them:</p>
10
- <h3>To enjoy the latest and best android games for free</h3>
11
- <p>One of the main reasons why you should download APK 5play.ru is that it gives you free access to premium and paid games. You don't have to spend any money to enjoy the latest and best android games. You can download them directly from the website without any registration or subscription. You can also update them regularly to get new features and fixes.</p>
12
- <h3>To access exclusive mods and hacks for popular games</h3>
13
- <p>Another reason why you should download APK 5play.ru is that it offers exclusive mods and hacks for popular games. You can get unlimited resources, features, or cheats for games like PUBG Mobile, GTA San Andreas, Among Us, Genshin Impact, etc. You can also customize the games according to your preferences and needs. You can change the graphics, the gameplay, the characters, the items, and more. You can also unlock new levels, modes, skins, and weapons.</p>
14
- <h3>To discover new and interesting games from different developers</h3>
15
- <p>A third reason why you should download APK 5play.ru is that it helps you discover new and interesting games from different developers. You can find games that are not available on the Google Play Store or other platforms. You can also find games that are unique, creative, and innovative. You can explore different genres and categories of games and find the ones that suit your taste and mood.</p>
16
- <h2>How to download APK 5play.ru?</h2>
17
- <p>Downloading APK 5play.ru for your android device is very easy and simple. You just need to follow these steps:</p>
18
- <h3>Step 1: Visit the official website of APK 5play.ru</h3>
19
- <p>The first step is to visit the official website of APK 5play.ru. You can use any browser on your device to access the website. The website has a domain name of https://5play.ru/en/. You can also use a VPN or proxy service if the website is blocked or restricted in your region.</p>
20
- <p>download apk 5play.ru games<br />
21
- download apk 5play.ru mods<br />
22
- download apk 5play.ru android<br />
23
- download apk 5play.ru obb<br />
24
- download apk 5play.ru latest version<br />
25
- download apk 5play.ru free<br />
26
- download apk 5play.ru offline<br />
27
- download apk 5play.ru online<br />
28
- download apk 5play.ru action<br />
29
- download apk 5play.ru adventure<br />
30
- download apk 5play.ru simulation<br />
31
- download apk 5play.ru strategy<br />
32
- download apk 5play.ru racing<br />
33
- download apk 5play.ru sports<br />
34
- download apk 5play.ru puzzle<br />
35
- download apk 5play.ru arcade<br />
36
- download apk 5play.ru rpg<br />
37
- download apk 5play.ru shooter<br />
38
- download apk 5play.ru horror<br />
39
- download apk 5play.ru casual<br />
40
- download apk 5play.ru sandbox<br />
41
- download apk 5play.ru platformer<br />
42
- download apk 5play.ru fighting<br />
43
- download apk 5play.ru stealth<br />
44
- download apk 5play.ru survival<br />
45
- download apk 5play.ru tower defense<br />
46
- download apk 5play.ru card<br />
47
- download apk 5play.ru board<br />
48
- download apk 5play.ru trivia<br />
49
- download apk 5play.ru word<br />
50
- download apk 5play.ru educational<br />
51
- download apk 5play.ru music<br />
52
- download apk 5play.ru role playing<br />
53
- download apk 5play.ru multiplayer<br />
54
- download apk 5play.ru co-op<br />
55
- download apk 5play.ru vr<br />
56
- download apk 5play.ru ar<br />
57
- download apk 5play.ru premium<br />
58
- download apk 5play.ru unlocked<br />
59
- download apk 5play.ru hacked<br />
60
- download apk 5play.ru cracked<br />
61
- download apk 5play.ru patched<br />
62
- download apk 5play.ru full version<br />
63
- download apk 5play.ru pro version<br />
64
- download apk 5play.ru modded version<br />
65
- download apk 5play.ru unlimited money<br />
66
- download apk 5play.ru unlimited gems<br />
67
- download apk 5play.ru unlimited coins<br />
68
- download apk 5play.ru unlimited lives</p>
69
- <h3>Step 2: Choose the game you want to download</h3>
70
- <p>The second step is to choose the game you want to download. You can use the search bar on the top of the website to type the name of the game or the keyword related to it. You can also use the filters on the left side of the website to narrow down your search by genre, category, rating, popularity, etc. You can also browse through the featured, new, or updated games on the homepage of the website.</p>
71
- <h3>Step 3: Click on the download button and select the APK or OBB file</h3>
72
- <p>The third step is to click on the download button and select the APK or OBB file. Once you have found the game you want to download, click on its name or image to open its page. On the game page, you will see a green download button on the right side. Click on it and you will see a list of files available for download. You can choose either the APK file or the OBB file depending on your preference. The APK file is the application file that installs the game on your device. The OBB file is the data file that contains the additional content of the game such as graphics, sounds, etc.</p>
73
- <h3>Step 4: Install the APK file on your device and copy the OBB file to the appropriate folder</h3>
74
- <p>The fourth step is to install the APK file on your device and copy the OBB file to the appropriate folder. After downloading the files, you need to install the APK file on your device. To do that, you need to enable the installation of unknown sources on your device. You can do that by going to the settings of your device, then security, then unknown sources. Once you have enabled that, you can tap on the APK file and follow the instructions to install it. If you have downloaded the OBB file, you need to copy it to the right folder on your device. You can do that by using a file manager app or a USB cable. The OBB file should be copied to the folder named Android/obb/ on your device's internal or external storage. Make sure that the OBB file has the same name as the game's package name.</p>
75
- <h3>Step 5: Launch the game and enjoy</h3>
76
- <p>The fifth and final step is to launch the game and enjoy. Once you have installed the APK file and copied the OBB file, you can launch the game from your device's app drawer or home screen. You can also create a shortcut for the game on your device's desktop for easy access. You can now enjoy the game with all its features and mods.</p>
77
- <h2>How to use APK 5play.ru?</h2>
78
- <p>Using APK 5play.ru is very easy and simple as well. You just need to follow these tips:</p>
79
- <h3>Browse through the different categories and genres of games</h3>
80
- <p>One of the best ways to use APK 5play.ru is to browse through the different categories and genres of games. You can find games from various genres such as action, adventure, arcade, puzzle, racing, simulation, sports, strategy, and more. You can also find games from different categories such as online, offline, multiplayer, single-player, 3D, 2D, etc. You can also sort the games by popularity, rating, date, or alphabet.</p>
81
- <h3>Read the description and reviews of each game</h3>
82
- <p>Another way to use APK 5play.ru is to read the description and reviews of each game. You can find useful information about each game such as its features, gameplay, graphics, controls, requirements, etc. You can also read the reviews of other users who have downloaded and played the game. You can see their ratings, comments, feedback, and suggestions. You can also leave your own review and rating for each game.</p>
83
- <h3>Check the compatibility and requirements of each game</h3>
84
- <p>A third way to use APK 5play.ru is to check the compatibility and requirements of each game. You can see if the game is compatible with your device's model, version of android, screen size, etc. You can also see if the game requires any additional permissions or data such as internet connection, storage space, location access, etc. You can also see if the game has any in-app purchases or ads.</p>
85
- <h3>Update the games regularly to get new features and fixes</h3>
86
- <p>A fourth way to use APK 5play.ru is to update the games regularly to get new features and fixes. You can see if there are any new versions or updates available for each game on the website. You can also enable notifications for updates on your device's settings. You can download and install the updates easily from the website or from your device's app manager.</p>
87
- <h2>What are the benefits of APK 5play.ru?</h2>
88
- <p>APK 5play.ru has many benefits for android gamers. Here are some of them:</p>
89
- <h3>Free access to premium and paid games</h3>
90
- <p>One of the main benefits of APK 5play.ru is that it gives you free access to premium and paid games. You don't have to spend any money to enjoy the latest and best android games. You can download them directly from the website without any registration or subscription. You can also update them regularly to get new features and fixes.</p>
91
- <h3>Unlimited resources and features with mods and hacks</h3>
92
- <p>Another benefit of APK 5play.ru is that it offers unlimited resources and features with mods and hacks for some games. You can get unlimited coins, gems, lives, ammo, health, etc. for games like PUBG Mobile, GTA San Andreas, Among Us, Genshin Impact, etc. You can also customize the games according to your preferences and needs. You can change the graphics, the gameplay, the characters, the items, and more. You can also unlock new levels, modes, skins, and weapons.</p>
93
- <h3>High-quality graphics and performance with optimized games</h3>
94
- <p>A third benefit of APK 5play.ru is that it provides high-quality graphics and performance with optimized games. You can enjoy the games with smooth and fast gameplay, stunning visuals, realistic sounds, and immersive effects. You can also adjust the settings of the games to match your device's capabilities and preferences. You can also save battery and data by playing offline or online games.</p>
95
- <h3>Safe and secure downloads with no viruses or malware</h3>
96
- <p>A fourth benefit of APK 5play.ru is that it ensures safe and secure downloads with no viruses or malware. You don't have to worry about harming your device or compromising your privacy by downloading games from the website. The website has a strict policy of checking and verifying each game before uploading it to the website. The website also uses encryption and protection technologies to prevent any unauthorized access or interference.</p>
97
- <h2>What are the drawbacks of APK 5play.ru?</h2>
98
- <p>APK 5play.ru has some drawbacks as well for android gamers. Here are some of them:</p>
99
- <h3>Potential risk of violating the terms and conditions of some games</h3>
100
- <p>One of the main drawbacks of APK 5play.ru is that it poses a potential risk of violating the terms and conditions of some games. You might be breaking the rules or laws of some games by downloading or using mods or hacks for them. You might also be infringing the intellectual property rights or copyrights of some game developers or publishers by downloading or using their games without their permission. This could result in legal actions or penalties against you.</p>
101
- <h3>Possible compatibility issues with some devices or versions of android</h3>
102
- <p>Another drawback of APK 5play.ru is that it might cause compatibility issues with some devices or versions of android. You might not be able to download or install some games on your device due to its model, version of android, screen size, etc. You might also experience crashes, glitches, errors, or bugs with some games due to their requirements, permissions, data, etc. You might also face difficulties in updating or uninstalling some games from your device.</p>
103
- <h3>Occasional bugs or errors with some games or mods</h3>
104
- <p>A third drawback of APK 5play.ru is that it might have occasional bugs or errors with some games or mods. You might encounter problems with some games or mods such as missing content, corrupted files, wrong language, invalid links, etc. You might also find some games or mods that are outdated, incomplete, or fake. You might also face issues with some games or mods that are not compatible with each other or with your device.</p>
105
- <h2>Conclusion</h2>
106
- <p>APK 5play.ru is a website that offers free downloads of android games, including mods, hacks, and premium versions. It has many benefits for android gamers such as free access to premium and paid games, unlimited resources and features with mods and hacks, high-quality graphics and performance with optimized games, and safe and secure downloads with no viruses or malware. It also has some drawbacks such as potential risk of violating the terms and conditions of some games, possible compatibility issues with some devices or versions of android, and occasional bugs or errors with some games or mods.</p>
107
- <p>If you are interested in downloading APK 5play.ru for your android device, you can follow the steps mentioned above in this article. You can also use the tips provided above to use APK 5play.ru effectively and efficiently. However, you should also be aware of the risks and consequences involved in downloading or using APK 5play.ru. You should always respect the rights and rules of the game developers and publishers as well as your own device's security and privacy.</p>
108
- <h2>FAQs</h2>
109
- <p>Here are some frequently asked questions about APK 5play.ru:</p>
110
- <h4>Q: Is APK 5play.ru legal?</h4>
111
- <p>A: APK 5play.ru is not legal in some countries or regions where downloading or using pirated or modded games is prohibited by law. You should also check the laws and regulations of your country or region before downloading or using APK 5play.ru.</p>
112
- <h4>Q: Is APK 5play.ru safe?</h4>
113
- <p>A: APK 5play.ru is safe in terms of downloading and installing games without any viruses or malware. The website has a strict policy of checking and verifying each game before uploading it to the website. The website also uses encryption and protection technologies to prevent any unauthorized access or interference. However, APK 5play.ru is not safe in terms of violating the terms and conditions of some games or compromising your device's security or privacy. You should always be careful and cautious when downloading or using APK 5play.ru.</p>
114
- <h4>Q: How to update APK 5play.ru?</h4>
115
- <p>A: You can update APK 5play.ru by visiting the official website of APK 5play.ru and downloading the latest version of the games you want. You can also enable notifications for updates on your device's settings. You can download and install the updates easily from the website or from your device's app manager.</p>
116
- <h4>Q: How to uninstall APK 5play.ru?</h4>
117
- <p>A: You can uninstall APK 5play.ru by deleting the APK file and the OBB file from your device's storage. You can also use a file manager app or a USB cable to do that. You can also uninstall the games you have downloaded from APK 5play.ru by using your device's app manager or settings.</p>
118
- <h4>Q: How to contact APK 5play.ru?</h4>
119
- <p>A: You can contact APK 5play.ru by using the feedback form on the website. You can also use the email address, phone number, or social media accounts provided on the website. You can also use the comment section on each game page to communicate with other users or developers.</p>
120
- <br />
121
- <br />
 
spaces/4eJIoBek/Stable_Diffusion_1.4_openvino/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: (Not working for now) Stable Diffusion 1.4 openvino
3
- emoji: 🌚
4
- colorFrom: blue
5
- colorTo: pink
6
- sdk: streamlit
7
- sdk_version: 1.15.2
8
- app_file: demo_web.py
9
- pinned: false
10
- license: apache-2.0
11
- duplicated_from: timboie/test
12
- ---
 
spaces/AI-Hobbyist/Hoyo-RVC/go-web.bat DELETED
@@ -1,2 +0,0 @@
1
- runtime\python.exe infer-web.py --pycmd runtime\python.exe --port 7897
2
- pause
 
 
 
spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/image_degradation/bsrgan_light.py DELETED
@@ -1,650 +0,0 @@
1
- # -*- coding: utf-8 -*-
2
- import numpy as np
3
- import cv2
4
- import torch
5
-
6
- from functools import partial
7
- import random
8
- from scipy import ndimage
9
- import scipy
10
- import scipy.stats as ss
11
- from scipy.interpolate import interp2d
12
- from scipy.linalg import orth
13
- import albumentations
14
-
15
- import ldm.modules.image_degradation.utils_image as util
16
-
17
- """
18
- # --------------------------------------------
19
- # Super-Resolution
20
- # --------------------------------------------
21
- #
22
- # Kai Zhang ([email protected])
23
- # https://github.com/cszn
24
- # From 2019/03--2021/08
25
- # --------------------------------------------
26
- """
27
-
28
-
29
- def modcrop_np(img, sf):
30
- '''
31
- Args:
32
- img: numpy image, WxH or WxHxC
33
- sf: scale factor
34
- Return:
35
- cropped image
36
- '''
37
- w, h = img.shape[:2]
38
- im = np.copy(img)
39
- return im[:w - w % sf, :h - h % sf, ...]
40
-
41
-
42
- """
43
- # --------------------------------------------
44
- # anisotropic Gaussian kernels
45
- # --------------------------------------------
46
- """
47
-
48
-
49
- def analytic_kernel(k):
50
- """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)"""
51
- k_size = k.shape[0]
52
- # Calculate the big kernels size
53
- big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2))
54
- # Loop over the small kernel to fill the big one
55
- for r in range(k_size):
56
- for c in range(k_size):
57
- big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k
58
- # Crop the edges of the big kernel to ignore very small values and increase run time of SR
59
- crop = k_size // 2
60
- cropped_big_k = big_k[crop:-crop, crop:-crop]
61
- # Normalize to 1
62
- return cropped_big_k / cropped_big_k.sum()
63
-
64
-
65
- def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6):
66
- """ generate an anisotropic Gaussian kernel
67
- Args:
68
- ksize : e.g., 15, kernel size
69
- theta : [0, pi], rotation angle range
70
- l1 : [0.1,50], scaling of eigenvalues
71
- l2 : [0.1,l1], scaling of eigenvalues
72
- If l1 = l2, will get an isotropic Gaussian kernel.
73
- Returns:
74
- k : kernel
75
- """
76
-
77
- v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.]))
78
- V = np.array([[v[0], v[1]], [v[1], -v[0]]])
79
- D = np.array([[l1, 0], [0, l2]])
80
- Sigma = np.dot(np.dot(V, D), np.linalg.inv(V))
81
- k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize)
82
-
83
- return k
84
-
85
-
86
- def gm_blur_kernel(mean, cov, size=15):
87
- center = size / 2.0 + 0.5
88
- k = np.zeros([size, size])
89
- for y in range(size):
90
- for x in range(size):
91
- cy = y - center + 1
92
- cx = x - center + 1
93
- k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov)
94
-
95
- k = k / np.sum(k)
96
- return k
97
-
98
-
99
- def shift_pixel(x, sf, upper_left=True):
100
- """shift pixel for super-resolution with different scale factors
101
- Args:
102
- x: WxHxC or WxH
103
- sf: scale factor
104
- upper_left: shift direction
105
- """
106
- h, w = x.shape[:2]
107
- shift = (sf - 1) * 0.5
108
- xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0)
109
- if upper_left:
110
- x1 = xv + shift
111
- y1 = yv + shift
112
- else:
113
- x1 = xv - shift
114
- y1 = yv - shift
115
-
116
- x1 = np.clip(x1, 0, w - 1)
117
- y1 = np.clip(y1, 0, h - 1)
118
-
119
- if x.ndim == 2:
120
- x = interp2d(xv, yv, x)(x1, y1)
121
- if x.ndim == 3:
122
- for i in range(x.shape[-1]):
123
- x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1)
124
-
125
- return x
126
-
127
-
128
- def blur(x, k):
129
- '''
130
- x: image, NxcxHxW
131
- k: kernel, Nx1xhxw
132
- '''
133
- n, c = x.shape[:2]
134
- p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2
135
- x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate')
136
- k = k.repeat(1, c, 1, 1)
137
- k = k.view(-1, 1, k.shape[2], k.shape[3])
138
- x = x.view(1, -1, x.shape[2], x.shape[3])
139
- x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c)
140
- x = x.view(n, c, x.shape[2], x.shape[3])
141
-
142
- return x
143
-
144
-
145
- def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0):
146
- """"
147
- # modified version of https://github.com/assafshocher/BlindSR_dataset_generator
148
- # Kai Zhang
149
- # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var
150
- # max_var = 2.5 * sf
151
- """
152
- # Set random eigen-vals (lambdas) and angle (theta) for COV matrix
153
- lambda_1 = min_var + np.random.rand() * (max_var - min_var)
154
- lambda_2 = min_var + np.random.rand() * (max_var - min_var)
155
- theta = np.random.rand() * np.pi # random theta
156
- noise = -noise_level + np.random.rand(*k_size) * noise_level * 2
157
-
158
- # Set COV matrix using Lambdas and Theta
159
- LAMBDA = np.diag([lambda_1, lambda_2])
160
- Q = np.array([[np.cos(theta), -np.sin(theta)],
161
- [np.sin(theta), np.cos(theta)]])
162
- SIGMA = Q @ LAMBDA @ Q.T
163
- INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :]
164
-
165
- # Set expectation position (shifting kernel for aligned image)
166
- MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2)
167
- MU = MU[None, None, :, None]
168
-
169
- # Create meshgrid for Gaussian
170
- [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1]))
171
- Z = np.stack([X, Y], 2)[:, :, :, None]
172
-
173
- # Calculate Gaussian for every pixel of the kernel
174
- ZZ = Z - MU
175
- ZZ_t = ZZ.transpose(0, 1, 3, 2)
176
- raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise)
177
-
178
- # shift the kernel so it will be centered
179
- # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor)
180
-
181
- # Normalize the kernel and return
182
- # kernel = raw_kernel_centered / np.sum(raw_kernel_centered)
183
- kernel = raw_kernel / np.sum(raw_kernel)
184
- return kernel
185
-
186
-
187
- def fspecial_gaussian(hsize, sigma):
188
- hsize = [hsize, hsize]
189
- siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0]
190
- std = sigma
191
- [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1))
192
- arg = -(x * x + y * y) / (2 * std * std)
193
- h = np.exp(arg)
194
- h[h < np.finfo(float).eps * h.max()] = 0 # np.finfo, not the deprecated scipy.finfo alias
195
- sumh = h.sum()
196
- if sumh != 0:
197
- h = h / sumh
198
- return h
199
-
200
-
201
- def fspecial_laplacian(alpha):
202
- alpha = max([0, min([alpha, 1])])
203
- h1 = alpha / (alpha + 1)
204
- h2 = (1 - alpha) / (alpha + 1)
205
- h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]]
206
- h = np.array(h)
207
- return h
208
-
209
-
210
- def fspecial(filter_type, *args, **kwargs):
211
- '''
212
- python code from:
213
- https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py
214
- '''
215
- if filter_type == 'gaussian':
216
- return fspecial_gaussian(*args, **kwargs)
217
- if filter_type == 'laplacian':
218
- return fspecial_laplacian(*args, **kwargs)
219
-
220
-
221
- """
222
- # --------------------------------------------
223
- # degradation models
224
- # --------------------------------------------
225
- """
226
-
227
-
228
- def bicubic_degradation(x, sf=3):
229
- '''
230
- Args:
231
- x: HxWxC image, [0, 1]
232
- sf: down-scale factor
233
- Return:
234
- bicubicly downsampled LR image
235
- '''
236
- x = util.imresize_np(x, scale=1 / sf)
237
- return x
238
-
239
-
240
- def srmd_degradation(x, k, sf=3):
241
- ''' blur + bicubic downsampling
242
- Args:
243
- x: HxWxC image, [0, 1]
244
- k: hxw, double
245
- sf: down-scale factor
246
- Return:
247
- downsampled LR image
248
- Reference:
249
- @inproceedings{zhang2018learning,
250
- title={Learning a single convolutional super-resolution network for multiple degradations},
251
- author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},
252
- booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
253
- pages={3262--3271},
254
- year={2018}
255
- }
256
- '''
257
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 'mirror'
258
- x = bicubic_degradation(x, sf=sf)
259
- return x
260
-
261
-
262
- def dpsr_degradation(x, k, sf=3):
263
- ''' bicubic downsampling + blur
264
- Args:
265
- x: HxWxC image, [0, 1]
266
- k: hxw, double
267
- sf: down-scale factor
268
- Return:
269
- downsampled LR image
270
- Reference:
271
- @inproceedings{zhang2019deep,
272
- title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels},
273
- author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},
274
- booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
275
- pages={1671--1681},
276
- year={2019}
277
- }
278
- '''
279
- x = bicubic_degradation(x, sf=sf)
280
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap')
281
- return x
282
-
283
-
284
- def classical_degradation(x, k, sf=3):
285
- ''' blur + downsampling
286
- Args:
287
- x: HxWxC image, [0, 1]/[0, 255]
288
- k: hxw, double
289
- sf: down-scale factor
290
- Return:
291
- downsampled LR image
292
- '''
293
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap')
294
- # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2))
295
- st = 0
296
- return x[st::sf, st::sf, ...]
297
-
298
-
299
- def add_sharpening(img, weight=0.5, radius=50, threshold=10):
300
- """USM sharpening. borrowed from real-ESRGAN
301
- Input image: I; Blurry image: B.
302
- 1. K = I + weight * (I - B)
303
- 2. Mask = 1 if abs(I - B) > threshold, else: 0
304
- 3. Blur mask:
305
- 4. Out = Mask * K + (1 - Mask) * I
306
- Args:
307
- img (Numpy array): Input image, HWC, BGR; float32, [0, 1].
308
- weight (float): Sharp weight. Default: 1.
309
- radius (float): Kernel size of Gaussian blur. Default: 50.
310
- threshold (int):
311
- """
312
- if radius % 2 == 0:
313
- radius += 1
314
- blur = cv2.GaussianBlur(img, (radius, radius), 0)
315
- residual = img - blur
316
- mask = np.abs(residual) * 255 > threshold
317
- mask = mask.astype('float32')
318
- soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0)
319
-
320
- K = img + weight * residual
321
- K = np.clip(K, 0, 1)
322
- return soft_mask * K + (1 - soft_mask) * img
323
-
324
-
325
- def add_blur(img, sf=4):
326
- wd2 = 4.0 + sf
327
- wd = 2.0 + 0.2 * sf
328
-
329
- wd2 = wd2/4
330
- wd = wd/4
331
-
332
- if random.random() < 0.5:
333
- l1 = wd2 * random.random()
334
- l2 = wd2 * random.random()
335
- k = anisotropic_Gaussian(ksize=random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2)
336
- else:
337
- k = fspecial('gaussian', random.randint(2, 4) + 3, wd * random.random())
338
- img = ndimage.filters.convolve(img, np.expand_dims(k, axis=2), mode='mirror')
339
-
340
- return img
341
-
342
-
343
- def add_resize(img, sf=4):
344
- rnum = np.random.rand()
345
- if rnum > 0.8: # up
346
- sf1 = random.uniform(1, 2)
347
- elif rnum < 0.7: # down
348
- sf1 = random.uniform(0.5 / sf, 1)
349
- else:
350
- sf1 = 1.0
351
- img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3]))
352
- img = np.clip(img, 0.0, 1.0)
353
-
354
- return img
355
-
356
-
357
- # def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):
358
- # noise_level = random.randint(noise_level1, noise_level2)
359
- # rnum = np.random.rand()
360
- # if rnum > 0.6: # add color Gaussian noise
361
- # img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
362
- # elif rnum < 0.4: # add grayscale Gaussian noise
363
- # img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
364
- # else: # add noise
365
- # L = noise_level2 / 255.
366
- # D = np.diag(np.random.rand(3))
367
- # U = orth(np.random.rand(3, 3))
368
- # conv = np.dot(np.dot(np.transpose(U), D), U)
369
- # img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
370
- # img = np.clip(img, 0.0, 1.0)
371
- # return img
372
-
373
- def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):
374
- noise_level = random.randint(noise_level1, noise_level2)
375
- rnum = np.random.rand()
376
- if rnum > 0.6: # add color Gaussian noise
377
- img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
378
- elif rnum < 0.4: # add grayscale Gaussian noise
379
- img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
380
- else: # add noise
381
- L = noise_level2 / 255.
382
- D = np.diag(np.random.rand(3))
383
- U = orth(np.random.rand(3, 3))
384
- conv = np.dot(np.dot(np.transpose(U), D), U)
385
- img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
386
- img = np.clip(img, 0.0, 1.0)
387
- return img
388
-
389
-
390
- def add_speckle_noise(img, noise_level1=2, noise_level2=25):
391
- noise_level = random.randint(noise_level1, noise_level2)
392
- img = np.clip(img, 0.0, 1.0)
393
- rnum = random.random()
394
- if rnum > 0.6:
395
- img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
396
- elif rnum < 0.4:
397
- img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
398
- else:
399
- L = noise_level2 / 255.
400
- D = np.diag(np.random.rand(3))
401
- U = orth(np.random.rand(3, 3))
402
- conv = np.dot(np.dot(np.transpose(U), D), U)
403
- img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
404
- img = np.clip(img, 0.0, 1.0)
405
- return img
406
-
407
-
408
- def add_Poisson_noise(img):
409
- img = np.clip((img * 255.0).round(), 0, 255) / 255.
410
- vals = 10 ** (2 * random.random() + 2.0) # [2, 4]
411
- if random.random() < 0.5:
412
- img = np.random.poisson(img * vals).astype(np.float32) / vals
413
- else:
414
- img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114])
415
- img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255.
416
- noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray
417
- img += noise_gray[:, :, np.newaxis]
418
- img = np.clip(img, 0.0, 1.0)
419
- return img
420
-
421
-
422
- def add_JPEG_noise(img):
423
- quality_factor = random.randint(80, 95)
424
- img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR)
425
- result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor])
426
- img = cv2.imdecode(encimg, 1)
427
- img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB)
428
- return img
429
-
430
-
431
- def random_crop(lq, hq, sf=4, lq_patchsize=64):
432
- h, w = lq.shape[:2]
433
- rnd_h = random.randint(0, h - lq_patchsize)
434
- rnd_w = random.randint(0, w - lq_patchsize)
435
- lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :]
436
-
437
- rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf)
438
- hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :]
439
- return lq, hq
440
-
441
-
442
- def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None):
443
- """
444
- This is the degradation model of BSRGAN from the paper
445
- "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
446
- ----------
447
- img: HXWXC, [0, 1], its size should be larger than (lq_patchsize x sf) x (lq_patchsize x sf)
448
- sf: scale factor
449
- isp_model: camera ISP model
450
- Returns
451
- -------
452
- img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]
453
- hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]
454
- """
455
- isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25
456
- sf_ori = sf
457
-
458
- h1, w1 = img.shape[:2]
459
- img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop
460
- h, w = img.shape[:2]
461
-
462
- if h < lq_patchsize * sf or w < lq_patchsize * sf:
463
- raise ValueError(f'img size ({h1}X{w1}) is too small!')
464
-
465
- hq = img.copy()
466
-
467
- if sf == 4 and random.random() < scale2_prob: # downsample1
468
- if np.random.rand() < 0.5:
469
- img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])),
470
- interpolation=random.choice([1, 2, 3]))
471
- else:
472
- img = util.imresize_np(img, 1 / 2, True)
473
- img = np.clip(img, 0.0, 1.0)
474
- sf = 2
475
-
476
- shuffle_order = random.sample(range(7), 7)
477
- idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)
478
- if idx1 > idx2: # keep downsample3 last
479
- shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]
480
-
481
- for i in shuffle_order:
482
-
483
- if i == 0:
484
- img = add_blur(img, sf=sf)
485
-
486
- elif i == 1:
487
- img = add_blur(img, sf=sf)
488
-
489
- elif i == 2:
490
- a, b = img.shape[1], img.shape[0]
491
- # downsample2
492
- if random.random() < 0.75:
493
- sf1 = random.uniform(1, 2 * sf)
494
- img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])),
495
- interpolation=random.choice([1, 2, 3]))
496
- else:
497
- k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))
498
- k_shifted = shift_pixel(k, sf)
499
- k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel
500
- img = ndimage.filters.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror')
501
- img = img[0::sf, 0::sf, ...] # nearest downsampling
502
- img = np.clip(img, 0.0, 1.0)
503
-
504
- elif i == 3:
505
- # downsample3
506
- img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))
507
- img = np.clip(img, 0.0, 1.0)
508
-
509
- elif i == 4:
510
- # add Gaussian noise
511
- img = add_Gaussian_noise(img, noise_level1=2, noise_level2=8)
512
-
513
- elif i == 5:
514
- # add JPEG noise
515
- if random.random() < jpeg_prob:
516
- img = add_JPEG_noise(img)
517
-
518
- elif i == 6:
519
- # add processed camera sensor noise
520
- if random.random() < isp_prob and isp_model is not None:
521
- with torch.no_grad():
522
- img, hq = isp_model.forward(img.copy(), hq)
523
-
524
- # add final JPEG compression noise
525
- img = add_JPEG_noise(img)
526
-
527
- # random crop
528
- img, hq = random_crop(img, hq, sf_ori, lq_patchsize)
529
-
530
- return img, hq
531
-
532
-
533
- # todo no isp_model?
534
- def degradation_bsrgan_variant(image, sf=4, isp_model=None):
535
- """
536
- This is the degradation model of BSRGAN from the paper
537
- "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
538
- ----------
539
- sf: scale factor
540
- isp_model: camera ISP model
541
- Returns
542
- -------
543
- img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]
544
- hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]
545
- """
546
- image = util.uint2single(image)
547
- isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25
548
- sf_ori = sf
549
-
550
- h1, w1 = image.shape[:2]
551
- image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop
552
- h, w = image.shape[:2]
553
-
554
- hq = image.copy()
555
-
556
- if sf == 4 and random.random() < scale2_prob: # downsample1
557
- if np.random.rand() < 0.5:
558
- image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])),
559
- interpolation=random.choice([1, 2, 3]))
560
- else:
561
- image = util.imresize_np(image, 1 / 2, True)
562
- image = np.clip(image, 0.0, 1.0)
563
- sf = 2
564
-
565
- shuffle_order = random.sample(range(7), 7)
566
- idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)
567
- if idx1 > idx2: # keep downsample3 last
568
- shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]
569
-
570
- for i in shuffle_order:
571
-
572
- if i == 0:
573
- image = add_blur(image, sf=sf)
574
-
575
- # elif i == 1:
576
- # image = add_blur(image, sf=sf)
577
-
578
- if i == 0:
579
- pass
580
-
581
- elif i == 2:
582
- a, b = image.shape[1], image.shape[0]
583
- # downsample2
584
- if random.random() < 0.8:
585
- sf1 = random.uniform(1, 2 * sf)
586
- image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])),
587
- interpolation=random.choice([1, 2, 3]))
588
- else:
589
- k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))
590
- k_shifted = shift_pixel(k, sf)
591
- k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel
592
- image = ndimage.filters.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror')
593
- image = image[0::sf, 0::sf, ...] # nearest downsampling
594
-
595
- image = np.clip(image, 0.0, 1.0)
596
-
597
- elif i == 3:
598
- # downsample3
599
- image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))
600
- image = np.clip(image, 0.0, 1.0)
601
-
602
- elif i == 4:
603
- # add Gaussian noise
604
- image = add_Gaussian_noise(image, noise_level1=1, noise_level2=2)
605
-
606
- elif i == 5:
607
- # add JPEG noise
608
- if random.random() < jpeg_prob:
609
- image = add_JPEG_noise(image)
610
- #
611
- # elif i == 6:
612
- # # add processed camera sensor noise
613
- # if random.random() < isp_prob and isp_model is not None:
614
- # with torch.no_grad():
615
- # img, hq = isp_model.forward(img.copy(), hq)
616
-
617
- # add final JPEG compression noise
618
- image = add_JPEG_noise(image)
619
- image = util.single2uint(image)
620
- example = {"image": image}
621
- return example
622
-
623
-
624
-
625
-
626
- if __name__ == '__main__':
627
- print("hey")
628
- img = util.imread_uint('utils/test.png', 3)
629
- img = img[:448, :448]
630
- h = img.shape[0] // 4
631
- print("resizing to", h)
632
- sf = 4
633
- deg_fn = partial(degradation_bsrgan_variant, sf=sf)
634
- for i in range(20):
635
- print(i)
636
- img_hq = img
637
- img_lq = deg_fn(img)["image"]
638
- img_hq, img_lq = util.uint2single(img_hq), util.uint2single(img_lq)
639
- print(img_lq)
640
- img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img_hq)["image"]
641
- print(img_lq.shape)
642
- print("bicubic", img_lq_bicubic.shape)
643
- print(img_hq.shape)
644
- lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),
645
- interpolation=0)
646
- lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic),
647
- (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),
648
- interpolation=0)
649
- img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1)
650
- util.imsave(img_concat, str(i) + '.png')
 
spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/transformer.py DELETED
@@ -1,747 +0,0 @@
1
- import math
2
- import torch
3
- from torch import nn
4
- from torch.nn import Parameter, Linear
5
- from text_to_speech.modules.commons.layers import LayerNorm, Embedding
6
- from text_to_speech.utils.nn.seq_utils import get_incremental_state, set_incremental_state, softmax, make_positions
7
- import torch.nn.functional as F
8
-
9
- DEFAULT_MAX_SOURCE_POSITIONS = 2000
10
- DEFAULT_MAX_TARGET_POSITIONS = 2000
11
-
12
-
13
- class SinusoidalPositionalEmbedding(nn.Module):
14
- """This module produces sinusoidal positional embeddings of any length.
15
-
16
- Padding symbols are ignored.
17
- """
18
-
19
- def __init__(self, embedding_dim, padding_idx, init_size=1024):
20
- super().__init__()
21
- self.embedding_dim = embedding_dim
22
- self.padding_idx = padding_idx
23
- self.weights = SinusoidalPositionalEmbedding.get_embedding(
24
- init_size,
25
- embedding_dim,
26
- padding_idx,
27
- )
28
- self.register_buffer('_float_tensor', torch.FloatTensor(1))
29
-
30
- @staticmethod
31
- def get_embedding(num_embeddings, embedding_dim, padding_idx=None):
32
- """Build sinusoidal embeddings.
33
-
34
- This matches the implementation in tensor2tensor, but differs slightly
35
- from the description in Section 3.5 of "Attention Is All You Need".
36
- """
37
- half_dim = embedding_dim // 2
38
- emb = math.log(10000) / (half_dim - 1)
39
- emb = torch.exp(torch.arange(half_dim, dtype=torch.float) * -emb)
40
- emb = torch.arange(num_embeddings, dtype=torch.float).unsqueeze(1) * emb.unsqueeze(0)
41
- emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1).view(num_embeddings, -1)
42
- if embedding_dim % 2 == 1:
43
- # zero pad
44
- emb = torch.cat([emb, torch.zeros(num_embeddings, 1)], dim=1)
45
- if padding_idx is not None:
46
- emb[padding_idx, :] = 0
47
- return emb
48
-
49
- def forward(self, input, incremental_state=None, timestep=None, positions=None, **kwargs):
50
- """Input is expected to be of size [bsz x seqlen]."""
51
- bsz, seq_len = input.shape[:2]
52
- max_pos = self.padding_idx + 1 + seq_len
53
- if self.weights is None or max_pos > self.weights.size(0):
54
- # recompute/expand embeddings if needed
55
- self.weights = SinusoidalPositionalEmbedding.get_embedding(
56
- max_pos,
57
- self.embedding_dim,
58
- self.padding_idx,
59
- )
60
- self.weights = self.weights.to(self._float_tensor)
61
-
62
- if incremental_state is not None:
63
- # positions is the same for every token when decoding a single step
64
- pos = timestep.view(-1)[0] + 1 if timestep is not None else seq_len
65
- return self.weights[self.padding_idx + pos, :].expand(bsz, 1, -1)
66
-
67
- positions = make_positions(input, self.padding_idx) if positions is None else positions
68
- return self.weights.index_select(0, positions.view(-1)).view(bsz, seq_len, -1).detach()
69
-
70
- def max_positions(self):
71
- """Maximum number of supported positions."""
72
- return int(1e5) # an arbitrary large number
73
-
74
-
75
- class TransformerFFNLayer(nn.Module):
76
- def __init__(self, hidden_size, filter_size, padding="SAME", kernel_size=1, dropout=0., act='gelu'):
77
- super().__init__()
78
- self.kernel_size = kernel_size
79
- self.dropout = dropout
80
- self.act = act
81
- if padding == 'SAME':
82
- self.ffn_1 = nn.Conv1d(hidden_size, filter_size, kernel_size, padding=kernel_size // 2)
83
- elif padding == 'LEFT':
84
- self.ffn_1 = nn.Sequential(
85
- nn.ConstantPad1d((kernel_size - 1, 0), 0.0),
86
- nn.Conv1d(hidden_size, filter_size, kernel_size)
87
- )
88
- self.ffn_2 = Linear(filter_size, hidden_size)
89
-
90
- def forward(self, x, incremental_state=None):
91
- # x: T x B x C
92
- if incremental_state is not None:
93
- saved_state = self._get_input_buffer(incremental_state)
94
- if 'prev_input' in saved_state:
95
- prev_input = saved_state['prev_input']
96
- x = torch.cat((prev_input, x), dim=0)
97
- x = x[-self.kernel_size:]
98
- saved_state['prev_input'] = x
99
- self._set_input_buffer(incremental_state, saved_state)
100
-
101
- x = self.ffn_1(x.permute(1, 2, 0)).permute(2, 0, 1)
102
- x = x * self.kernel_size ** -0.5
103
-
104
- if incremental_state is not None:
105
- x = x[-1:]
106
- if self.act == 'gelu':
107
- x = F.gelu(x)
108
- if self.act == 'relu':
109
- x = F.relu(x)
110
- x = F.dropout(x, self.dropout, training=self.training)
111
- x = self.ffn_2(x)
112
- return x
113
-
114
- def _get_input_buffer(self, incremental_state):
115
- return get_incremental_state(
116
- self,
117
- incremental_state,
118
- 'f',
119
- ) or {}
120
-
121
- def _set_input_buffer(self, incremental_state, buffer):
122
- set_incremental_state(
123
- self,
124
- incremental_state,
125
- 'f',
126
- buffer,
127
- )
128
-
129
- def clear_buffer(self, incremental_state):
130
- if incremental_state is not None:
131
- saved_state = self._get_input_buffer(incremental_state)
132
- if 'prev_input' in saved_state:
133
- del saved_state['prev_input']
134
- self._set_input_buffer(incremental_state, saved_state)
135
-
136
-
137
- class MultiheadAttention(nn.Module):
138
- def __init__(self, embed_dim, num_heads, kdim=None, vdim=None, dropout=0., bias=True,
139
- add_bias_kv=False, add_zero_attn=False, self_attention=False,
140
- encoder_decoder_attention=False):
141
- super().__init__()
142
- self.embed_dim = embed_dim
143
- self.kdim = kdim if kdim is not None else embed_dim
144
- self.vdim = vdim if vdim is not None else embed_dim
145
- self.qkv_same_dim = self.kdim == embed_dim and self.vdim == embed_dim
146
-
147
- self.num_heads = num_heads
148
- self.dropout = dropout
149
- self.head_dim = embed_dim // num_heads
150
- assert self.head_dim * num_heads == self.embed_dim, "embed_dim must be divisible by num_heads"
151
- self.scaling = self.head_dim ** -0.5
152
-
153
- self.self_attention = self_attention
154
- self.encoder_decoder_attention = encoder_decoder_attention
155
-
156
- assert not self.self_attention or self.qkv_same_dim, 'Self-attention requires query, key and ' \
157
- 'value to be of the same size'
158
-
159
- if self.qkv_same_dim:
160
- self.in_proj_weight = Parameter(torch.Tensor(3 * embed_dim, embed_dim))
161
- else:
162
- self.k_proj_weight = Parameter(torch.Tensor(embed_dim, self.kdim))
163
- self.v_proj_weight = Parameter(torch.Tensor(embed_dim, self.vdim))
164
- self.q_proj_weight = Parameter(torch.Tensor(embed_dim, embed_dim))
165
-
166
- if bias:
167
- self.in_proj_bias = Parameter(torch.Tensor(3 * embed_dim))
168
- else:
169
- self.register_parameter('in_proj_bias', None)
170
-
171
- self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias)
172
-
173
- if add_bias_kv:
174
- self.bias_k = Parameter(torch.Tensor(1, 1, embed_dim))
175
- self.bias_v = Parameter(torch.Tensor(1, 1, embed_dim))
176
- else:
177
- self.bias_k = self.bias_v = None
178
-
179
- self.add_zero_attn = add_zero_attn
180
-
181
- self.reset_parameters()
182
-
183
- self.enable_torch_version = False
184
- if hasattr(F, "multi_head_attention_forward"):
185
- self.enable_torch_version = True
186
- else:
187
- self.enable_torch_version = False
188
- self.last_attn_probs = None
189
-
190
- def reset_parameters(self):
191
- if self.qkv_same_dim:
192
- nn.init.xavier_uniform_(self.in_proj_weight)
193
- else:
194
- nn.init.xavier_uniform_(self.k_proj_weight)
195
- nn.init.xavier_uniform_(self.v_proj_weight)
196
- nn.init.xavier_uniform_(self.q_proj_weight)
197
-
198
- nn.init.xavier_uniform_(self.out_proj.weight)
199
- if self.in_proj_bias is not None:
200
- nn.init.constant_(self.in_proj_bias, 0.)
201
- nn.init.constant_(self.out_proj.bias, 0.)
202
- if self.bias_k is not None:
203
- nn.init.xavier_normal_(self.bias_k)
204
- if self.bias_v is not None:
205
- nn.init.xavier_normal_(self.bias_v)
206
-
207
- def forward(
208
- self,
209
- query, key, value,
210
- key_padding_mask=None,
211
- incremental_state=None,
212
- need_weights=True,
213
- static_kv=False,
214
- attn_mask=None,
215
- before_softmax=False,
216
- need_head_weights=False,
217
- enc_dec_attn_constraint_mask=None,
218
- reset_attn_weight=None
219
- ):
220
- """Input shape: Time x Batch x Channel
221
-
222
- Args:
223
- key_padding_mask (ByteTensor, optional): mask to exclude
224
- keys that are pads, of shape `(batch, src_len)`, where
225
- padding elements are indicated by 1s.
226
- need_weights (bool, optional): return the attention weights,
227
- averaged over heads (default: False).
228
- attn_mask (ByteTensor, optional): typically used to
229
- implement causal attention, where the mask prevents the
230
- attention from looking forward in time (default: None).
231
- before_softmax (bool, optional): return the raw attention
232
- weights and values before the attention softmax.
233
- need_head_weights (bool, optional): return the attention
234
- weights for each head. Implies *need_weights*. Default:
235
- return the average attention weights over all heads.
236
- """
237
- if need_head_weights:
238
- need_weights = True
239
-
240
- tgt_len, bsz, embed_dim = query.size()
241
- assert embed_dim == self.embed_dim
242
- assert list(query.size()) == [tgt_len, bsz, embed_dim]
243
- if self.enable_torch_version and incremental_state is None and not static_kv and reset_attn_weight is None:
244
- if self.qkv_same_dim:
245
- return F.multi_head_attention_forward(query, key, value,
246
- self.embed_dim, self.num_heads,
247
- self.in_proj_weight,
248
- self.in_proj_bias, self.bias_k, self.bias_v,
249
- self.add_zero_attn, self.dropout,
250
- self.out_proj.weight, self.out_proj.bias,
251
- self.training, key_padding_mask, need_weights,
252
- attn_mask)
253
- else:
254
- return F.multi_head_attention_forward(query, key, value,
255
- self.embed_dim, self.num_heads,
256
- torch.empty([0]),
257
- self.in_proj_bias, self.bias_k, self.bias_v,
258
- self.add_zero_attn, self.dropout,
259
- self.out_proj.weight, self.out_proj.bias,
260
- self.training, key_padding_mask, need_weights,
261
- attn_mask, use_separate_proj_weight=True,
262
- q_proj_weight=self.q_proj_weight,
263
- k_proj_weight=self.k_proj_weight,
264
- v_proj_weight=self.v_proj_weight)
265
-
266
- if incremental_state is not None:
267
- saved_state = self._get_input_buffer(incremental_state)
268
- if 'prev_key' in saved_state:
269
- # previous time steps are cached - no need to recompute
270
- # key and value if they are static
271
- if static_kv:
272
- assert self.encoder_decoder_attention and not self.self_attention
273
- key = value = None
274
- else:
275
- saved_state = None
276
-
277
- if self.self_attention:
278
- # self-attention
279
- q, k, v = self.in_proj_qkv(query)
280
- elif self.encoder_decoder_attention:
281
- # encoder-decoder attention
282
- q = self.in_proj_q(query)
283
- if key is None:
284
- assert value is None
285
- k = v = None
286
- else:
287
- k = self.in_proj_k(key)
288
- v = self.in_proj_v(key)
289
-
290
- else:
291
- q = self.in_proj_q(query)
292
- k = self.in_proj_k(key)
293
- v = self.in_proj_v(value)
294
- q *= self.scaling
295
-
296
- if self.bias_k is not None:
297
- assert self.bias_v is not None
298
- k = torch.cat([k, self.bias_k.repeat(1, bsz, 1)])
299
- v = torch.cat([v, self.bias_v.repeat(1, bsz, 1)])
300
- if attn_mask is not None:
301
- attn_mask = torch.cat([attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1)
302
- if key_padding_mask is not None:
303
- key_padding_mask = torch.cat(
304
- [key_padding_mask, key_padding_mask.new_zeros(key_padding_mask.size(0), 1)], dim=1)
305
-
306
- q = q.contiguous().view(tgt_len, bsz * self.num_heads, self.head_dim).transpose(0, 1)
307
- if k is not None:
308
- k = k.contiguous().view(-1, bsz * self.num_heads, self.head_dim).transpose(0, 1)
309
- if v is not None:
310
- v = v.contiguous().view(-1, bsz * self.num_heads, self.head_dim).transpose(0, 1)
311
-
312
- if saved_state is not None:
313
- # saved states are stored with shape (bsz, num_heads, seq_len, head_dim)
314
- if 'prev_key' in saved_state:
315
- prev_key = saved_state['prev_key'].view(bsz * self.num_heads, -1, self.head_dim)
316
- if static_kv:
317
- k = prev_key
318
- else:
319
- k = torch.cat((prev_key, k), dim=1)
320
- if 'prev_value' in saved_state:
321
- prev_value = saved_state['prev_value'].view(bsz * self.num_heads, -1, self.head_dim)
322
- if static_kv:
323
- v = prev_value
324
- else:
325
- v = torch.cat((prev_value, v), dim=1)
326
- if 'prev_key_padding_mask' in saved_state and saved_state['prev_key_padding_mask'] is not None:
327
- prev_key_padding_mask = saved_state['prev_key_padding_mask']
328
- if static_kv:
329
- key_padding_mask = prev_key_padding_mask
330
- else:
331
- key_padding_mask = torch.cat((prev_key_padding_mask, key_padding_mask), dim=1)
332
-
333
- saved_state['prev_key'] = k.view(bsz, self.num_heads, -1, self.head_dim)
334
- saved_state['prev_value'] = v.view(bsz, self.num_heads, -1, self.head_dim)
335
- saved_state['prev_key_padding_mask'] = key_padding_mask
336
-
337
- self._set_input_buffer(incremental_state, saved_state)
338
-
339
- src_len = k.size(1)
340
-
341
- # This is part of a workaround to get around fork/join parallelism
342
- # not supporting Optional types.
343
- if key_padding_mask is not None and key_padding_mask.shape == torch.Size([]):
344
- key_padding_mask = None
345
-
346
- if key_padding_mask is not None:
347
- assert key_padding_mask.size(0) == bsz
348
- assert key_padding_mask.size(1) == src_len
349
-
350
- if self.add_zero_attn:
351
- src_len += 1
352
- k = torch.cat([k, k.new_zeros((k.size(0), 1) + k.size()[2:])], dim=1)
353
- v = torch.cat([v, v.new_zeros((v.size(0), 1) + v.size()[2:])], dim=1)
354
- if attn_mask is not None:
355
- attn_mask = torch.cat([attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1)
356
- if key_padding_mask is not None:
357
- key_padding_mask = torch.cat(
358
- [key_padding_mask, torch.zeros(key_padding_mask.size(0), 1).type_as(key_padding_mask)], dim=1)
359
-
360
- attn_weights = torch.bmm(q, k.transpose(1, 2))
361
- attn_weights = self.apply_sparse_mask(attn_weights, tgt_len, src_len, bsz)
362
-
363
- assert list(attn_weights.size()) == [bsz * self.num_heads, tgt_len, src_len]
364
-
365
- if attn_mask is not None:
366
- if len(attn_mask.shape) == 2:
367
- attn_mask = attn_mask.unsqueeze(0)
368
- elif len(attn_mask.shape) == 3:
369
- attn_mask = attn_mask[:, None].repeat([1, self.num_heads, 1, 1]).reshape(
370
- bsz * self.num_heads, tgt_len, src_len)
371
- attn_weights = attn_weights + attn_mask
372
-
373
- if enc_dec_attn_constraint_mask is not None: # bs x head x L_kv
374
- attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
375
- attn_weights = attn_weights.masked_fill(
376
- enc_dec_attn_constraint_mask.unsqueeze(2).bool(),
377
- -1e8,
378
- )
379
- attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
380
-
381
- if key_padding_mask is not None:
382
- # don't attend to padding symbols
383
- attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
384
- attn_weights = attn_weights.masked_fill(
385
- key_padding_mask.unsqueeze(1).unsqueeze(2),
386
- -1e8,
387
- )
388
- attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
389
-
390
- attn_logits = attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
391
-
392
- if before_softmax:
393
- return attn_weights, v
394
-
395
- attn_weights_float = softmax(attn_weights, dim=-1)
396
- attn_weights = attn_weights_float.type_as(attn_weights)
397
- attn_probs = F.dropout(attn_weights_float.type_as(attn_weights), p=self.dropout, training=self.training)
398
-
399
- if reset_attn_weight is not None:
400
- if reset_attn_weight:
401
- self.last_attn_probs = attn_probs.detach()
402
- else:
403
- assert self.last_attn_probs is not None
404
- attn_probs = self.last_attn_probs
405
- attn = torch.bmm(attn_probs, v)
406
- assert list(attn.size()) == [bsz * self.num_heads, tgt_len, self.head_dim]
407
- attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim)
408
- attn = self.out_proj(attn)
409
-
410
- if need_weights:
411
- attn_weights = attn_weights_float.view(bsz, self.num_heads, tgt_len, src_len).transpose(1, 0)
412
- if not need_head_weights:
413
- # average attention weights over heads
414
- attn_weights = attn_weights.mean(dim=0)
415
- else:
416
- attn_weights = None
417
-
418
- return attn, (attn_weights, attn_logits)
419
-
420
- def in_proj_qkv(self, query):
421
- return self._in_proj(query).chunk(3, dim=-1)
422
-
423
- def in_proj_q(self, query):
424
- if self.qkv_same_dim:
425
- return self._in_proj(query, end=self.embed_dim)
426
- else:
427
- bias = self.in_proj_bias
428
- if bias is not None:
429
- bias = bias[:self.embed_dim]
430
- return F.linear(query, self.q_proj_weight, bias)
431
-
432
- def in_proj_k(self, key):
433
- if self.qkv_same_dim:
434
- return self._in_proj(key, start=self.embed_dim, end=2 * self.embed_dim)
435
- else:
436
- weight = self.k_proj_weight
437
- bias = self.in_proj_bias
438
- if bias is not None:
439
- bias = bias[self.embed_dim:2 * self.embed_dim]
440
- return F.linear(key, weight, bias)
441
-
442
- def in_proj_v(self, value):
443
- if self.qkv_same_dim:
444
- return self._in_proj(value, start=2 * self.embed_dim)
445
- else:
446
- weight = self.v_proj_weight
447
- bias = self.in_proj_bias
448
- if bias is not None:
449
- bias = bias[2 * self.embed_dim:]
450
- return F.linear(value, weight, bias)
451
-
452
- def _in_proj(self, input, start=0, end=None):
453
- weight = self.in_proj_weight
454
- bias = self.in_proj_bias
455
- weight = weight[start:end, :]
456
- if bias is not None:
457
- bias = bias[start:end]
458
- return F.linear(input, weight, bias)
459
-
460
- def _get_input_buffer(self, incremental_state):
461
- return get_incremental_state(
462
- self,
463
- incremental_state,
464
- 'attn_state',
465
- ) or {}
466
-
467
- def _set_input_buffer(self, incremental_state, buffer):
468
- set_incremental_state(
469
- self,
470
- incremental_state,
471
- 'attn_state',
472
- buffer,
473
- )
474
-
475
- def apply_sparse_mask(self, attn_weights, tgt_len, src_len, bsz):
476
- return attn_weights
477
-
478
- def clear_buffer(self, incremental_state=None):
479
- if incremental_state is not None:
480
- saved_state = self._get_input_buffer(incremental_state)
481
- if 'prev_key' in saved_state:
482
- del saved_state['prev_key']
483
- if 'prev_value' in saved_state:
484
- del saved_state['prev_value']
485
- self._set_input_buffer(incremental_state, saved_state)
486
-
487
-
488
- class EncSALayer(nn.Module):
489
- def __init__(self, c, num_heads, dropout, attention_dropout=0.1,
490
- relu_dropout=0.1, kernel_size=9, padding='SAME', act='gelu'):
491
- super().__init__()
492
- self.c = c
493
- self.dropout = dropout
494
- self.num_heads = num_heads
495
- if num_heads > 0:
496
- self.layer_norm1 = LayerNorm(c)
497
- self.self_attn = MultiheadAttention(
498
- self.c, num_heads, self_attention=True, dropout=attention_dropout, bias=False)
499
- self.layer_norm2 = LayerNorm(c)
500
- self.ffn = TransformerFFNLayer(
501
- c, 4 * c, kernel_size=kernel_size, dropout=relu_dropout, padding=padding, act=act)
502
-
503
- def forward(self, x, encoder_padding_mask=None, **kwargs):
504
- layer_norm_training = kwargs.get('layer_norm_training', None)
505
- if layer_norm_training is not None:
506
- self.layer_norm1.training = layer_norm_training
507
- self.layer_norm2.training = layer_norm_training
508
- if self.num_heads > 0:
509
- residual = x
510
- x = self.layer_norm1(x)
511
- x, _, = self.self_attn(
512
- query=x,
513
- key=x,
514
- value=x,
515
- key_padding_mask=encoder_padding_mask
516
- )
517
- x = F.dropout(x, self.dropout, training=self.training)
518
- x = residual + x
519
- x = x * (1 - encoder_padding_mask.float()).transpose(0, 1)[..., None]
520
-
521
- residual = x
522
- x = self.layer_norm2(x)
523
- x = self.ffn(x)
524
- x = F.dropout(x, self.dropout, training=self.training)
525
- x = residual + x
526
- x = x * (1 - encoder_padding_mask.float()).transpose(0, 1)[..., None]
527
- return x
528
-
529
-
530
- class DecSALayer(nn.Module):
531
- def __init__(self, c, num_heads, dropout, attention_dropout=0.1, relu_dropout=0.1,
532
- kernel_size=9, act='gelu'):
533
- super().__init__()
534
- self.c = c
535
- self.dropout = dropout
536
- self.layer_norm1 = LayerNorm(c)
537
- self.self_attn = MultiheadAttention(
538
- c, num_heads, self_attention=True, dropout=attention_dropout, bias=False
539
- )
540
- self.layer_norm2 = LayerNorm(c)
541
- self.encoder_attn = MultiheadAttention(
542
- c, num_heads, encoder_decoder_attention=True, dropout=attention_dropout, bias=False,
543
- )
544
- self.layer_norm3 = LayerNorm(c)
545
- self.ffn = TransformerFFNLayer(
546
- c, 4 * c, padding='LEFT', kernel_size=kernel_size, dropout=relu_dropout, act=act)
547
-
548
- def forward(
549
- self,
550
- x,
551
- encoder_out=None,
552
- encoder_padding_mask=None,
553
- incremental_state=None,
554
- self_attn_mask=None,
555
- self_attn_padding_mask=None,
556
- attn_out=None,
557
- reset_attn_weight=None,
558
- **kwargs,
559
- ):
560
- layer_norm_training = kwargs.get('layer_norm_training', None)
561
- if layer_norm_training is not None:
562
- self.layer_norm1.training = layer_norm_training
563
- self.layer_norm2.training = layer_norm_training
564
- self.layer_norm3.training = layer_norm_training
565
- residual = x
566
- x = self.layer_norm1(x)
567
- x, _ = self.self_attn(
568
- query=x,
569
- key=x,
570
- value=x,
571
- key_padding_mask=self_attn_padding_mask,
572
- incremental_state=incremental_state,
573
- attn_mask=self_attn_mask
574
- )
575
- x = F.dropout(x, self.dropout, training=self.training)
576
- x = residual + x
577
-
578
- attn_logits = None
579
- if encoder_out is not None or attn_out is not None:
580
- residual = x
581
- x = self.layer_norm2(x)
582
- if encoder_out is not None:
583
- x, attn = self.encoder_attn(
584
- query=x,
585
- key=encoder_out,
586
- value=encoder_out,
587
- key_padding_mask=encoder_padding_mask,
588
- incremental_state=incremental_state,
589
- static_kv=True,
590
- enc_dec_attn_constraint_mask=get_incremental_state(self, incremental_state,
591
- 'enc_dec_attn_constraint_mask'),
592
- reset_attn_weight=reset_attn_weight
593
- )
594
- attn_logits = attn[1]
595
- elif attn_out is not None:
596
- x = self.encoder_attn.in_proj_v(attn_out)
597
- if encoder_out is not None or attn_out is not None:
598
- x = F.dropout(x, self.dropout, training=self.training)
599
- x = residual + x
600
-
601
- residual = x
602
- x = self.layer_norm3(x)
603
- x = self.ffn(x, incremental_state=incremental_state)
604
- x = F.dropout(x, self.dropout, training=self.training)
605
- x = residual + x
606
- return x, attn_logits
607
-
608
- def clear_buffer(self, input, encoder_out=None, encoder_padding_mask=None, incremental_state=None):
609
- self.encoder_attn.clear_buffer(incremental_state)
610
- self.ffn.clear_buffer(incremental_state)
611
-
612
- def set_buffer(self, name, tensor, incremental_state):
613
- return set_incremental_state(self, incremental_state, name, tensor)
614
-
615
-
616
- class TransformerEncoderLayer(nn.Module):
617
- def __init__(self, hidden_size, dropout, kernel_size=9, num_heads=2):
618
- super().__init__()
619
- self.hidden_size = hidden_size
620
- self.dropout = dropout
621
- self.num_heads = num_heads
622
- self.op = EncSALayer(
623
- hidden_size, num_heads, dropout=dropout,
624
- attention_dropout=0.0, relu_dropout=dropout,
625
- kernel_size=kernel_size)
626
-
627
- def forward(self, x, **kwargs):
628
- return self.op(x, **kwargs)
629
-
630
-
631
- class TransformerDecoderLayer(nn.Module):
632
- def __init__(self, hidden_size, dropout, kernel_size=9, num_heads=2):
633
- super().__init__()
634
- self.hidden_size = hidden_size
635
- self.dropout = dropout
636
- self.num_heads = num_heads
637
- self.op = DecSALayer(
638
- hidden_size, num_heads, dropout=dropout,
639
- attention_dropout=0.0, relu_dropout=dropout,
640
- kernel_size=kernel_size)
641
-
642
- def forward(self, x, **kwargs):
643
- return self.op(x, **kwargs)
644
-
645
- def clear_buffer(self, *args):
646
- return self.op.clear_buffer(*args)
647
-
648
- def set_buffer(self, *args):
649
- return self.op.set_buffer(*args)
650
-
651
-
652
- class FFTBlocks(nn.Module):
653
- def __init__(self, hidden_size, num_layers, ffn_kernel_size=9, dropout=0.0,
654
- num_heads=2, use_pos_embed=True, use_last_norm=True,
655
- use_pos_embed_alpha=True):
656
- super().__init__()
657
- self.num_layers = num_layers
658
- embed_dim = self.hidden_size = hidden_size
659
- self.dropout = dropout
660
- self.use_pos_embed = use_pos_embed
661
- self.use_last_norm = use_last_norm
662
- if use_pos_embed:
663
- self.max_source_positions = DEFAULT_MAX_TARGET_POSITIONS
664
- self.padding_idx = 0
665
- self.pos_embed_alpha = nn.Parameter(torch.Tensor([1])) if use_pos_embed_alpha else 1
666
- self.embed_positions = SinusoidalPositionalEmbedding(
667
- embed_dim, self.padding_idx, init_size=DEFAULT_MAX_TARGET_POSITIONS,
668
- )
669
-
670
- self.layers = nn.ModuleList([])
671
- self.layers.extend([
672
- TransformerEncoderLayer(self.hidden_size, self.dropout,
673
- kernel_size=ffn_kernel_size, num_heads=num_heads)
674
- for _ in range(self.num_layers)
675
- ])
676
- if self.use_last_norm:
677
- self.layer_norm = nn.LayerNorm(embed_dim)
678
- else:
679
- self.layer_norm = None
680
-
681
- def forward(self, x, padding_mask=None, attn_mask=None, return_hiddens=False):
682
- """
683
- :param x: [B, T, C]
684
- :param padding_mask: [B, T]
685
- :return: [B, T, C] or [L, B, T, C]
686
- """
687
- padding_mask = x.abs().sum(-1).eq(0).data if padding_mask is None else padding_mask
688
- nonpadding_mask_TB = 1 - padding_mask.transpose(0, 1).float()[:, :, None] # [T, B, 1]
689
- if self.use_pos_embed:
690
- positions = self.pos_embed_alpha * self.embed_positions(x[..., 0])
691
- x = x + positions
692
- x = F.dropout(x, p=self.dropout, training=self.training)
693
- # B x T x C -> T x B x C
694
- x = x.transpose(0, 1) * nonpadding_mask_TB
695
- hiddens = []
696
- for layer in self.layers:
697
- x = layer(x, encoder_padding_mask=padding_mask, attn_mask=attn_mask) * nonpadding_mask_TB
698
- hiddens.append(x)
699
- if self.use_last_norm:
700
- x = self.layer_norm(x) * nonpadding_mask_TB
701
- if return_hiddens:
702
- x = torch.stack(hiddens, 0) # [L, T, B, C]
703
- x = x.transpose(1, 2) # [L, B, T, C]
704
- else:
705
- x = x.transpose(0, 1) # [B, T, C]
706
- return x
707
-
708
-
709
- class FastSpeechEncoder(FFTBlocks):
710
- def __init__(self, dict_size, hidden_size=256, num_layers=4, kernel_size=9, num_heads=2,
711
- dropout=0.0):
712
- super().__init__(hidden_size, num_layers, kernel_size, num_heads=num_heads,
713
- use_pos_embed=False, dropout=dropout) # use_pos_embed_alpha for compatibility
714
- self.embed_tokens = Embedding(dict_size, hidden_size, 0)
715
- self.embed_scale = math.sqrt(hidden_size)
716
- self.padding_idx = 0
717
- self.embed_positions = SinusoidalPositionalEmbedding(
718
- hidden_size, self.padding_idx, init_size=DEFAULT_MAX_TARGET_POSITIONS,
719
- )
720
-
721
- def forward(self, txt_tokens, attn_mask=None):
722
- """
723
-
724
- :param txt_tokens: [B, T]
725
- :return: {
726
- 'encoder_out': [B x T x C]
727
- }
728
- """
729
- encoder_padding_mask = txt_tokens.eq(self.padding_idx).data
730
- x = self.forward_embedding(txt_tokens) # [B, T, H]
731
- if self.num_layers > 0:
732
- x = super(FastSpeechEncoder, self).forward(x, encoder_padding_mask, attn_mask=attn_mask)
733
- return x
734
-
735
- def forward_embedding(self, txt_tokens):
736
- # embed tokens and positions
737
- x = self.embed_scale * self.embed_tokens(txt_tokens)
738
- if self.use_pos_embed:
739
- positions = self.embed_positions(txt_tokens)
740
- x = x + positions
741
- x = F.dropout(x, p=self.dropout, training=self.training)
742
- return x
743
-
744
-
745
- class FastSpeechDecoder(FFTBlocks):
746
- def __init__(self, hidden_size=256, num_layers=4, kernel_size=9, num_heads=2):
747
- super().__init__(hidden_size, num_layers, kernel_size, num_heads=num_heads)
 
spaces/ALSv/FSW/roop/metadata.py DELETED
@@ -1,2 +0,0 @@
- name = 'roop'
- version = '1.3.2'
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/label/Label.js DELETED
@@ -1,297 +0,0 @@
1
- import Sizer from '../sizer/Sizer.js';
2
- import AddChildMask from '../../../plugins/gameobjects/container/containerlite/mask/AddChildMask.js';
3
- import SetDisplaySize from '../../../plugins/utils/size/SetDisplaySize.js';
4
- import Methods from './methods/Methods.js';
5
-
6
- const GetValue = Phaser.Utils.Objects.GetValue;
7
-
8
- class Label extends Sizer {
9
- constructor(scene, config) {
10
- // Create sizer
11
- super(scene, config);
12
- this.type = 'rexLabel';
13
-
14
- // Add elements
15
- var background = GetValue(config, 'background', undefined);
16
- var icon = GetValue(config, 'icon', undefined);
17
- var iconMask = GetValue(config, 'iconMask', undefined);
18
- var text = GetValue(config, 'text', undefined);
19
- var action = GetValue(config, 'action', undefined);
20
- var actionMask = GetValue(config, 'actionMask', undefined);
21
- // Align
22
- var align = GetValue(config, 'align', undefined); // undefined/left/top: no space
23
-
24
-
25
- if (background) {
26
- this.addBackground(background);
27
- }
28
-
29
- // Add space
30
- if (
31
- (align === 'right') ||
32
- (align === 'bottom') ||
33
- (align === 'center')
34
- ) {
35
- this.addSpace();
36
- }
37
-
38
- if (icon) {
39
- var iconSpace = GetValue(config, 'space.icon', 0);
40
- var padding;
41
- if (this.orientation === 0) {
42
- if (text || action) {
43
- padding = { right: iconSpace };
44
- }
45
- } else {
46
- if (text || action) {
47
- padding = { bottom: iconSpace };
48
- }
49
- }
50
- var fitRatio = GetValue(config, 'squareFitIcon', false) ? 1 : 0;
51
-
52
- this.add(
53
- icon,
54
- { proportion: 0, padding: padding, fitRatio: fitRatio }
55
- );
56
-
57
- if (iconMask) {
58
- iconMask = AddChildMask.call(this, icon, icon, 1); // Circle mask
59
- }
60
-
61
- if (!fitRatio) {
62
- var iconSize = GetValue(config, 'iconSize', undefined);
63
- this.setIconSize(
64
- GetValue(config, 'iconWidth', iconSize),
65
- GetValue(config, 'iconHeight', iconSize)
66
- );
67
- }
68
- }
69
-
70
-
71
- if (text) {
72
- var textSpace = GetValue(config, 'space.text', 0);
73
- var expandTextWidth = GetValue(config, 'expandTextWidth', false);
74
- var expandTextHeight = GetValue(config, 'expandTextHeight', false);
75
- var proportion, padding, expand;
76
- if (this.orientation === 0) {
77
- proportion = (expandTextWidth) ? 1 : 0;
78
- if (action) {
79
- padding = { right: textSpace };
80
- }
81
- expand = expandTextHeight;
82
- } else {
83
- proportion = (expandTextHeight) ? 1 : 0;
84
- if (action) {
85
- padding = { bottom: textSpace };
86
- }
87
- expand = expandTextWidth;
88
- }
89
-
90
- this.add(
91
- text,
92
- { proportion: proportion, expand: expand, padding: padding, }
93
- );
94
- }
95
-
96
- if (action) {
97
- var fitRatio = GetValue(config, 'squareFitAction', false) ? 1 : 0;
98
- this.add(
99
- action,
100
- { proportion: 0, fitRatio: fitRatio }
101
- );
102
-
103
- if (actionMask) {
104
- actionMask = AddChildMask.call(this, action, action, 1); // Circle mask
105
- }
106
-
107
- if (!fitRatio) {
108
- var actionSize = GetValue(config, 'actionSize');
109
- this.setActionSize(
110
- GetValue(config, 'actionWidth', actionSize),
111
- GetValue(config, 'actionHeight', actionSize)
112
- );
113
- }
114
- }
115
-
116
- // Add space
117
- if (align === 'center') {
118
- this.addSpace();
119
- }
120
-
121
- this.addChildrenMap('background', background);
122
- this.addChildrenMap('icon', icon);
123
- this.addChildrenMap('iconMask', iconMask);
124
- this.addChildrenMap('text', text);
125
- this.addChildrenMap('action', action);
126
- this.addChildrenMap('actionMask', actionMask);
127
- }
128
-
129
- // Access text game object
130
- get text() {
131
- var textObject = this.childrenMap.text;
132
- if (textObject === undefined) {
133
- return '';
134
- }
135
- return textObject.text;
136
- }
137
-
138
- set text(value) {
139
- var textObject = this.childrenMap.text;
140
- if (textObject === undefined) {
141
- return;
142
- }
143
- textObject.setText(value);
144
- }
145
-
146
- setText(value) {
147
- this.text = value;
148
- return this;
149
- }
150
-
151
- // Access icon game object
152
- setIconTexture(key, frame) {
153
- var imageObject = this.childrenMap.icon;
154
- if (imageObject === undefined) {
155
- return this;
156
- }
157
- imageObject.setTexture(key, frame);
158
-
159
- if (this.iconWidth !== undefined) {
160
- SetDisplaySize(imageObject, this.iconWidth, this.iconHeight);
161
- this.resetChildScaleState(imageObject);
162
- }
163
-
164
- return this;
165
- }
166
-
167
- setTexture(key, frame) {
168
- this.setIconTexture(key, frame);
169
- return this;
170
- }
171
-
172
- setIconSize(width, height) {
173
- if (height === undefined) {
174
- height = width;
175
- }
176
-
177
- this.iconWidth = width;
178
- this.iconHeight = height;
179
-
180
- return this;
181
- }
182
-
183
- get texture() {
184
- var imageObject = this.childrenMap.icon;
185
- if (imageObject === undefined) {
186
- return undefined;
187
- }
188
- return imageObject.texture;
189
- }
190
-
191
- get frame() {
192
- var imageObject = this.childrenMap.icon;
193
- if (imageObject === undefined) {
194
- return undefined;
195
- }
196
- return imageObject.frame;
197
- }
198
-
199
- setActionTexture(key, frame) {
200
- var imageObject = this.childrenMap.action;
201
- if (imageObject === undefined) {
202
- return this;
203
- }
204
- imageObject.setTexture(key, frame);
205
-
206
- if (this.actionWidth !== undefined) {
207
- SetDisplaySize(imageObject, this.actionWidth, this.actionHeight);
208
- this.resetChildScaleState(imageObject);
209
- }
210
-
211
- return this;
212
- }
213
-
214
- get actionTexture() {
215
- var imageObject = this.childrenMap.action;
216
- if (imageObject === undefined) {
217
- return undefined;
218
- }
219
- return imageObject.texture;
220
- }
221
-
222
- get actionFrame() {
223
- var imageObject = this.childrenMap.action;
224
- if (imageObject === undefined) {
225
- return undefined;
226
- }
227
- return imageObject.frame;
228
- }
229
-
230
- setActionSize(width, height) {
231
- if (height === undefined) {
232
- height = width;
233
- }
234
-
235
- this.actionWidth = width;
236
- this.actionHeight = height;
237
-
238
- return this;
239
- }
240
-
241
- preLayout() {
242
- var icon = this.childrenMap.icon;
243
- if (icon && (this.iconWidth !== undefined)) {
244
- SetDisplaySize(icon, this.iconWidth, this.iconHeight);
245
- }
246
-
247
- var action = this.childrenMap.action;
248
- if (action && (this.actionWidth !== undefined)) {
249
- SetDisplaySize(action, this.actionWidth, this.actionHeight);
250
- }
251
-
252
- super.preLayout();
253
- }
254
-
255
- runLayout(parent, newWidth, newHeight) {
256
- if (this.ignoreLayout) {
257
- return this;
258
- }
259
-
260
- super.runLayout(parent, newWidth, newHeight);
261
- // Pin icon-mask to icon game object
262
- var iconMask = this.childrenMap.iconMask;
263
- if (iconMask) {
264
- iconMask.setPosition();
265
- this.resetChildPositionState(iconMask);
266
- }
267
- // Pin action-mask to action game object
268
- var actionMask = this.childrenMap.actionMask;
269
- if (actionMask) {
270
- actionMask.setPosition();
271
- this.resetChildPositionState(actionMask);
272
- }
273
- return this;
274
- }
275
-
276
- resize(width, height) {
277
- super.resize(width, height);
278
- // Resize icon-mask to icon game object
279
- var iconMask = this.childrenMap.iconMask;
280
- if (iconMask) {
281
- iconMask.resize();
282
- }
283
- // Resize action-mask to icon game object
284
- var actionMask = this.childrenMap.actionMask;
285
- if (actionMask) {
286
- actionMask.resize();
287
- }
288
- return this;
289
- }
290
- }
291
-
292
- Object.assign(
293
- Label.prototype,
294
- Methods,
295
- )
296
-
297
- export default Label;
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/Factory.js DELETED
@@ -1,13 +0,0 @@
- import Menu from './Menu.js';
- import ObjectFactory from '../ObjectFactory.js';
- import SetValue from '../../../plugins/utils/object/SetValue.js';
-
- ObjectFactory.register('menu', function (config) {
-     var gameObject = new Menu(this.scene, config);
-     this.scene.add.existing(gameObject);
-     return gameObject;
- });
-
- SetValue(window, 'RexPlugins.UI.Menu', Menu);
-
- export default Menu;
 
spaces/AlexWang/lama/saicinpainting/training/modules/multiscale.py DELETED
@@ -1,244 +0,0 @@
1
- from typing import List, Tuple, Union, Optional
2
-
3
- import torch
4
- import torch.nn as nn
5
- import torch.nn.functional as F
6
-
7
- from saicinpainting.training.modules.base import get_conv_block_ctor, get_activation
8
- from saicinpainting.training.modules.pix2pixhd import ResnetBlock
9
-
10
-
11
- class ResNetHead(nn.Module):
12
- def __init__(self, input_nc, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=nn.BatchNorm2d,
13
- padding_type='reflect', conv_kind='default', activation=nn.ReLU(True)):
14
- assert (n_blocks >= 0)
15
- super(ResNetHead, self).__init__()
16
-
17
- conv_layer = get_conv_block_ctor(conv_kind)
18
-
19
- model = [nn.ReflectionPad2d(3),
20
- conv_layer(input_nc, ngf, kernel_size=7, padding=0),
21
- norm_layer(ngf),
22
- activation]
23
-
24
- ### downsample
25
- for i in range(n_downsampling):
26
- mult = 2 ** i
27
- model += [conv_layer(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1),
28
- norm_layer(ngf * mult * 2),
29
- activation]
30
-
31
- mult = 2 ** n_downsampling
32
-
33
- ### resnet blocks
34
- for i in range(n_blocks):
35
- model += [ResnetBlock(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer,
36
- conv_kind=conv_kind)]
37
-
38
- self.model = nn.Sequential(*model)
39
-
40
- def forward(self, input):
41
- return self.model(input)
42
-
43
-
44
- class ResNetTail(nn.Module):
45
- def __init__(self, output_nc, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=nn.BatchNorm2d,
46
- padding_type='reflect', conv_kind='default', activation=nn.ReLU(True),
47
- up_norm_layer=nn.BatchNorm2d, up_activation=nn.ReLU(True), add_out_act=False, out_extra_layers_n=0,
48
- add_in_proj=None):
49
- assert (n_blocks >= 0)
50
- super(ResNetTail, self).__init__()
51
-
52
- mult = 2 ** n_downsampling
53
-
54
- model = []
55
-
56
- if add_in_proj is not None:
57
- model.append(nn.Conv2d(add_in_proj, ngf * mult, kernel_size=1))
58
-
59
- ### resnet blocks
60
- for i in range(n_blocks):
61
- model += [ResnetBlock(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer,
62
- conv_kind=conv_kind)]
63
-
64
- ### upsample
65
- for i in range(n_downsampling):
66
- mult = 2 ** (n_downsampling - i)
67
- model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2), kernel_size=3, stride=2, padding=1,
68
- output_padding=1),
69
- up_norm_layer(int(ngf * mult / 2)),
70
- up_activation]
71
- self.model = nn.Sequential(*model)
72
-
73
- out_layers = []
74
- for _ in range(out_extra_layers_n):
75
- out_layers += [nn.Conv2d(ngf, ngf, kernel_size=1, padding=0),
76
- up_norm_layer(ngf),
77
- up_activation]
78
- out_layers += [nn.ReflectionPad2d(3),
79
- nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)]
80
-
81
- if add_out_act:
82
- out_layers.append(get_activation('tanh' if add_out_act is True else add_out_act))
83
-
84
- self.out_proj = nn.Sequential(*out_layers)
85
-
86
- def forward(self, input, return_last_act=False):
87
- features = self.model(input)
88
- out = self.out_proj(features)
89
- if return_last_act:
90
- return out, features
91
- else:
92
- return out
93
-
94
-
95
- class MultiscaleResNet(nn.Module):
96
- def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=2, n_blocks_head=2, n_blocks_tail=6, n_scales=3,
97
- norm_layer=nn.BatchNorm2d, padding_type='reflect', conv_kind='default', activation=nn.ReLU(True),
98
- up_norm_layer=nn.BatchNorm2d, up_activation=nn.ReLU(True), add_out_act=False, out_extra_layers_n=0,
99
- out_cumulative=False, return_only_hr=False):
100
- super().__init__()
101
-
102
- self.heads = nn.ModuleList([ResNetHead(input_nc, ngf=ngf, n_downsampling=n_downsampling,
103
- n_blocks=n_blocks_head, norm_layer=norm_layer, padding_type=padding_type,
104
- conv_kind=conv_kind, activation=activation)
105
- for i in range(n_scales)])
106
- tail_in_feats = ngf * (2 ** n_downsampling) + ngf
107
- self.tails = nn.ModuleList([ResNetTail(output_nc,
108
- ngf=ngf, n_downsampling=n_downsampling,
109
- n_blocks=n_blocks_tail, norm_layer=norm_layer, padding_type=padding_type,
110
- conv_kind=conv_kind, activation=activation, up_norm_layer=up_norm_layer,
111
- up_activation=up_activation, add_out_act=add_out_act,
112
- out_extra_layers_n=out_extra_layers_n,
113
- add_in_proj=None if (i == n_scales - 1) else tail_in_feats)
114
- for i in range(n_scales)])
115
-
116
- self.out_cumulative = out_cumulative
117
- self.return_only_hr = return_only_hr
118
-
119
- @property
120
- def num_scales(self):
121
- return len(self.heads)
122
-
123
- def forward(self, ms_inputs: List[torch.Tensor], smallest_scales_num: Optional[int] = None) \
124
- -> Union[torch.Tensor, List[torch.Tensor]]:
125
- """
126
- :param ms_inputs: List of inputs of different resolutions from HR to LR
127
- :param smallest_scales_num: int or None, number of smallest scales to take at input
128
- :return: Depending on return_only_hr:
129
- True: Only the most HR output
130
- False: List of outputs of different resolutions from HR to LR
131
- """
132
- if smallest_scales_num is None:
133
- assert len(self.heads) == len(ms_inputs), (len(self.heads), len(ms_inputs), smallest_scales_num)
134
- smallest_scales_num = len(self.heads)
135
- else:
136
- assert smallest_scales_num == len(ms_inputs) <= len(self.heads), (len(self.heads), len(ms_inputs), smallest_scales_num)
137
-
138
- cur_heads = self.heads[-smallest_scales_num:]
139
- ms_features = [cur_head(cur_inp) for cur_head, cur_inp in zip(cur_heads, ms_inputs)]
140
-
141
- all_outputs = []
142
- prev_tail_features = None
143
- for i in range(len(ms_features)):
144
- scale_i = -i - 1
145
-
146
- cur_tail_input = ms_features[-i - 1]
147
- if prev_tail_features is not None:
148
- if prev_tail_features.shape != cur_tail_input.shape:
149
- prev_tail_features = F.interpolate(prev_tail_features, size=cur_tail_input.shape[2:],
150
- mode='bilinear', align_corners=False)
151
- cur_tail_input = torch.cat((cur_tail_input, prev_tail_features), dim=1)
152
-
153
- cur_out, cur_tail_feats = self.tails[scale_i](cur_tail_input, return_last_act=True)
154
-
155
- prev_tail_features = cur_tail_feats
156
- all_outputs.append(cur_out)
157
-
158
- if self.out_cumulative:
159
- all_outputs_cum = [all_outputs[0]]
160
- for i in range(1, len(ms_features)):
161
- cur_out = all_outputs[i]
162
- cur_out_cum = cur_out + F.interpolate(all_outputs_cum[-1], size=cur_out.shape[2:],
163
- mode='bilinear', align_corners=False)
164
- all_outputs_cum.append(cur_out_cum)
165
- all_outputs = all_outputs_cum
166
-
167
- if self.return_only_hr:
168
- return all_outputs[-1]
169
- else:
170
- return all_outputs[::-1]
171
-
172
-
173
- class MultiscaleDiscriminatorSimple(nn.Module):
174
- def __init__(self, ms_impl):
175
- super().__init__()
176
- self.ms_impl = nn.ModuleList(ms_impl)
177
-
178
- @property
179
- def num_scales(self):
180
- return len(self.ms_impl)
181
-
182
- def forward(self, ms_inputs: List[torch.Tensor], smallest_scales_num: Optional[int] = None) \
183
- -> List[Tuple[torch.Tensor, List[torch.Tensor]]]:
184
- """
185
- :param ms_inputs: List of inputs of different resolutions from HR to LR
186
- :param smallest_scales_num: int or None, number of smallest scales to take at input
187
- :return: List of pairs (prediction, features) for different resolutions from HR to LR
188
- """
189
- if smallest_scales_num is None:
190
- assert len(self.ms_impl) == len(ms_inputs), (len(self.ms_impl), len(ms_inputs), smallest_scales_num)
191
- smallest_scales_num = len(self.heads)
192
- else:
193
- assert smallest_scales_num == len(ms_inputs) <= len(self.ms_impl), \
194
- (len(self.ms_impl), len(ms_inputs), smallest_scales_num)
195
-
196
- return [cur_discr(cur_input) for cur_discr, cur_input in zip(self.ms_impl[-smallest_scales_num:], ms_inputs)]
197
-
198
-
199
- class SingleToMultiScaleInputMixin:
200
- def forward(self, x: torch.Tensor) -> List:
201
- orig_height, orig_width = x.shape[2:]
202
- factors = [2 ** i for i in range(self.num_scales)]
203
- ms_inputs = [F.interpolate(x, size=(orig_height // f, orig_width // f), mode='bilinear', align_corners=False)
204
- for f in factors]
205
- return super().forward(ms_inputs)
206
-
207
-
208
- class GeneratorMultiToSingleOutputMixin:
209
- def forward(self, x):
210
- return super().forward(x)[0]
211
-
212
-
213
- class DiscriminatorMultiToSingleOutputMixin:
214
- def forward(self, x):
215
- out_feat_tuples = super().forward(x)
216
- return out_feat_tuples[0][0], [f for _, flist in out_feat_tuples for f in flist]
217
-
218
-
219
- class DiscriminatorMultiToSingleOutputStackedMixin:
220
- def __init__(self, *args, return_feats_only_levels=None, **kwargs):
221
- super().__init__(*args, **kwargs)
222
- self.return_feats_only_levels = return_feats_only_levels
223
-
224
- def forward(self, x):
225
- out_feat_tuples = super().forward(x)
226
- outs = [out for out, _ in out_feat_tuples]
227
- scaled_outs = [outs[0]] + [F.interpolate(cur_out, size=outs[0].shape[-2:],
228
- mode='bilinear', align_corners=False)
229
- for cur_out in outs[1:]]
230
- out = torch.cat(scaled_outs, dim=1)
231
- if self.return_feats_only_levels is not None:
232
- feat_lists = [out_feat_tuples[i][1] for i in self.return_feats_only_levels]
233
- else:
234
- feat_lists = [flist for _, flist in out_feat_tuples]
235
- feats = [f for flist in feat_lists for f in flist]
236
- return out, feats
237
-
238
-
239
- class MultiscaleDiscrSingleInput(SingleToMultiScaleInputMixin, DiscriminatorMultiToSingleOutputStackedMixin, MultiscaleDiscriminatorSimple):
240
- pass
241
-
242
-
243
- class MultiscaleResNetSingle(GeneratorMultiToSingleOutputMixin, SingleToMultiScaleInputMixin, MultiscaleResNet):
244
- pass
 
spaces/Alpaca233/SadTalker/src/face3d/extract_kp_videos.py DELETED
@@ -1,108 +0,0 @@
1
- import os
2
- import cv2
3
- import time
4
- import glob
5
- import argparse
6
- import face_alignment
7
- import numpy as np
8
- from PIL import Image
9
- from tqdm import tqdm
10
- from itertools import cycle
11
-
12
- from torch.multiprocessing import Pool, Process, set_start_method
13
-
14
- class KeypointExtractor():
15
- def __init__(self, device):
16
- self.detector = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D,
17
- device=device)
18
-
19
- def extract_keypoint(self, images, name=None, info=True):
20
- if isinstance(images, list):
21
- keypoints = []
22
- if info:
23
- i_range = tqdm(images,desc='landmark Det:')
24
- else:
25
- i_range = images
26
-
27
- for image in i_range:
28
- current_kp = self.extract_keypoint(image)
29
- if np.mean(current_kp) == -1 and keypoints:
30
- keypoints.append(keypoints[-1])
31
- else:
32
- keypoints.append(current_kp[None])
33
-
34
- keypoints = np.concatenate(keypoints, 0)
35
- np.savetxt(os.path.splitext(name)[0]+'.txt', keypoints.reshape(-1))
36
- return keypoints
37
- else:
38
- while True:
39
- try:
40
- keypoints = self.detector.get_landmarks_from_image(np.array(images))[0]
41
- break
42
- except RuntimeError as e:
43
- if str(e).startswith('CUDA'):
44
- print("Warning: out of memory, sleep for 1s")
45
- time.sleep(1)
46
- else:
47
- print(e)
48
- break
49
- except TypeError:
50
- print('No face detected in this image')
51
- shape = [68, 2]
52
- keypoints = -1. * np.ones(shape)
53
- break
54
- if name is not None:
55
- np.savetxt(os.path.splitext(name)[0]+'.txt', keypoints.reshape(-1))
56
- return keypoints
57
-
58
- def read_video(filename):
59
- frames = []
60
- cap = cv2.VideoCapture(filename)
61
- while cap.isOpened():
62
- ret, frame = cap.read()
63
- if ret:
64
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
65
- frame = Image.fromarray(frame)
66
- frames.append(frame)
67
- else:
68
- break
69
- cap.release()
70
- return frames
71
-
72
- def run(data):
73
- filename, opt, device = data
74
- os.environ['CUDA_VISIBLE_DEVICES'] = device
75
- kp_extractor = KeypointExtractor()
76
- images = read_video(filename)
77
- name = filename.split('/')[-2:]
78
- os.makedirs(os.path.join(opt.output_dir, name[-2]), exist_ok=True)
79
- kp_extractor.extract_keypoint(
80
- images,
81
- name=os.path.join(opt.output_dir, name[-2], name[-1])
82
- )
83
-
84
- if __name__ == '__main__':
85
- set_start_method('spawn')
86
- parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
87
- parser.add_argument('--input_dir', type=str, help='the folder of the input files')
88
- parser.add_argument('--output_dir', type=str, help='the folder of the output files')
89
- parser.add_argument('--device_ids', type=str, default='0,1')
90
- parser.add_argument('--workers', type=int, default=4)
91
-
92
- opt = parser.parse_args()
93
- filenames = list()
94
- VIDEO_EXTENSIONS_LOWERCASE = {'mp4'}
95
- VIDEO_EXTENSIONS = VIDEO_EXTENSIONS_LOWERCASE.union({f.upper() for f in VIDEO_EXTENSIONS_LOWERCASE})
96
- extensions = VIDEO_EXTENSIONS
97
-
98
- for ext in extensions:
99
- os.listdir(f'{opt.input_dir}')
100
- print(f'{opt.input_dir}/*.{ext}')
101
- filenames = sorted(glob.glob(f'{opt.input_dir}/*.{ext}'))
102
- print('Total number of videos:', len(filenames))
103
- pool = Pool(opt.workers)
104
- args_list = cycle([opt])
105
- device_ids = opt.device_ids.split(",")
106
- device_ids = cycle(device_ids)
107
- for data in tqdm(pool.imap_unordered(run, zip(filenames, args_list, device_ids))):
108
- None
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/__init__.py DELETED
@@ -1,188 +0,0 @@
1
- from ..utils import (
2
- OptionalDependencyNotAvailable,
3
- is_flax_available,
4
- is_k_diffusion_available,
5
- is_librosa_available,
6
- is_note_seq_available,
7
- is_onnx_available,
8
- is_torch_available,
9
- is_transformers_available,
10
- )
11
-
12
-
13
- try:
14
- if not is_torch_available():
15
- raise OptionalDependencyNotAvailable()
16
- except OptionalDependencyNotAvailable:
17
- from ..utils.dummy_pt_objects import * # noqa F403
18
- else:
19
- from .auto_pipeline import AutoPipelineForImage2Image, AutoPipelineForInpainting, AutoPipelineForText2Image
20
- from .consistency_models import ConsistencyModelPipeline
21
- from .dance_diffusion import DanceDiffusionPipeline
22
- from .ddim import DDIMPipeline
23
- from .ddpm import DDPMPipeline
24
- from .dit import DiTPipeline
25
- from .latent_diffusion import LDMSuperResolutionPipeline
26
- from .latent_diffusion_uncond import LDMPipeline
27
- from .pipeline_utils import AudioPipelineOutput, DiffusionPipeline, ImagePipelineOutput
28
- from .pndm import PNDMPipeline
29
- from .repaint import RePaintPipeline
30
- from .score_sde_ve import ScoreSdeVePipeline
31
- from .stochastic_karras_ve import KarrasVePipeline
32
-
33
- try:
34
- if not (is_torch_available() and is_librosa_available()):
35
- raise OptionalDependencyNotAvailable()
36
- except OptionalDependencyNotAvailable:
37
- from ..utils.dummy_torch_and_librosa_objects import * # noqa F403
38
- else:
39
- from .audio_diffusion import AudioDiffusionPipeline, Mel
40
-
41
- try:
42
- if not (is_torch_available() and is_transformers_available()):
43
- raise OptionalDependencyNotAvailable()
44
- except OptionalDependencyNotAvailable:
45
- from ..utils.dummy_torch_and_transformers_objects import * # noqa F403
46
- else:
47
- from .alt_diffusion import AltDiffusionImg2ImgPipeline, AltDiffusionPipeline
48
- from .audioldm import AudioLDMPipeline
49
- from .controlnet import (
50
- StableDiffusionControlNetImg2ImgPipeline,
51
- StableDiffusionControlNetInpaintPipeline,
52
- StableDiffusionControlNetPipeline,
53
- StableDiffusionXLControlNetPipeline,
54
- )
55
- from .deepfloyd_if import (
56
- IFImg2ImgPipeline,
57
- IFImg2ImgSuperResolutionPipeline,
58
- IFInpaintingPipeline,
59
- IFInpaintingSuperResolutionPipeline,
60
- IFPipeline,
61
- IFSuperResolutionPipeline,
62
- )
63
- from .kandinsky import (
64
- KandinskyCombinedPipeline,
65
- KandinskyImg2ImgCombinedPipeline,
66
- KandinskyImg2ImgPipeline,
67
- KandinskyInpaintCombinedPipeline,
68
- KandinskyInpaintPipeline,
69
- KandinskyPipeline,
70
- KandinskyPriorPipeline,
71
- )
72
- from .kandinsky2_2 import (
73
- KandinskyV22CombinedPipeline,
74
- KandinskyV22ControlnetImg2ImgPipeline,
75
- KandinskyV22ControlnetPipeline,
76
- KandinskyV22Img2ImgCombinedPipeline,
77
- KandinskyV22Img2ImgPipeline,
78
- KandinskyV22InpaintCombinedPipeline,
79
- KandinskyV22InpaintPipeline,
80
- KandinskyV22Pipeline,
81
- KandinskyV22PriorEmb2EmbPipeline,
82
- KandinskyV22PriorPipeline,
83
- )
84
- from .latent_diffusion import LDMTextToImagePipeline
85
- from .paint_by_example import PaintByExamplePipeline
86
- from .semantic_stable_diffusion import SemanticStableDiffusionPipeline
87
- from .shap_e import ShapEImg2ImgPipeline, ShapEPipeline
88
- from .stable_diffusion import (
89
- CycleDiffusionPipeline,
90
- StableDiffusionAttendAndExcitePipeline,
91
- StableDiffusionDepth2ImgPipeline,
92
- StableDiffusionDiffEditPipeline,
93
- StableDiffusionImageVariationPipeline,
94
- StableDiffusionImg2ImgPipeline,
95
- StableDiffusionInpaintPipeline,
96
- StableDiffusionInpaintPipelineLegacy,
97
- StableDiffusionInstructPix2PixPipeline,
98
- StableDiffusionLatentUpscalePipeline,
99
- StableDiffusionLDM3DPipeline,
100
- StableDiffusionModelEditingPipeline,
101
- StableDiffusionPanoramaPipeline,
102
- StableDiffusionParadigmsPipeline,
103
- StableDiffusionPipeline,
104
- StableDiffusionPix2PixZeroPipeline,
105
- StableDiffusionSAGPipeline,
106
- StableDiffusionUpscalePipeline,
107
- StableUnCLIPImg2ImgPipeline,
108
- StableUnCLIPPipeline,
109
- )
110
- from .stable_diffusion_safe import StableDiffusionPipelineSafe
111
- from .stable_diffusion_xl import (
112
- StableDiffusionXLImg2ImgPipeline,
113
- StableDiffusionXLInpaintPipeline,
114
- StableDiffusionXLInstructPix2PixPipeline,
115
- StableDiffusionXLPipeline,
116
- )
117
- from .t2i_adapter import StableDiffusionAdapterPipeline
118
- from .text_to_video_synthesis import TextToVideoSDPipeline, TextToVideoZeroPipeline, VideoToVideoSDPipeline
119
- from .unclip import UnCLIPImageVariationPipeline, UnCLIPPipeline
120
- from .unidiffuser import ImageTextPipelineOutput, UniDiffuserModel, UniDiffuserPipeline, UniDiffuserTextDecoder
121
- from .versatile_diffusion import (
122
- VersatileDiffusionDualGuidedPipeline,
123
- VersatileDiffusionImageVariationPipeline,
124
- VersatileDiffusionPipeline,
125
- VersatileDiffusionTextToImagePipeline,
126
- )
127
- from .vq_diffusion import VQDiffusionPipeline
128
-
129
-
130
- try:
131
- if not is_onnx_available():
132
- raise OptionalDependencyNotAvailable()
133
- except OptionalDependencyNotAvailable:
134
- from ..utils.dummy_onnx_objects import * # noqa F403
135
- else:
136
- from .onnx_utils import OnnxRuntimeModel
137
-
138
- try:
139
- if not (is_torch_available() and is_transformers_available() and is_onnx_available()):
140
- raise OptionalDependencyNotAvailable()
141
- except OptionalDependencyNotAvailable:
142
- from ..utils.dummy_torch_and_transformers_and_onnx_objects import * # noqa F403
143
- else:
144
- from .stable_diffusion import (
145
- OnnxStableDiffusionImg2ImgPipeline,
146
- OnnxStableDiffusionInpaintPipeline,
147
- OnnxStableDiffusionInpaintPipelineLegacy,
148
- OnnxStableDiffusionPipeline,
149
- OnnxStableDiffusionUpscalePipeline,
150
- StableDiffusionOnnxPipeline,
151
- )
152
-
153
- try:
154
- if not (is_torch_available() and is_transformers_available() and is_k_diffusion_available()):
155
- raise OptionalDependencyNotAvailable()
156
- except OptionalDependencyNotAvailable:
157
- from ..utils.dummy_torch_and_transformers_and_k_diffusion_objects import * # noqa F403
158
- else:
159
- from .stable_diffusion import StableDiffusionKDiffusionPipeline
160
-
161
- try:
162
- if not is_flax_available():
163
- raise OptionalDependencyNotAvailable()
164
- except OptionalDependencyNotAvailable:
165
- from ..utils.dummy_flax_objects import * # noqa F403
166
- else:
167
- from .pipeline_flax_utils import FlaxDiffusionPipeline
168
-
169
-
170
- try:
171
- if not (is_flax_available() and is_transformers_available()):
172
- raise OptionalDependencyNotAvailable()
173
- except OptionalDependencyNotAvailable:
174
- from ..utils.dummy_flax_and_transformers_objects import * # noqa F403
175
- else:
176
- from .controlnet import FlaxStableDiffusionControlNetPipeline
177
- from .stable_diffusion import (
178
- FlaxStableDiffusionImg2ImgPipeline,
179
- FlaxStableDiffusionInpaintPipeline,
180
- FlaxStableDiffusionPipeline,
181
- )
182
- try:
183
- if not (is_transformers_available() and is_torch_available() and is_note_seq_available()):
184
- raise OptionalDependencyNotAvailable()
185
- except OptionalDependencyNotAvailable:
186
- from ..utils.dummy_transformers_and_torch_and_note_seq_objects import * # noqa F403
187
- else:
188
- from .spectrogram_diffusion import MidiProcessor, SpectrogramDiffusionPipeline
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_torchsde_objects.py DELETED
@@ -1,17 +0,0 @@
- # This file is autogenerated by the command `make fix-copies`, do not edit.
- from ..utils import DummyObject, requires_backends
-
-
- class DPMSolverSDEScheduler(metaclass=DummyObject):
-     _backends = ["torch", "torchsde"]
-
-     def __init__(self, *args, **kwargs):
-         requires_backends(self, ["torch", "torchsde"])
-
-     @classmethod
-     def from_config(cls, *args, **kwargs):
-         requires_backends(cls, ["torch", "torchsde"])
-
-     @classmethod
-     def from_pretrained(cls, *args, **kwargs):
-         requires_backends(cls, ["torch", "torchsde"])
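The file above is part of diffusers' optional-dependency machinery: when `torchsde` is missing, this placeholder is exported instead of the real scheduler, so the import still succeeds and the failure is deferred to the point of use. A minimal, self-contained sketch of the same pattern (not the exact diffusers implementation, which lives in `diffusers.utils`):

```python
# Standard-library-only sketch of the dummy-object pattern; diffusers' real
# DummyObject / requires_backends helpers differ in detail.
class DummyObject(type):
    """Metaclass that lets a placeholder class be imported but not used."""

    def __getattr__(cls, name):
        raise ImportError(f"{cls.__name__} requires the 'torch' and 'torchsde' backends")


class DPMSolverSDEScheduler(metaclass=DummyObject):
    def __init__(self, *args, **kwargs):
        raise ImportError("DPMSolverSDEScheduler requires 'torch' and 'torchsde'")


# Importing the name costs nothing; only instantiation surfaces the missing backend.
try:
    DPMSolverSDEScheduler()
except ImportError as err:
    print(err)
```
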
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_euler.py DELETED
@@ -1,146 +0,0 @@
- import torch
-
- from diffusers import EulerDiscreteScheduler
- from diffusers.utils import torch_device
-
- from .test_schedulers import SchedulerCommonTest
-
-
- class EulerDiscreteSchedulerTest(SchedulerCommonTest):
-     scheduler_classes = (EulerDiscreteScheduler,)
-     num_inference_steps = 10
-
-     def get_scheduler_config(self, **kwargs):
-         config = {
-             "num_train_timesteps": 1100,
-             "beta_start": 0.0001,
-             "beta_end": 0.02,
-             "beta_schedule": "linear",
-         }
-
-         config.update(**kwargs)
-         return config
-
-     def test_timesteps(self):
-         for timesteps in [10, 50, 100, 1000]:
-             self.check_over_configs(num_train_timesteps=timesteps)
-
-     def test_betas(self):
-         for beta_start, beta_end in zip([0.00001, 0.0001, 0.001], [0.0002, 0.002, 0.02]):
-             self.check_over_configs(beta_start=beta_start, beta_end=beta_end)
-
-     def test_schedules(self):
-         for schedule in ["linear", "scaled_linear"]:
-             self.check_over_configs(beta_schedule=schedule)
-
-     def test_prediction_type(self):
-         for prediction_type in ["epsilon", "v_prediction"]:
-             self.check_over_configs(prediction_type=prediction_type)
-
-     def test_full_loop_no_noise(self):
-         scheduler_class = self.scheduler_classes[0]
-         scheduler_config = self.get_scheduler_config()
-         scheduler = scheduler_class(**scheduler_config)
-
-         scheduler.set_timesteps(self.num_inference_steps)
-
-         generator = torch.manual_seed(0)
-
-         model = self.dummy_model()
-         sample = self.dummy_sample_deter * scheduler.init_noise_sigma
-         sample = sample.to(torch_device)
-
-         for i, t in enumerate(scheduler.timesteps):
-             sample = scheduler.scale_model_input(sample, t)
-
-             model_output = model(sample, t)
-
-             output = scheduler.step(model_output, t, sample, generator=generator)
-             sample = output.prev_sample
-
-         result_sum = torch.sum(torch.abs(sample))
-         result_mean = torch.mean(torch.abs(sample))
-
-         assert abs(result_sum.item() - 10.0807) < 1e-2
-         assert abs(result_mean.item() - 0.0131) < 1e-3
-
-     def test_full_loop_with_v_prediction(self):
-         scheduler_class = self.scheduler_classes[0]
-         scheduler_config = self.get_scheduler_config(prediction_type="v_prediction")
-         scheduler = scheduler_class(**scheduler_config)
-
-         scheduler.set_timesteps(self.num_inference_steps)
-
-         generator = torch.manual_seed(0)
-
-         model = self.dummy_model()
-         sample = self.dummy_sample_deter * scheduler.init_noise_sigma
-         sample = sample.to(torch_device)
-
-         for i, t in enumerate(scheduler.timesteps):
-             sample = scheduler.scale_model_input(sample, t)
-
-             model_output = model(sample, t)
-
-             output = scheduler.step(model_output, t, sample, generator=generator)
-             sample = output.prev_sample
-
-         result_sum = torch.sum(torch.abs(sample))
-         result_mean = torch.mean(torch.abs(sample))
-
-         assert abs(result_sum.item() - 0.0002) < 1e-2
-         assert abs(result_mean.item() - 2.2676e-06) < 1e-3
-
-     def test_full_loop_device(self):
-         scheduler_class = self.scheduler_classes[0]
-         scheduler_config = self.get_scheduler_config()
-         scheduler = scheduler_class(**scheduler_config)
-
-         scheduler.set_timesteps(self.num_inference_steps, device=torch_device)
-
-         generator = torch.manual_seed(0)
-
-         model = self.dummy_model()
-         sample = self.dummy_sample_deter * scheduler.init_noise_sigma.cpu()
-         sample = sample.to(torch_device)
-
-         for t in scheduler.timesteps:
-             sample = scheduler.scale_model_input(sample, t)
-
-             model_output = model(sample, t)
-
-             output = scheduler.step(model_output, t, sample, generator=generator)
-             sample = output.prev_sample
-
-         result_sum = torch.sum(torch.abs(sample))
-         result_mean = torch.mean(torch.abs(sample))
-
-         assert abs(result_sum.item() - 10.0807) < 1e-2
-         assert abs(result_mean.item() - 0.0131) < 1e-3
-
-     def test_full_loop_device_karras_sigmas(self):
-         scheduler_class = self.scheduler_classes[0]
-         scheduler_config = self.get_scheduler_config()
-         scheduler = scheduler_class(**scheduler_config, use_karras_sigmas=True)
-
-         scheduler.set_timesteps(self.num_inference_steps, device=torch_device)
-
-         generator = torch.manual_seed(0)
-
-         model = self.dummy_model()
-         sample = self.dummy_sample_deter * scheduler.init_noise_sigma.cpu()
-         sample = sample.to(torch_device)
-
-         for t in scheduler.timesteps:
-             sample = scheduler.scale_model_input(sample, t)
-
-             model_output = model(sample, t)
-
-             output = scheduler.step(model_output, t, sample, generator=generator)
-             sample = output.prev_sample
-
-         result_sum = torch.sum(torch.abs(sample))
-         result_mean = torch.mean(torch.abs(sample))
-
-         assert abs(result_sum.item() - 124.52299499511719) < 1e-2
-         assert abs(result_mean.item() - 0.16213932633399963) < 1e-3
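The deleted test exercises the scheduler's public loop; outside the test harness the same three calls (`set_timesteps`, `scale_model_input`, `step`) are all a caller needs. A hedged sketch with a constant-zero stand-in for the UNet prediction:

```python
# Sketch only: the zero "prediction" replaces a real UNet, so the result is not
# a meaningful image; the point is the scheduler call sequence.
import torch
from diffusers import EulerDiscreteScheduler

scheduler = EulerDiscreteScheduler(num_train_timesteps=1000, beta_schedule="linear")
scheduler.set_timesteps(10)

sample = torch.randn(1, 3, 32, 32) * scheduler.init_noise_sigma
for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)
    noise_pred = torch.zeros_like(model_input)   # stand-in for model(model_input, t)
    sample = scheduler.step(noise_pred, t, sample).prev_sample
```
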
spaces/Andy1621/uniformer_image_segmentation/configs/emanet/emanet_r101-d8_769x769_80k_cityscapes.py DELETED
@@ -1,2 +0,0 @@
- _base_ = './emanet_r50-d8_769x769_80k_cityscapes.py'
- model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))

spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_d6_r50b-d16_512x1024_80k_cityscapes.py DELETED
@@ -1,2 +0,0 @@
- _base_ = './fcn_d6_r50-d16_512x1024_80k_cityscapes.py'
- model = dict(pretrained='torchvision://resnet50', backbone=dict(type='ResNet'))

spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r101b-d8_769x769_80k_cityscapes.py DELETED
@@ -1,4 +0,0 @@
- _base_ = './pspnet_r50-d8_769x769_80k_cityscapes.py'
- model = dict(
-     pretrained='torchvision://resnet101',
-     backbone=dict(type='ResNet', depth=101))
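These one- and two-line configs rely on mmcv's `_base_` inheritance: the child file is merged onto the referenced base config and overrides only the keys it names. A hedged sketch of reading the merged result (assumes mmcv is installed and the repository's config paths):

```python
# Sketch: load the pspnet_r101b config above and inspect the merged values.
from mmcv import Config

cfg = Config.fromfile('configs/pspnet/pspnet_r101b-d8_769x769_80k_cityscapes.py')
print(cfg.model.backbone.depth)   # 101 (overridden here; the base file uses 50)
print(cfg.model.pretrained)       # 'torchvision://resnet101'
```
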
spaces/Anonymous-sub/Rerender/flow/flow_utils.py DELETED
@@ -1,218 +0,0 @@
- import os
- import sys
-
- import numpy as np
- import torch
- import torch.nn.functional as F
-
- parent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
- gmflow_dir = os.path.join(parent_dir, 'gmflow_module')
- sys.path.insert(0, gmflow_dir)
-
- from gmflow.gmflow import GMFlow  # noqa: E702 E402 F401
- from utils.utils import InputPadder  # noqa: E702 E402
-
- import huggingface_hub
-
- repo_name = 'Anonymous-sub/Rerender'
-
- global_device = 'cuda' if torch.cuda.is_available() else 'cpu'
- gmflow_path = huggingface_hub.hf_hub_download(
-     repo_name, 'models/gmflow_sintel-0c07dcb3.pth', local_dir='./')
-
-
- def coords_grid(b, h, w, homogeneous=False, device=None):
-     y, x = torch.meshgrid(torch.arange(h), torch.arange(w))  # [H, W]
-
-     stacks = [x, y]
-
-     if homogeneous:
-         ones = torch.ones_like(x)  # [H, W]
-         stacks.append(ones)
-
-     grid = torch.stack(stacks, dim=0).float()  # [2, H, W] or [3, H, W]
-
-     grid = grid[None].repeat(b, 1, 1, 1)  # [B, 2, H, W] or [B, 3, H, W]
-
-     if device is not None:
-         grid = grid.to(global_device)
-
-     return grid
-
-
- def bilinear_sample(img,
-                     sample_coords,
-                     mode='bilinear',
-                     padding_mode='zeros',
-                     return_mask=False):
-     # img: [B, C, H, W]
-     # sample_coords: [B, 2, H, W] in image scale
-     if sample_coords.size(1) != 2:  # [B, H, W, 2]
-         sample_coords = sample_coords.permute(0, 3, 1, 2)
-
-     b, _, h, w = sample_coords.shape
-
-     # Normalize to [-1, 1]
-     x_grid = 2 * sample_coords[:, 0] / (w - 1) - 1
-     y_grid = 2 * sample_coords[:, 1] / (h - 1) - 1
-
-     grid = torch.stack([x_grid, y_grid], dim=-1)  # [B, H, W, 2]
-
-     img = F.grid_sample(img,
-                         grid,
-                         mode=mode,
-                         padding_mode=padding_mode,
-                         align_corners=True)
-
-     if return_mask:
-         mask = (x_grid >= -1) & (y_grid >= -1) & (x_grid <= 1) & (
-             y_grid <= 1)  # [B, H, W]
-
-         return img, mask
-
-     return img
-
-
- def flow_warp(feature,
-               flow,
-               mask=False,
-               mode='bilinear',
-               padding_mode='zeros'):
-     b, c, h, w = feature.size()
-     assert flow.size(1) == 2
-
-     grid = coords_grid(b, h, w).to(flow.device) + flow  # [B, 2, H, W]
-
-     return bilinear_sample(feature,
-                            grid,
-                            mode=mode,
-                            padding_mode=padding_mode,
-                            return_mask=mask)
-
-
- def forward_backward_consistency_check(fwd_flow,
-                                        bwd_flow,
-                                        alpha=0.01,
-                                        beta=0.5):
-     # fwd_flow, bwd_flow: [B, 2, H, W]
-     # alpha and beta values are following UnFlow
-     # (https://arxiv.org/abs/1711.07837)
-     assert fwd_flow.dim() == 4 and bwd_flow.dim() == 4
-     assert fwd_flow.size(1) == 2 and bwd_flow.size(1) == 2
-     flow_mag = torch.norm(fwd_flow, dim=1) + torch.norm(bwd_flow,
-                                                         dim=1)  # [B, H, W]
-
-     warped_bwd_flow = flow_warp(bwd_flow, fwd_flow)  # [B, 2, H, W]
-     warped_fwd_flow = flow_warp(fwd_flow, bwd_flow)  # [B, 2, H, W]
-
-     diff_fwd = torch.norm(fwd_flow + warped_bwd_flow, dim=1)  # [B, H, W]
-     diff_bwd = torch.norm(bwd_flow + warped_fwd_flow, dim=1)
-
-     threshold = alpha * flow_mag + beta
-
-     fwd_occ = (diff_fwd > threshold).float()  # [B, H, W]
-     bwd_occ = (diff_bwd > threshold).float()
-
-     return fwd_occ, bwd_occ
-
-
- @torch.no_grad()
- def get_warped_and_mask(flow_model,
-                         image1,
-                         image2,
-                         image3=None,
-                         pixel_consistency=False):
-     if image3 is None:
-         image3 = image1
-     padder = InputPadder(image1.shape, padding_factor=8)
-     image1, image2 = padder.pad(image1[None].to(global_device),
-                                 image2[None].to(global_device))
-     results_dict = flow_model(image1,
-                               image2,
-                               attn_splits_list=[2],
-                               corr_radius_list=[-1],
-                               prop_radius_list=[-1],
-                               pred_bidir_flow=True)
-     flow_pr = results_dict['flow_preds'][-1]  # [B, 2, H, W]
-     fwd_flow = padder.unpad(flow_pr[0]).unsqueeze(0)  # [1, 2, H, W]
-     bwd_flow = padder.unpad(flow_pr[1]).unsqueeze(0)  # [1, 2, H, W]
-     fwd_occ, bwd_occ = forward_backward_consistency_check(
-         fwd_flow, bwd_flow)  # [1, H, W] float
-     if pixel_consistency:
-         warped_image1 = flow_warp(image1, bwd_flow)
-         bwd_occ = torch.clamp(
-             bwd_occ +
-             (abs(image2 - warped_image1).mean(dim=1) > 255 * 0.25).float(), 0,
-             1).unsqueeze(0)
-     warped_results = flow_warp(image3, bwd_flow)
-     return warped_results, bwd_occ, bwd_flow
-
-
- class FlowCalc():
-
-     def __init__(self, model_path='./models/gmflow_sintel-0c07dcb3.pth'):
-         flow_model = GMFlow(
-             feature_channels=128,
-             num_scales=1,
-             upsample_factor=8,
-             num_head=1,
-             attention_type='swin',
-             ffn_dim_expansion=4,
-             num_transformer_layers=6,
-         ).to(global_device)
-         checkpoint = torch.load(model_path,
-                                 map_location=lambda storage, loc: storage)
-         weights = checkpoint['model'] if 'model' in checkpoint else checkpoint
-         flow_model.load_state_dict(weights, strict=False)
-         flow_model.eval()
-         self.model = flow_model
-
-     @torch.no_grad()
-     def get_flow(self, image1, image2, save_path=None):
-         if save_path is not None and os.path.exists(save_path):
-             bwd_flow = read_flow(save_path)
-             return bwd_flow
-
-         image1 = torch.from_numpy(image1).permute(2, 0, 1).float()
-         image2 = torch.from_numpy(image2).permute(2, 0, 1).float()
-         padder = InputPadder(image1.shape, padding_factor=8)
-         image1, image2 = padder.pad(image1[None].to(global_device),
-                                     image2[None].to(global_device))
-         results_dict = self.model(image1,
-                                   image2,
-                                   attn_splits_list=[2],
-                                   corr_radius_list=[-1],
-                                   prop_radius_list=[-1],
-                                   pred_bidir_flow=True)
-         flow_pr = results_dict['flow_preds'][-1]  # [B, 2, H, W]
-         bwd_flow = padder.unpad(flow_pr[1]).unsqueeze(0)  # [1, 2, H, W]
-         if save_path is not None:
-             flow_np = bwd_flow.cpu().numpy()
-             np.save(save_path, flow_np)
-
-         return bwd_flow
-
-     def warp(self, img, flow, mode='bilinear'):
-         expand = False
-         if len(img.shape) == 2:
-             expand = True
-             img = np.expand_dims(img, 2)
-
-         img = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)
-         dtype = img.dtype
-         img = img.to(torch.float)
-         res = flow_warp(img, flow, mode=mode)
-         res = res.to(dtype)
-         res = res[0].cpu().permute(1, 2, 0).numpy()
-         if expand:
-             res = res[:, :, 0]
-         return res
-
-
- def read_flow(save_path):
-     flow_np = np.load(save_path)
-     bwd_flow = torch.from_numpy(flow_np)
-     return bwd_flow
-
-
- flow_calc = FlowCalc()
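Importing this module downloads the GMFlow checkpoint and builds a global `flow_calc` instance, so typical use is two calls: estimate the backward flow between consecutive frames, then warp with it. A hedged sketch (random arrays stand in for real video frames; a CPU-only run is assumed so the inputs and the flow live on the same device):

```python
# Sketch only: frame1/frame2 are placeholders for consecutive video frames.
import numpy as np

frame1 = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)
frame2 = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)

bwd_flow = flow_calc.get_flow(frame1, frame2)   # [1, 2, H, W] backward flow
warped = flow_calc.warp(frame1, bwd_flow)       # frame1 resampled towards frame2
print(warped.shape)                             # (256, 256, 3)
```
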
spaces/AnthonyTruchetPoC/persistent-docker/scripts/run-coverage.sh DELETED
@@ -1,4 +0,0 @@
- #!/usr/bin/env sh
- poetry run coverage run --parallel -m pytest
- poetry run coverage combine
- poetry run coverage report

spaces/Apex-X/nono/app.py DELETED
@@ -1,69 +0,0 @@
- # -* coding:UTF-8 -*
- # !/usr/bin/env python
- import numpy as np
- import gradio as gr
- import roop.globals
- from roop.core import (
-     start,
-     decode_execution_providers,
-     suggest_max_memory,
-     suggest_execution_threads,
- )
- from roop.processors.frame.core import get_frame_processors_modules
- from roop.utilities import normalize_output_path
- import os
- from PIL import Image
-
-
- def swap_face(source_file, target_file):
-
-     source_path = "input.jpg"
-     target_path = "target.jpg"
-
-     source_image = Image.fromarray(source_file)
-     source_image.save(source_path)
-     target_image = Image.fromarray(target_file)
-     target_image.save(target_path)
-
-     print("source_path: ", source_path)
-     print("target_path: ", target_path)
-
-     roop.globals.source_path = source_path
-     roop.globals.target_path = target_path
-     output_path = "output.jpg"
-     roop.globals.output_path = normalize_output_path(
-         roop.globals.source_path, roop.globals.target_path, output_path
-     )
-     roop.globals.frame_processors = ["face_swapper"]
-     roop.globals.headless = True
-     roop.globals.keep_fps = True
-     roop.globals.keep_audio = True
-     roop.globals.keep_frames = False
-     roop.globals.many_faces = False
-     roop.globals.video_encoder = "libx264"
-     roop.globals.video_quality = 18
-     roop.globals.max_memory = suggest_max_memory()
-     roop.globals.execution_providers = decode_execution_providers(["cpu"])
-     roop.globals.execution_threads = suggest_execution_threads()
-
-     print(
-         "start process",
-         roop.globals.source_path,
-         roop.globals.target_path,
-         roop.globals.output_path,
-     )
-
-     for frame_processor in get_frame_processors_modules(
-         roop.globals.frame_processors
-     ):
-         if not frame_processor.pre_check():
-             return
-
-     start()
-     return output_path
-
-
- app = gr.Interface(
-     fn=swap_face, inputs=[gr.Image(), gr.Image()], outputs="image"
- )
- app.launch()
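The app wires `swap_face` into a two-image Gradio interface; the same wiring can be smoke-tested without the roop pipeline by swapping in a trivial function. A hedged sketch:

```python
# Sketch: verifies the Interface signature (two image inputs, one image output)
# without running the processing pipeline itself.
import gradio as gr

def passthrough(source_img, target_img):
    return target_img   # stand-in for swap_face

demo = gr.Interface(fn=passthrough, inputs=[gr.Image(), gr.Image()], outputs="image")
# demo.launch()  # start a local server when run interactively
```
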
spaces/Aqdas/YouTube_Video_OpenAI_whisper/whisper.py DELETED
@@ -1,18 +0,0 @@
- def dowload_youtube_video(url):
-     from pytube import YouTube
-     yt = YouTube(url)
-     global audio_stream
-     audio_stream = yt.streams.filter(only_audio=True, file_extension='mp4').first()
-     audio_stream.download()
-     return 'download successfully'
-
-
- def transcribe_audio():
-     import openai
-     from openai import OpenAI
-     import os
-     client = OpenAI(api_key=os.environ['openai_api_key'])
-     file = open(audio_stream.default_filename, "rb")
-     transcription = client.audio.transcriptions.create(model="whisper-1", file=file, response_format='text', language='ur')
-
-     return transcription
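The two helpers are meant to run in sequence: the first downloads the audio stream (stored in the module-level `audio_stream`), the second sends that file to OpenAI's `whisper-1` transcription endpoint with Urdu as the expected language. A hedged sketch (the URL is a placeholder, and `openai_api_key` must be set in the environment):

```python
# Sketch only: replace the placeholder URL with a real video before running.
url = "https://www.youtube.com/watch?v=VIDEO_ID"   # hypothetical
print(dowload_youtube_video(url))                  # 'download successfully'
print(transcribe_audio())                          # Urdu transcript as plain text
```
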
spaces/Artrajz/vits-simple-api/logger.py DELETED
@@ -1,40 +0,0 @@
- import os
- import sys
- import logging
- import logzero
- import config
- from logging.handlers import TimedRotatingFileHandler
-
- logzero.loglevel(logging.WARNING)
- logger = logging.getLogger("vits-simple-api")
- level = getattr(config, "LOGGING_LEVEL", "DEBUG")
- level_dict = {'DEBUG': logging.DEBUG, 'INFO': logging.INFO, 'WARNING': logging.WARNING, 'ERROR': logging.ERROR,
-               'CRITICAL': logging.CRITICAL}
- logging.basicConfig(level=level_dict[level])
- logging.getLogger('numba').setLevel(logging.WARNING)
- logging.getLogger("langid.langid").setLevel(logging.INFO)
- logging.getLogger("apscheduler.scheduler").setLevel(logging.INFO)
-
- os.makedirs(config.LOGS_PATH, exist_ok=True)
- log_file = os.path.join(config.LOGS_PATH, 'latest.log')
- backup_count = getattr(config, "LOGS_BACKUPCOUNT", 30)
- handler = TimedRotatingFileHandler(log_file, when="midnight", interval=1, backupCount=backup_count, encoding='utf-8')
- handler.suffix = "%Y-%m-%d.log"
- formatter = logging.Formatter('%(levelname)s:%(name)s %(message)s')
- handler.setFormatter(formatter)
-
- logging.getLogger().addHandler(handler)
-
-
- # Custom function to handle uncaught exceptions
- def handle_exception(exc_type, exc_value, exc_traceback):
-     # If it's a keyboard interrupt, don't handle it, just return
-     if issubclass(exc_type, KeyboardInterrupt):
-         sys.__excepthook__(exc_type, exc_value, exc_traceback)
-         return
-
-     logger.error("Uncaught exception", exc_info=(exc_type, exc_value, exc_traceback))
-
-
- # Set the global exception handler in Python
- sys.excepthook = handle_exception
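Importing this module has side effects: it attaches a midnight-rotating file handler under `config.LOGS_PATH` and installs a global excepthook, so any other module can simply grab the named logger. A hedged sketch of downstream use:

```python
# Sketch: records go to the console and to <LOGS_PATH>/latest.log, rotated nightly.
import logging

log = logging.getLogger("vits-simple-api")
log.info("model loaded")
log.warning("speaker id not found, falling back to 0")
```
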
spaces/AsakuraMizu/moe-tts/text/ngu_dialect.py DELETED
@@ -1,30 +0,0 @@
- import re
- import opencc
-
-
- dialects = {'SZ': 'suzhou', 'WX': 'wuxi', 'CZ': 'changzhou', 'HZ': 'hangzhou',
-             'SX': 'shaoxing', 'NB': 'ningbo', 'JJ': 'jingjiang', 'YX': 'yixing',
-             'JD': 'jiading', 'ZR': 'zhenru', 'PH': 'pinghu', 'TX': 'tongxiang',
-             'JS': 'jiashan', 'HN': 'xiashi', 'LP': 'linping', 'XS': 'xiaoshan',
-             'FY': 'fuyang', 'RA': 'ruao', 'CX': 'cixi', 'SM': 'sanmen',
-             'TT': 'tiantai', 'WZ': 'wenzhou', 'SC': 'suichang', 'YB': 'youbu'}
-
- converters = {}
-
- for dialect in dialects.values():
-     try:
-         converters[dialect] = opencc.OpenCC("chinese_dialect_lexicons/"+dialect)
-     except:
-         pass
-
-
- def ngu_dialect_to_ipa(text, dialect):
-     dialect = dialects[dialect]
-     text = converters[dialect].convert(text).replace('-','').replace('$',' ')
-     text = re.sub(r'[、;:]', ',', text)
-     text = re.sub(r'\s*,\s*', ', ', text)
-     text = re.sub(r'\s*。\s*', '. ', text)
-     text = re.sub(r'\s*?\s*', '? ', text)
-     text = re.sub(r'\s*!\s*', '! ', text)
-     text = re.sub(r'\s*$', '', text)
-     return text
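A hedged sketch of calling the converter (assumes the `chinese_dialect_lexicons` data files shipped with the repository are present; otherwise the corresponding converter is silently skipped at import):

```python
# Sketch: convert a short Wu phrase using the Suzhounese ('SZ') lexicon.
ipa = ngu_dialect_to_ipa('侬好', 'SZ')
print(ipa)
```
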
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/README_D2.md DELETED
@@ -1,62 +0,0 @@
- <img src=".github/Detectron2-Logo-Horz.svg" width="300" >
-
- Detectron2 is Facebook AI Research's next generation software system
- that implements state-of-the-art object detection algorithms.
- It is a ground-up rewrite of the previous version,
- [Detectron](https://github.com/facebookresearch/Detectron/),
- and it originates from [maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark/).
-
- <div align="center">
-   <img src="https://user-images.githubusercontent.com/1381301/66535560-d3422200-eace-11e9-9123-5535d469db19.png"/>
- </div>
-
- ### What's New
- * It is powered by the [PyTorch](https://pytorch.org) deep learning framework.
- * Includes more features such as panoptic segmentation, Densepose, Cascade R-CNN, rotated bounding boxes, PointRend,
-   DeepLab, etc.
- * Can be used as a library to support [different projects](projects/) on top of it.
-   We'll open source more research projects in this way.
- * It [trains much faster](https://detectron2.readthedocs.io/notes/benchmarks.html).
- * Models can be exported to TorchScript format or Caffe2 format for deployment.
-
- See our [blog post](https://ai.facebook.com/blog/-detectron2-a-pytorch-based-modular-object-detection-library-/)
- to see more demos and learn about detectron2.
-
- ## Installation
-
- See [INSTALL.md](INSTALL.md).
-
- ## Getting Started
-
- Follow the [installation instructions](https://detectron2.readthedocs.io/tutorials/install.html) to
- install detectron2.
-
- See [Getting Started with Detectron2](https://detectron2.readthedocs.io/tutorials/getting_started.html),
- and the [Colab Notebook](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5)
- to learn about basic usage.
-
- Learn more at our [documentation](https://detectron2.readthedocs.org).
- And see [projects/](projects/) for some projects that are built on top of detectron2.
-
- ## Model Zoo and Baselines
-
- We provide a large set of baseline results and trained models available for download in the [Detectron2 Model Zoo](MODEL_ZOO.md).
-
-
- ## License
-
- Detectron2 is released under the [Apache 2.0 license](LICENSE).
-
- ## Citing Detectron2
-
- If you use Detectron2 in your research or wish to refer to the baseline results published in the [Model Zoo](MODEL_ZOO.md), please use the following BibTeX entry.
-
- ```BibTeX
- @misc{wu2019detectron2,
-   author =       {Yuxin Wu and Alexander Kirillov and Francisco Massa and
-                   Wan-Yen Lo and Ross Girshick},
-   title =        {Detectron2},
-   howpublished = {\url{https://github.com/facebookresearch/detectron2}},
-   year =         {2019}
- }
- ```

spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/bifpn.py DELETED
@@ -1,425 +0,0 @@
1
- # Modified from https://github.com/rwightman/efficientdet-pytorch/blob/master/effdet/efficientdet.py
2
- # The original file is under Apache-2.0 License
3
- import math
4
- from os.path import join
5
- import numpy as np
6
- from collections import OrderedDict
7
- from typing import List
8
-
9
- import torch
10
- from torch import nn
11
- import torch.utils.model_zoo as model_zoo
12
- import torch.nn.functional as F
13
- import fvcore.nn.weight_init as weight_init
14
-
15
- from detectron2.layers import ShapeSpec, Conv2d
16
- from detectron2.modeling.backbone.resnet import build_resnet_backbone
17
- from detectron2.modeling.backbone.build import BACKBONE_REGISTRY
18
- from detectron2.layers.batch_norm import get_norm
19
- from detectron2.modeling.backbone import Backbone
20
- from .dlafpn import dla34
21
-
22
- def get_fpn_config(base_reduction=8):
23
- """BiFPN config with sum."""
24
- p = {
25
- 'nodes': [
26
- {'reduction': base_reduction << 3, 'inputs_offsets': [3, 4]},
27
- {'reduction': base_reduction << 2, 'inputs_offsets': [2, 5]},
28
- {'reduction': base_reduction << 1, 'inputs_offsets': [1, 6]},
29
- {'reduction': base_reduction, 'inputs_offsets': [0, 7]},
30
- {'reduction': base_reduction << 1, 'inputs_offsets': [1, 7, 8]},
31
- {'reduction': base_reduction << 2, 'inputs_offsets': [2, 6, 9]},
32
- {'reduction': base_reduction << 3, 'inputs_offsets': [3, 5, 10]},
33
- {'reduction': base_reduction << 4, 'inputs_offsets': [4, 11]},
34
- ],
35
- 'weight_method': 'fastattn',
36
- }
37
- return p
38
-
39
-
40
- def swish(x, inplace: bool = False):
41
- """Swish - Described in: https://arxiv.org/abs/1710.05941
42
- """
43
- return x.mul_(x.sigmoid()) if inplace else x.mul(x.sigmoid())
44
-
45
-
46
- class Swish(nn.Module):
47
- def __init__(self, inplace: bool = False):
48
- super(Swish, self).__init__()
49
- self.inplace = inplace
50
-
51
- def forward(self, x):
52
- return swish(x, self.inplace)
53
-
54
-
55
- class SequentialAppend(nn.Sequential):
56
- def __init__(self, *args):
57
- super(SequentialAppend, self).__init__(*args)
58
-
59
- def forward(self, x):
60
- for module in self:
61
- x.append(module(x))
62
- return x
63
-
64
-
65
- class SequentialAppendLast(nn.Sequential):
66
- def __init__(self, *args):
67
- super(SequentialAppendLast, self).__init__(*args)
68
-
69
- # def forward(self, x: List[torch.Tensor]):
70
- def forward(self, x):
71
- for module in self:
72
- x.append(module(x[-1]))
73
- return x
74
-
75
-
76
- class ConvBnAct2d(nn.Module):
77
- def __init__(self, in_channels, out_channels, kernel_size, stride=1, dilation=1, padding='', bias=False,
78
- norm='', act_layer=Swish):
79
- super(ConvBnAct2d, self).__init__()
80
- # self.conv = create_conv2d(
81
- # in_channels, out_channels, kernel_size, stride=stride, dilation=dilation, padding=padding, bias=bias)
82
- self.conv = Conv2d(
83
- in_channels, out_channels, kernel_size=kernel_size, stride=stride,
84
- padding=kernel_size // 2, bias=(norm == ''))
85
- self.bn = get_norm(norm, out_channels)
86
- self.act = None if act_layer is None else act_layer(inplace=True)
87
-
88
- def forward(self, x):
89
- x = self.conv(x)
90
- if self.bn is not None:
91
- x = self.bn(x)
92
- if self.act is not None:
93
- x = self.act(x)
94
- return x
95
-
96
-
97
- class SeparableConv2d(nn.Module):
98
- """ Separable Conv
99
- """
100
- def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, dilation=1, padding='', bias=False,
101
- channel_multiplier=1.0, pw_kernel_size=1, act_layer=Swish,
102
- norm=''):
103
- super(SeparableConv2d, self).__init__()
104
-
105
- # self.conv_dw = create_conv2d(
106
- # in_channels, int(in_channels * channel_multiplier), kernel_size,
107
- # stride=stride, dilation=dilation, padding=padding, depthwise=True)
108
-
109
- self.conv_dw = Conv2d(
110
- in_channels, int(in_channels * channel_multiplier),
111
- kernel_size=kernel_size, stride=stride, padding=kernel_size // 2, bias=bias,
112
- groups=out_channels)
113
- # print('conv_dw', kernel_size, stride)
114
- # self.conv_pw = create_conv2d(
115
- # int(in_channels * channel_multiplier), out_channels, pw_kernel_size, padding=padding, bias=bias)
116
-
117
- self.conv_pw = Conv2d(
118
- int(in_channels * channel_multiplier), out_channels,
119
- kernel_size=pw_kernel_size, padding=pw_kernel_size // 2, bias=(norm==''))
120
- # print('conv_pw', pw_kernel_size)
121
-
122
- self.bn = get_norm(norm, out_channels)
123
- self.act = None if act_layer is None else act_layer(inplace=True)
124
-
125
- def forward(self, x):
126
- x = self.conv_dw(x)
127
- x = self.conv_pw(x)
128
- if self.bn is not None:
129
- x = self.bn(x)
130
- if self.act is not None:
131
- x = self.act(x)
132
- return x
133
-
134
-
135
- class ResampleFeatureMap(nn.Sequential):
136
- def __init__(self, in_channels, out_channels, reduction_ratio=1., pad_type='', pooling_type='max',
137
- norm='', apply_bn=False, conv_after_downsample=False,
138
- redundant_bias=False):
139
- super(ResampleFeatureMap, self).__init__()
140
- pooling_type = pooling_type or 'max'
141
- self.in_channels = in_channels
142
- self.out_channels = out_channels
143
- self.reduction_ratio = reduction_ratio
144
- self.conv_after_downsample = conv_after_downsample
145
-
146
- conv = None
147
- if in_channels != out_channels:
148
- conv = ConvBnAct2d(
149
- in_channels, out_channels, kernel_size=1, padding=pad_type,
150
- norm=norm if apply_bn else '',
151
- bias=not apply_bn or redundant_bias, act_layer=None)
152
-
153
- if reduction_ratio > 1:
154
- stride_size = int(reduction_ratio)
155
- if conv is not None and not self.conv_after_downsample:
156
- self.add_module('conv', conv)
157
- self.add_module(
158
- 'downsample',
159
- # create_pool2d(
160
- # pooling_type, kernel_size=stride_size + 1, stride=stride_size, padding=pad_type)
161
- # nn.MaxPool2d(kernel_size=stride_size + 1, stride=stride_size, padding=pad_type)
162
- nn.MaxPool2d(kernel_size=stride_size, stride=stride_size)
163
- )
164
- if conv is not None and self.conv_after_downsample:
165
- self.add_module('conv', conv)
166
- else:
167
- if conv is not None:
168
- self.add_module('conv', conv)
169
- if reduction_ratio < 1:
170
- scale = int(1 // reduction_ratio)
171
- self.add_module('upsample', nn.UpsamplingNearest2d(scale_factor=scale))
172
-
173
-
174
- class FpnCombine(nn.Module):
175
- def __init__(self, feature_info, fpn_config, fpn_channels, inputs_offsets, target_reduction, pad_type='',
176
- pooling_type='max', norm='', apply_bn_for_resampling=False,
177
- conv_after_downsample=False, redundant_bias=False, weight_method='attn'):
178
- super(FpnCombine, self).__init__()
179
- self.inputs_offsets = inputs_offsets
180
- self.weight_method = weight_method
181
-
182
- self.resample = nn.ModuleDict()
183
- for idx, offset in enumerate(inputs_offsets):
184
- in_channels = fpn_channels
185
- if offset < len(feature_info):
186
- in_channels = feature_info[offset]['num_chs']
187
- input_reduction = feature_info[offset]['reduction']
188
- else:
189
- node_idx = offset - len(feature_info)
190
- # print('node_idx, len', node_idx, len(fpn_config['nodes']))
191
- input_reduction = fpn_config['nodes'][node_idx]['reduction']
192
- reduction_ratio = target_reduction / input_reduction
193
- self.resample[str(offset)] = ResampleFeatureMap(
194
- in_channels, fpn_channels, reduction_ratio=reduction_ratio, pad_type=pad_type,
195
- pooling_type=pooling_type, norm=norm,
196
- apply_bn=apply_bn_for_resampling, conv_after_downsample=conv_after_downsample,
197
- redundant_bias=redundant_bias)
198
-
199
- if weight_method == 'attn' or weight_method == 'fastattn':
200
- # WSM
201
- self.edge_weights = nn.Parameter(torch.ones(len(inputs_offsets)), requires_grad=True)
202
- else:
203
- self.edge_weights = None
204
-
205
- def forward(self, x):
206
- dtype = x[0].dtype
207
- nodes = []
208
- for offset in self.inputs_offsets:
209
- input_node = x[offset]
210
- input_node = self.resample[str(offset)](input_node)
211
- nodes.append(input_node)
212
-
213
- if self.weight_method == 'attn':
214
- normalized_weights = torch.softmax(self.edge_weights.type(dtype), dim=0)
215
- x = torch.stack(nodes, dim=-1) * normalized_weights
216
- elif self.weight_method == 'fastattn':
217
- edge_weights = nn.functional.relu(self.edge_weights.type(dtype))
218
- weights_sum = torch.sum(edge_weights)
219
- x = torch.stack(
220
- [(nodes[i] * edge_weights[i]) / (weights_sum + 0.0001) for i in range(len(nodes))], dim=-1)
221
- elif self.weight_method == 'sum':
222
- x = torch.stack(nodes, dim=-1)
223
- else:
224
- raise ValueError('unknown weight_method {}'.format(self.weight_method))
225
- x = torch.sum(x, dim=-1)
226
- return x
227
-
228
-
229
- class BiFpnLayer(nn.Module):
230
- def __init__(self, feature_info, fpn_config, fpn_channels, num_levels=5, pad_type='',
231
- pooling_type='max', norm='', act_layer=Swish,
232
- apply_bn_for_resampling=False, conv_after_downsample=True, conv_bn_relu_pattern=False,
233
- separable_conv=True, redundant_bias=False):
234
- super(BiFpnLayer, self).__init__()
235
- self.fpn_config = fpn_config
236
- self.num_levels = num_levels
237
- self.conv_bn_relu_pattern = False
238
-
239
- self.feature_info = []
240
- self.fnode = SequentialAppend()
241
- for i, fnode_cfg in enumerate(fpn_config['nodes']):
242
- # logging.debug('fnode {} : {}'.format(i, fnode_cfg))
243
- # print('fnode {} : {}'.format(i, fnode_cfg))
244
- fnode_layers = OrderedDict()
245
-
246
- # combine features
247
- reduction = fnode_cfg['reduction']
248
- fnode_layers['combine'] = FpnCombine(
249
- feature_info, fpn_config, fpn_channels, fnode_cfg['inputs_offsets'], target_reduction=reduction,
250
- pad_type=pad_type, pooling_type=pooling_type, norm=norm,
251
- apply_bn_for_resampling=apply_bn_for_resampling, conv_after_downsample=conv_after_downsample,
252
- redundant_bias=redundant_bias, weight_method=fpn_config['weight_method'])
253
- self.feature_info.append(dict(num_chs=fpn_channels, reduction=reduction))
254
-
255
- # after combine ops
256
- after_combine = OrderedDict()
257
- if not conv_bn_relu_pattern:
258
- after_combine['act'] = act_layer(inplace=True)
259
- conv_bias = redundant_bias
260
- conv_act = None
261
- else:
262
- conv_bias = False
263
- conv_act = act_layer
264
- conv_kwargs = dict(
265
- in_channels=fpn_channels, out_channels=fpn_channels, kernel_size=3, padding=pad_type,
266
- bias=conv_bias, norm=norm, act_layer=conv_act)
267
- after_combine['conv'] = SeparableConv2d(**conv_kwargs) if separable_conv else ConvBnAct2d(**conv_kwargs)
268
- fnode_layers['after_combine'] = nn.Sequential(after_combine)
269
-
270
- self.fnode.add_module(str(i), nn.Sequential(fnode_layers))
271
-
272
- self.feature_info = self.feature_info[-num_levels::]
273
-
274
- def forward(self, x):
275
- x = self.fnode(x)
276
- return x[-self.num_levels::]
277
-
278
-
279
- class BiFPN(Backbone):
280
- def __init__(
281
- self, cfg, bottom_up, in_features, out_channels, norm='',
282
- num_levels=5, num_bifpn=4, separable_conv=False,
283
- ):
284
- super(BiFPN, self).__init__()
285
- assert isinstance(bottom_up, Backbone)
286
-
287
- # Feature map strides and channels from the bottom up network (e.g. ResNet)
288
- input_shapes = bottom_up.output_shape()
289
- in_strides = [input_shapes[f].stride for f in in_features]
290
- in_channels = [input_shapes[f].channels for f in in_features]
291
-
292
- self.num_levels = num_levels
293
- self.num_bifpn = num_bifpn
294
- self.bottom_up = bottom_up
295
- self.in_features = in_features
296
- self._size_divisibility = 128
297
- levels = [int(math.log2(s)) for s in in_strides]
298
- self._out_feature_strides = {
299
- "p{}".format(int(math.log2(s))): s for s in in_strides}
300
- if len(in_features) < num_levels:
301
- for l in range(num_levels - len(in_features)):
302
- s = l + levels[-1]
303
- self._out_feature_strides["p{}".format(s + 1)] = 2 ** (s + 1)
304
- self._out_features = list(sorted(self._out_feature_strides.keys()))
305
- self._out_feature_channels = {k: out_channels for k in self._out_features}
306
-
307
- # print('self._out_feature_strides', self._out_feature_strides)
308
- # print('self._out_feature_channels', self._out_feature_channels)
309
-
310
- feature_info = [
311
- {'num_chs': in_channels[level], 'reduction': in_strides[level]} \
312
- for level in range(len(self.in_features))
313
- ]
314
- # self.config = config
315
- fpn_config = get_fpn_config()
316
- self.resample = SequentialAppendLast()
317
- for level in range(num_levels):
318
- if level < len(feature_info):
319
- in_chs = in_channels[level] # feature_info[level]['num_chs']
320
- reduction = in_strides[level] # feature_info[level]['reduction']
321
- else:
322
- # Adds a coarser level by downsampling the last feature map
323
- reduction_ratio = 2
324
- self.resample.add_module(str(level), ResampleFeatureMap(
325
- in_channels=in_chs,
326
- out_channels=out_channels,
327
- pad_type='same',
328
- pooling_type=None,
329
- norm=norm,
330
- reduction_ratio=reduction_ratio,
331
- apply_bn=True,
332
- conv_after_downsample=False,
333
- redundant_bias=False,
334
- ))
335
- in_chs = out_channels
336
- reduction = int(reduction * reduction_ratio)
337
- feature_info.append(dict(num_chs=in_chs, reduction=reduction))
338
-
339
- self.cell = nn.Sequential()
340
- for rep in range(self.num_bifpn):
341
- # logging.debug('building cell {}'.format(rep))
342
- # print('building cell {}'.format(rep))
343
- fpn_layer = BiFpnLayer(
344
- feature_info=feature_info,
345
- fpn_config=fpn_config,
346
- fpn_channels=out_channels,
347
- num_levels=self.num_levels,
348
- pad_type='same',
349
- pooling_type=None,
350
- norm=norm,
351
- act_layer=Swish,
352
- separable_conv=separable_conv,
353
- apply_bn_for_resampling=True,
354
- conv_after_downsample=False,
355
- conv_bn_relu_pattern=False,
356
- redundant_bias=False,
357
- )
358
- self.cell.add_module(str(rep), fpn_layer)
359
- feature_info = fpn_layer.feature_info
360
- # import pdb; pdb.set_trace()
361
-
362
- @property
363
- def size_divisibility(self):
364
- return self._size_divisibility
365
-
366
- def forward(self, x):
367
- # print('input shapes', x.shape)
368
- bottom_up_features = self.bottom_up(x)
369
- x = [bottom_up_features[f] for f in self.in_features]
370
- assert len(self.resample) == self.num_levels - len(x)
371
- x = self.resample(x)
372
- shapes = [xx.shape for xx in x]
373
- # print('resample shapes', shapes)
374
- x = self.cell(x)
375
- out = {f: xx for f, xx in zip(self._out_features, x)}
376
- # import pdb; pdb.set_trace()
377
- return out
378
-
379
-
380
- @BACKBONE_REGISTRY.register()
381
- def build_resnet_bifpn_backbone(cfg, input_shape: ShapeSpec):
382
- """
383
- Args:
384
- cfg: a detectron2 CfgNode
385
-
386
- Returns:
387
- backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`.
388
- """
389
- bottom_up = build_resnet_backbone(cfg, input_shape)
390
- in_features = cfg.MODEL.FPN.IN_FEATURES
391
- backbone = BiFPN(
392
- cfg=cfg,
393
- bottom_up=bottom_up,
394
- in_features=in_features,
395
- out_channels=cfg.MODEL.BIFPN.OUT_CHANNELS,
396
- norm=cfg.MODEL.BIFPN.NORM,
397
- num_levels=cfg.MODEL.BIFPN.NUM_LEVELS,
398
- num_bifpn=cfg.MODEL.BIFPN.NUM_BIFPN,
399
- separable_conv=cfg.MODEL.BIFPN.SEPARABLE_CONV,
400
- )
401
- return backbone
402
-
403
- @BACKBONE_REGISTRY.register()
404
- def build_p37_dla_bifpn_backbone(cfg, input_shape: ShapeSpec):
405
- """
406
- Args:
407
- cfg: a detectron2 CfgNode
408
- Returns:
409
- backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`.
410
- """
411
- bottom_up = dla34(cfg)
412
- in_features = cfg.MODEL.FPN.IN_FEATURES
413
- assert cfg.MODEL.BIFPN.NUM_LEVELS == 5
414
-
415
- backbone = BiFPN(
416
- cfg=cfg,
417
- bottom_up=bottom_up,
418
- in_features=in_features,
419
- out_channels=cfg.MODEL.BIFPN.OUT_CHANNELS,
420
- norm=cfg.MODEL.BIFPN.NORM,
421
- num_levels=cfg.MODEL.BIFPN.NUM_LEVELS,
422
- num_bifpn=cfg.MODEL.BIFPN.NUM_BIFPN,
423
- separable_conv=cfg.MODEL.BIFPN.SEPARABLE_CONV,
424
- )
425
- return backbone
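Both builders above register themselves in detectron2's `BACKBONE_REGISTRY`, so the BiFPN is selected through the config rather than imported directly; the `cfg.MODEL.BIFPN.*` keys read by the class are added by the CenterNet2 project's config extension, not by base detectron2. A hedged sketch:

```python
# Sketch: the MODEL.BIFPN.* keys (OUT_CHANNELS, NORM, NUM_LEVELS, NUM_BIFPN,
# SEPARABLE_CONV) must be registered by the project's config before model build.
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.MODEL.BACKBONE.NAME = "build_p37_dla_bifpn_backbone"
```
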
spaces/Benson/text-generation/Examples/Asfalto 8 - Juego De Carreras De Coches.md DELETED
@@ -1,72 +0,0 @@
1
- <br />
2
- <h1>Ninja Shadow Fight 2: Una revisión</h1>
3
- <p>Si eres un fanático de los juegos de lucha con elementos RPG, es posible que quieras echar un vistazo a Ninja Shadow Fight 2. Este juego es una secuela del famoso éxito de Facebook con 40 millones de usuarios, Shadow Fight. Es una mezcla de técnicas clásicas de lucha y artes marciales. Puedes equipar a tu personaje con innumerables armas letales y armaduras raras, personalizar a tu luchador con habilidades épicas y poderes mágicos, y viajar a través de seis mundos diferentes llenos de demonios amenazantes. En este artículo, revisaremos Ninja Shadow Fight 2 en términos de su jugabilidad y controles, gráficos y sonido, pros y contras, consejos y trucos. </p>
4
- <h2>Juego y controles</h2>
5
- <h3>Sistema de combate</h3>
6
- <p>El sistema de combate en Ninja Shadow Fight 2 se basa en la física realista y animaciones. Puedes usar un palo direccional a la izquierda para mover a tu personaje, y botones a la derecha para golpear o patear a tu oponente. También puede combinar diferentes direcciones y tipos de ataques para realizar varios movimientos y combos. Por ejemplo, puedes usar forward+punch para hacer una fuerte barra con tu arma, o backward+punch para hacer una barra giratoria. También puedes usar up+punch para hacer una barra superior que puede derribar a tu oponente, o down+punch para hacer una barra baja que puede golpearlos mientras están en el suelo. </p>
7
- <h2>asfalto 8 - juego de carreras de coches</h2><br /><p><b><b>Download Zip</b> &middot;&middot;&middot; <a href="https://bltlly.com/2v6M6F">https://bltlly.com/2v6M6F</a></b></p><br /><br />
8
- <p>El sistema de combate también te permite usar armas a distancia y habilidades mágicas en algunas situaciones. Las armas a distancia se pueden lanzar a tu oponente pulsando un botón en la esquina superior derecha. Pueden causar daño a distancia o interrumpir sus ataques. Las habilidades mágicas se pueden activar pulsando un botón en la esquina inferior derecha cuando el medidor de magia está lleno. Pueden desatar poderosos efectos que pueden cambiar el curso de la batalla. </p>
9
- <h3>Elementos RPG</h3>
10
-
11
- <h3>Modos de juego</h3>
12
- <p>Los modos de juego en Ninja Shadow Fight 2 ofrecen diferentes desafíos y recompensas para los jugadores. Puedes elegir entre los siguientes modos:</p>
13
- <ul>
14
- <li>Torneo: Este es el modo principal del juego, donde tienes que luchar tu camino a través de una serie de oponentes en cada mundo. Puedes ganar monedas y gemas ganando batallas, y desbloquear nuevos mundos derrotando jefes. </li>
15
- <li>Supervivencia: Este es un modo en el que tienes que sobrevivir el mayor tiempo posible contra interminables oleadas de enemigos. Puedes ganar monedas y gemas matando enemigos y poniendo a prueba tus habilidades y resistencia. </li>
16
- <li>Duelo: Este es un modo donde puedes luchar contra otros jugadores en línea. Puedes ganar monedas y gemas ganando duelos, y posicionarte en la clasificación. </li>
17
- <li>Underworld: Este es un modo donde puedes unir fuerzas con otros jugadores en línea para luchar contra poderosos jefes. Puedes ganar monedas y gemas participando en incursiones, y recolectar objetos y equipos raros. </li>
18
- </ul>
19
- <h2>Gráficos y sonido</h2>
20
- <h3>Estilo visual</h3>
21
- <p>El estilo visual de Ninja Shadow Fight 2 es único y atractivo. El juego utiliza un estilo de silueta para los personajes, lo que crea un contraste con los fondos coloridos y detallados. El juego también utiliza iluminación dinámica y sombras, que añaden profundidad y realismo a las escenas. El juego tiene una variedad de entornos, como bosques, templos, cuevas y castillos, cada uno con su propia atmósfera y estilo. </p>
22
- <h3>Efectos de sonido y música</h3>
23
- <p>Los efectos de sonido y la música de Ninja Shadow Fight 2 también son impresionantes e inmersivos. El juego utiliza sonidos realistas para las armas y los golpes, que hacen que el combate se sienta más intenso y satisfactorio. El juego también utiliza música atmosférica para los fondos, que coinciden con el estado de ánimo y el tema de cada mundo. El juego tiene una banda sonora diversa, que va desde melodías orientales hasta ritmos de rock, cada uno con su propio ritmo y tempo. </p>
24
- <h2>Pros y contras</h2>
25
- <h3>Pros</h3>
26
-
27
- <ul>
28
- <li>El sistema de combate es suave y sensible, con física realista y animaciones. </li>
29
- <li>Los elementos RPG son profundos y gratificantes, con muchas opciones para personalizar tu luchador. </li>
30
- <li>Los modos de juego son variados y desafiantes, con diferentes objetivos y recompensas. </li>
31
- <li>El estilo visual es único y atractivo, con un contraste entre los personajes de silueta y los fondos de colores. </li>
32
- <li>Los efectos de sonido y la música son impresionantes y envolventes, con sonidos realistas para las armas y los golpes, y música atmosférica para los fondos. </li>
33
- <li>La historia es intrigante y cautivadora, con una trama misteriosa y personajes carismáticos. </li>
34
- </ul>
35
- <h3>Contras</h3>
36
- <p>Ninja Shadow Fight 2 también tiene algunos aspectos negativos que podrían restar provecho a su disfrute. Algunos de los contras son:</p>
37
- <ul>
38
- <li>El juego tiene anuncios frecuentes que interrumpen el juego y molestan a los jugadores. </li>
39
- <li>El juego tiene un modelo de pago a ganador que da una ventaja injusta a los jugadores que gastan dinero real en gemas. </li>
40
- <li> El juego tiene una falta de sincronización entre dispositivos que hace que sea difícil transferir su progreso de un dispositivo a otro. </li>
41
- </ul>
42
- <h2>Consejos y trucos</h2>
43
- <h3>Cómo ganar batallas</h3>
44
- <p>Si quieres ganar batallas en Ninja Shadow Fight 2, necesitas dominar el sistema de combate y usar algunas estrategias. Aquí hay algunos consejos y trucos sobre cómo ganar batallas:</p>
45
- <p></p>
46
- <ul>
47
- <li>Apunta a la cabeza: Golpear la cabeza de tu oponente inflige más daño que golpear su cuerpo o extremidades. Puedes usar up+punch o up+kick para hacer una barra superior o patada que puede derribar a tu oponente o romper su guardia. </li>
48
- <li>Usa patadas para interrumpir a los enemigos: Patear a tu oponente puede interrumpir sus ataques o empujarlos hacia atrás. Puedes usar forward+kick o backward+kick para hacer una patada fuerte que pueda hacer volar o aturdir a tu oponente. </li>
49
-
50
- </ul>
51
- <h3>Cómo manejar la armadura al derrotar a los jefes en el modo de torneo. Cada jefe tiene un arma única y una armadura que puedes obtener al vencerlos. Por ejemplo, puedes conseguir la katana y la armadura samurái derrotando a Lynx, el primer jefe del juego. </li>
52
- <li>Completar desafíos: Puedes desbloquear nuevas habilidades y habilidades mágicas completando desafíos en el juego. Los desafíos son tareas especiales que requieren realizar ciertas acciones o cumplir ciertos criterios en el juego. Por ejemplo, puedes desbloquear la habilidad de bola de fuego completando el reto de matar a 10 enemigos con armas a distancia. </li>
53
- <li>Únete a las redadas: Puedes desbloquear objetos y equipos raros uniéndote a las redadas en el modo inframundo. Las redadas son batallas cooperativas contra jefes poderosos que requieren trabajo en equipo y coordinación. Puedes unirte a las redadas pulsando el botón raid en la parte inferior de la pantalla, o crear tu propia redada pulsando el botón crear. Puedes ganar tickets de raid jugando los modos de juego o gastando gemas. </li>
54
- </ul>
55
- <h2>Conclusión</h2>
56
- <p>Ninja Shadow Fight 2 es un gran juego para los fanáticos de los juegos de lucha con elementos RPG. Tiene un sistema de combate suave y sensible, elementos de RPG profundos y gratificantes, modos de juego variados y desafiantes, estilo visual único y atractivo, efectos de sonido y música impresionantes e inmersivos, y una historia intrigante y cautivadora. También tiene algunos inconvenientes, como los anuncios frecuentes, el modelo de pago para ganar y la falta de sincronización entre dispositivos. Sin embargo, estos no eclipsan la calidad general y la diversión del juego. Ninja Shadow Fight 2 es un juego que deberías probar si estás buscando un juego de lucha emocionante y adictivo con elementos RPG. </p>
57
- <h2>Preguntas frecuentes</h2>
58
- <p>Aquí hay algunas preguntas frecuentes sobre Ninja Shadow Fight 2, junto con sus respuestas:</p>
59
- <ol>
60
- <li>Q: ¿Cómo puedo sincronizar mi progreso entre dispositivos? <br>
61
-
62
- <li>Q: ¿Cómo puedo eliminar anuncios del juego? <br>
63
- R: Puedes eliminar anuncios del juego comprando la versión premium del juego por $4.99. Esto también te dará algunos beneficios adicionales, como 2000 gemas, 2000 monedas, recompensas dobles para el modo de supervivencia y acceso a armas y armaduras exclusivas. </li>
64
- <li>P: ¿Cómo puedo obtener más gemas sin gastar dinero real? <br>
65
- R: Puedes obtener más gemas sin gastar dinero real completando ofertas gratuitas, viendo anuncios de video, cultivando gemas en modo supervivencia o modo inframundo, o derrotando jefes en modo torneo. </li>
66
- <li>Q: ¿Cómo puedo restablecer mi progreso y empezar de nuevo? <br>
67
- R: Puedes restablecer tu progreso y empezar de nuevo borrando los datos del juego de tu dispositivo. Sin embargo, esto también eliminará sus monedas y gemas, así que asegúrese de que desea hacer esto antes de proceder. Para eliminar los datos del juego, vaya a la configuración de su dispositivo, encuentre Ninja Shadow Fight 2 en la lista de aplicaciones y toque en los datos claros o elimine los datos. </li>
68
- <li>Q: ¿Cómo puedo contactar a los desarrolladores o reportar un error? <br>
69
- R: Puede ponerse en contacto con los desarrolladores o informar de un error enviando un correo electrónico a [email protected]. También puede visitar su sitio web oficial en https://www.nekki.com/ shadowfight2/ o su página de Facebook en https://www.facebook.com/ shadowfightgames/ para obtener más información y actualizaciones. </li>
70
- </ol></p> 64aa2da5cf<br />
71
- <br />
72
- <br />
spaces/Benson/text-generation/Examples/Cmo Descargar Messenger En Iphone 5s.md DELETED
@@ -1,63 +0,0 @@
1
-
2
- <h1>Cómo descargar Messenger en el iPhone 5s</h1>
3
- <p>Messenger es una aplicación de chat que le permite mantenerse conectado con sus personas favoritas en Facebook, Instagram, Portal y Oculus. También puede disfrutar de videos con sus amigos a través de chat de video, expresarse con emojis, pegatinas, GIF, filtros y mensajes de voz, hacer llamadas de voz y video gratuitas, enviar dinero de forma segura con Facebook Pay y conectarse con empresas para ofertas, reservas y atención al cliente. </p>
4
- <h2>cómo descargar messenger en iphone 5s</h2><br /><p><b><b>DOWNLOAD</b> &#9881; <a href="https://bltlly.com/2v6M2w">https://bltlly.com/2v6M2w</a></b></p><br /><br />
5
- <p>Si tienes un iPhone 5s y quieres descargar Messenger en tu dispositivo, es posible que te estés preguntando cómo hacerlo. En este artículo, te mostraremos dos formas de descargar Messenger desde la App Store o desde iMessage. También te contaremos algunas de las características y beneficios de usar Messenger en tu iPhone. </p>
6
- <h2>Requisitos para descargar Messenger</h2>
7
- <h3>Versión compatible de iOS</h3>
8
- <p>Antes de descargar Messenger en tu iPhone 5s, necesitas asegurarte de que tu dispositivo tenga una versión iOS compatible. Según la página de aplicaciones de Messenger en la App Store, necesitas tener iOS 8 o posterior para descargar y usar Messenger en tu iPhone 5s. Si tiene una versión anterior de iOS, puede actualizarla en Configuración > General > Actualización de software y siguiendo las instrucciones. </p>
9
- <h3>Espacio de almacenamiento disponible</h3>
10
- <p>Otro requisito para descargar Messenger en tu iPhone 5s es tener suficiente espacio de almacenamiento en tu dispositivo. Según la página de aplicaciones de Messenger en la App Store, necesitas unos 200 MB de espacio libre para descargar e instalar Messenger en tu iPhone 5s. Si no tiene suficiente espacio, puede liberar algunos mediante la eliminación de aplicaciones no deseadas, fotos, videos u otros archivos. Puedes comprobar cuánto espacio tienes en Configuración > General > Almacenamiento del iPhone y ver el espacio disponible y usado. </p>
11
- <h3>Conexión a Internet</h3>
12
-
13
- <h2>Pasos para descargar Messenger desde la App Store</h2>
14
- <h3>Paso 1: Abrir el App Store</h3>
15
- <p>El primer paso para descargar Messenger desde la App Store es abrir la aplicación App Store en tu iPhone 5s. Puede encontrar la aplicación App Store en la pantalla de inicio o en la biblioteca de aplicaciones. Tiene un icono azul con una letra blanca A dentro. </p>
16
- <p></p>
17
- <h3>Paso 2: Búsqueda de Facebook Messenger</h3>
18
- <p>El siguiente paso es buscar Facebook Messenger en la App Store. Para hacer esto, toque en el icono de búsqueda en la esquina inferior derecha de la pantalla. Esto abrirá una barra de búsqueda donde puede escribir el nombre de la aplicación que está buscando. Escribe "Facebook Messenger" y toca el botón de búsqueda en tu teclado. </p>
19
- <h3>Paso 3: Toque en el botón Get</h3>
20
- <p>Una vez que vea la aplicación de Facebook Messenger en los resultados de búsqueda, toque en el botón get junto a su icono y nombre. El botón get es un círculo azul con una flecha blanca dentro. Esto comenzará a descargar la aplicación en su dispositivo. </p>
21
- <h3>Paso 4: Confirmar la descarga</h3>
22
- <p>Dependiendo de tu configuración, es posible que necesites confirmar la descarga introduciendo tu contraseña de Apple ID o usando Touch ID. Para introducir la contraseña de tu Apple ID, pulsa en el botón de inicio de sesión y escribe la contraseña. Para usar Touch ID, coloca el dedo en el botón de inicio y espera a que escanee tu huella digital. Esto verificará su identidad y permitirá que la descarga continúe. </p>
23
- <h3>Paso 5: Espere a que la descarga termine</h3>
24
- <p>El paso final es esperar a que termine la descarga. Puede comprobar el progreso de la descarga mirando el círculo alrededor del icono de la aplicación. Cuando el círculo está lleno, significa que la descarga está completa. Puede tocar el icono de la aplicación para abrirla y comenzar a usar Messenger en su iPhone 5s. </p>
25
- <h2>Pasos para descargar Messenger desde iMessage</h2>
26
- <h3>Paso 1: Abrir iMessage</h3>
27
-
28
- <h3>Step 2: Tap the App Store icon</h3>
29
- <p>Once you open iMessage, tap the App Store icon at the bottom of the screen. The App Store icon is a blue circle with a white letter A inside. This opens the App Store for iMessage, where you can find and download various apps that work with iMessage.</p>
30
- <h3>Step 3: Search for Facebook Messenger</h3>
31
- <p>The next step is to search for Facebook Messenger in the App Store for iMessage. To do this, tap the search icon in the top-left corner of the screen. This opens a search bar where you can type the name of the app you are looking for. Type "Facebook Messenger" and tap the search button on your keyboard.</p>
32
- <h3>Step 4: Tap the Install button</h3>
33
- <p>Once you see the Facebook Messenger app in the search results, tap the Install button next to its icon and name. The Install button is a blue circle with a white plus sign inside. This starts downloading the app to your device.</p>
34
- <h3>Step 5: Wait for the download to finish</h3>
35
- <p>The final step is to wait for the download to finish. You can check the progress of the download by looking at the circle around the app icon. When the circle is full, the download is complete. You can tap the app icon to open it and start using Messenger on your iPhone 5s.</p>
36
- <h2>Messenger features and benefits</h2>
37
- <h3>Cross-app communication</h3>
38
- <p>One of the features and benefits of using Messenger on your iPhone 5s is that you can chat with your friends across different apps, such as Facebook, Instagram, Portal, and Oculus. You don't need to switch between apps to stay in touch with your favorite people. You can also sync your contacts from your phone and add them to Messenger easily.</p>
39
- <h3>Watch Together</h3>
40
-
41
- <h3>Custom reactions and animated effects</h3>
42
- <p>A third feature and benefit of using Messenger on your iPhone 5s is that you can express yourself with custom reactions and animated effects. You can choose from a wide range of emojis, stickers, GIFs, filters, voice messages, and AR effects to liven up your conversations. You can also create your own stickers and reactions from your photos and videos, and use animated effects to transform yourself into different characters or animals or to add fun backgrounds and props to your video chats.</p>
43
- <h3>Voice and video calls</h3>
44
- <p>A fourth feature and benefit of using Messenger on your iPhone 5s is that you can make free voice and video calls to anyone in the world over Wi-Fi or cellular data. You can also create group calls with up to 50 people at a time, and you can use Messenger Rooms to invite anyone to join your video chat, even if they don't have a Facebook account. You can also use Messenger Kids so your children can chat safely with their friends and family.</p>
45
- <h3>Payments and business connections</h3>
46
- <p>A fifth feature and benefit of using Messenger on your iPhone 5s is that you can send money safely and easily with Facebook Pay, and you can connect with businesses for deals, bookings, and customer support. You can use Facebook Pay to send or request money from your friends or family without any fees; you just need to link your debit card or PayPal account to your Facebook account. You can also use Messenger to chat with businesses for various purposes, such as ordering food, booking flights, getting discounts, or asking questions.</p>
47
- <h2>Conclusion and FAQs</h2>
48
-
49
- <p>Here are some frequently asked questions about downloading or using Messenger on the iPhone 5s:</p>
50
- <ul>
51
- <li><b>Q: How do I update Messenger on my iPhone 5s?</b></li>
52
- <li>A: To update Messenger on your iPhone 5s, go to the App Store app and tap the Updates icon in the bottom-right corner of the screen. Then find the Messenger app in the list of available updates and tap the Update button next to it. Alternatively, you can enable automatic updates for Messenger by going to Settings > App Store > Automatic Downloads > Updates.</li>
53
- <li><b>Q: How do I delete Messenger from my iPhone 5s?</b></li>
54
- <li>A: To delete Messenger from your iPhone 5s, press and hold the app icon on the home screen or in the app library until it starts to wiggle. Then tap the X icon in the top-left corner of the app icon and confirm the deletion. Alternatively, you can go to Settings > General > iPhone Storage > Messenger and tap the Delete App button.</li>
55
- <li><b>Q: How do I log out of Messenger on my iPhone 5s?</b></li>
56
- <li>A: To log out of Messenger on your iPhone 5s, open the app and tap your profile picture in the top-left corner of the screen. Then scroll down and tap the Log Out button. You can also switch between different accounts by tapping the Switch Account button.</li>
57
- <li><b>Q: How do I change the notification settings for Messenger on my iPhone 5s?</b></li>
58
- <li>A: To change the notification settings for Messenger on your iPhone 5s, go to Settings > Notifications > Messenger and turn the Allow Notifications option on or off. You can also customize the sound, badge, banner, and lock screen settings for Messenger notifications.</li>
59
- <li><b>Q: How do I block or unblock someone on Messenger on my iPhone 5s?</b></li>
60
-
61
- </ul>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Benson/text-generation/Examples/Descargar Frag Pro Shooter Mod Apk Desbloquear Todos Los Personajes.md DELETED
@@ -1,72 +0,0 @@
1
- <br />
2
- <h1>How to download FRAG Pro Shooter Mod APK and unlock all characters</h1>
3
- <p>Are you a fan of FRAG Pro Shooter, the fun and friendly PvP game that lets you choose from more than 90 characters and fight players from all over the world? Do you want to unlock every character, get unlimited money and gems, and enjoy the game without restrictions? If so, you might be interested in downloading FRAG Pro Shooter Mod APK, a modified version of the game that gives you access to features and benefits you can't get in the original game. In this article, we explain what FRAG Pro Shooter is, why some players use FRAG Pro Shooter Mod APK, how to download and install it, and some tips and tricks for playing it. Read on to learn more!</p>
4
- <h2>download frag pro shooter mod apk unlock all characters</h2><p><b>Download</b> &middot; <a href="https://bltlly.com/2v6MOt">https://bltlly.com/2v6MOt</a></p>
5
- <h2>What is FRAG Pro Shooter?</h2>
6
- <p>FRAG Pro Shooter is a mobile action game created by Oh BiBi for iOS and Android devices. It is one of the most popular multiplayer games ever designed for mobile, with more than 70 million players worldwide. In this game, you choose your hero, build your team, enter the arena, and start fighting. You can also switch between your characters, use their special abilities, customize their skins, and take part in various game modes and events.</p>
7
- <h3>A fun and friendly PvP game</h3>
8
- <p>FRAG Pro Shooter is a game designed for everyone, regardless of age or gender. You can play with your friends or with random players online, and you can join a club or create your own to fight for victory with your teammates. The game has colorful, stylized graphics that make it attractive and enjoyable. It also has a social side, where you can share your content with other players, join contests, follow influencers, and grow your fan base.</p>
9
- <h3>Features of FRAG Pro Shooter</h3>
10
- <p>FRAG Pro Shooter has many features that make it an exciting and addictive game. Some of these features are:</p>
11
- <ul>
12
-
13
- <li><b>Customizable gameplay:</b> You can control any character in first-person or third-person view. You can also switch between your characters during battle to gain an advantage over your enemies.</li>
14
- <li><b>4 game modes available:</b> You can choose from 1v1 mode, 2v2 mode, Payload mode, or Street FRAG mode. Each mode has its own rules and objectives.</li>
15
- <li><b>New content every month:</b> The game is constantly updated with new characters, skins, maps, events, and challenges.</li>
16
- </ul>
17
- <h2>Why use FRAG Pro Shooter Mod APK?</h2>
18
- <p>FRAG Pro Shooter is a free-to-play game, but it also has in-app purchases that can enhance your gaming experience. For example, you can buy diamonds to unlock new characters or skins faster. However, not everyone can afford to spend real money on the game. That is why some people prefer to use FRAG Pro Shooter Mod APK, a modified version of the game that gives access to features and benefits you can't get in the original game.</p>
19
- <h3>Benefits of using FRAG Pro Shooter Mod APK</h3>
20
- <p>Some of the benefits of using FRAG Pro Shooter Mod APK are:</p>
21
- <ul>
22
- <li><b>Unlock all characters:</b> You can unlock every character in the game without spending diamonds or money. You can pick any character you want and enjoy their unique abilities.</li>
23
- <li><b>Unlimited money and gems:</b> You can get unlimited money and gems in the game, which you can use to buy whatever you want. You can also upgrade and level up your characters faster.</li>
24
- <li><b>No ads:</b> You can play the game without annoying ads that interrupt your gameplay or consume your data.</li>
25
- <li><b>No root required:</b> You can download and install FRAG Pro Shooter Mod APK without rooting your device. This means you don't have to risk damaging your device or voiding your warranty.</li>
26
- </ul>
27
- <h3>The risks of using FRAG Pro Shooter Mod APK</h3>
28
-
29
- <ul>
30
- <li><b>Account ban:</b> You may be banned from the game if the developers detect that you are using a modified version of it. This means you would lose all your progress and achievements in the game.</li>
31
- <li><b>Virus or malware infection:</b> You may download a fake or corrupted version of FRAG Pro Shooter Mod APK that contains viruses or malware, which can damage your device or steal your personal information.</li>
32
- <li><b>Legal issues:</b> Using FRAG Pro Shooter Mod APK may violate the game's terms and conditions or the developers' intellectual property rights. This could result in legal action or lawsuits against you.</li>
33
- </ul>
34
- <p>Therefore, you should use FRAG Pro Shooter Mod APK at your own risk and discretion. We are not responsible for any consequences that may arise from using it.</p>
35
- <h2>How to download and install FRAG Pro Shooter Mod APK?</h2>
36
- <p>If you have decided to use FRAG Pro Shooter Mod APK, you need to follow a few steps to download and install it on your device. Here are the steps:</p>
37
- <h3>Steps to download and install FRAG Pro Shooter Mod APK</h3>
38
- <ol>
39
- <li><b>Uninstall the original game:</b> You need to uninstall the original version of FRAG Pro Shooter from your device before installing the modified version. This avoids conflicts or errors between the two versions.</li>
40
- <li><b>Download FRAG Pro Shooter Mod APK:</b> You need to download FRAG Pro Shooter Mod APK from a reliable and trustworthy source. You can use this link to download it. Make sure you have enough storage space on your device before downloading it.</li>
41
- <li><b>Enable unknown sources:</b> You need to enable unknown sources on your device to allow the installation of apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and turn it on.</li>
42
-
43
- <li><b>Launch FRAG Pro Shooter Mod APK:</b> You need to launch FRAG Pro Shooter Mod APK from the app drawer or home screen and enjoy the game with all features and benefits unlocked.</li>
44
- </ol>
45
- <h3>Tips and tricks for playing FRAG Pro Shooter Mod APK</h3>
46
- <p>To get the most out of FRAG Pro Shooter Mod APK, here are some tips and tricks you can use:</p>
47
- <p></p>
48
- <ul>
49
- <li><b>Choose your characters wisely:</b> Pick your characters based on their roles, their abilities, and how well they work together. You should also balance your team with offensive, defensive, and support characters.</li>
50
- <li><b>Switch between your characters often:</b> Switch between your characters during battle to adapt to different situations and enemies. You should also use their special abilities strategically to gain an advantage over your opponents.</li>
51
- <li><b>Use cover and movement:</b> Use cover and movement to avoid enemy fire and surprise your opponents with your attacks. Avoid staying in one spot for too long and keep moving around the map.</li>
52
- <li><b>Collect coins and crates:</b> Collect the coins and crates scattered around the map. Coins can be used to buy new characters or skins, while crates can contain money, gems, cards, or power-ups.</li>
53
- <li><b>Complete missions and challenges:</b> Complete the missions and challenges given to you every day or week. These can reward you with money, gems, cards, or other prizes.</li>
54
- </ul>
55
- <h2>Conclusion</h2>
56
-
57
- <h2>FAQs</h2>
58
- <p>Here are some frequently asked questions about FRAG Pro Shooter Mod APK:</p>
59
- <ol>
60
- <li><b>Is FRAG Pro Shooter Mod APK safe to use?</b></li>
61
- <p>FRAG Pro Shooter Mod APK is not an official version of the game and is not endorsed by the developers. Therefore, it is not guaranteed to be safe or secure. You may run into bugs, glitches, or errors while using it, and it may expose your device or data to viruses or malware. You should use FRAG Pro Shooter Mod APK at your own risk and discretion.</p>
62
- <li><b>Is FRAG Pro Shooter Mod APK compatible with my device?</b></li>
63
- <p>FRAG Pro Shooter Mod APK is compatible with most Android devices running Android 4.3 or higher. However, some devices may not support the modified version of the game because of different specifications or settings. You should therefore check your device's compatibility before downloading and installing FRAG Pro Shooter Mod APK.</p>
64
- <li><b>How do I update FRAG Pro Shooter Mod APK?</b></li>
65
- <p>FRAG Pro Shooter Mod APK does not update automatically like the original game. You need to manually download and install the latest version of FRAG Pro Shooter Mod APK whenever a new update is available. You can check for updates at the source where you downloaded the modified version of the game.</p>
66
- <li><b>Can I play FRAG Pro Shooter Mod APK offline?</b></li>
67
- <p>No, you cannot play FRAG Pro Shooter Mod APK offline. You need an internet connection to play and to access all the features and benefits of the modified version of the game.</p>
68
- <li><b>Can I play FRAG Pro Shooter Mod APK with my friends?</b></li>
69
-
70
- </ol>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/tzwin.py DELETED
@@ -1,2 +0,0 @@
1
- # tzwin has moved to dateutil.tz.win
2
- from .tz.win import *
 
 
 
spaces/CVH-vn1210/make_hair/minigpt4/models/Qformer.py DELETED
@@ -1,1216 +0,0 @@
1
- """
2
- * Copyright (c) 2023, salesforce.com, inc.
3
- * All rights reserved.
4
- * SPDX-License-Identifier: BSD-3-Clause
5
- * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause
6
- * By Junnan Li
7
- * Based on huggingface code base
8
- * https://github.com/huggingface/transformers/blob/v4.15.0/src/transformers/models/bert
9
- """
10
-
11
- import math
12
- import os
13
- import warnings
14
- from dataclasses import dataclass
15
- from typing import Optional, Tuple, Dict, Any
16
-
17
- import torch
18
- from torch import Tensor, device, dtype, nn
19
- import torch.utils.checkpoint
20
- from torch import nn
21
- from torch.nn import CrossEntropyLoss
22
- import torch.nn.functional as F
23
-
24
- from transformers.activations import ACT2FN
25
- from transformers.file_utils import (
26
- ModelOutput,
27
- )
28
- from transformers.modeling_outputs import (
29
- BaseModelOutputWithPastAndCrossAttentions,
30
- BaseModelOutputWithPoolingAndCrossAttentions,
31
- CausalLMOutputWithCrossAttentions,
32
- MaskedLMOutput,
33
- MultipleChoiceModelOutput,
34
- NextSentencePredictorOutput,
35
- QuestionAnsweringModelOutput,
36
- SequenceClassifierOutput,
37
- TokenClassifierOutput,
38
- )
39
- from transformers.modeling_utils import (
40
- PreTrainedModel,
41
- apply_chunking_to_forward,
42
- find_pruneable_heads_and_indices,
43
- prune_linear_layer,
44
- )
45
- from transformers.utils import logging
46
- from transformers.models.bert.configuration_bert import BertConfig
47
-
48
- logger = logging.get_logger(__name__)
49
-
50
-
51
- class BertEmbeddings(nn.Module):
52
- """Construct the embeddings from word and position embeddings."""
53
-
54
- def __init__(self, config):
55
- super().__init__()
56
- self.word_embeddings = nn.Embedding(
57
- config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id
58
- )
59
- self.position_embeddings = nn.Embedding(
60
- config.max_position_embeddings, config.hidden_size
61
- )
62
-
63
- # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load
64
- # any TensorFlow checkpoint file
65
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
66
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
67
-
68
- # position_ids (1, len position emb) is contiguous in memory and exported when serialized
69
- self.register_buffer(
70
- "position_ids", torch.arange(config.max_position_embeddings).expand((1, -1))
71
- )
72
- self.position_embedding_type = getattr(
73
- config, "position_embedding_type", "absolute"
74
- )
75
-
76
- self.config = config
77
-
78
- def forward(
79
- self,
80
- input_ids=None,
81
- position_ids=None,
82
- query_embeds=None,
83
- past_key_values_length=0,
84
- ):
85
- if input_ids is not None:
86
- seq_length = input_ids.size()[1]
87
- else:
88
- seq_length = 0
89
-
90
- if position_ids is None:
91
- position_ids = self.position_ids[
92
- :, past_key_values_length : seq_length + past_key_values_length
93
- ].clone()
94
-
95
- if input_ids is not None:
96
- embeddings = self.word_embeddings(input_ids)
97
- if self.position_embedding_type == "absolute":
98
- position_embeddings = self.position_embeddings(position_ids)
99
- embeddings = embeddings + position_embeddings
100
-
101
- if query_embeds is not None:
102
- embeddings = torch.cat((query_embeds, embeddings), dim=1)
103
- else:
104
- embeddings = query_embeds
105
-
106
- embeddings = self.LayerNorm(embeddings)
107
- embeddings = self.dropout(embeddings)
108
- return embeddings
109
-
110
-
111
- class BertSelfAttention(nn.Module):
112
- def __init__(self, config, is_cross_attention):
113
- super().__init__()
114
- self.config = config
115
- if config.hidden_size % config.num_attention_heads != 0 and not hasattr(
116
- config, "embedding_size"
117
- ):
118
- raise ValueError(
119
- "The hidden size (%d) is not a multiple of the number of attention "
120
- "heads (%d)" % (config.hidden_size, config.num_attention_heads)
121
- )
122
-
123
- self.num_attention_heads = config.num_attention_heads
124
- self.attention_head_size = int(config.hidden_size / config.num_attention_heads)
125
- self.all_head_size = self.num_attention_heads * self.attention_head_size
126
-
127
- self.query = nn.Linear(config.hidden_size, self.all_head_size)
128
- if is_cross_attention:
129
- self.key = nn.Linear(config.encoder_width, self.all_head_size)
130
- self.value = nn.Linear(config.encoder_width, self.all_head_size)
131
- else:
132
- self.key = nn.Linear(config.hidden_size, self.all_head_size)
133
- self.value = nn.Linear(config.hidden_size, self.all_head_size)
134
-
135
- self.dropout = nn.Dropout(config.attention_probs_dropout_prob)
136
- self.position_embedding_type = getattr(
137
- config, "position_embedding_type", "absolute"
138
- )
139
- if (
140
- self.position_embedding_type == "relative_key"
141
- or self.position_embedding_type == "relative_key_query"
142
- ):
143
- self.max_position_embeddings = config.max_position_embeddings
144
- self.distance_embedding = nn.Embedding(
145
- 2 * config.max_position_embeddings - 1, self.attention_head_size
146
- )
147
- self.save_attention = False
148
-
149
- def save_attn_gradients(self, attn_gradients):
150
- self.attn_gradients = attn_gradients
151
-
152
- def get_attn_gradients(self):
153
- return self.attn_gradients
154
-
155
- def save_attention_map(self, attention_map):
156
- self.attention_map = attention_map
157
-
158
- def get_attention_map(self):
159
- return self.attention_map
160
-
161
- def transpose_for_scores(self, x):
162
- new_x_shape = x.size()[:-1] + (
163
- self.num_attention_heads,
164
- self.attention_head_size,
165
- )
166
- x = x.view(*new_x_shape)
167
- return x.permute(0, 2, 1, 3)
168
-
169
- def forward(
170
- self,
171
- hidden_states,
172
- attention_mask=None,
173
- head_mask=None,
174
- encoder_hidden_states=None,
175
- encoder_attention_mask=None,
176
- past_key_value=None,
177
- output_attentions=False,
178
- ):
179
-
180
- # If this is instantiated as a cross-attention module, the keys
181
- # and values come from an encoder; the attention mask needs to be
182
- # such that the encoder's padding tokens are not attended to.
183
- is_cross_attention = encoder_hidden_states is not None
184
-
185
- if is_cross_attention:
186
- key_layer = self.transpose_for_scores(self.key(encoder_hidden_states))
187
- value_layer = self.transpose_for_scores(self.value(encoder_hidden_states))
188
- attention_mask = encoder_attention_mask
189
- elif past_key_value is not None:
190
- key_layer = self.transpose_for_scores(self.key(hidden_states))
191
- value_layer = self.transpose_for_scores(self.value(hidden_states))
192
- key_layer = torch.cat([past_key_value[0], key_layer], dim=2)
193
- value_layer = torch.cat([past_key_value[1], value_layer], dim=2)
194
- else:
195
- key_layer = self.transpose_for_scores(self.key(hidden_states))
196
- value_layer = self.transpose_for_scores(self.value(hidden_states))
197
-
198
- mixed_query_layer = self.query(hidden_states)
199
-
200
- query_layer = self.transpose_for_scores(mixed_query_layer)
201
-
202
- past_key_value = (key_layer, value_layer)
203
-
204
- # Take the dot product between "query" and "key" to get the raw attention scores.
205
- attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
206
-
207
- if (
208
- self.position_embedding_type == "relative_key"
209
- or self.position_embedding_type == "relative_key_query"
210
- ):
211
- seq_length = hidden_states.size()[1]
212
- position_ids_l = torch.arange(
213
- seq_length, dtype=torch.long, device=hidden_states.device
214
- ).view(-1, 1)
215
- position_ids_r = torch.arange(
216
- seq_length, dtype=torch.long, device=hidden_states.device
217
- ).view(1, -1)
218
- distance = position_ids_l - position_ids_r
219
- positional_embedding = self.distance_embedding(
220
- distance + self.max_position_embeddings - 1
221
- )
222
- positional_embedding = positional_embedding.to(
223
- dtype=query_layer.dtype
224
- ) # fp16 compatibility
225
-
226
- if self.position_embedding_type == "relative_key":
227
- relative_position_scores = torch.einsum(
228
- "bhld,lrd->bhlr", query_layer, positional_embedding
229
- )
230
- attention_scores = attention_scores + relative_position_scores
231
- elif self.position_embedding_type == "relative_key_query":
232
- relative_position_scores_query = torch.einsum(
233
- "bhld,lrd->bhlr", query_layer, positional_embedding
234
- )
235
- relative_position_scores_key = torch.einsum(
236
- "bhrd,lrd->bhlr", key_layer, positional_embedding
237
- )
238
- attention_scores = (
239
- attention_scores
240
- + relative_position_scores_query
241
- + relative_position_scores_key
242
- )
243
-
244
- attention_scores = attention_scores / math.sqrt(self.attention_head_size)
245
- if attention_mask is not None:
246
- # Apply the attention mask is (precomputed for all layers in BertModel forward() function)
247
- attention_scores = attention_scores + attention_mask
248
-
249
- # Normalize the attention scores to probabilities.
250
- attention_probs = nn.Softmax(dim=-1)(attention_scores)
251
-
252
- if is_cross_attention and self.save_attention:
253
- self.save_attention_map(attention_probs)
254
- attention_probs.register_hook(self.save_attn_gradients)
255
-
256
- # This is actually dropping out entire tokens to attend to, which might
257
- # seem a bit unusual, but is taken from the original Transformer paper.
258
- attention_probs_dropped = self.dropout(attention_probs)
259
-
260
- # Mask heads if we want to
261
- if head_mask is not None:
262
- attention_probs_dropped = attention_probs_dropped * head_mask
263
-
264
- context_layer = torch.matmul(attention_probs_dropped, value_layer)
265
-
266
- context_layer = context_layer.permute(0, 2, 1, 3).contiguous()
267
- new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,)
268
- context_layer = context_layer.view(*new_context_layer_shape)
269
-
270
- outputs = (
271
- (context_layer, attention_probs) if output_attentions else (context_layer,)
272
- )
273
-
274
- outputs = outputs + (past_key_value,)
275
- return outputs
276
-
277
-
278
- class BertSelfOutput(nn.Module):
279
- def __init__(self, config):
280
- super().__init__()
281
- self.dense = nn.Linear(config.hidden_size, config.hidden_size)
282
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
283
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
284
-
285
- def forward(self, hidden_states, input_tensor):
286
- hidden_states = self.dense(hidden_states)
287
- hidden_states = self.dropout(hidden_states)
288
- hidden_states = self.LayerNorm(hidden_states + input_tensor)
289
- return hidden_states
290
-
291
-
292
- class BertAttention(nn.Module):
293
- def __init__(self, config, is_cross_attention=False):
294
- super().__init__()
295
- self.self = BertSelfAttention(config, is_cross_attention)
296
- self.output = BertSelfOutput(config)
297
- self.pruned_heads = set()
298
-
299
- def prune_heads(self, heads):
300
- if len(heads) == 0:
301
- return
302
- heads, index = find_pruneable_heads_and_indices(
303
- heads,
304
- self.self.num_attention_heads,
305
- self.self.attention_head_size,
306
- self.pruned_heads,
307
- )
308
-
309
- # Prune linear layers
310
- self.self.query = prune_linear_layer(self.self.query, index)
311
- self.self.key = prune_linear_layer(self.self.key, index)
312
- self.self.value = prune_linear_layer(self.self.value, index)
313
- self.output.dense = prune_linear_layer(self.output.dense, index, dim=1)
314
-
315
- # Update hyper params and store pruned heads
316
- self.self.num_attention_heads = self.self.num_attention_heads - len(heads)
317
- self.self.all_head_size = (
318
- self.self.attention_head_size * self.self.num_attention_heads
319
- )
320
- self.pruned_heads = self.pruned_heads.union(heads)
321
-
322
- def forward(
323
- self,
324
- hidden_states,
325
- attention_mask=None,
326
- head_mask=None,
327
- encoder_hidden_states=None,
328
- encoder_attention_mask=None,
329
- past_key_value=None,
330
- output_attentions=False,
331
- ):
332
- self_outputs = self.self(
333
- hidden_states,
334
- attention_mask,
335
- head_mask,
336
- encoder_hidden_states,
337
- encoder_attention_mask,
338
- past_key_value,
339
- output_attentions,
340
- )
341
- attention_output = self.output(self_outputs[0], hidden_states)
342
-
343
- outputs = (attention_output,) + self_outputs[
344
- 1:
345
- ] # add attentions if we output them
346
- return outputs
347
-
348
-
349
- class BertIntermediate(nn.Module):
350
- def __init__(self, config):
351
- super().__init__()
352
- self.dense = nn.Linear(config.hidden_size, config.intermediate_size)
353
- if isinstance(config.hidden_act, str):
354
- self.intermediate_act_fn = ACT2FN[config.hidden_act]
355
- else:
356
- self.intermediate_act_fn = config.hidden_act
357
-
358
- def forward(self, hidden_states):
359
- hidden_states = self.dense(hidden_states)
360
- hidden_states = self.intermediate_act_fn(hidden_states)
361
- return hidden_states
362
-
363
-
364
- class BertOutput(nn.Module):
365
- def __init__(self, config):
366
- super().__init__()
367
- self.dense = nn.Linear(config.intermediate_size, config.hidden_size)
368
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
369
- self.dropout = nn.Dropout(config.hidden_dropout_prob)
370
-
371
- def forward(self, hidden_states, input_tensor):
372
- hidden_states = self.dense(hidden_states)
373
- hidden_states = self.dropout(hidden_states)
374
- hidden_states = self.LayerNorm(hidden_states + input_tensor)
375
- return hidden_states
376
-
377
-
378
- class BertLayer(nn.Module):
379
- def __init__(self, config, layer_num):
380
- super().__init__()
381
- self.config = config
382
- self.chunk_size_feed_forward = config.chunk_size_feed_forward
383
- self.seq_len_dim = 1
384
- self.attention = BertAttention(config)
385
- self.layer_num = layer_num
386
- if (
387
- self.config.add_cross_attention
388
- and layer_num % self.config.cross_attention_freq == 0
389
- ):
390
- self.crossattention = BertAttention(
391
- config, is_cross_attention=self.config.add_cross_attention
392
- )
393
- self.has_cross_attention = True
394
- else:
395
- self.has_cross_attention = False
396
- self.intermediate = BertIntermediate(config)
397
- self.output = BertOutput(config)
398
-
399
- self.intermediate_query = BertIntermediate(config)
400
- self.output_query = BertOutput(config)
401
-
402
- def forward(
403
- self,
404
- hidden_states,
405
- attention_mask=None,
406
- head_mask=None,
407
- encoder_hidden_states=None,
408
- encoder_attention_mask=None,
409
- past_key_value=None,
410
- output_attentions=False,
411
- query_length=0,
412
- ):
413
- # decoder uni-directional self-attention cached key/values tuple is at positions 1,2
414
- self_attn_past_key_value = (
415
- past_key_value[:2] if past_key_value is not None else None
416
- )
417
- self_attention_outputs = self.attention(
418
- hidden_states,
419
- attention_mask,
420
- head_mask,
421
- output_attentions=output_attentions,
422
- past_key_value=self_attn_past_key_value,
423
- )
424
- attention_output = self_attention_outputs[0]
425
- outputs = self_attention_outputs[1:-1]
426
-
427
- present_key_value = self_attention_outputs[-1]
428
-
429
- if query_length > 0:
430
- query_attention_output = attention_output[:, :query_length, :]
431
-
432
- if self.has_cross_attention:
433
- assert (
434
- encoder_hidden_states is not None
435
- ), "encoder_hidden_states must be given for cross-attention layers"
436
- cross_attention_outputs = self.crossattention(
437
- query_attention_output,
438
- attention_mask,
439
- head_mask,
440
- encoder_hidden_states,
441
- encoder_attention_mask,
442
- output_attentions=output_attentions,
443
- )
444
- query_attention_output = cross_attention_outputs[0]
445
- outputs = (
446
- outputs + cross_attention_outputs[1:-1]
447
- ) # add cross attentions if we output attention weights
448
-
449
- layer_output = apply_chunking_to_forward(
450
- self.feed_forward_chunk_query,
451
- self.chunk_size_feed_forward,
452
- self.seq_len_dim,
453
- query_attention_output,
454
- )
455
- if attention_output.shape[1] > query_length:
456
- layer_output_text = apply_chunking_to_forward(
457
- self.feed_forward_chunk,
458
- self.chunk_size_feed_forward,
459
- self.seq_len_dim,
460
- attention_output[:, query_length:, :],
461
- )
462
- layer_output = torch.cat([layer_output, layer_output_text], dim=1)
463
- else:
464
- layer_output = apply_chunking_to_forward(
465
- self.feed_forward_chunk,
466
- self.chunk_size_feed_forward,
467
- self.seq_len_dim,
468
- attention_output,
469
- )
470
- outputs = (layer_output,) + outputs
471
-
472
- outputs = outputs + (present_key_value,)
473
-
474
- return outputs
475
-
476
- def feed_forward_chunk(self, attention_output):
477
- intermediate_output = self.intermediate(attention_output)
478
- layer_output = self.output(intermediate_output, attention_output)
479
- return layer_output
480
-
481
- def feed_forward_chunk_query(self, attention_output):
482
- intermediate_output = self.intermediate_query(attention_output)
483
- layer_output = self.output_query(intermediate_output, attention_output)
484
- return layer_output
485
-
486
-
487
- class BertEncoder(nn.Module):
488
- def __init__(self, config):
489
- super().__init__()
490
- self.config = config
491
- self.layer = nn.ModuleList(
492
- [BertLayer(config, i) for i in range(config.num_hidden_layers)]
493
- )
494
-
495
- def forward(
496
- self,
497
- hidden_states,
498
- attention_mask=None,
499
- head_mask=None,
500
- encoder_hidden_states=None,
501
- encoder_attention_mask=None,
502
- past_key_values=None,
503
- use_cache=None,
504
- output_attentions=False,
505
- output_hidden_states=False,
506
- return_dict=True,
507
- query_length=0,
508
- ):
509
- all_hidden_states = () if output_hidden_states else None
510
- all_self_attentions = () if output_attentions else None
511
- all_cross_attentions = (
512
- () if output_attentions and self.config.add_cross_attention else None
513
- )
514
-
515
- next_decoder_cache = () if use_cache else None
516
-
517
- for i in range(self.config.num_hidden_layers):
518
- layer_module = self.layer[i]
519
- if output_hidden_states:
520
- all_hidden_states = all_hidden_states + (hidden_states,)
521
-
522
- layer_head_mask = head_mask[i] if head_mask is not None else None
523
- past_key_value = past_key_values[i] if past_key_values is not None else None
524
-
525
- if getattr(self.config, "gradient_checkpointing", False) and self.training:
526
-
527
- if use_cache:
528
- logger.warning(
529
- "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
530
- )
531
- use_cache = False
532
-
533
- def create_custom_forward(module):
534
- def custom_forward(*inputs):
535
- return module(
536
- *inputs, past_key_value, output_attentions, query_length
537
- )
538
-
539
- return custom_forward
540
-
541
- layer_outputs = torch.utils.checkpoint.checkpoint(
542
- create_custom_forward(layer_module),
543
- hidden_states,
544
- attention_mask,
545
- layer_head_mask,
546
- encoder_hidden_states,
547
- encoder_attention_mask,
548
- )
549
- else:
550
- layer_outputs = layer_module(
551
- hidden_states,
552
- attention_mask,
553
- layer_head_mask,
554
- encoder_hidden_states,
555
- encoder_attention_mask,
556
- past_key_value,
557
- output_attentions,
558
- query_length,
559
- )
560
-
561
- hidden_states = layer_outputs[0]
562
- if use_cache:
563
- next_decoder_cache += (layer_outputs[-1],)
564
- if output_attentions:
565
- all_self_attentions = all_self_attentions + (layer_outputs[1],)
566
- all_cross_attentions = all_cross_attentions + (layer_outputs[2],)
567
-
568
- if output_hidden_states:
569
- all_hidden_states = all_hidden_states + (hidden_states,)
570
-
571
- if not return_dict:
572
- return tuple(
573
- v
574
- for v in [
575
- hidden_states,
576
- next_decoder_cache,
577
- all_hidden_states,
578
- all_self_attentions,
579
- all_cross_attentions,
580
- ]
581
- if v is not None
582
- )
583
- return BaseModelOutputWithPastAndCrossAttentions(
584
- last_hidden_state=hidden_states,
585
- past_key_values=next_decoder_cache,
586
- hidden_states=all_hidden_states,
587
- attentions=all_self_attentions,
588
- cross_attentions=all_cross_attentions,
589
- )
590
-
591
-
592
- class BertPooler(nn.Module):
593
- def __init__(self, config):
594
- super().__init__()
595
- self.dense = nn.Linear(config.hidden_size, config.hidden_size)
596
- self.activation = nn.Tanh()
597
-
598
- def forward(self, hidden_states):
599
- # We "pool" the model by simply taking the hidden state corresponding
600
- # to the first token.
601
- first_token_tensor = hidden_states[:, 0]
602
- pooled_output = self.dense(first_token_tensor)
603
- pooled_output = self.activation(pooled_output)
604
- return pooled_output
605
-
606
-
607
- class BertPredictionHeadTransform(nn.Module):
608
- def __init__(self, config):
609
- super().__init__()
610
- self.dense = nn.Linear(config.hidden_size, config.hidden_size)
611
- if isinstance(config.hidden_act, str):
612
- self.transform_act_fn = ACT2FN[config.hidden_act]
613
- else:
614
- self.transform_act_fn = config.hidden_act
615
- self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
616
-
617
- def forward(self, hidden_states):
618
- hidden_states = self.dense(hidden_states)
619
- hidden_states = self.transform_act_fn(hidden_states)
620
- hidden_states = self.LayerNorm(hidden_states)
621
- return hidden_states
622
-
623
-
624
- class BertLMPredictionHead(nn.Module):
625
- def __init__(self, config):
626
- super().__init__()
627
- self.transform = BertPredictionHeadTransform(config)
628
-
629
- # The output weights are the same as the input embeddings, but there is
630
- # an output-only bias for each token.
631
- self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
632
-
633
- self.bias = nn.Parameter(torch.zeros(config.vocab_size))
634
-
635
- # Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings`
636
- self.decoder.bias = self.bias
637
-
638
- def forward(self, hidden_states):
639
- hidden_states = self.transform(hidden_states)
640
- hidden_states = self.decoder(hidden_states)
641
- return hidden_states
642
-
643
-
644
- class BertOnlyMLMHead(nn.Module):
645
- def __init__(self, config):
646
- super().__init__()
647
- self.predictions = BertLMPredictionHead(config)
648
-
649
- def forward(self, sequence_output):
650
- prediction_scores = self.predictions(sequence_output)
651
- return prediction_scores
652
-
653
-
654
- class BertPreTrainedModel(PreTrainedModel):
655
- """
656
- An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained
657
- models.
658
- """
659
-
660
- config_class = BertConfig
661
- base_model_prefix = "bert"
662
- _keys_to_ignore_on_load_missing = [r"position_ids"]
663
-
664
- def _init_weights(self, module):
665
- """Initialize the weights"""
666
- if isinstance(module, (nn.Linear, nn.Embedding)):
667
- # Slightly different from the TF version which uses truncated_normal for initialization
668
- # cf https://github.com/pytorch/pytorch/pull/5617
669
- module.weight.data.normal_(mean=0.0, std=self.config.initializer_range)
670
- elif isinstance(module, nn.LayerNorm):
671
- module.bias.data.zero_()
672
- module.weight.data.fill_(1.0)
673
- if isinstance(module, nn.Linear) and module.bias is not None:
674
- module.bias.data.zero_()
675
-
676
-
677
- class BertModel(BertPreTrainedModel):
678
- """
679
- The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of
680
- cross-attention is added between the self-attention layers, following the architecture described in `Attention is
681
- all you need <https://arxiv.org/abs/1706.03762>`__ by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit,
682
- Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin.
683
- To behave as a decoder the model needs to be initialized with the :obj:`is_decoder` argument of the configuration set to :obj:`True`. To be used in a Seq2Seq model, the model needs to be initialized with both :obj:`is_decoder` argument and :obj:`add_cross_attention` set to :obj:`True`; an :obj:`encoder_hidden_states` is then expected as an
684
- input to the forward pass.
685
- """
686
-
687
- def __init__(self, config, add_pooling_layer=False):
688
- super().__init__(config)
689
- self.config = config
690
-
691
- self.embeddings = BertEmbeddings(config)
692
-
693
- self.encoder = BertEncoder(config)
694
-
695
- self.pooler = BertPooler(config) if add_pooling_layer else None
696
-
697
- self.init_weights()
698
-
699
- def get_input_embeddings(self):
700
- return self.embeddings.word_embeddings
701
-
702
- def set_input_embeddings(self, value):
703
- self.embeddings.word_embeddings = value
704
-
705
- def _prune_heads(self, heads_to_prune):
706
- """
707
- Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base
708
- class PreTrainedModel
709
- """
710
- for layer, heads in heads_to_prune.items():
711
- self.encoder.layer[layer].attention.prune_heads(heads)
712
-
713
- def get_extended_attention_mask(
714
- self,
715
- attention_mask: Tensor,
716
- input_shape: Tuple[int],
717
- device: device,
718
- is_decoder: bool,
719
- has_query: bool = False,
720
- ) -> Tensor:
721
- """
722
- Makes broadcastable attention and causal masks so that future and masked tokens are ignored.
723
-
724
- Arguments:
725
- attention_mask (:obj:`torch.Tensor`):
726
- Mask with ones indicating tokens to attend to, zeros for tokens to ignore.
727
- input_shape (:obj:`Tuple[int]`):
728
- The shape of the input to the model.
729
- device: (:obj:`torch.device`):
730
- The device of the input to the model.
731
-
732
- Returns:
733
- :obj:`torch.Tensor` The extended attention mask, with a the same dtype as :obj:`attention_mask.dtype`.
734
- """
735
- # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
736
- # ourselves in which case we just need to make it broadcastable to all heads.
737
- if attention_mask.dim() == 3:
738
- extended_attention_mask = attention_mask[:, None, :, :]
739
- elif attention_mask.dim() == 2:
740
- # Provided a padding mask of dimensions [batch_size, seq_length]
741
- # - if the model is a decoder, apply a causal mask in addition to the padding mask
742
- # - if the model is an encoder, make the mask broadcastable to [batch_size, num_heads, seq_length, seq_length]
743
- if is_decoder:
744
- batch_size, seq_length = input_shape
745
-
746
- seq_ids = torch.arange(seq_length, device=device)
747
- causal_mask = (
748
- seq_ids[None, None, :].repeat(batch_size, seq_length, 1)
749
- <= seq_ids[None, :, None]
750
- )
751
-
752
- # add a prefix ones mask to the causal mask
753
- # causal and attention masks must have same type with pytorch version < 1.3
754
- causal_mask = causal_mask.to(attention_mask.dtype)
755
-
756
- if causal_mask.shape[1] < attention_mask.shape[1]:
757
- prefix_seq_len = attention_mask.shape[1] - causal_mask.shape[1]
758
- if has_query: # UniLM style attention mask
759
- causal_mask = torch.cat(
760
- [
761
- torch.zeros(
762
- (batch_size, prefix_seq_len, seq_length),
763
- device=device,
764
- dtype=causal_mask.dtype,
765
- ),
766
- causal_mask,
767
- ],
768
- axis=1,
769
- )
770
- causal_mask = torch.cat(
771
- [
772
- torch.ones(
773
- (batch_size, causal_mask.shape[1], prefix_seq_len),
774
- device=device,
775
- dtype=causal_mask.dtype,
776
- ),
777
- causal_mask,
778
- ],
779
- axis=-1,
780
- )
781
- extended_attention_mask = (
782
- causal_mask[:, None, :, :] * attention_mask[:, None, None, :]
783
- )
784
- else:
785
- extended_attention_mask = attention_mask[:, None, None, :]
786
- else:
787
- raise ValueError(
788
- "Wrong shape for input_ids (shape {}) or attention_mask (shape {})".format(
789
- input_shape, attention_mask.shape
790
- )
791
- )
792
-
793
- # Since attention_mask is 1.0 for positions we want to attend and 0.0 for
794
- # masked positions, this operation will create a tensor which is 0.0 for
795
- # positions we want to attend and -10000.0 for masked positions.
796
- # Since we are adding it to the raw scores before the softmax, this is
797
- # effectively the same as removing these entirely.
798
- extended_attention_mask = extended_attention_mask.to(
799
- dtype=self.dtype
800
- ) # fp16 compatibility
801
- extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0
802
- return extended_attention_mask
803
-
804
- def forward(
805
- self,
806
- input_ids=None,
807
- attention_mask=None,
808
- position_ids=None,
809
- head_mask=None,
810
- query_embeds=None,
811
- encoder_hidden_states=None,
812
- encoder_attention_mask=None,
813
- past_key_values=None,
814
- use_cache=None,
815
- output_attentions=None,
816
- output_hidden_states=None,
817
- return_dict=None,
818
- is_decoder=False,
819
- ):
820
- r"""
821
- encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):
822
- Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
823
- the model is configured as a decoder.
824
- encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
825
- Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
826
- the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``:
827
- - 1 for tokens that are **not masked**,
828
- - 0 for tokens that are **masked**.
829
- past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
830
- Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
831
- If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids`
832
- (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)`
833
- instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`.
834
- use_cache (:obj:`bool`, `optional`):
835
- If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up
836
- decoding (see :obj:`past_key_values`).
837
- """
838
- output_attentions = (
839
- output_attentions
840
- if output_attentions is not None
841
- else self.config.output_attentions
842
- )
843
- output_hidden_states = (
844
- output_hidden_states
845
- if output_hidden_states is not None
846
- else self.config.output_hidden_states
847
- )
848
- return_dict = (
849
- return_dict if return_dict is not None else self.config.use_return_dict
850
- )
851
-
852
- # use_cache = use_cache if use_cache is not None else self.config.use_cache
853
-
854
- if input_ids is None:
855
- assert (
856
- query_embeds is not None
857
- ), "You have to specify query_embeds when input_ids is None"
858
-
859
- # past_key_values_length
860
- past_key_values_length = (
861
- past_key_values[0][0].shape[2] - self.config.query_length
862
- if past_key_values is not None
863
- else 0
864
- )
865
-
866
- query_length = query_embeds.shape[1] if query_embeds is not None else 0
867
-
868
- embedding_output = self.embeddings(
869
- input_ids=input_ids,
870
- position_ids=position_ids,
871
- query_embeds=query_embeds,
872
- past_key_values_length=past_key_values_length,
873
- )
874
-
875
- input_shape = embedding_output.size()[:-1]
876
- batch_size, seq_length = input_shape
877
- device = embedding_output.device
878
-
879
- if attention_mask is None:
880
- attention_mask = torch.ones(
881
- ((batch_size, seq_length + past_key_values_length)), device=device
882
- )
883
-
884
- # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length]
885
- # ourselves in which case we just need to make it broadcastable to all heads.
886
- if is_decoder:
887
- extended_attention_mask = self.get_extended_attention_mask(
888
- attention_mask,
889
- input_ids.shape,
890
- device,
891
- is_decoder,
892
- has_query=(query_embeds is not None),
893
- )
894
- else:
895
- extended_attention_mask = self.get_extended_attention_mask(
896
- attention_mask, input_shape, device, is_decoder
897
- )
898
-
899
- # If a 2D or 3D attention mask is provided for the cross-attention
900
- # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length]
901
- if encoder_hidden_states is not None:
902
- if type(encoder_hidden_states) == list:
903
- encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states[
904
- 0
905
- ].size()
906
- else:
907
- (
908
- encoder_batch_size,
909
- encoder_sequence_length,
910
- _,
911
- ) = encoder_hidden_states.size()
912
- encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length)
913
-
914
- if type(encoder_attention_mask) == list:
915
- encoder_extended_attention_mask = [
916
- self.invert_attention_mask(mask) for mask in encoder_attention_mask
917
- ]
918
- elif encoder_attention_mask is None:
919
- encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device)
920
- encoder_extended_attention_mask = self.invert_attention_mask(
921
- encoder_attention_mask
922
- )
923
- else:
924
- encoder_extended_attention_mask = self.invert_attention_mask(
925
- encoder_attention_mask
926
- )
927
- else:
928
- encoder_extended_attention_mask = None
929
-
930
- # Prepare head mask if needed
931
- # 1.0 in head_mask indicate we keep the head
932
- # attention_probs has shape bsz x n_heads x N x N
933
- # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
934
- # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
935
- head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers)
936
-
937
- encoder_outputs = self.encoder(
938
- embedding_output,
939
- attention_mask=extended_attention_mask,
940
- head_mask=head_mask,
941
- encoder_hidden_states=encoder_hidden_states,
942
- encoder_attention_mask=encoder_extended_attention_mask,
943
- past_key_values=past_key_values,
944
- use_cache=use_cache,
945
- output_attentions=output_attentions,
946
- output_hidden_states=output_hidden_states,
947
- return_dict=return_dict,
948
- query_length=query_length,
949
- )
950
- sequence_output = encoder_outputs[0]
951
- pooled_output = (
952
- self.pooler(sequence_output) if self.pooler is not None else None
953
- )
954
-
955
- if not return_dict:
956
- return (sequence_output, pooled_output) + encoder_outputs[1:]
957
-
958
- return BaseModelOutputWithPoolingAndCrossAttentions(
959
- last_hidden_state=sequence_output,
960
- pooler_output=pooled_output,
961
- past_key_values=encoder_outputs.past_key_values,
962
- hidden_states=encoder_outputs.hidden_states,
963
- attentions=encoder_outputs.attentions,
964
- cross_attentions=encoder_outputs.cross_attentions,
965
- )
966
-
967
-
968
- class BertLMHeadModel(BertPreTrainedModel):
969
-
970
- _keys_to_ignore_on_load_unexpected = [r"pooler"]
971
- _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"]
972
-
973
- def __init__(self, config):
974
- super().__init__(config)
975
-
976
- self.bert = BertModel(config, add_pooling_layer=False)
977
- self.cls = BertOnlyMLMHead(config)
978
-
979
- self.init_weights()
980
-
981
- def get_output_embeddings(self):
982
- return self.cls.predictions.decoder
983
-
984
- def set_output_embeddings(self, new_embeddings):
985
- self.cls.predictions.decoder = new_embeddings
986
-
987
- def forward(
988
- self,
989
- input_ids=None,
990
- attention_mask=None,
991
- position_ids=None,
992
- head_mask=None,
993
- query_embeds=None,
994
- encoder_hidden_states=None,
995
- encoder_attention_mask=None,
996
- labels=None,
997
- past_key_values=None,
998
- use_cache=True,
999
- output_attentions=None,
1000
- output_hidden_states=None,
1001
- return_dict=None,
1002
- return_logits=False,
1003
- is_decoder=True,
1004
- reduction="mean",
1005
- ):
1006
- r"""
1007
- encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`):
1008
- Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if
1009
- the model is configured as a decoder.
1010
- encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
1011
- Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in
1012
- the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``:
1013
- - 1 for tokens that are **not masked**,
1014
- - 0 for tokens that are **masked**.
1015
- labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
1016
- Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in
1017
- ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are
1018
- ignored (masked), the loss is only computed for the tokens with labels n ``[0, ..., config.vocab_size]``
1019
- past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`):
1020
- Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding.
1021
- If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids`
1022
- (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)`
1023
- instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`.
1024
- use_cache (:obj:`bool`, `optional`):
1025
- If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up
1026
- decoding (see :obj:`past_key_values`).
1027
- Returns:
1028
- Example::
1029
- >>> from transformers import BertTokenizer, BertLMHeadModel, BertConfig
1030
- >>> import torch
1031
- >>> tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
1032
- >>> config = BertConfig.from_pretrained("bert-base-cased")
1033
- >>> model = BertLMHeadModel.from_pretrained('bert-base-cased', config=config)
1034
- >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
1035
- >>> outputs = model(**inputs)
1036
- >>> prediction_logits = outputs.logits
1037
- """
1038
- return_dict = (
1039
- return_dict if return_dict is not None else self.config.use_return_dict
1040
- )
1041
- if labels is not None:
1042
- use_cache = False
1043
- if past_key_values is not None:
1044
- query_embeds = None
1045
-
1046
- outputs = self.bert(
1047
- input_ids,
1048
- attention_mask=attention_mask,
1049
- position_ids=position_ids,
1050
- head_mask=head_mask,
1051
- query_embeds=query_embeds,
1052
- encoder_hidden_states=encoder_hidden_states,
1053
- encoder_attention_mask=encoder_attention_mask,
1054
- past_key_values=past_key_values,
1055
- use_cache=use_cache,
1056
- output_attentions=output_attentions,
1057
- output_hidden_states=output_hidden_states,
1058
- return_dict=return_dict,
1059
- is_decoder=is_decoder,
1060
- )
1061
-
1062
- sequence_output = outputs[0]
1063
- if query_embeds is not None:
1064
- sequence_output = outputs[0][:, query_embeds.shape[1] :, :]
1065
-
1066
- prediction_scores = self.cls(sequence_output)
1067
-
1068
- if return_logits:
1069
- return prediction_scores[:, :-1, :].contiguous()
1070
-
1071
- lm_loss = None
1072
- if labels is not None:
1073
- # we are doing next-token prediction; shift prediction scores and input ids by one
1074
- shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous()
1075
- labels = labels[:, 1:].contiguous()
1076
- loss_fct = CrossEntropyLoss(reduction=reduction, label_smoothing=0.1)
1077
- lm_loss = loss_fct(
1078
- shifted_prediction_scores.view(-1, self.config.vocab_size),
1079
- labels.view(-1),
1080
- )
1081
- if reduction == "none":
1082
- lm_loss = lm_loss.view(prediction_scores.size(0), -1).sum(1)
1083
-
1084
- if not return_dict:
1085
- output = (prediction_scores,) + outputs[2:]
1086
- return ((lm_loss,) + output) if lm_loss is not None else output
1087
-
1088
- return CausalLMOutputWithCrossAttentions(
1089
- loss=lm_loss,
1090
- logits=prediction_scores,
1091
- past_key_values=outputs.past_key_values,
1092
- hidden_states=outputs.hidden_states,
1093
- attentions=outputs.attentions,
1094
- cross_attentions=outputs.cross_attentions,
1095
- )
1096
-
1097
- def prepare_inputs_for_generation(
1098
- self, input_ids, query_embeds, past=None, attention_mask=None, **model_kwargs
1099
- ):
1100
- # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly
1101
- if attention_mask is None:
1102
- attention_mask = input_ids.new_ones(input_ids.shape)
1103
- query_mask = input_ids.new_ones(query_embeds.shape[:-1])
1104
- attention_mask = torch.cat([query_mask, attention_mask], dim=-1)
1105
-
1106
- # cut decoder_input_ids if past is used
1107
- if past is not None:
1108
- input_ids = input_ids[:, -1:]
1109
-
1110
- return {
1111
- "input_ids": input_ids,
1112
- "query_embeds": query_embeds,
1113
- "attention_mask": attention_mask,
1114
- "past_key_values": past,
1115
- "encoder_hidden_states": model_kwargs.get("encoder_hidden_states", None),
1116
- "encoder_attention_mask": model_kwargs.get("encoder_attention_mask", None),
1117
- "is_decoder": True,
1118
- }
1119
-
1120
- def _reorder_cache(self, past, beam_idx):
1121
- reordered_past = ()
1122
- for layer_past in past:
1123
- reordered_past += (
1124
- tuple(
1125
- past_state.index_select(0, beam_idx) for past_state in layer_past
1126
- ),
1127
- )
1128
- return reordered_past
1129
-
1130
-
1131
- class BertForMaskedLM(BertPreTrainedModel):
1132
-
1133
- _keys_to_ignore_on_load_unexpected = [r"pooler"]
1134
- _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"]
1135
-
1136
- def __init__(self, config):
1137
- super().__init__(config)
1138
-
1139
- self.bert = BertModel(config, add_pooling_layer=False)
1140
- self.cls = BertOnlyMLMHead(config)
1141
-
1142
- self.init_weights()
1143
-
1144
- def get_output_embeddings(self):
1145
- return self.cls.predictions.decoder
1146
-
1147
- def set_output_embeddings(self, new_embeddings):
1148
- self.cls.predictions.decoder = new_embeddings
1149
-
1150
- def forward(
1151
- self,
1152
- input_ids=None,
1153
- attention_mask=None,
1154
- position_ids=None,
1155
- head_mask=None,
1156
- query_embeds=None,
1157
- encoder_hidden_states=None,
1158
- encoder_attention_mask=None,
1159
- labels=None,
1160
- output_attentions=None,
1161
- output_hidden_states=None,
1162
- return_dict=None,
1163
- return_logits=False,
1164
- is_decoder=False,
1165
- ):
1166
- r"""
1167
- labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`):
1168
- Labels for computing the masked language modeling loss. Indices should be in ``[-100, 0, ...,
1169
- config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are ignored
1170
- (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]``
1171
- """
1172
-
1173
- return_dict = (
1174
- return_dict if return_dict is not None else self.config.use_return_dict
1175
- )
1176
-
1177
- outputs = self.bert(
1178
- input_ids,
1179
- attention_mask=attention_mask,
1180
- position_ids=position_ids,
1181
- head_mask=head_mask,
1182
- query_embeds=query_embeds,
1183
- encoder_hidden_states=encoder_hidden_states,
1184
- encoder_attention_mask=encoder_attention_mask,
1185
- output_attentions=output_attentions,
1186
- output_hidden_states=output_hidden_states,
1187
- return_dict=return_dict,
1188
- is_decoder=is_decoder,
1189
- )
1190
-
1191
- if query_embeds is not None:
1192
- sequence_output = outputs[0][:, query_embeds.shape[1] :, :]
1193
- prediction_scores = self.cls(sequence_output)
1194
-
1195
- if return_logits:
1196
- return prediction_scores
1197
-
1198
- masked_lm_loss = None
1199
- if labels is not None:
1200
- loss_fct = CrossEntropyLoss() # -100 index = padding token
1201
- masked_lm_loss = loss_fct(
1202
- prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)
1203
- )
1204
-
1205
- if not return_dict:
1206
- output = (prediction_scores,) + outputs[2:]
1207
- return (
1208
- ((masked_lm_loss,) + output) if masked_lm_loss is not None else output
1209
- )
1210
-
1211
- return MaskedLMOutput(
1212
- loss=masked_lm_loss,
1213
- logits=prediction_scores,
1214
- hidden_states=outputs.hidden_states,
1215
- attentions=outputs.attentions,
1216
- )
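The shifted cross-entropy in BertLMHeadModel.forward above is the standard next-token objective: drop the last logit, drop the first label, then (with reduction="none") sum per-token losses per sequence. Below is a minimal, self-contained sketch of just that step; the batch size, sequence length, and vocabulary size are illustrative assumptions, not values from the deleted file.

```python
# Illustrative sketch of the shift-by-one causal LM loss used above.
import torch
from torch.nn import CrossEntropyLoss

batch, seq_len, vocab = 2, 6, 50           # assumed toy sizes
prediction_scores = torch.randn(batch, seq_len, vocab)
labels = torch.randint(0, vocab, (batch, seq_len))

# Predict token t+1 from position t: drop the last logit and the first label.
shifted_scores = prediction_scores[:, :-1, :].contiguous()
shifted_labels = labels[:, 1:].contiguous()

loss_fct = CrossEntropyLoss(reduction="none", label_smoothing=0.1)
per_token = loss_fct(shifted_scores.view(-1, vocab), shifted_labels.view(-1))
# With reduction="none" the deleted code sums the per-token losses per sequence.
per_sequence = per_token.view(batch, -1).sum(1)
print(per_sequence.shape)  # torch.Size([2])
```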
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/matcher.py DELETED
@@ -1,135 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2
- from typing import List
3
- import torch
4
-
5
-
6
- class Matcher(object):
7
- """
8
- This class assigns to each predicted "element" (e.g., a box) a ground-truth
9
- element. Each predicted element will have exactly zero or one matches; each
10
- ground-truth element may be matched to zero or more predicted elements.
11
-
12
- The matching is determined by the MxN match_quality_matrix, that characterizes
13
- how well each (ground-truth, prediction)-pair match each other. For example,
14
- if the elements are boxes, this matrix may contain box intersection-over-union
15
- overlap values.
16
-
17
- The matcher returns (a) a vector of length N containing the index of the
18
- ground-truth element m in [0, M) that matches to prediction n in [0, N).
19
- (b) a vector of length N containing the labels for each prediction.
20
- """
21
-
22
- def __init__(
23
- self, thresholds: List[float], labels: List[int], allow_low_quality_matches: bool = False
24
- ):
25
- """
26
- Args:
27
- thresholds (list): a list of thresholds used to stratify predictions
28
- into levels.
29
- labels (list): a list of values to label predictions belonging at
30
- each level. A label can be one of {-1, 0, 1} signifying
31
- {ignore, negative class, positive class}, respectively.
32
- allow_low_quality_matches (bool): if True, produce additional matches
33
- for predictions with maximum match quality lower than high_threshold.
34
- See set_low_quality_matches_ for more details.
35
-
36
- For example,
37
- thresholds = [0.3, 0.5]
38
- labels = [0, -1, 1]
39
- All predictions with iou < 0.3 will be marked with 0 and
40
- thus will be considered as false positives while training.
41
- All predictions with 0.3 <= iou < 0.5 will be marked with -1 and
42
- thus will be ignored.
43
- All predictions with 0.5 <= iou will be marked with 1 and
44
- thus will be considered as true positives.
45
- """
46
- # Add -inf and +inf to first and last position in thresholds
47
- thresholds = thresholds[:]
48
- assert thresholds[0] > 0
49
- thresholds.insert(0, -float("inf"))
50
- thresholds.append(float("inf"))
51
- assert all(low <= high for (low, high) in zip(thresholds[:-1], thresholds[1:]))
52
- assert all(l in [-1, 0, 1] for l in labels)
53
- assert len(labels) == len(thresholds) - 1
54
- self.thresholds = thresholds
55
- self.labels = labels
56
- self.allow_low_quality_matches = allow_low_quality_matches
57
-
58
- def __call__(self, match_quality_matrix):
59
- """
60
- Args:
61
- match_quality_matrix (Tensor[float]): an MxN tensor, containing the
62
- pairwise quality between M ground-truth elements and N predicted
63
- elements. All elements must be >= 0 (due to the us of `torch.nonzero`
64
- for selecting indices in :meth:`set_low_quality_matches_`).
65
-
66
- Returns:
67
- matches (Tensor[int64]): a vector of length N, where matches[i] is a matched
68
- ground-truth index in [0, M)
69
- match_labels (Tensor[int8]): a vector of length N, where pred_labels[i] indicates
70
- whether a prediction is a true or false positive or ignored
71
- """
72
- assert match_quality_matrix.dim() == 2
73
- if match_quality_matrix.numel() == 0:
74
- default_matches = match_quality_matrix.new_full(
75
- (match_quality_matrix.size(1),), 0, dtype=torch.int64
76
- )
77
- # When no gt boxes exist, we define IOU = 0 and therefore set labels
78
- # to `self.labels[0]`, which usually defaults to background class 0
79
- # To choose to ignore instead, can make labels=[-1,0,-1,1] + set appropriate thresholds
80
- default_match_labels = match_quality_matrix.new_full(
81
- (match_quality_matrix.size(1),), self.labels[0], dtype=torch.int8
82
- )
83
- return default_matches, default_match_labels
84
-
85
- assert torch.all(match_quality_matrix >= 0)
86
-
87
- # match_quality_matrix is M (gt) x N (predicted)
88
- # Max over gt elements (dim 0) to find best gt candidate for each prediction
89
- matched_vals, matches = match_quality_matrix.max(dim=0)
90
-
91
- match_labels = matches.new_full(matches.size(), 1, dtype=torch.int8)
92
-
93
- for (l, low, high) in zip(self.labels, self.thresholds[:-1], self.thresholds[1:]):
94
- low_high = (matched_vals >= low) & (matched_vals < high)
95
- match_labels[low_high] = l
96
-
97
- if self.allow_low_quality_matches:
98
- self.set_low_quality_matches_(match_labels, match_quality_matrix)
99
-
100
- return matches, match_labels
101
-
102
- def set_low_quality_matches_(self, match_labels, match_quality_matrix):
103
- """
104
- Produce additional matches for predictions that have only low-quality matches.
105
- Specifically, for each ground-truth G find the set of predictions that have
106
- maximum overlap with it (including ties); for each prediction in that set, if
107
- it is unmatched, then match it to the ground-truth G.
108
-
109
- This function implements the RPN assignment case (i) in Sec. 3.1.2 of the
110
- Faster R-CNN paper: https://arxiv.org/pdf/1506.01497v3.pdf.
111
- """
112
- # For each gt, find the prediction with which it has highest quality
113
- highest_quality_foreach_gt, _ = match_quality_matrix.max(dim=1)
114
- # Find the highest quality match available, even if it is low, including ties.
115
- # Note that the matches qualities must be positive due to the use of
116
- # `torch.nonzero`.
117
- gt_pred_pairs_of_highest_quality = torch.nonzero(
118
- match_quality_matrix == highest_quality_foreach_gt[:, None]
119
- )
120
- # Example gt_pred_pairs_of_highest_quality:
121
- # tensor([[ 0, 39796],
122
- # [ 1, 32055],
123
- # [ 1, 32070],
124
- # [ 2, 39190],
125
- # [ 2, 40255],
126
- # [ 3, 40390],
127
- # [ 3, 41455],
128
- # [ 4, 45470],
129
- # [ 5, 45325],
130
- # [ 5, 46390]])
131
- # Each row is a (gt index, prediction index)
132
- # Note how gt items 1, 2, 3, and 5 each have two ties
133
-
134
- pred_inds_to_update = gt_pred_pairs_of_highest_quality[:, 1]
135
- match_labels[pred_inds_to_update] = 1
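The Matcher docstring above walks through thresholds = [0.3, 0.5] with labels = [0, -1, 1]. The toy sketch below reproduces only that labeling rule on a small IoU matrix; it is an illustration under assumed inputs, not a call into the deleted class.

```python
# Toy illustration of the threshold-based labeling described in the docstring.
import torch

iou = torch.tensor([[0.10, 0.40, 0.70],   # gt 0 vs predictions 0..2
                    [0.20, 0.55, 0.05]])  # gt 1 vs predictions 0..2
matched_vals, matches = iou.max(dim=0)    # best ground truth per prediction

thresholds = [-float("inf"), 0.3, 0.5, float("inf")]
labels = [0, -1, 1]
match_labels = torch.empty_like(matches, dtype=torch.int8)
for lbl, low, high in zip(labels, thresholds[:-1], thresholds[1:]):
    match_labels[(matched_vals >= low) & (matched_vals < high)] = lbl

print(matches.tolist())       # [1, 1, 0] -> index of the best ground truth
print(match_labels.tolist())  # [0, 1, 1] -> negative / positive / positive
```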
spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/sort.h DELETED
@@ -1,23 +0,0 @@
1
- /*
2
- * Copyright 2008-2013 NVIDIA Corporation
3
- *
4
- * Licensed under the Apache License, Version 2.0 (the "License");
5
- * you may not use this file except in compliance with the License.
6
- * You may obtain a copy of the License at
7
- *
8
- * http://www.apache.org/licenses/LICENSE-2.0
9
- *
10
- * Unless required by applicable law or agreed to in writing, software
11
- * distributed under the License is distributed on an "AS IS" BASIS,
12
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- * See the License for the specific language governing permissions and
14
- * limitations under the License.
15
- */
16
-
17
- #pragma once
18
-
19
- #include <thrust/detail/config.h>
20
-
21
- // this system inherits sort
22
- #include <thrust/system/detail/sequential/sort.h>
23
-
spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/scan.h DELETED
@@ -1,928 +0,0 @@
1
- /******************************************************************************
2
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
3
- *
4
- * Redistribution and use in source and binary forms, with or without
5
- * modification, are permitted provided that the following conditions are met:
6
- * * Redistributions of source code must retain the above copyright
7
- * notice, this list of conditions and the following disclaimer.
8
- * * Redistributions in binary form must reproduce the above copyright
9
- * notice, this list of conditions and the following disclaimer in the
10
- * documentation and/or other materials provided with the distribution.
11
- * * Neither the name of the NVIDIA CORPORATION nor the
12
- * names of its contributors may be used to endorse or promote products
13
- * derived from this software without specific prior written permission.
14
- *
15
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
16
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
17
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
18
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
19
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
20
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
21
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
22
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
23
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
24
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
25
- *
26
- ******************************************************************************/
27
- #pragma once
28
-
29
-
30
- #if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
31
- #include <thrust/system/cuda/config.h>
32
- #include <thrust/detail/type_traits.h>
33
- #include <thrust/functional.h>
34
- #include <thrust/detail/type_traits/iterator/is_output_iterator.h>
35
-
36
- #include <thrust/system/cuda/detail/execution_policy.h>
37
- #include <thrust/detail/cstdint.h>
38
- #include <thrust/detail/temporary_array.h>
39
- #include <thrust/system/cuda/detail/util.h>
40
- #include <cub/device/device_scan.cuh>
41
- #include <thrust/system/cuda/detail/core/agent_launcher.h>
42
- #include <thrust/system/cuda/detail/par_to_seq.h>
43
- #include <thrust/system/cuda/detail/dispatch.h>
44
- #include <thrust/detail/mpl/math.h>
45
- #include <thrust/detail/minmax.h>
46
- #include <thrust/distance.h>
47
- #include <thrust/iterator/iterator_traits.h>
48
-
49
- namespace thrust
50
- {
51
- template <typename DerivedPolicy,
52
- typename InputIterator,
53
- typename OutputIterator,
54
- typename AssociativeOperator>
55
- __host__ __device__ OutputIterator
56
- inclusive_scan(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
57
- InputIterator first,
58
- InputIterator last,
59
- OutputIterator result,
60
- AssociativeOperator binary_op);
61
-
62
- template <typename DerivedPolicy,
63
- typename InputIterator,
64
- typename OutputIterator,
65
- typename T,
66
- typename AssociativeOperator>
67
- __host__ __device__ OutputIterator
68
- exclusive_scan(const thrust::detail::execution_policy_base<DerivedPolicy> &exec,
69
- InputIterator first,
70
- InputIterator last,
71
- OutputIterator result,
72
- T init,
73
- AssociativeOperator binary_op);
74
- } // end namespace thrust
75
-
76
- namespace thrust
77
- {
78
- namespace cuda_cub {
79
-
80
- namespace __scan {
81
-
82
- namespace mpl = thrust::detail::mpl::math;
83
-
84
- template<class>
85
- struct WarpSize { enum { value = 32 }; };
86
-
87
- template <int _BLOCK_THREADS,
88
- int _ITEMS_PER_THREAD = 1,
89
- cub::BlockLoadAlgorithm _LOAD_ALGORITHM = cub::BLOCK_LOAD_DIRECT,
90
- cub::CacheLoadModifier _LOAD_MODIFIER = cub::LOAD_DEFAULT,
91
- cub::BlockStoreAlgorithm _STORE_ALGORITHM = cub::BLOCK_STORE_DIRECT,
92
- cub::BlockScanAlgorithm _SCAN_ALGORITHM = cub::BLOCK_SCAN_WARP_SCANS>
93
- struct PtxPolicy
94
- {
95
- enum
96
- {
97
- BLOCK_THREADS = _BLOCK_THREADS,
98
- ITEMS_PER_THREAD = _ITEMS_PER_THREAD,
99
- ITEMS_PER_TILE = BLOCK_THREADS * ITEMS_PER_THREAD,
100
- };
101
-
102
- static const cub::BlockLoadAlgorithm LOAD_ALGORITHM = _LOAD_ALGORITHM;
103
- static const cub::CacheLoadModifier LOAD_MODIFIER = _LOAD_MODIFIER;
104
- static const cub::BlockStoreAlgorithm STORE_ALGORITHM = _STORE_ALGORITHM;
105
- static const cub::BlockScanAlgorithm SCAN_ALGORITHM = _SCAN_ALGORITHM;
106
- }; // struct PtxPolicy
107
-
108
-
109
- // Scale the number of warps to keep same amount of "tile" storage
110
- // as the nominal configuration for 4B data. Minimum of two warps.
111
- //
112
- template<class Arch, int NOMINAL_4B_BLOCK_THREADS, class T>
113
- struct THRUST_BLOCK_THREADS
114
- {
115
- enum
116
- {
117
- value = mpl::min<int,
118
- NOMINAL_4B_BLOCK_THREADS,
119
- mpl::max<int,
120
- 3,
121
- ((NOMINAL_4B_BLOCK_THREADS /
122
- WarpSize<Arch>::value) *
123
- 4) /
124
- sizeof(T)>::value *
125
- WarpSize<Arch>::value>::value
126
- };
127
- }; // struct THRUST_BLOCK_THREADS
128
-
129
- // If necessary, scale down number of items per thread to keep
130
- // the same amount of "tile" storage as the nominal configuration for 4B data.
131
- // Minimum 1 item per thread
132
- //
133
- template <class Arch,
134
- int NOMINAL_4B_ITEMS_PER_THREAD,
135
- int NOMINAL_4B_BLOCK_THREADS,
136
- class T>
137
- struct THRUST_ITEMS_PER_THREAD
138
- {
139
- enum
140
- {
141
- value = mpl::min<
142
- int,
143
- NOMINAL_4B_ITEMS_PER_THREAD,
144
- mpl::max<
145
- int,
146
- 1,
147
- (NOMINAL_4B_ITEMS_PER_THREAD *
148
- NOMINAL_4B_BLOCK_THREADS * 4 / sizeof(T)) /
149
- THRUST_BLOCK_THREADS<Arch,
150
- NOMINAL_4B_BLOCK_THREADS,
151
- T>::value>::value>::value
152
- };
153
- };
154
-
155
-
156
- template <class Arch, class T, class U>
157
- struct Tuning;
158
-
159
- template<class T, class U>
160
- struct Tuning<sm30,T,U>
161
- {
162
- typedef sm30 Arch;
163
- enum
164
- {
165
- NOMINAL_4B_BLOCK_THREADS = 256,
166
- NOMINAL_4B_ITEMS_PER_THREAD = 9,
167
- };
168
-
169
- typedef PtxPolicy<THRUST_BLOCK_THREADS<Arch,
170
- NOMINAL_4B_BLOCK_THREADS,
171
- T>::value,
172
- THRUST_ITEMS_PER_THREAD<Arch,
173
- NOMINAL_4B_ITEMS_PER_THREAD,
174
- NOMINAL_4B_BLOCK_THREADS,
175
- T>::value,
176
- cub::BLOCK_LOAD_WARP_TRANSPOSE_TIMESLICED,
177
- cub::LOAD_DEFAULT,
178
- cub::BLOCK_STORE_WARP_TRANSPOSE_TIMESLICED,
179
- cub::BLOCK_SCAN_RAKING_MEMOIZE>
180
- type;
181
- }; // struct Tuning for sm30
182
-
183
- template<class T, class U>
184
- struct Tuning<sm35,T,U>
185
- {
186
- typedef sm35 Arch;
187
- enum
188
- {
189
- NOMINAL_4B_BLOCK_THREADS = 128,
190
- NOMINAL_4B_ITEMS_PER_THREAD = 12,
191
- };
192
-
193
- typedef PtxPolicy<THRUST_BLOCK_THREADS<Arch,
194
- NOMINAL_4B_BLOCK_THREADS,
195
- T>::value,
196
- THRUST_ITEMS_PER_THREAD<Arch,
197
- NOMINAL_4B_ITEMS_PER_THREAD,
198
- NOMINAL_4B_BLOCK_THREADS,
199
- T>::value,
200
- cub::BLOCK_LOAD_WARP_TRANSPOSE_TIMESLICED,
201
- cub::LOAD_LDG,
202
- cub::BLOCK_STORE_WARP_TRANSPOSE_TIMESLICED,
203
- cub::BLOCK_SCAN_RAKING>
204
- type;
205
- }; // struct Tuning for sm35
206
-
207
- template<class T, class U>
208
- struct Tuning<sm52,T,U>
209
- {
210
- typedef sm52 Arch;
211
- enum
212
- {
213
- NOMINAL_4B_BLOCK_THREADS = 128,
214
- NOMINAL_4B_ITEMS_PER_THREAD = 12,
215
- };
216
-
217
- typedef PtxPolicy<THRUST_BLOCK_THREADS<Arch,
218
- NOMINAL_4B_BLOCK_THREADS,
219
- T>::value,
220
- THRUST_ITEMS_PER_THREAD<Arch,
221
- NOMINAL_4B_ITEMS_PER_THREAD,
222
- NOMINAL_4B_BLOCK_THREADS,
223
- T>::value,
224
- cub::BLOCK_LOAD_WARP_TRANSPOSE_TIMESLICED,
225
- cub::LOAD_LDG,
226
- cub::BLOCK_STORE_WARP_TRANSPOSE_TIMESLICED,
227
- cub::BLOCK_SCAN_RAKING>
228
- type;
229
- }; // struct Tuning for sm52
230
-
231
- template <class InputIt,
232
- class OutputIt,
233
- class ScanOp,
234
- class Size,
235
- class T,
236
- class Inclusive>
237
- struct ScanAgent
238
- {
239
- typedef cub::ScanTileState<T> ScanTileState;
240
- typedef cub::BlockScanRunningPrefixOp<T, ScanOp> RunningPrefixCallback;
241
-
242
- template<class Arch>
243
- struct PtxPlan : Tuning<Arch,T,T>::type
244
- {
245
- typedef Tuning<Arch, T, T> tuning;
246
-
247
-
248
- typedef typename core::LoadIterator<PtxPlan, InputIt>::type LoadIt;
249
- typedef typename core::BlockLoad<PtxPlan, LoadIt, T>::type BlockLoad;
250
- typedef typename core::BlockStore<PtxPlan, OutputIt, T>::type BlockStore;
251
-
252
- typedef cub::TilePrefixCallbackOp<T, ScanOp, ScanTileState, Arch::ver>
253
- TilePrefixCallback;
254
- typedef cub::BlockScan<T,
255
- PtxPlan::BLOCK_THREADS,
256
- PtxPlan::SCAN_ALGORITHM,
257
- 1,
258
- 1,
259
- Arch::ver>
260
- BlockScan;
261
-
262
- union TempStorage
263
- {
264
- typename BlockLoad::TempStorage load;
265
- typename BlockStore::TempStorage store;
266
-
267
- struct
268
- {
269
- typename TilePrefixCallback::TempStorage prefix;
270
- typename BlockScan::TempStorage scan;
271
- };
272
- }; // struct TempStorage
273
- }; // struct PtxPlan
274
- typedef typename core::specialize_plan_msvc10_war<PtxPlan>::type::type ptx_plan;
275
-
276
- typedef typename ptx_plan::LoadIt LoadIt;
277
- typedef typename ptx_plan::BlockLoad BlockLoad;
278
- typedef typename ptx_plan::BlockStore BlockStore;
279
- typedef typename ptx_plan::TilePrefixCallback TilePrefixCallback;
280
- typedef typename ptx_plan::BlockScan BlockScan;
281
- typedef typename ptx_plan::TempStorage TempStorage;
282
-
283
- enum
284
- {
285
- INCLUSIVE = Inclusive::value,
286
- BLOCK_THREADS = ptx_plan::BLOCK_THREADS,
287
- ITEMS_PER_THREAD = ptx_plan::ITEMS_PER_THREAD,
288
- ITEMS_PER_TILE = ptx_plan::ITEMS_PER_TILE,
289
-
290
- SYNC_AFTER_LOAD = (ptx_plan::LOAD_ALGORITHM != cub::BLOCK_LOAD_DIRECT),
291
- };
292
-
293
- struct impl
294
- {
295
- //---------------------------------------------------------------------
296
- // Per thread data
297
- //---------------------------------------------------------------------
298
-
299
- TempStorage &storage;
300
- ScanTileState &tile_state;
301
- LoadIt load_it;
302
- OutputIt output_it;
303
- ScanOp scan_op;
304
-
305
- //---------------------------------------------------------------------
306
- // Block scan utility methods (first tile)
307
- //---------------------------------------------------------------------
308
-
309
- // Exclusive scan specialization
310
- //
311
- template <class _ScanOp>
312
- void THRUST_DEVICE_FUNCTION scan_tile(T (&items)[ITEMS_PER_THREAD],
313
- _ScanOp scan_op,
314
- T & block_aggregate,
315
- thrust::detail::false_type /* is_inclusive */)
316
- {
317
- BlockScan(storage.scan).ExclusiveScan(items, items, scan_op, block_aggregate);
318
- }
319
-
320
- // Exclusive sum specialization
321
- //
322
- void THRUST_DEVICE_FUNCTION scan_tile(T (&items)[ITEMS_PER_THREAD],
323
- plus<T> /*scan_op*/,
324
- T & block_aggregate,
325
- thrust::detail::false_type /* is_inclusive */)
326
- {
327
- BlockScan(storage.scan).ExclusiveSum(items, items, block_aggregate);
328
- }
329
-
330
- // Inclusive scan specialization
331
- //
332
- template <typename _ScanOp>
333
- void THRUST_DEVICE_FUNCTION scan_tile(T (&items)[ITEMS_PER_THREAD],
334
- _ScanOp scan_op,
335
- T & block_aggregate,
336
- thrust::detail::true_type /* is_inclusive */)
337
- {
338
- BlockScan(storage.scan).InclusiveScan(items, items, scan_op, block_aggregate);
339
- }
340
-
341
-
342
- // Inclusive sum specialization
343
- //
344
- void THRUST_DEVICE_FUNCTION scan_tile(T (&items)[ITEMS_PER_THREAD],
345
- plus<T> /*scan_op*/,
346
- T & block_aggregate,
347
- thrust::detail::true_type /* is_inclusive */)
348
- {
349
- BlockScan(storage.scan).InclusiveSum(items, items, block_aggregate);
350
- }
351
-
352
- //---------------------------------------------------------------------
353
- // Block scan utility methods (subsequent tiles)
354
- //---------------------------------------------------------------------
355
-
356
- // Exclusive scan specialization (with prefix from predecessors)
357
- //
358
- template <class _ScanOp, class PrefixCallback>
359
- void THRUST_DEVICE_FUNCTION scan_tile(T (&items)[ITEMS_PER_THREAD],
360
- _ScanOp scan_op,
361
- T & block_aggregate,
362
- PrefixCallback &prefix_op,
363
- thrust::detail::false_type /* is_inclusive */)
364
- {
365
- BlockScan(storage.scan).ExclusiveScan(items, items, scan_op, prefix_op);
366
- block_aggregate = prefix_op.GetBlockAggregate();
367
- }
368
-
369
- // Exclusive sum specialization (with prefix from predecessors)
370
- //
371
- template <class PrefixCallback>
372
- THRUST_DEVICE_FUNCTION void scan_tile(T (&items)[ITEMS_PER_THREAD],
373
- plus<T> /*scan_op*/,
374
- T & block_aggregate,
375
- PrefixCallback &prefix_op,
376
- thrust::detail::false_type /* is_inclusive */)
377
- {
378
- BlockScan(storage.scan).ExclusiveSum(items, items, prefix_op);
379
- block_aggregate = prefix_op.GetBlockAggregate();
380
- }
381
-
382
- // Inclusive scan specialization (with prefix from predecessors)
383
- //
384
- template <class _ScanOp, class PrefixCallback>
385
- THRUST_DEVICE_FUNCTION void scan_tile(T (&items)[ITEMS_PER_THREAD],
386
- _ScanOp scan_op,
387
- T & block_aggregate,
388
- PrefixCallback &prefix_op,
389
- thrust::detail::true_type /* is_inclusive */)
390
- {
391
- BlockScan(storage.scan).InclusiveScan(items, items, scan_op, prefix_op);
392
- block_aggregate = prefix_op.GetBlockAggregate();
393
- }
394
-
395
- // Inclusive sum specialization (with prefix from predecessors)
396
- //
397
- template <class U, class PrefixCallback>
398
- THRUST_DEVICE_FUNCTION void scan_tile(T (&items)[ITEMS_PER_THREAD],
399
- plus<T> /*scan_op*/,
400
- T & block_aggregate,
401
- PrefixCallback &prefix_op,
402
- thrust::detail::true_type /* is_inclusive */)
403
- {
404
- BlockScan(storage.scan).InclusiveSum(items, items, prefix_op);
405
- block_aggregate = prefix_op.GetBlockAggregate();
406
- }
407
-
408
- //---------------------------------------------------------------------
409
- // Cooperatively scan a device-wide sequence of tiles with other CTAs
410
- //---------------------------------------------------------------------
411
-
412
- // Process a tile of input (dynamic chained scan)
413
- //
414
- template <bool IS_FULL_TILE, class AddInitToExclusive>
415
- THRUST_DEVICE_FUNCTION void
416
- consume_tile(Size /*num_items*/,
417
- Size num_remaining,
418
- int tile_idx,
419
- Size tile_base,
420
- AddInitToExclusive add_init_to_exclusive_scan)
421
- {
422
- using core::sync_threadblock;
423
-
424
- // Load items
425
- T items[ITEMS_PER_THREAD];
426
-
427
- if (IS_FULL_TILE)
428
- {
429
- BlockLoad(storage.load).Load(load_it + tile_base, items);
430
- }
431
- else
432
- {
433
- // Fill last element with the first element
434
- // because collectives are not suffix guarded
435
- BlockLoad(storage.load)
436
- .Load(load_it + tile_base,
437
- items,
438
- num_remaining,
439
- *(load_it + tile_base));
440
- }
441
-
442
- if (SYNC_AFTER_LOAD)
443
- sync_threadblock();
444
-
445
- // Perform tile scan
446
- if (tile_idx == 0)
447
- {
448
- // Scan first tile
449
- T block_aggregate;
450
- scan_tile(items, scan_op, block_aggregate, Inclusive());
451
-
452
- // Update tile status if there may be successor tiles (i.e., this tile is full)
453
- if (IS_FULL_TILE && (threadIdx.x == 0))
454
- tile_state.SetInclusive(0, block_aggregate);
455
- }
456
- else
457
- {
458
- // Scan non-first tile
459
- T block_aggregate;
460
- TilePrefixCallback prefix_op(tile_state, storage.prefix, scan_op, tile_idx);
461
- scan_tile(items, scan_op, block_aggregate, prefix_op, Inclusive());
462
- }
463
-
464
- sync_threadblock();
465
-
466
- add_init_to_exclusive_scan(items, tile_idx);
467
-
468
- // Store items
469
- if (IS_FULL_TILE)
470
- {
471
- BlockStore(storage.store).Store(output_it + tile_base, items);
472
- }
473
- else
474
- {
475
- BlockStore(storage.store).Store(output_it + tile_base, items, num_remaining);
476
- }
477
- }
478
-
479
-
480
- //---------------------------------------------------------------------
481
- // Constructor
482
- //---------------------------------------------------------------------
483
-
484
- // Dequeue and scan tiles of items as part of a dynamic chained scan
485
- // with Init
486
- template <class AddInitToExclusiveScan>
487
- THRUST_DEVICE_FUNCTION
488
- impl(TempStorage & storage_,
489
- ScanTileState & tile_state_,
490
- InputIt input_it,
491
- OutputIt output_it_,
492
- ScanOp scan_op_,
493
- Size num_items,
494
- AddInitToExclusiveScan add_init_to_exclusive_scan)
495
- : storage(storage_),
496
- tile_state(tile_state_),
497
- load_it(core::make_load_iterator(ptx_plan(), input_it)),
498
- output_it(output_it_),
499
- scan_op(scan_op_)
500
- {
501
- int tile_idx = blockIdx.x;
502
- Size tile_base = ITEMS_PER_TILE * tile_idx;
503
- Size num_remaining = num_items - tile_base;
504
-
505
- if (num_remaining > ITEMS_PER_TILE)
506
- {
507
- // Full tile
508
- consume_tile<true>(num_items,
509
- num_remaining,
510
- tile_idx,
511
- tile_base,
512
- add_init_to_exclusive_scan);
513
- }
514
- else if (num_remaining > 0)
515
- {
516
- // Partially-full tile
517
- consume_tile<false>(num_items,
518
- num_remaining,
519
- tile_idx,
520
- tile_base,
521
- add_init_to_exclusive_scan);
522
- }
523
- }
524
- }; // struct impl
525
-
526
- //---------------------------------------------------------------------
527
- // Agent entry point
528
- //---------------------------------------------------------------------
529
-
530
- template <class AddInitToExclusiveScan>
531
- THRUST_AGENT_ENTRY(InputIt input_it,
532
- OutputIt output_it,
533
- ScanOp scan_op,
534
- Size num_items,
535
- ScanTileState tile_state,
536
- AddInitToExclusiveScan add_init_to_exclusive_scan,
537
- char * shmem)
538
- {
539
- TempStorage &storage = *reinterpret_cast<TempStorage *>(shmem);
540
- impl(storage,
541
- tile_state,
542
- input_it,
543
- output_it,
544
- scan_op,
545
- num_items,
546
- add_init_to_exclusive_scan);
547
- }
548
- }; // struct ScanAgent
549
-
550
- template <class ScanTileState,
551
- class Size>
552
- struct InitAgent
553
- {
554
- template <class Arch>
555
- struct PtxPlan : PtxPolicy<128> {};
556
-
557
- typedef core::specialize_plan<PtxPlan> ptx_plan;
558
-
559
- //---------------------------------------------------------------------
560
- // Agent entry point
561
- //---------------------------------------------------------------------
562
-
563
- THRUST_AGENT_ENTRY(ScanTileState tile_state,
564
- Size num_tiles,
565
- char * /*shmem*/)
566
- {
567
- tile_state.InitializeStatus(num_tiles);
568
- }
569
-
570
- }; // struct InitAgent
571
-
572
- template<class T>
573
- struct DoNothing
574
- {
575
- typedef T type;
576
- template <int ITEMS_PER_THREAD>
577
- THRUST_DEVICE_FUNCTION void
578
- operator()(T (&items)[ITEMS_PER_THREAD], int /*tile_idx*/)
579
- {
580
- THRUST_UNUSED_VAR(items);
581
- }
582
- }; // struct DoNothing
583
-
584
- template<class T, class ScanOp>
585
- struct AddInitToExclusiveScan
586
- {
587
- typedef T type;
588
- T init;
589
- ScanOp scan_op;
590
-
591
- THRUST_RUNTIME_FUNCTION
592
- AddInitToExclusiveScan(T init_, ScanOp scan_op_)
593
- : init(init_), scan_op(scan_op_) {}
594
-
595
- template <int ITEMS_PER_THREAD>
596
- THRUST_DEVICE_FUNCTION void
597
- operator()(T (&items)[ITEMS_PER_THREAD], int tile_idx)
598
- {
599
- if (tile_idx == 0 && threadIdx.x == 0)
600
- {
601
- items[0] = init;
602
- for (int i = 1; i < ITEMS_PER_THREAD; ++i)
603
- items[i] = scan_op(init, items[i]);
604
- }
605
- else
606
- {
607
- for (int i = 0; i < ITEMS_PER_THREAD; ++i)
608
- items[i] = scan_op(init, items[i]);
609
- }
610
- }
611
- }; // struct AddInitToExclusiveScan
612
-
613
- template <class Inclusive,
614
- class InputIt,
615
- class OutputIt,
616
- class ScanOp,
617
- class Size,
618
- class AddInitToExclusiveScan>
619
- static cudaError_t THRUST_RUNTIME_FUNCTION
620
- doit_step(void * d_temp_storage,
621
- size_t & temp_storage_bytes,
622
- InputIt input_it,
623
- Size num_items,
624
- AddInitToExclusiveScan add_init_to_exclusive_scan,
625
- OutputIt output_it,
626
- ScanOp scan_op,
627
- cudaStream_t stream,
628
- bool debug_sync)
629
- {
630
- using core::AgentPlan;
631
- using core::AgentLauncher;
632
-
633
- cudaError_t status = cudaSuccess;
634
- if (num_items == 0)
635
- return cudaErrorNotSupported;
636
-
637
- typedef typename AddInitToExclusiveScan::type T;
638
-
639
- typedef AgentLauncher<
640
- ScanAgent<InputIt, OutputIt, ScanOp, Size, T, Inclusive> >
641
- scan_agent;
642
-
643
- typedef typename scan_agent::ScanTileState ScanTileState;
644
-
645
- typedef AgentLauncher<InitAgent<ScanTileState, Size> > init_agent;
646
-
647
- AgentPlan scan_plan = scan_agent::get_plan(stream);
648
- AgentPlan init_plan = init_agent::get_plan();
649
-
650
- int tile_size = scan_plan.items_per_tile;
651
- Size num_tiles = static_cast<Size>((num_items + tile_size - 1) / tile_size);
652
-
653
- size_t vshmem_size = core::vshmem_size(scan_plan.shared_memory_size,
654
- num_tiles);
655
-
656
- size_t allocation_sizes[2] = {0, vshmem_size};
657
- status = ScanTileState::AllocationSize(static_cast<int>(num_tiles), allocation_sizes[0]);
658
- CUDA_CUB_RET_IF_FAIL(status);
659
-
660
- void* allocations[2] = {NULL, NULL};
661
-
662
- status = core::alias_storage(d_temp_storage,
663
- temp_storage_bytes,
664
- allocations,
665
- allocation_sizes);
666
- CUDA_CUB_RET_IF_FAIL(status);
667
-
668
- if (d_temp_storage == NULL)
669
- {
670
- return status;
671
- }
672
-
673
- ScanTileState tile_state;
674
- status = tile_state.Init(static_cast<int>(num_tiles), allocations[0], allocation_sizes[0]);
675
- CUDA_CUB_RET_IF_FAIL(status);
676
-
677
- char *vshmem_ptr = vshmem_size > 0 ? (char*)allocations[1] : NULL;
678
-
679
- init_agent ia(init_plan, num_tiles, stream, "scan::init_agent", debug_sync);
680
- ia.launch(tile_state, num_tiles);
681
- CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError());
682
-
683
- scan_agent sa(scan_plan, num_items, stream, vshmem_ptr, "scan::scan_agent", debug_sync);
684
- sa.launch(input_it,
685
- output_it,
686
- scan_op,
687
- num_items,
688
- tile_state,
689
- add_init_to_exclusive_scan);
690
- CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError());
691
- return status;
692
- } // func doit_step
693
-
694
- template <typename Inclusive,
695
- typename Derived,
696
- typename InputIt,
697
- typename OutputIt,
698
- typename Size,
699
- typename ScanOp,
700
- typename AddInitToExclusiveScan>
701
- THRUST_RUNTIME_FUNCTION
702
- OutputIt scan(execution_policy<Derived>& policy,
703
- InputIt input_it,
704
- OutputIt output_it,
705
- Size num_items,
706
- ScanOp scan_op,
707
- AddInitToExclusiveScan add_init_to_exclusive_scan)
708
- {
709
- if (num_items == 0)
710
- return output_it;
711
-
712
- size_t storage_size = 0;
713
- cudaStream_t stream = cuda_cub::stream(policy);
714
- bool debug_sync = THRUST_DEBUG_SYNC_FLAG;
715
-
716
- cudaError_t status;
717
- THRUST_INDEX_TYPE_DISPATCH(status,
718
- doit_step<Inclusive>,
719
- num_items,
720
- (NULL,
721
- storage_size,
722
- input_it,
723
- num_items_fixed,
724
- add_init_to_exclusive_scan,
725
- output_it,
726
- scan_op,
727
- stream,
728
- debug_sync));
729
- cuda_cub::throw_on_error(status, "scan failed on 1st step");
730
-
731
- // Allocate temporary storage.
732
- thrust::detail::temporary_array<thrust::detail::uint8_t, Derived>
733
- tmp(policy, storage_size);
734
- void *ptr = static_cast<void*>(tmp.data().get());
735
-
736
- THRUST_INDEX_TYPE_DISPATCH(status,
737
- doit_step<Inclusive>,
738
- num_items,
739
- (ptr,
740
- storage_size,
741
- input_it,
742
- num_items_fixed,
743
- add_init_to_exclusive_scan,
744
- output_it,
745
- scan_op,
746
- stream,
747
- debug_sync));
748
- cuda_cub::throw_on_error(status, "scan failed on 2nd step");
749
-
750
- status = cuda_cub::synchronize(policy);
751
- cuda_cub::throw_on_error(status, "scan failed to synchronize");
752
-
753
- return output_it + num_items;
754
- } // func scan
755
-
756
- } // namespace __scan
757
-
758
- //-------------------------
759
- // Thrust API entry points
760
- //-------------------------
761
-
762
- __thrust_exec_check_disable__
763
- template <class Derived,
764
- class InputIt,
765
- class Size,
766
- class OutputIt,
767
- class ScanOp>
768
- OutputIt __host__ __device__
769
- inclusive_scan_n(execution_policy<Derived> &policy,
770
- InputIt first,
771
- Size num_items,
772
- OutputIt result,
773
- ScanOp scan_op)
774
- {
775
- OutputIt ret = result;
776
- if (__THRUST_HAS_CUDART__)
777
- {
778
- typedef typename iterator_traits<InputIt>::value_type T;
779
- ret = __scan::scan<thrust::detail::true_type>(policy,
780
- first,
781
- result,
782
- num_items,
783
- scan_op,
784
- __scan::DoNothing<T>());
785
- }
786
- else
787
- {
788
- #if !__THRUST_HAS_CUDART__
789
- ret = thrust::inclusive_scan(cvt_to_seq(derived_cast(policy)),
790
- first,
791
- first + num_items,
792
- result,
793
- scan_op);
794
- #endif
795
- }
796
- return ret;
797
- }
798
-
799
-
800
- template <class Derived,
801
- class InputIt,
802
- class OutputIt,
803
- class ScanOp>
804
- OutputIt __host__ __device__
805
- inclusive_scan(execution_policy<Derived> &policy,
806
- InputIt first,
807
- InputIt last,
808
- OutputIt result,
809
- ScanOp scan_op)
810
- {
811
- typedef typename thrust::iterator_traits<InputIt>::difference_type diff_t;
812
- diff_t num_items = thrust::distance(first, last);
813
- return cuda_cub::inclusive_scan_n(policy, first, num_items, result, scan_op);
814
- }
815
-
816
-
817
- template <class Derived,
818
- class InputIt,
819
- class OutputIt>
820
- OutputIt __host__ __device__
821
- inclusive_scan(execution_policy<Derived> &policy,
822
- InputIt first,
823
- OutputIt last,
824
- OutputIt result)
825
- {
826
-
827
- typedef typename thrust::detail::eval_if<
828
- thrust::detail::is_output_iterator<OutputIt>::value,
829
- thrust::iterator_value<InputIt>,
830
- thrust::iterator_value<OutputIt> >::type result_type;
831
- return cuda_cub::inclusive_scan(policy, first, last, result, plus<result_type>());
832
- };
833
-
834
- __thrust_exec_check_disable__
835
- template <class Derived,
836
- class InputIt,
837
- class Size,
838
- class OutputIt,
839
- class T,
840
- class ScanOp>
841
- OutputIt __host__ __device__
842
- exclusive_scan_n(execution_policy<Derived> &policy,
843
- InputIt first,
844
- Size num_items,
845
- OutputIt result,
846
- T init,
847
- ScanOp scan_op)
848
- {
849
- OutputIt ret = result;
850
- if (__THRUST_HAS_CUDART__)
851
- {
852
- ret = __scan::scan<thrust::detail::false_type>(
853
- policy,
854
- first,
855
- result,
856
- num_items,
857
- scan_op,
858
- __scan::AddInitToExclusiveScan<T, ScanOp>(init, scan_op));
859
- }
860
- else
861
- {
862
- #if !__THRUST_HAS_CUDART__
863
- ret = thrust::exclusive_scan(cvt_to_seq(derived_cast(policy)),
864
- first,
865
- first + num_items,
866
- result,
867
- init,
868
- scan_op);
869
- #endif
870
- }
871
- return ret;
872
- }
873
-
874
- template <class Derived,
875
- class InputIt,
876
- class OutputIt,
877
- class T,
878
- class ScanOp>
879
- OutputIt __host__ __device__
880
- exclusive_scan(execution_policy<Derived> &policy,
881
- InputIt first,
882
- InputIt last,
883
- OutputIt result,
884
- T init,
885
- ScanOp scan_op)
886
- {
887
- typedef typename thrust::iterator_traits<InputIt>::difference_type diff_t;
888
- diff_t num_items = thrust::distance(first, last);
889
- return cuda_cub::exclusive_scan_n(policy, first, num_items, result, init, scan_op);
890
- }
891
-
892
- template <class Derived,
893
- class InputIt,
894
- class OutputIt,
895
- class T>
896
- OutputIt __host__ __device__
897
- exclusive_scan(execution_policy<Derived> &policy,
898
- InputIt first,
899
- OutputIt last,
900
- OutputIt result,
901
- T init)
902
- {
903
- return cuda_cub::exclusive_scan(policy, first, last, result, init, plus<T>());
904
- }
905
-
906
- template <class Derived,
907
- class InputIt,
908
- class OutputIt>
909
- OutputIt __host__ __device__
910
- exclusive_scan(execution_policy<Derived> &policy,
911
- InputIt first,
912
- OutputIt last,
913
- OutputIt result)
914
- {
915
- typedef typename thrust::detail::eval_if<
916
- thrust::detail::is_output_iterator<OutputIt>::value,
917
- thrust::iterator_value<InputIt>,
918
- thrust::iterator_value<OutputIt>
919
- >::type result_type;
920
- return cuda_cub::exclusive_scan(policy, first, last, result, result_type(0));
921
- };
922
-
923
- } // namespace cuda_cub
924
- } // end namespace thrust
925
-
926
- #include <thrust/scan.h>
927
-
928
- #endif
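The deleted header implements device-wide prefix scans on the GPU. The host-side sketch below only illustrates the semantics the kernels compute: an inclusive scan keeps each element's own contribution, while an exclusive scan seeded with an init value shifts results right (mirroring what AddInitToExclusiveScan does above). Plain Python with made-up data, not Thrust's API.

```python
# Reference semantics of inclusive vs. exclusive scan with an init value.
from itertools import accumulate
from operator import add

data = [3, 1, 4, 1, 5]
inclusive = list(accumulate(data, add))                      # [3, 4, 8, 9, 14]
init = 10
exclusive = list(accumulate(data[:-1], add, initial=init))   # [10, 13, 14, 18, 19]
print(inclusive, exclusive)
```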
spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/for_each.h DELETED
@@ -1,95 +0,0 @@
1
- /*
2
- * Copyright 2008-2013 NVIDIA Corporation
3
- *
4
- * Licensed under the Apache License, Version 2.0 (the "License");
5
- * you may not use this file except in compliance with the License.
6
- * You may obtain a copy of the License at
7
- *
8
- * http://www.apache.org/licenses/LICENSE-2.0
9
- *
10
- * Unless required by applicable law or agreed to in writing, software
11
- * distributed under the License is distributed on an "AS IS" BASIS,
12
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- * See the License for the specific language governing permissions and
14
- * limitations under the License.
15
- */
16
-
17
-
18
- /*! \file for_each.h
19
- * \brief Sequential implementations of for_each functions.
20
- */
21
-
22
- #pragma once
23
-
24
- #include <thrust/detail/config.h>
25
- #include <thrust/detail/function.h>
26
- #include <thrust/system/detail/sequential/execution_policy.h>
27
-
28
- namespace thrust
29
- {
30
- namespace system
31
- {
32
- namespace detail
33
- {
34
- namespace sequential
35
- {
36
-
37
-
38
- __thrust_exec_check_disable__
39
- template<typename DerivedPolicy,
40
- typename InputIterator,
41
- typename UnaryFunction>
42
- __host__ __device__
43
- InputIterator for_each(sequential::execution_policy<DerivedPolicy> &,
44
- InputIterator first,
45
- InputIterator last,
46
- UnaryFunction f)
47
- {
48
- // wrap f
49
- thrust::detail::wrapped_function<
50
- UnaryFunction,
51
- void
52
- > wrapped_f(f);
53
-
54
- for(; first != last; ++first)
55
- {
56
- wrapped_f(*first);
57
- }
58
-
59
- return first;
60
- } // end for_each()
61
-
62
-
63
- template<typename DerivedPolicy,
64
- typename InputIterator,
65
- typename Size,
66
- typename UnaryFunction>
67
- __host__ __device__
68
- InputIterator for_each_n(sequential::execution_policy<DerivedPolicy> &,
69
- InputIterator first,
70
- Size n,
71
- UnaryFunction f)
72
- {
73
- // wrap f
74
- thrust::detail::wrapped_function<
75
- UnaryFunction,
76
- void
77
- > wrapped_f(f);
78
-
79
- for(Size i = 0; i != n; i++)
80
- {
81
- // we can dereference an OutputIterator if f does not
82
- // try to use the reference for anything besides assignment
83
- wrapped_f(*first);
84
- ++first;
85
- }
86
-
87
- return first;
88
- } // end for_each_n()
89
-
90
-
91
- } // end namespace sequential
92
- } // end namespace detail
93
- } // end namespace system
94
- } // end namespace thrust
95
-
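As a quick reference for the sequential for_each_n contract above (apply f to each of the first n elements, then return the position one past the last element visited), here is a tiny analogue; the helper and its names are illustrative, not Thrust's API.

```python
# Minimal analogue of the sequential for_each_n loop above.
def for_each_n(seq, n, f):
    for i in range(n):
        f(seq[i])
    return n  # "one past" the last visited element

out = []
end = for_each_n([1, 2, 3, 4], 3, out.append)
print(out, end)  # [1, 2, 3] 3
```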
spaces/CVPR/MonoScene/monoscene/monoscene.py DELETED
@@ -1,125 +0,0 @@
1
- import pytorch_lightning as pl
2
- import torch
3
- import torch.nn as nn
4
- from monoscene.unet3d_nyu import UNet3D as UNet3DNYU
5
- from monoscene.unet3d_kitti import UNet3D as UNet3DKitti
6
- from monoscene.flosp import FLoSP
7
- import numpy as np
8
- import torch.nn.functional as F
9
- from monoscene.unet2d import UNet2D
10
-
11
-
12
- class MonoScene(pl.LightningModule):
13
- def __init__(
14
- self,
15
- n_classes,
16
- feature,
17
- project_scale,
18
- full_scene_size,
19
- dataset,
20
- project_res=["1", "2", "4", "8"],
21
- n_relations=4,
22
- context_prior=True,
23
- fp_loss=True,
24
- frustum_size=4,
25
- relation_loss=False,
26
- CE_ssc_loss=True,
27
- geo_scal_loss=True,
28
- sem_scal_loss=True,
29
- lr=1e-4,
30
- weight_decay=1e-4,
31
- ):
32
- super().__init__()
33
-
34
- self.project_res = project_res
35
- self.fp_loss = fp_loss
36
- self.dataset = dataset
37
- self.context_prior = context_prior
38
- self.frustum_size = frustum_size
39
- self.relation_loss = relation_loss
40
- self.CE_ssc_loss = CE_ssc_loss
41
- self.sem_scal_loss = sem_scal_loss
42
- self.geo_scal_loss = geo_scal_loss
43
- self.project_scale = project_scale
44
- self.lr = lr
45
- self.weight_decay = weight_decay
46
-
47
- self.projects = {}
48
- self.scale_2ds = [1, 2, 4, 8] # 2D scales
49
- for scale_2d in self.scale_2ds:
50
- self.projects[str(scale_2d)] = FLoSP(
51
- full_scene_size, project_scale=self.project_scale, dataset=self.dataset
52
- )
53
- self.projects = nn.ModuleDict(self.projects)
54
-
55
- self.n_classes = n_classes
56
- if self.dataset == "NYU":
57
- self.net_3d_decoder = UNet3DNYU(
58
- self.n_classes,
59
- nn.BatchNorm3d,
60
- n_relations=n_relations,
61
- feature=feature,
62
- full_scene_size=full_scene_size,
63
- context_prior=context_prior,
64
- )
65
- elif self.dataset == "kitti":
66
- self.net_3d_decoder = UNet3DKitti(
67
- self.n_classes,
68
- nn.BatchNorm3d,
69
- project_scale=project_scale,
70
- feature=feature,
71
- full_scene_size=full_scene_size,
72
- context_prior=context_prior,
73
- )
74
- self.net_rgb = UNet2D.build(out_feature=feature, use_decoder=True)
75
-
76
- def forward(self, batch):
77
-
78
- img = batch["img"]
79
- bs = len(img)
80
-
81
- out = {}
82
-
83
- x_rgb = self.net_rgb(img)
84
-
85
- x3ds = []
86
- for i in range(bs):
87
- x3d = None
88
- for scale_2d in self.project_res:
89
-
90
- # project features at each 2D scale to target 3D scale
91
- scale_2d = int(scale_2d)
92
- projected_pix = batch["projected_pix_{}".format(self.project_scale)][i]#.cuda()
93
- fov_mask = batch["fov_mask_{}".format(self.project_scale)][i]#.cuda()
94
-
95
- # Sum all the 3D features
96
- if x3d is None:
97
- x3d = self.projects[str(scale_2d)](
98
- x_rgb["1_" + str(scale_2d)][i],
99
- # torch.div(projected_pix, scale_2d, rounding_mode='floor'),
100
- projected_pix // scale_2d,
101
- fov_mask,
102
- )
103
- else:
104
- x3d += self.projects[str(scale_2d)](
105
- x_rgb["1_" + str(scale_2d)][i],
106
- # torch.div(projected_pix, scale_2d, rounding_mode='floor'),
107
- projected_pix // scale_2d,
108
- fov_mask,
109
- )
110
- x3ds.append(x3d)
111
-
112
- input_dict = {
113
- "x3d": torch.stack(x3ds),
114
- }
115
-
116
- out_dict = self.net_3d_decoder(input_dict)
117
-
118
- ssc_pred = out_dict["ssc_logit"]
119
-
120
- y_pred = ssc_pred.detach().cpu().numpy()
121
- y_pred = np.argmax(y_pred, axis=1)
122
-
123
- return y_pred
124
-
125
-
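One detail worth calling out in MonoScene.forward above: projected pixel coordinates computed at full resolution are reused at coarser 2D scales via integer division (projected_pix // scale_2d) before sampling each feature map. The sketch below shows only that coordinate handling, with assumed values rather than real projections.

```python
# Illustrative-only: downscaling full-resolution pixel coordinates per 2D scale.
import torch

projected_pix = torch.tensor([[10, 14], [63, 127]])  # assumed (x, y) at full res
for scale_2d in (1, 2, 4, 8):
    coarse = projected_pix // scale_2d
    print(scale_2d, coarse.tolist())
```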
spaces/CVPR/WALT/mmdet/models/losses/varifocal_loss.py DELETED
@@ -1,133 +0,0 @@
1
- import mmcv
2
- import torch.nn as nn
3
- import torch.nn.functional as F
4
-
5
- from ..builder import LOSSES
6
- from .utils import weight_reduce_loss
7
-
8
-
9
- @mmcv.jit(derivate=True, coderize=True)
10
- def varifocal_loss(pred,
11
- target,
12
- weight=None,
13
- alpha=0.75,
14
- gamma=2.0,
15
- iou_weighted=True,
16
- reduction='mean',
17
- avg_factor=None):
18
- """`Varifocal Loss <https://arxiv.org/abs/2008.13367>`_
19
-
20
- Args:
21
- pred (torch.Tensor): The prediction with shape (N, C), C is the
22
- number of classes
23
- target (torch.Tensor): The learning target of the iou-aware
24
- classification score with shape (N, C), C is the number of classes.
25
- weight (torch.Tensor, optional): The weight of loss for each
26
- prediction. Defaults to None.
27
- alpha (float, optional): A balance factor for the negative part of
28
- Varifocal Loss, which is different from the alpha of Focal Loss.
29
- Defaults to 0.75.
30
- gamma (float, optional): The gamma for calculating the modulating
31
- factor. Defaults to 2.0.
32
- iou_weighted (bool, optional): Whether to weight the loss of the
33
- positive example with the iou target. Defaults to True.
34
- reduction (str, optional): The method used to reduce the loss into
35
- a scalar. Defaults to 'mean'. Options are "none", "mean" and
36
- "sum".
37
- avg_factor (int, optional): Average factor that is used to average
38
- the loss. Defaults to None.
39
- """
40
- # pred and target should be of the same size
41
- assert pred.size() == target.size()
42
- pred_sigmoid = pred.sigmoid()
43
- target = target.type_as(pred)
44
- if iou_weighted:
45
- focal_weight = target * (target > 0.0).float() + \
46
- alpha * (pred_sigmoid - target).abs().pow(gamma) * \
47
- (target <= 0.0).float()
48
- else:
49
- focal_weight = (target > 0.0).float() + \
50
- alpha * (pred_sigmoid - target).abs().pow(gamma) * \
51
- (target <= 0.0).float()
52
- loss = F.binary_cross_entropy_with_logits(
53
- pred, target, reduction='none') * focal_weight
54
- loss = weight_reduce_loss(loss, weight, reduction, avg_factor)
55
- return loss
56
-
57
-
58
- @LOSSES.register_module()
59
- class VarifocalLoss(nn.Module):
60
-
61
- def __init__(self,
62
- use_sigmoid=True,
63
- alpha=0.75,
64
- gamma=2.0,
65
- iou_weighted=True,
66
- reduction='mean',
67
- loss_weight=1.0):
68
- """`Varifocal Loss <https://arxiv.org/abs/2008.13367>`_
69
-
70
- Args:
71
- use_sigmoid (bool, optional): Whether the prediction is
72
- used for sigmoid or softmax. Defaults to True.
73
- alpha (float, optional): A balance factor for the negative part of
74
- Varifocal Loss, which is different from the alpha of Focal
75
- Loss. Defaults to 0.75.
76
- gamma (float, optional): The gamma for calculating the modulating
77
- factor. Defaults to 2.0.
78
- iou_weighted (bool, optional): Whether to weight the loss of the
79
- positive examples with the iou target. Defaults to True.
80
- reduction (str, optional): The method used to reduce the loss into
81
- a scalar. Defaults to 'mean'. Options are "none", "mean" and
82
- "sum".
83
- loss_weight (float, optional): Weight of loss. Defaults to 1.0.
84
- """
85
- super(VarifocalLoss, self).__init__()
86
- assert use_sigmoid is True, \
87
- 'Only sigmoid varifocal loss supported now.'
88
- assert alpha >= 0.0
89
- self.use_sigmoid = use_sigmoid
90
- self.alpha = alpha
91
- self.gamma = gamma
92
- self.iou_weighted = iou_weighted
93
- self.reduction = reduction
94
- self.loss_weight = loss_weight
95
-
96
- def forward(self,
97
- pred,
98
- target,
99
- weight=None,
100
- avg_factor=None,
101
- reduction_override=None):
102
- """Forward function.
103
-
104
- Args:
105
- pred (torch.Tensor): The prediction.
106
- target (torch.Tensor): The learning target of the prediction.
107
- weight (torch.Tensor, optional): The weight of loss for each
108
- prediction. Defaults to None.
109
- avg_factor (int, optional): Average factor that is used to average
110
- the loss. Defaults to None.
111
- reduction_override (str, optional): The reduction method used to
112
- override the original reduction method of the loss.
113
- Options are "none", "mean" and "sum".
114
-
115
- Returns:
116
- torch.Tensor: The calculated loss
117
- """
118
- assert reduction_override in (None, 'none', 'mean', 'sum')
119
- reduction = (
120
- reduction_override if reduction_override else self.reduction)
121
- if self.use_sigmoid:
122
- loss_cls = self.loss_weight * varifocal_loss(
123
- pred,
124
- target,
125
- weight,
126
- alpha=self.alpha,
127
- gamma=self.gamma,
128
- iou_weighted=self.iou_weighted,
129
- reduction=reduction,
130
- avg_factor=avg_factor)
131
- else:
132
- raise NotImplementedError
133
- return loss_cls
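The varifocal_loss docstring above describes an IoU-weighted positive term and an alpha * sigmoid(pred)**gamma negative term. The snippet below is a small numeric check of that focal weighting under assumed logits and targets; it is illustrative only and does not use the deleted module.

```python
# Quick sanity check of the focal weighting used by varifocal_loss above.
import torch
import torch.nn.functional as F

pred = torch.tensor([[2.0, -1.0]])    # assumed logits for two classes
target = torch.tensor([[0.8, 0.0]])   # IoU-aware score; 0.0 marks a negative
alpha, gamma = 0.75, 2.0

p = pred.sigmoid()
focal_weight = target * (target > 0).float() \
    + alpha * (p - target).abs().pow(gamma) * (target <= 0).float()
loss = F.binary_cross_entropy_with_logits(pred, target, reduction="none") * focal_weight
print(focal_weight)  # approximately tensor([[0.8000, 0.0542]])
print(loss.sum())
```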
spaces/CVPR/regionclip-demo/detectron2/modeling/proposal_generator/rpn.py DELETED
@@ -1,533 +0,0 @@
1
- # Copyright (c) Facebook, Inc. and its affiliates.
2
- from typing import Dict, List, Optional, Tuple, Union
3
- import torch
4
- import torch.nn.functional as F
5
- from torch import nn
6
-
7
- from detectron2.config import configurable
8
- from detectron2.layers import Conv2d, ShapeSpec, cat
9
- from detectron2.structures import Boxes, ImageList, Instances, pairwise_iou
10
- from detectron2.utils.events import get_event_storage
11
- from detectron2.utils.memory import retry_if_cuda_oom
12
- from detectron2.utils.registry import Registry
13
-
14
- from ..anchor_generator import build_anchor_generator
15
- from ..box_regression import Box2BoxTransform, _dense_box_regression_loss
16
- from ..matcher import Matcher
17
- from ..sampling import subsample_labels
18
- from .build import PROPOSAL_GENERATOR_REGISTRY
19
- from .proposal_utils import find_top_rpn_proposals
20
-
21
- RPN_HEAD_REGISTRY = Registry("RPN_HEAD")
22
- RPN_HEAD_REGISTRY.__doc__ = """
23
- Registry for RPN heads, which take feature maps and perform
24
- objectness classification and bounding box regression for anchors.
25
-
26
- The registered object will be called with `obj(cfg, input_shape)`.
27
- The call should return a `nn.Module` object.
28
- """
29
-
30
-
31
- """
32
- Shape shorthand in this module:
33
-
34
- N: number of images in the minibatch
35
- L: number of feature maps per image on which RPN is run
36
- A: number of cell anchors (must be the same for all feature maps)
37
- Hi, Wi: height and width of the i-th feature map
38
- B: size of the box parameterization
39
-
40
- Naming convention:
41
-
42
- objectness: refers to the binary classification of an anchor as object vs. not object.
43
-
44
- deltas: refers to the 4-d (dx, dy, dw, dh) deltas that parameterize the box2box
45
- transform (see :class:`box_regression.Box2BoxTransform`), or 5d for rotated boxes.
46
-
47
- pred_objectness_logits: predicted objectness scores in [-inf, +inf]; use
48
- sigmoid(pred_objectness_logits) to estimate P(object).
49
-
50
- gt_labels: ground-truth binary classification labels for objectness
51
-
52
- pred_anchor_deltas: predicted box2box transform deltas
53
-
54
- gt_anchor_deltas: ground-truth box2box transform deltas
55
- """
56
-
57
-
58
- def build_rpn_head(cfg, input_shape):
59
- """
60
- Build an RPN head defined by `cfg.MODEL.RPN.HEAD_NAME`.
61
- """
62
- name = cfg.MODEL.RPN.HEAD_NAME
63
- return RPN_HEAD_REGISTRY.get(name)(cfg, input_shape)
64
-
65
-
66
- @RPN_HEAD_REGISTRY.register()
67
- class StandardRPNHead(nn.Module):
68
- """
69
- Standard RPN classification and regression heads described in :paper:`Faster R-CNN`.
70
- Uses a 3x3 conv to produce a shared hidden state from which one 1x1 conv predicts
71
- objectness logits for each anchor and a second 1x1 conv predicts bounding-box deltas
72
- specifying how to deform each anchor into an object proposal.
73
- """
74
-
75
- @configurable
76
- def __init__(
77
- self, *, in_channels: int, num_anchors: int, box_dim: int = 4, conv_dims: List[int] = (-1,)
78
- ):
79
- """
80
- NOTE: this interface is experimental.
81
-
82
- Args:
83
- in_channels (int): number of input feature channels. When using multiple
84
- input features, they must have the same number of channels.
85
- num_anchors (int): number of anchors to predict for *each spatial position*
86
- on the feature map. The total number of anchors for each
87
- feature map will be `num_anchors * H * W`.
88
- box_dim (int): dimension of a box, which is also the number of box regression
89
- predictions to make for each anchor. An axis aligned box has
90
- box_dim=4, while a rotated box has box_dim=5.
91
- conv_dims (list[int]): a list of integers representing the output channels
92
- of N conv layers. Set it to -1 to use the same number of output channels
93
- as input channels.
94
- """
95
- super().__init__()
96
- cur_channels = in_channels
97
- # Keeping the old variable names and structure for backwards compatibility.
98
- # Otherwise the old checkpoints will fail to load.
99
- if len(conv_dims) == 1:
100
- out_channels = cur_channels if conv_dims[0] == -1 else conv_dims[0]
101
- # 3x3 conv for the hidden representation
102
- self.conv = self._get_rpn_conv(cur_channels, out_channels)
103
- cur_channels = out_channels
104
- else:
105
- self.conv = nn.Sequential()
106
- for k, conv_dim in enumerate(conv_dims):
107
- out_channels = cur_channels if conv_dim == -1 else conv_dim
108
- if out_channels <= 0:
109
- raise ValueError(
110
- f"Conv output channels should be greater than 0. Got {out_channels}"
111
- )
112
- conv = self._get_rpn_conv(cur_channels, out_channels)
113
- self.conv.add_module(f"conv{k}", conv)
114
- cur_channels = out_channels
115
- # 1x1 conv for predicting objectness logits
116
- self.objectness_logits = nn.Conv2d(cur_channels, num_anchors, kernel_size=1, stride=1)
117
- # 1x1 conv for predicting box2box transform deltas
118
- self.anchor_deltas = nn.Conv2d(cur_channels, num_anchors * box_dim, kernel_size=1, stride=1)
119
-
120
- # Keeping the order of weights initialization the same for backwards compatibility.
121
- for layer in self.modules():
122
- if isinstance(layer, nn.Conv2d):
123
- nn.init.normal_(layer.weight, std=0.01)
124
- nn.init.constant_(layer.bias, 0)
125
-
126
- def _get_rpn_conv(self, in_channels, out_channels):
127
- return Conv2d(
128
- in_channels,
129
- out_channels,
130
- kernel_size=3,
131
- stride=1,
132
- padding=1,
133
- activation=nn.ReLU(),
134
- )
135
-
136
- @classmethod
137
- def from_config(cls, cfg, input_shape):
138
- # Standard RPN is shared across levels:
139
- in_channels = [s.channels for s in input_shape]
140
- assert len(set(in_channels)) == 1, "Each level must have the same channel!"
141
- in_channels = in_channels[0]
142
-
143
- # RPNHead should take the same input as anchor generator
144
- # NOTE: it assumes that creating an anchor generator does not have unwanted side effect.
145
- anchor_generator = build_anchor_generator(cfg, input_shape)
146
- num_anchors = anchor_generator.num_anchors
147
- box_dim = anchor_generator.box_dim
148
- assert (
149
- len(set(num_anchors)) == 1
150
- ), "Each level must have the same number of anchors per spatial position"
151
- return {
152
- "in_channels": in_channels,
153
- "num_anchors": num_anchors[0],
154
- "box_dim": box_dim,
155
- "conv_dims": cfg.MODEL.RPN.CONV_DIMS,
156
- }
157
-
158
- def forward(self, features: List[torch.Tensor]):
159
- """
160
- Args:
161
- features (list[Tensor]): list of feature maps
162
-
163
- Returns:
164
- list[Tensor]: A list of L elements.
165
- Element i is a tensor of shape (N, A, Hi, Wi) representing
166
- the predicted objectness logits for all anchors. A is the number of cell anchors.
167
- list[Tensor]: A list of L elements. Element i is a tensor of shape
168
- (N, A*box_dim, Hi, Wi) representing the predicted "deltas" used to transform anchors
169
- to proposals.
170
- """
171
- pred_objectness_logits = []
172
- pred_anchor_deltas = []
173
- for x in features:
174
- t = self.conv(x)
175
- pred_objectness_logits.append(self.objectness_logits(t))
176
- pred_anchor_deltas.append(self.anchor_deltas(t))
177
- return pred_objectness_logits, pred_anchor_deltas
178
-
179
-
180
- @PROPOSAL_GENERATOR_REGISTRY.register()
181
- class RPN(nn.Module):
182
- """
183
- Region Proposal Network, introduced by :paper:`Faster R-CNN`.
184
- """
185
-
186
- @configurable
187
- def __init__(
188
- self,
189
- *,
190
- in_features: List[str],
191
- head: nn.Module,
192
- anchor_generator: nn.Module,
193
- anchor_matcher: Matcher,
194
- box2box_transform: Box2BoxTransform,
195
- batch_size_per_image: int,
196
- positive_fraction: float,
197
- pre_nms_topk: Tuple[float, float],
198
- post_nms_topk: Tuple[float, float],
199
- nms_thresh: float = 0.7,
200
- min_box_size: float = 0.0,
201
- anchor_boundary_thresh: float = -1.0,
202
- loss_weight: Union[float, Dict[str, float]] = 1.0,
203
- box_reg_loss_type: str = "smooth_l1",
204
- smooth_l1_beta: float = 0.0,
205
- ):
206
- """
207
- NOTE: this interface is experimental.
208
-
209
- Args:
210
- in_features (list[str]): list of names of input features to use
211
- head (nn.Module): a module that predicts logits and regression deltas
212
- for each level from a list of per-level features
213
- anchor_generator (nn.Module): a module that creates anchors from a
214
- list of features. Usually an instance of :class:`AnchorGenerator`
215
- anchor_matcher (Matcher): label the anchors by matching them with ground truth.
216
- box2box_transform (Box2BoxTransform): defines the transform from anchor boxes to
217
- instance boxes
218
- batch_size_per_image (int): number of anchors per image to sample for training
219
- positive_fraction (float): fraction of foreground anchors to sample for training
220
- pre_nms_topk (tuple[float]): (train, test) that represents the
221
- number of top k proposals to select before NMS, in
222
- training and testing.
223
- post_nms_topk (tuple[float]): (train, test) that represents the
224
- number of top k proposals to select after NMS, in
225
- training and testing.
226
- nms_thresh (float): NMS threshold used to de-duplicate the predicted proposals
227
- min_box_size (float): remove proposal boxes with any side smaller than this threshold,
228
- in the unit of input image pixels
229
- anchor_boundary_thresh (float): legacy option
230
- loss_weight (float|dict): weights to use for losses. Can be single float for weighting
231
- all rpn losses together, or a dict of individual weightings. Valid dict keys are:
232
- "loss_rpn_cls" - applied to classification loss
233
- "loss_rpn_loc" - applied to box regression loss
234
- box_reg_loss_type (str): Loss type to use. Supported losses: "smooth_l1", "giou".
235
- smooth_l1_beta (float): beta parameter for the smooth L1 regression loss. Default to
236
- use L1 loss. Only used when `box_reg_loss_type` is "smooth_l1"
237
- """
238
- super().__init__()
239
- self.in_features = in_features
240
- self.rpn_head = head
241
- self.anchor_generator = anchor_generator
242
- self.anchor_matcher = anchor_matcher
243
- self.box2box_transform = box2box_transform
244
- self.batch_size_per_image = batch_size_per_image
245
- self.positive_fraction = positive_fraction
246
- # Map from self.training state to train/test settings
247
- self.pre_nms_topk = {True: pre_nms_topk[0], False: pre_nms_topk[1]}
248
- self.post_nms_topk = {True: post_nms_topk[0], False: post_nms_topk[1]}
249
- self.nms_thresh = nms_thresh
250
- self.min_box_size = float(min_box_size)
251
- self.anchor_boundary_thresh = anchor_boundary_thresh
252
- if isinstance(loss_weight, float):
253
- loss_weight = {"loss_rpn_cls": loss_weight, "loss_rpn_loc": loss_weight}
254
- self.loss_weight = loss_weight
255
- self.box_reg_loss_type = box_reg_loss_type
256
- self.smooth_l1_beta = smooth_l1_beta
257
-
258
- @classmethod
259
- def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]):
260
- in_features = cfg.MODEL.RPN.IN_FEATURES
261
- ret = {
262
- "in_features": in_features,
263
- "min_box_size": cfg.MODEL.PROPOSAL_GENERATOR.MIN_SIZE,
264
- "nms_thresh": cfg.MODEL.RPN.NMS_THRESH,
265
- "batch_size_per_image": cfg.MODEL.RPN.BATCH_SIZE_PER_IMAGE,
266
- "positive_fraction": cfg.MODEL.RPN.POSITIVE_FRACTION,
267
- "loss_weight": {
268
- "loss_rpn_cls": cfg.MODEL.RPN.LOSS_WEIGHT,
269
- "loss_rpn_loc": cfg.MODEL.RPN.BBOX_REG_LOSS_WEIGHT * cfg.MODEL.RPN.LOSS_WEIGHT,
270
- },
271
- "anchor_boundary_thresh": cfg.MODEL.RPN.BOUNDARY_THRESH,
272
- "box2box_transform": Box2BoxTransform(weights=cfg.MODEL.RPN.BBOX_REG_WEIGHTS),
273
- "box_reg_loss_type": cfg.MODEL.RPN.BBOX_REG_LOSS_TYPE,
274
- "smooth_l1_beta": cfg.MODEL.RPN.SMOOTH_L1_BETA,
275
- }
276
-
277
- ret["pre_nms_topk"] = (cfg.MODEL.RPN.PRE_NMS_TOPK_TRAIN, cfg.MODEL.RPN.PRE_NMS_TOPK_TEST)
278
- ret["post_nms_topk"] = (cfg.MODEL.RPN.POST_NMS_TOPK_TRAIN, cfg.MODEL.RPN.POST_NMS_TOPK_TEST)
279
-
280
- ret["anchor_generator"] = build_anchor_generator(cfg, [input_shape[f] for f in in_features])
281
- ret["anchor_matcher"] = Matcher(
282
- cfg.MODEL.RPN.IOU_THRESHOLDS, cfg.MODEL.RPN.IOU_LABELS, allow_low_quality_matches=True
283
- )
284
- ret["head"] = build_rpn_head(cfg, [input_shape[f] for f in in_features])
285
- return ret
286
-
287
- def _subsample_labels(self, label):
288
- """
289
- Randomly sample a subset of positive and negative examples, and overwrite
290
- the label vector to the ignore value (-1) for all elements that are not
291
- included in the sample.
292
-
293
- Args:
294
- labels (Tensor): a vector of -1, 0, 1. Will be modified in-place and returned.
295
- """
296
- pos_idx, neg_idx = subsample_labels(
297
- label, self.batch_size_per_image, self.positive_fraction, 0
298
- )
299
- # Fill with the ignore label (-1), then set positive and negative labels
300
- label.fill_(-1)
301
- label.scatter_(0, pos_idx, 1)
302
- label.scatter_(0, neg_idx, 0)
303
- return label
304
-
305
- @torch.jit.unused
306
- @torch.no_grad()
307
- def label_and_sample_anchors(
308
- self, anchors: List[Boxes], gt_instances: List[Instances]
309
- ) -> Tuple[List[torch.Tensor], List[torch.Tensor]]:
310
- """
311
- Args:
312
- anchors (list[Boxes]): anchors for each feature map.
313
- gt_instances: the ground-truth instances for each image.
314
-
315
- Returns:
316
- list[Tensor]:
317
- List of #img tensors. i-th element is a vector of labels whose length is
318
- the total number of anchors across all feature maps R = sum(Hi * Wi * A).
319
- Label values are in {-1, 0, 1}, with meanings: -1 = ignore; 0 = negative
320
- class; 1 = positive class.
321
- list[Tensor]:
322
- i-th element is a Rx4 tensor. The values are the matched gt boxes for each
323
- anchor. Values are undefined for those anchors not labeled as 1.
324
- """
325
- anchors = Boxes.cat(anchors)
326
-
327
- gt_boxes = [x.gt_boxes for x in gt_instances]
328
- image_sizes = [x.image_size for x in gt_instances]
329
- del gt_instances
330
-
331
- gt_labels = []
332
- matched_gt_boxes = []
333
- for image_size_i, gt_boxes_i in zip(image_sizes, gt_boxes):
334
- """
335
- image_size_i: (h, w) for the i-th image
336
- gt_boxes_i: ground-truth boxes for i-th image
337
- """
338
-
339
- match_quality_matrix = retry_if_cuda_oom(pairwise_iou)(gt_boxes_i, anchors)
340
- matched_idxs, gt_labels_i = retry_if_cuda_oom(self.anchor_matcher)(match_quality_matrix)
341
- # Matching is memory-expensive and may result in CPU tensors. But the result is small
342
- gt_labels_i = gt_labels_i.to(device=gt_boxes_i.device)
343
- del match_quality_matrix
344
-
345
- if self.anchor_boundary_thresh >= 0:
346
- # Discard anchors that go out of the boundaries of the image
347
- # NOTE: This is legacy functionality that is turned off by default in Detectron2
348
- anchors_inside_image = anchors.inside_box(image_size_i, self.anchor_boundary_thresh)
349
- gt_labels_i[~anchors_inside_image] = -1
350
-
351
- # A vector of labels (-1, 0, 1) for each anchor
352
- gt_labels_i = self._subsample_labels(gt_labels_i)
353
-
354
- if len(gt_boxes_i) == 0:
355
- # These values won't be used anyway since the anchor is labeled as background
356
- matched_gt_boxes_i = torch.zeros_like(anchors.tensor)
357
- else:
358
- # TODO wasted indexing computation for ignored boxes
359
- matched_gt_boxes_i = gt_boxes_i[matched_idxs].tensor
360
-
361
- gt_labels.append(gt_labels_i) # N,AHW
362
- matched_gt_boxes.append(matched_gt_boxes_i)
363
- return gt_labels, matched_gt_boxes
364
-
365
- @torch.jit.unused
366
- def losses(
367
- self,
368
- anchors: List[Boxes],
369
- pred_objectness_logits: List[torch.Tensor],
370
- gt_labels: List[torch.Tensor],
371
- pred_anchor_deltas: List[torch.Tensor],
372
- gt_boxes: List[torch.Tensor],
373
- ) -> Dict[str, torch.Tensor]:
374
- """
375
- Return the losses from a set of RPN predictions and their associated ground-truth.
376
-
377
- Args:
378
- anchors (list[Boxes or RotatedBoxes]): anchors for each feature map, each
379
- has shape (Hi*Wi*A, B), where B is box dimension (4 or 5).
380
- pred_objectness_logits (list[Tensor]): A list of L elements.
381
- Element i is a tensor of shape (N, Hi*Wi*A) representing
382
- the predicted objectness logits for all anchors.
383
- gt_labels (list[Tensor]): Output of :meth:`label_and_sample_anchors`.
384
- pred_anchor_deltas (list[Tensor]): A list of L elements. Element i is a tensor of shape
385
- (N, Hi*Wi*A, 4 or 5) representing the predicted "deltas" used to transform anchors
386
- to proposals.
387
- gt_boxes (list[Tensor]): Output of :meth:`label_and_sample_anchors`.
388
-
389
- Returns:
390
- dict[loss name -> loss value]: A dict mapping from loss name to loss value.
391
- Loss names are: `loss_rpn_cls` for objectness classification and
392
- `loss_rpn_loc` for proposal localization.
393
- """
394
- num_images = len(gt_labels)
395
- gt_labels = torch.stack(gt_labels) # (N, sum(Hi*Wi*Ai))
396
-
397
- # Log the number of positive/negative anchors per-image that's used in training
398
- pos_mask = gt_labels == 1
399
- num_pos_anchors = pos_mask.sum().item()
400
- num_neg_anchors = (gt_labels == 0).sum().item()
401
- storage = get_event_storage()
402
- storage.put_scalar("rpn/num_pos_anchors", num_pos_anchors / num_images)
403
- storage.put_scalar("rpn/num_neg_anchors", num_neg_anchors / num_images)
404
-
405
- localization_loss = _dense_box_regression_loss(
406
- anchors,
407
- self.box2box_transform,
408
- pred_anchor_deltas,
409
- gt_boxes,
410
- pos_mask,
411
- box_reg_loss_type=self.box_reg_loss_type,
412
- smooth_l1_beta=self.smooth_l1_beta,
413
- )
414
-
415
- valid_mask = gt_labels >= 0
416
- objectness_loss = F.binary_cross_entropy_with_logits(
417
- cat(pred_objectness_logits, dim=1)[valid_mask],
418
- gt_labels[valid_mask].to(torch.float32),
419
- reduction="sum",
420
- )
421
- normalizer = self.batch_size_per_image * num_images
422
- losses = {
423
- "loss_rpn_cls": objectness_loss / normalizer,
424
- # The original Faster R-CNN paper uses a slightly different normalizer
425
- # for loc loss. But it doesn't matter in practice
426
- "loss_rpn_loc": localization_loss / normalizer,
427
- }
428
- losses = {k: v * self.loss_weight.get(k, 1.0) for k, v in losses.items()}
429
- return losses
430
-
431
- def forward(
432
- self,
433
- images: ImageList,
434
- features: Dict[str, torch.Tensor],
435
- gt_instances: Optional[List[Instances]] = None,
436
- ):
437
- """
438
- Args:
439
- images (ImageList): input images of length `N`
440
- features (dict[str, Tensor]): input data as a mapping from feature
441
- map name to tensor. Axis 0 represents the number of images `N` in
442
- the input data; axes 1-3 are channels, height, and width, which may
443
- vary between feature maps (e.g., if a feature pyramid is used).
444
- gt_instances (list[Instances], optional): a length `N` list of `Instances`s.
445
- Each `Instances` stores ground-truth instances for the corresponding image.
446
-
447
- Returns:
448
- proposals: list[Instances]: contains fields "proposal_boxes", "objectness_logits"
449
- loss: dict[Tensor] or None
450
- """
451
- features = [features[f] for f in self.in_features]
452
- anchors = self.anchor_generator(features)
453
-
454
- pred_objectness_logits, pred_anchor_deltas = self.rpn_head(features)
455
- # Transpose the Hi*Wi*A dimension to the middle:
456
- pred_objectness_logits = [
457
- # (N, A, Hi, Wi) -> (N, Hi, Wi, A) -> (N, Hi*Wi*A)
458
- score.permute(0, 2, 3, 1).flatten(1)
459
- for score in pred_objectness_logits
460
- ]
461
- pred_anchor_deltas = [
462
- # (N, A*B, Hi, Wi) -> (N, A, B, Hi, Wi) -> (N, Hi, Wi, A, B) -> (N, Hi*Wi*A, B)
463
- x.view(x.shape[0], -1, self.anchor_generator.box_dim, x.shape[-2], x.shape[-1])
464
- .permute(0, 3, 4, 1, 2)
465
- .flatten(1, -2)
466
- for x in pred_anchor_deltas
467
- ]
468
-
469
- if self.training:
470
- assert gt_instances is not None, "RPN requires gt_instances in training!"
471
- gt_labels, gt_boxes = self.label_and_sample_anchors(anchors, gt_instances)
472
- losses = self.losses(
473
- anchors, pred_objectness_logits, gt_labels, pred_anchor_deltas, gt_boxes
474
- )
475
- else:
476
- losses = {}
477
- proposals = self.predict_proposals(
478
- anchors, pred_objectness_logits, pred_anchor_deltas, images.image_sizes
479
- )
480
- return proposals, losses
481
-
482
- def predict_proposals(
483
- self,
484
- anchors: List[Boxes],
485
- pred_objectness_logits: List[torch.Tensor],
486
- pred_anchor_deltas: List[torch.Tensor],
487
- image_sizes: List[Tuple[int, int]],
488
- ):
489
- """
490
- Decode all the predicted box regression deltas to proposals. Find the top proposals
491
- by applying NMS and removing boxes that are too small.
492
-
493
- Returns:
494
- proposals (list[Instances]): list of N Instances. The i-th Instances
495
- stores post_nms_topk object proposals for image i, sorted by their
496
- objectness score in descending order.
497
- """
498
- # The proposals are treated as fixed for joint training with roi heads.
499
- # This approach ignores the derivative w.r.t. the proposal boxes’ coordinates that
500
- # are also network responses.
501
- with torch.no_grad():
502
- pred_proposals = self._decode_proposals(anchors, pred_anchor_deltas)
503
- return find_top_rpn_proposals(
504
- pred_proposals,
505
- pred_objectness_logits,
506
- image_sizes,
507
- self.nms_thresh,
508
- self.pre_nms_topk[self.training],
509
- self.post_nms_topk[self.training],
510
- self.min_box_size,
511
- self.training,
512
- )
513
-
514
- def _decode_proposals(self, anchors: List[Boxes], pred_anchor_deltas: List[torch.Tensor]):
515
- """
516
- Transform anchors into proposals by applying the predicted anchor deltas.
517
-
518
- Returns:
519
- proposals (list[Tensor]): A list of L tensors. Tensor i has shape
520
- (N, Hi*Wi*A, B)
521
- """
522
- N = pred_anchor_deltas[0].shape[0]
523
- proposals = []
524
- # For each feature map
525
- for anchors_i, pred_anchor_deltas_i in zip(anchors, pred_anchor_deltas):
526
- B = anchors_i.tensor.size(1)
527
- pred_anchor_deltas_i = pred_anchor_deltas_i.reshape(-1, B)
528
- # Expand anchors to shape (N*Hi*Wi*A, B)
529
- anchors_i = anchors_i.tensor.unsqueeze(0).expand(N, -1, -1).reshape(-1, B)
530
- proposals_i = self.box2box_transform.apply_deltas(pred_anchor_deltas_i, anchors_i)
531
- # Append feature map proposals with shape (N, Hi*Wi*A, B)
532
- proposals.append(proposals_i.view(N, -1, B))
533
- return proposals
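
As a side note, a small self-contained sketch of the delta decoding that `_decode_proposals` relies on via `Box2BoxTransform.apply_deltas` (the standard Faster R-CNN (dx, dy, dw, dh) parameterization with unit weights; a simplified stand-in, not the detectron2 implementation):

import torch

def apply_deltas_sketch(deltas, anchors):
    # anchors: (N, 4) as (x1, y1, x2, y2); deltas: (N, 4) as (dx, dy, dw, dh)
    widths = anchors[:, 2] - anchors[:, 0]
    heights = anchors[:, 3] - anchors[:, 1]
    ctr_x = anchors[:, 0] + 0.5 * widths
    ctr_y = anchors[:, 1] + 0.5 * heights
    dx, dy, dw, dh = deltas.unbind(dim=1)
    pred_ctr_x = dx * widths + ctr_x
    pred_ctr_y = dy * heights + ctr_y
    pred_w = torch.exp(dw) * widths
    pred_h = torch.exp(dh) * heights
    return torch.stack([
        pred_ctr_x - 0.5 * pred_w,
        pred_ctr_y - 0.5 * pred_h,
        pred_ctr_x + 0.5 * pred_w,
        pred_ctr_y + 0.5 * pred_h,
    ], dim=1)

anchors = torch.tensor([[0.0, 0.0, 10.0, 10.0]])
deltas = torch.zeros(1, 4)
print(apply_deltas_sketch(deltas, anchors))  # zero deltas reproduce the anchor box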
 
spaces/Cletrason/Cletrason-toad-mario-movie/hf_utils.py DELETED
@@ -1,39 +0,0 @@
1
- from bs4 import BeautifulSoup
2
- import requests
3
-
4
-
5
- def model_url_list():
6
- url_list = []
7
- for i in range(0, 5):
8
- url_list.append(
9
- f"https://huggingface.co/models?p={i}&sort=downloads&search=dreambooth")
10
- return url_list
11
-
12
-
13
- def data_scraping(url_list):
14
- model_list = []
15
- for url in url_list:
16
- response = requests.get(url)
17
- soup = BeautifulSoup(response.text, "html.parser")
18
- div_class = 'grid grid-cols-1 gap-5 2xl:grid-cols-2'
19
- div = soup.find('div', {'class': div_class})
20
- for a in div.find_all('a', href=True):
21
- model_list.append(a['href'])
22
- return model_list
23
-
24
-
25
- def get_model_list():
26
- model_list = data_scraping(model_url_list())
27
- for i in range(len(model_list)):
28
- model_list[i] = model_list[i][1:]
29
-
30
- best_model_list = [
31
- "dreamlike-art/dreamlike-photoreal-2.0",
32
- "dreamlike-art/dreamlike-diffusion-1.0",
33
- "runwayml/stable-diffusion-v1-5",
34
- "CompVis/stable-diffusion-v1-4",
35
- "prompthero/openjourney",
36
- ]
37
-
38
- model_list = best_model_list + model_list
39
- return model_list
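
A small offline sketch of the parsing step used by `data_scraping`, run on a hand-written snippet instead of a live page (the HTML and CSS class string below are only illustrative; the real hub markup may differ):

from bs4 import BeautifulSoup

html = '<div class="grid grid-cols-1 gap-5 2xl:grid-cols-2"><a href="/acme/some-model">m</a></div>'
soup = BeautifulSoup(html, "html.parser")
div = soup.find("div", {"class": "grid grid-cols-1 gap-5 2xl:grid-cols-2"})
hrefs = [a["href"] for a in div.find_all("a", href=True)]
print(hrefs)  # ['/acme/some-model'] -- get_model_list() then strips the leading '/'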
 
spaces/CloseEric/CloseEric/Dockerfile DELETED
@@ -1,11 +0,0 @@
1
- FROM node:18-bullseye-slim
2
- RUN apt-get update && \
3
- apt-get install -y git
4
- RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
5
- WORKDIR /app
6
- RUN npm install
7
- COPY Dockerfile greeting.md* .env* ./
8
- RUN npm run build
9
- EXPOSE 7860
10
- ENV NODE_ENV=production
11
- CMD [ "npm", "start" ]
 
spaces/CofAI/tv/public/mpegts.js DELETED
The diff for this file is too large to render. See raw diff
 
spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/processors/transforms_video.py DELETED
@@ -1,179 +0,0 @@
1
- #!/usr/bin/env python3
2
- """
3
- Copyright (c) 2022, salesforce.com, inc.
4
- All rights reserved.
5
- SPDX-License-Identifier: BSD-3-Clause
6
- For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
7
- """
8
-
9
-
10
- import numbers
11
- import random
12
-
13
- from torchvision.transforms import (
14
- RandomCrop,
15
- RandomResizedCrop,
16
- )
17
-
18
- import video_llama.processors.functional_video as F
19
-
20
-
21
- __all__ = [
22
- "RandomCropVideo",
23
- "RandomResizedCropVideo",
24
- "CenterCropVideo",
25
- "NormalizeVideo",
26
- "ToTensorVideo",
27
- "RandomHorizontalFlipVideo",
28
- ]
29
-
30
-
31
- class RandomCropVideo(RandomCrop):
32
- def __init__(self, size):
33
- if isinstance(size, numbers.Number):
34
- self.size = (int(size), int(size))
35
- else:
36
- self.size = size
37
-
38
- def __call__(self, clip):
39
- """
40
- Args:
41
- clip (torch.tensor): Video clip to be cropped. Size is (C, T, H, W)
42
- Returns:
43
- torch.tensor: randomly cropped/resized video clip.
44
- size is (C, T, OH, OW)
45
- """
46
- i, j, h, w = self.get_params(clip, self.size)
47
- return F.crop(clip, i, j, h, w)
48
-
49
- def __repr__(self) -> str:
50
- return f"{self.__class__.__name__}(size={self.size})"
51
-
52
-
53
- class RandomResizedCropVideo(RandomResizedCrop):
54
- def __init__(
55
- self,
56
- size,
57
- scale=(0.08, 1.0),
58
- ratio=(3.0 / 4.0, 4.0 / 3.0),
59
- interpolation_mode="bilinear",
60
- ):
61
- if isinstance(size, tuple):
62
- if len(size) != 2:
63
- raise ValueError(
64
- f"size should be tuple (height, width), instead got {size}"
65
- )
66
- self.size = size
67
- else:
68
- self.size = (size, size)
69
-
70
- self.interpolation_mode = interpolation_mode
71
- self.scale = scale
72
- self.ratio = ratio
73
-
74
- def __call__(self, clip):
75
- """
76
- Args:
77
- clip (torch.tensor): Video clip to be cropped. Size is (C, T, H, W)
78
- Returns:
79
- torch.tensor: randomly cropped/resized video clip.
80
- size is (C, T, H, W)
81
- """
82
- i, j, h, w = self.get_params(clip, self.scale, self.ratio)
83
- return F.resized_crop(clip, i, j, h, w, self.size, self.interpolation_mode)
84
-
85
- def __repr__(self) -> str:
86
- return f"{self.__class__.__name__}(size={self.size}, interpolation_mode={self.interpolation_mode}, scale={self.scale}, ratio={self.ratio})"
87
-
88
-
89
- class CenterCropVideo:
90
- def __init__(self, crop_size):
91
- if isinstance(crop_size, numbers.Number):
92
- self.crop_size = (int(crop_size), int(crop_size))
93
- else:
94
- self.crop_size = crop_size
95
-
96
- def __call__(self, clip):
97
- """
98
- Args:
99
- clip (torch.tensor): Video clip to be cropped. Size is (C, T, H, W)
100
- Returns:
101
- torch.tensor: central cropping of video clip. Size is
102
- (C, T, crop_size, crop_size)
103
- """
104
- return F.center_crop(clip, self.crop_size)
105
-
106
- def __repr__(self) -> str:
107
- return f"{self.__class__.__name__}(crop_size={self.crop_size})"
108
-
109
-
110
- class NormalizeVideo:
111
- """
112
- Normalize the video clip by mean subtraction and division by standard deviation
113
- Args:
114
- mean (3-tuple): pixel RGB mean
115
- std (3-tuple): pixel RGB standard deviation
116
- inplace (boolean): whether to do in-place normalization
117
- """
118
-
119
- def __init__(self, mean, std, inplace=False):
120
- self.mean = mean
121
- self.std = std
122
- self.inplace = inplace
123
-
124
- def __call__(self, clip):
125
- """
126
- Args:
127
- clip (torch.tensor): video clip to be normalized. Size is (C, T, H, W)
128
- """
129
- return F.normalize(clip, self.mean, self.std, self.inplace)
130
-
131
- def __repr__(self) -> str:
132
- return f"{self.__class__.__name__}(mean={self.mean}, std={self.std}, inplace={self.inplace})"
133
-
134
-
135
- class ToTensorVideo:
136
- """
137
- Convert tensor data type from uint8 to float, divide value by 255.0 and
138
- permute the dimensions of clip tensor
139
- """
140
-
141
- def __init__(self):
142
- pass
143
-
144
- def __call__(self, clip):
145
- """
146
- Args:
147
- clip (torch.tensor, dtype=torch.uint8): Size is (T, H, W, C)
148
- Return:
149
- clip (torch.tensor, dtype=torch.float): Size is (C, T, H, W)
150
- """
151
- return F.to_tensor(clip)
152
-
153
- def __repr__(self) -> str:
154
- return self.__class__.__name__
155
-
156
-
157
- class RandomHorizontalFlipVideo:
158
- """
159
- Flip the video clip along the horizontal direction with a given probability
160
- Args:
161
- p (float): probability of the clip being flipped. Default value is 0.5
162
- """
163
-
164
- def __init__(self, p=0.5):
165
- self.p = p
166
-
167
- def __call__(self, clip):
168
- """
169
- Args:
170
- clip (torch.tensor): Size is (C, T, H, W)
171
- Return:
172
- clip (torch.tensor): Size is (C, T, H, W)
173
- """
174
- if random.random() < self.p:
175
- clip = F.hflip(clip)
176
- return clip
177
-
178
- def __repr__(self) -> str:
179
- return f"{self.__class__.__name__}(p={self.p})"
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ufoLib/validators.py DELETED
@@ -1,1186 +0,0 @@
1
- """Various low level data validators."""
2
-
3
- import calendar
4
- from io import open
5
- import fs.base
6
- import fs.osfs
7
-
8
- from collections.abc import Mapping
9
- from fontTools.ufoLib.utils import numberTypes
10
-
11
-
12
- # -------
13
- # Generic
14
- # -------
15
-
16
-
17
- def isDictEnough(value):
18
- """
19
- Some objects will likely come in that aren't
20
- dicts but are dict-ish enough.
21
- """
22
- if isinstance(value, Mapping):
23
- return True
24
- for attr in ("keys", "values", "items"):
25
- if not hasattr(value, attr):
26
- return False
27
- return True
28
-
29
-
30
- def genericTypeValidator(value, typ):
31
- """
32
- Generic. (Added at version 2.)
33
- """
34
- return isinstance(value, typ)
35
-
36
-
37
- def genericIntListValidator(values, validValues):
38
- """
39
- Generic. (Added at version 2.)
40
- """
41
- if not isinstance(values, (list, tuple)):
42
- return False
43
- valuesSet = set(values)
44
- validValuesSet = set(validValues)
45
- if valuesSet - validValuesSet:
46
- return False
47
- for value in values:
48
- if not isinstance(value, int):
49
- return False
50
- return True
51
-
52
-
53
- def genericNonNegativeIntValidator(value):
54
- """
55
- Generic. (Added at version 3.)
56
- """
57
- if not isinstance(value, int):
58
- return False
59
- if value < 0:
60
- return False
61
- return True
62
-
63
-
64
- def genericNonNegativeNumberValidator(value):
65
- """
66
- Generic. (Added at version 3.)
67
- """
68
- if not isinstance(value, numberTypes):
69
- return False
70
- if value < 0:
71
- return False
72
- return True
73
-
74
-
75
- def genericDictValidator(value, prototype):
76
- """
77
- Generic. (Added at version 3.)
78
- """
79
- # not a dict
80
- if not isinstance(value, Mapping):
81
- return False
82
- # missing required keys
83
- for key, (typ, required) in prototype.items():
84
- if not required:
85
- continue
86
- if key not in value:
87
- return False
88
- # unknown keys
89
- for key in value.keys():
90
- if key not in prototype:
91
- return False
92
- # incorrect types
93
- for key, v in value.items():
94
- prototypeType, required = prototype[key]
95
- if v is None and not required:
96
- continue
97
- if not isinstance(v, prototypeType):
98
- return False
99
- return True
100
-
101
-
102
- # --------------
103
- # fontinfo.plist
104
- # --------------
105
-
106
- # Data Validators
107
-
108
-
109
- def fontInfoStyleMapStyleNameValidator(value):
110
- """
111
- Version 2+.
112
- """
113
- options = ["regular", "italic", "bold", "bold italic"]
114
- return value in options
115
-
116
-
117
- def fontInfoOpenTypeGaspRangeRecordsValidator(value):
118
- """
119
- Version 3+.
120
- """
121
- if not isinstance(value, list):
122
- return False
123
- if len(value) == 0:
124
- return True
125
- validBehaviors = [0, 1, 2, 3]
126
- dictPrototype = dict(rangeMaxPPEM=(int, True), rangeGaspBehavior=(list, True))
127
- ppemOrder = []
128
- for rangeRecord in value:
129
- if not genericDictValidator(rangeRecord, dictPrototype):
130
- return False
131
- ppem = rangeRecord["rangeMaxPPEM"]
132
- behavior = rangeRecord["rangeGaspBehavior"]
133
- ppemValidity = genericNonNegativeIntValidator(ppem)
134
- if not ppemValidity:
135
- return False
136
- behaviorValidity = genericIntListValidator(behavior, validBehaviors)
137
- if not behaviorValidity:
138
- return False
139
- ppemOrder.append(ppem)
140
- if ppemOrder != sorted(ppemOrder):
141
- return False
142
- return True
143
-
144
-
145
- def fontInfoOpenTypeHeadCreatedValidator(value):
146
- """
147
- Version 2+.
148
- """
149
- # format: 0000/00/00 00:00:00
150
- if not isinstance(value, str):
151
- return False
152
- # basic formatting
153
- if not len(value) == 19:
154
- return False
155
- if value.count(" ") != 1:
156
- return False
157
- date, time = value.split(" ")
158
- if date.count("/") != 2:
159
- return False
160
- if time.count(":") != 2:
161
- return False
162
- # date
163
- year, month, day = date.split("/")
164
- if len(year) != 4:
165
- return False
166
- if len(month) != 2:
167
- return False
168
- if len(day) != 2:
169
- return False
170
- try:
171
- year = int(year)
172
- month = int(month)
173
- day = int(day)
174
- except ValueError:
175
- return False
176
- if month < 1 or month > 12:
177
- return False
178
- monthMaxDay = calendar.monthrange(year, month)[1]
179
- if day < 1 or day > monthMaxDay:
180
- return False
181
- # time
182
- hour, minute, second = time.split(":")
183
- if len(hour) != 2:
184
- return False
185
- if len(minute) != 2:
186
- return False
187
- if len(second) != 2:
188
- return False
189
- try:
190
- hour = int(hour)
191
- minute = int(minute)
192
- second = int(second)
193
- except ValueError:
194
- return False
195
- if hour < 0 or hour > 23:
196
- return False
197
- if minute < 0 or minute > 59:
198
- return False
199
- if second < 0 or second > 59:
200
- return False
201
- # fallback
202
- return True
203
-
204
-
205
- def fontInfoOpenTypeNameRecordsValidator(value):
206
- """
207
- Version 3+.
208
- """
209
- if not isinstance(value, list):
210
- return False
211
- dictPrototype = dict(
212
- nameID=(int, True),
213
- platformID=(int, True),
214
- encodingID=(int, True),
215
- languageID=(int, True),
216
- string=(str, True),
217
- )
218
- for nameRecord in value:
219
- if not genericDictValidator(nameRecord, dictPrototype):
220
- return False
221
- return True
222
-
223
-
224
- def fontInfoOpenTypeOS2WeightClassValidator(value):
225
- """
226
- Version 2+.
227
- """
228
- if not isinstance(value, int):
229
- return False
230
- if value < 0:
231
- return False
232
- return True
233
-
234
-
235
- def fontInfoOpenTypeOS2WidthClassValidator(value):
236
- """
237
- Version 2+.
238
- """
239
- if not isinstance(value, int):
240
- return False
241
- if value < 1:
242
- return False
243
- if value > 9:
244
- return False
245
- return True
246
-
247
-
248
- def fontInfoVersion2OpenTypeOS2PanoseValidator(values):
249
- """
250
- Version 2.
251
- """
252
- if not isinstance(values, (list, tuple)):
253
- return False
254
- if len(values) != 10:
255
- return False
256
- for value in values:
257
- if not isinstance(value, int):
258
- return False
259
- # XXX further validation?
260
- return True
261
-
262
-
263
- def fontInfoVersion3OpenTypeOS2PanoseValidator(values):
264
- """
265
- Version 3+.
266
- """
267
- if not isinstance(values, (list, tuple)):
268
- return False
269
- if len(values) != 10:
270
- return False
271
- for value in values:
272
- if not isinstance(value, int):
273
- return False
274
- if value < 0:
275
- return False
276
- # XXX further validation?
277
- return True
278
-
279
-
280
- def fontInfoOpenTypeOS2FamilyClassValidator(values):
281
- """
282
- Version 2+.
283
- """
284
- if not isinstance(values, (list, tuple)):
285
- return False
286
- if len(values) != 2:
287
- return False
288
- for value in values:
289
- if not isinstance(value, int):
290
- return False
291
- classID, subclassID = values
292
- if classID < 0 or classID > 14:
293
- return False
294
- if subclassID < 0 or subclassID > 15:
295
- return False
296
- return True
297
-
298
-
299
- def fontInfoPostscriptBluesValidator(values):
300
- """
301
- Version 2+.
302
- """
303
- if not isinstance(values, (list, tuple)):
304
- return False
305
- if len(values) > 14:
306
- return False
307
- if len(values) % 2:
308
- return False
309
- for value in values:
310
- if not isinstance(value, numberTypes):
311
- return False
312
- return True
313
-
314
-
315
- def fontInfoPostscriptOtherBluesValidator(values):
316
- """
317
- Version 2+.
318
- """
319
- if not isinstance(values, (list, tuple)):
320
- return False
321
- if len(values) > 10:
322
- return False
323
- if len(values) % 2:
324
- return False
325
- for value in values:
326
- if not isinstance(value, numberTypes):
327
- return False
328
- return True
329
-
330
-
331
- def fontInfoPostscriptStemsValidator(values):
332
- """
333
- Version 2+.
334
- """
335
- if not isinstance(values, (list, tuple)):
336
- return False
337
- if len(values) > 12:
338
- return False
339
- for value in values:
340
- if not isinstance(value, numberTypes):
341
- return False
342
- return True
343
-
344
-
345
- def fontInfoPostscriptWindowsCharacterSetValidator(value):
346
- """
347
- Version 2+.
348
- """
349
- validValues = list(range(1, 21))
350
- if value not in validValues:
351
- return False
352
- return True
353
-
354
-
355
- def fontInfoWOFFMetadataUniqueIDValidator(value):
356
- """
357
- Version 3+.
358
- """
359
- dictPrototype = dict(id=(str, True))
360
- if not genericDictValidator(value, dictPrototype):
361
- return False
362
- return True
363
-
364
-
365
- def fontInfoWOFFMetadataVendorValidator(value):
366
- """
367
- Version 3+.
368
- """
369
- dictPrototype = {
370
- "name": (str, True),
371
- "url": (str, False),
372
- "dir": (str, False),
373
- "class": (str, False),
374
- }
375
- if not genericDictValidator(value, dictPrototype):
376
- return False
377
- if "dir" in value and value.get("dir") not in ("ltr", "rtl"):
378
- return False
379
- return True
380
-
381
-
382
- def fontInfoWOFFMetadataCreditsValidator(value):
383
- """
384
- Version 3+.
385
- """
386
- dictPrototype = dict(credits=(list, True))
387
- if not genericDictValidator(value, dictPrototype):
388
- return False
389
- if not len(value["credits"]):
390
- return False
391
- dictPrototype = {
392
- "name": (str, True),
393
- "url": (str, False),
394
- "role": (str, False),
395
- "dir": (str, False),
396
- "class": (str, False),
397
- }
398
- for credit in value["credits"]:
399
- if not genericDictValidator(credit, dictPrototype):
400
- return False
401
- if "dir" in credit and credit.get("dir") not in ("ltr", "rtl"):
402
- return False
403
- return True
404
-
405
-
406
- def fontInfoWOFFMetadataDescriptionValidator(value):
407
- """
408
- Version 3+.
409
- """
410
- dictPrototype = dict(url=(str, False), text=(list, True))
411
- if not genericDictValidator(value, dictPrototype):
412
- return False
413
- for text in value["text"]:
414
- if not fontInfoWOFFMetadataTextValue(text):
415
- return False
416
- return True
417
-
418
-
419
- def fontInfoWOFFMetadataLicenseValidator(value):
420
- """
421
- Version 3+.
422
- """
423
- dictPrototype = dict(url=(str, False), text=(list, False), id=(str, False))
424
- if not genericDictValidator(value, dictPrototype):
425
- return False
426
- if "text" in value:
427
- for text in value["text"]:
428
- if not fontInfoWOFFMetadataTextValue(text):
429
- return False
430
- return True
431
-
432
-
433
- def fontInfoWOFFMetadataTrademarkValidator(value):
434
- """
435
- Version 3+.
436
- """
437
- dictPrototype = dict(text=(list, True))
438
- if not genericDictValidator(value, dictPrototype):
439
- return False
440
- for text in value["text"]:
441
- if not fontInfoWOFFMetadataTextValue(text):
442
- return False
443
- return True
444
-
445
-
446
- def fontInfoWOFFMetadataCopyrightValidator(value):
447
- """
448
- Version 3+.
449
- """
450
- dictPrototype = dict(text=(list, True))
451
- if not genericDictValidator(value, dictPrototype):
452
- return False
453
- for text in value["text"]:
454
- if not fontInfoWOFFMetadataTextValue(text):
455
- return False
456
- return True
457
-
458
-
459
- def fontInfoWOFFMetadataLicenseeValidator(value):
460
- """
461
- Version 3+.
462
- """
463
- dictPrototype = {"name": (str, True), "dir": (str, False), "class": (str, False)}
464
- if not genericDictValidator(value, dictPrototype):
465
- return False
466
- if "dir" in value and value.get("dir") not in ("ltr", "rtl"):
467
- return False
468
- return True
469
-
470
-
471
- def fontInfoWOFFMetadataTextValue(value):
472
- """
473
- Version 3+.
474
- """
475
- dictPrototype = {
476
- "text": (str, True),
477
- "language": (str, False),
478
- "dir": (str, False),
479
- "class": (str, False),
480
- }
481
- if not genericDictValidator(value, dictPrototype):
482
- return False
483
- if "dir" in value and value.get("dir") not in ("ltr", "rtl"):
484
- return False
485
- return True
486
-
487
-
488
- def fontInfoWOFFMetadataExtensionsValidator(value):
489
- """
490
- Version 3+.
491
- """
492
- if not isinstance(value, list):
493
- return False
494
- if not value:
495
- return False
496
- for extension in value:
497
- if not fontInfoWOFFMetadataExtensionValidator(extension):
498
- return False
499
- return True
500
-
501
-
502
- def fontInfoWOFFMetadataExtensionValidator(value):
503
- """
504
- Version 3+.
505
- """
506
- dictPrototype = dict(names=(list, False), items=(list, True), id=(str, False))
507
- if not genericDictValidator(value, dictPrototype):
508
- return False
509
- if "names" in value:
510
- for name in value["names"]:
511
- if not fontInfoWOFFMetadataExtensionNameValidator(name):
512
- return False
513
- for item in value["items"]:
514
- if not fontInfoWOFFMetadataExtensionItemValidator(item):
515
- return False
516
- return True
517
-
518
-
519
- def fontInfoWOFFMetadataExtensionItemValidator(value):
520
- """
521
- Version 3+.
522
- """
523
- dictPrototype = dict(id=(str, False), names=(list, True), values=(list, True))
524
- if not genericDictValidator(value, dictPrototype):
525
- return False
526
- for name in value["names"]:
527
- if not fontInfoWOFFMetadataExtensionNameValidator(name):
528
- return False
529
- for val in value["values"]:
530
- if not fontInfoWOFFMetadataExtensionValueValidator(val):
531
- return False
532
- return True
533
-
534
-
535
- def fontInfoWOFFMetadataExtensionNameValidator(value):
536
- """
537
- Version 3+.
538
- """
539
- dictPrototype = {
540
- "text": (str, True),
541
- "language": (str, False),
542
- "dir": (str, False),
543
- "class": (str, False),
544
- }
545
- if not genericDictValidator(value, dictPrototype):
546
- return False
547
- if "dir" in value and value.get("dir") not in ("ltr", "rtl"):
548
- return False
549
- return True
550
-
551
-
552
- def fontInfoWOFFMetadataExtensionValueValidator(value):
553
- """
554
- Version 3+.
555
- """
556
- dictPrototype = {
557
- "text": (str, True),
558
- "language": (str, False),
559
- "dir": (str, False),
560
- "class": (str, False),
561
- }
562
- if not genericDictValidator(value, dictPrototype):
563
- return False
564
- if "dir" in value and value.get("dir") not in ("ltr", "rtl"):
565
- return False
566
- return True
567
-
568
-
569
- # ----------
570
- # Guidelines
571
- # ----------
572
-
573
-
574
- def guidelinesValidator(value, identifiers=None):
575
- """
576
- Version 3+.
577
- """
578
- if not isinstance(value, list):
579
- return False
580
- if identifiers is None:
581
- identifiers = set()
582
- for guide in value:
583
- if not guidelineValidator(guide):
584
- return False
585
- identifier = guide.get("identifier")
586
- if identifier is not None:
587
- if identifier in identifiers:
588
- return False
589
- identifiers.add(identifier)
590
- return True
591
-
592
-
593
- _guidelineDictPrototype = dict(
594
- x=((int, float), False),
595
- y=((int, float), False),
596
- angle=((int, float), False),
597
- name=(str, False),
598
- color=(str, False),
599
- identifier=(str, False),
600
- )
601
-
602
-
603
- def guidelineValidator(value):
604
- """
605
- Version 3+.
606
- """
607
- if not genericDictValidator(value, _guidelineDictPrototype):
608
- return False
609
- x = value.get("x")
610
- y = value.get("y")
611
- angle = value.get("angle")
612
- # x or y must be present
613
- if x is None and y is None:
614
- return False
615
- # if x or y are None, angle must not be present
616
- if x is None or y is None:
617
- if angle is not None:
618
- return False
619
- # if x and y are defined, angle must be defined
620
- if x is not None and y is not None and angle is None:
621
- return False
622
- # angle must be between 0 and 360
623
- if angle is not None:
624
- if angle < 0:
625
- return False
626
- if angle > 360:
627
- return False
628
- # identifier must be 1 or more characters
629
- identifier = value.get("identifier")
630
- if identifier is not None and not identifierValidator(identifier):
631
- return False
632
- # color must follow the proper format
633
- color = value.get("color")
634
- if color is not None and not colorValidator(color):
635
- return False
636
- return True
637
-
638
-
639
- # -------
640
- # Anchors
641
- # -------
642
-
643
-
644
- def anchorsValidator(value, identifiers=None):
645
- """
646
- Version 3+.
647
- """
648
- if not isinstance(value, list):
649
- return False
650
- if identifiers is None:
651
- identifiers = set()
652
- for anchor in value:
653
- if not anchorValidator(anchor):
654
- return False
655
- identifier = anchor.get("identifier")
656
- if identifier is not None:
657
- if identifier in identifiers:
658
- return False
659
- identifiers.add(identifier)
660
- return True
661
-
662
-
663
- _anchorDictPrototype = dict(
664
- x=((int, float), False),
665
- y=((int, float), False),
666
- name=(str, False),
667
- color=(str, False),
668
- identifier=(str, False),
669
- )
670
-
671
-
672
- def anchorValidator(value):
673
- """
674
- Version 3+.
675
- """
676
- if not genericDictValidator(value, _anchorDictPrototype):
677
- return False
678
- x = value.get("x")
679
- y = value.get("y")
680
- # x and y must be present
681
- if x is None or y is None:
682
- return False
683
- # identifier must be 1 or more characters
684
- identifier = value.get("identifier")
685
- if identifier is not None and not identifierValidator(identifier):
686
- return False
687
- # color must follow the proper format
688
- color = value.get("color")
689
- if color is not None and not colorValidator(color):
690
- return False
691
- return True
692
-
693
-
694
- # ----------
695
- # Identifier
696
- # ----------
697
-
698
-
699
- def identifierValidator(value):
700
- """
701
- Version 3+.
702
-
703
- >>> identifierValidator("a")
704
- True
705
- >>> identifierValidator("")
706
- False
707
- >>> identifierValidator("a" * 101)
708
- False
709
- """
710
- validCharactersMin = 0x20
711
- validCharactersMax = 0x7E
712
- if not isinstance(value, str):
713
- return False
714
- if not value:
715
- return False
716
- if len(value) > 100:
717
- return False
718
- for c in value:
719
- c = ord(c)
720
- if c < validCharactersMin or c > validCharactersMax:
721
- return False
722
- return True
723
-
724
-
725
- # -----
726
- # Color
727
- # -----
728
-
729
-
730
- def colorValidator(value):
731
- """
732
- Version 3+.
733
-
734
- >>> colorValidator("0,0,0,0")
735
- True
736
- >>> colorValidator(".5,.5,.5,.5")
737
- True
738
- >>> colorValidator("0.5,0.5,0.5,0.5")
739
- True
740
- >>> colorValidator("1,1,1,1")
741
- True
742
-
743
- >>> colorValidator("2,0,0,0")
744
- False
745
- >>> colorValidator("0,2,0,0")
746
- False
747
- >>> colorValidator("0,0,2,0")
748
- False
749
- >>> colorValidator("0,0,0,2")
750
- False
751
-
752
- >>> colorValidator("1r,1,1,1")
753
- False
754
- >>> colorValidator("1,1g,1,1")
755
- False
756
- >>> colorValidator("1,1,1b,1")
757
- False
758
- >>> colorValidator("1,1,1,1a")
759
- False
760
-
761
- >>> colorValidator("1 1 1 1")
762
- False
763
- >>> colorValidator("1 1,1,1")
764
- False
765
- >>> colorValidator("1,1 1,1")
766
- False
767
- >>> colorValidator("1,1,1 1")
768
- False
769
-
770
- >>> colorValidator("1, 1, 1, 1")
771
- True
772
- """
773
- if not isinstance(value, str):
774
- return False
775
- parts = value.split(",")
776
- if len(parts) != 4:
777
- return False
778
- for part in parts:
779
- part = part.strip()
780
- converted = False
781
- try:
782
- part = int(part)
783
- converted = True
784
- except ValueError:
785
- pass
786
- if not converted:
787
- try:
788
- part = float(part)
789
- converted = True
790
- except ValueError:
791
- pass
792
- if not converted:
793
- return False
794
- if part < 0:
795
- return False
796
- if part > 1:
797
- return False
798
- return True
799
-
800
-
801
- # -----
802
- # image
803
- # -----
804
-
805
- pngSignature = b"\x89PNG\r\n\x1a\n"
806
-
807
- _imageDictPrototype = dict(
808
- fileName=(str, True),
809
- xScale=((int, float), False),
810
- xyScale=((int, float), False),
811
- yxScale=((int, float), False),
812
- yScale=((int, float), False),
813
- xOffset=((int, float), False),
814
- yOffset=((int, float), False),
815
- color=(str, False),
816
- )
817
-
818
-
819
- def imageValidator(value):
820
- """
821
- Version 3+.
822
- """
823
- if not genericDictValidator(value, _imageDictPrototype):
824
- return False
825
- # fileName must be one or more characters
826
- if not value["fileName"]:
827
- return False
828
- # color must follow the proper format
829
- color = value.get("color")
830
- if color is not None and not colorValidator(color):
831
- return False
832
- return True
833
-
834
-
835
- def pngValidator(path=None, data=None, fileObj=None):
836
- """
837
- Version 3+.
838
-
839
- This checks the signature of the image data.
840
- """
841
- assert path is not None or data is not None or fileObj is not None
842
- if path is not None:
843
- with open(path, "rb") as f:
844
- signature = f.read(8)
845
- elif data is not None:
846
- signature = data[:8]
847
- elif fileObj is not None:
848
- pos = fileObj.tell()
849
- signature = fileObj.read(8)
850
- fileObj.seek(pos)
851
- if signature != pngSignature:
852
- return False, "Image does not begin with the PNG signature."
853
- return True, None
854
-
855
-
856
- # -------------------
857
- # layercontents.plist
858
- # -------------------
859
-
860
-
861
- def layerContentsValidator(value, ufoPathOrFileSystem):
862
- """
863
- Check the validity of layercontents.plist.
864
- Version 3+.
865
- """
866
- if isinstance(ufoPathOrFileSystem, fs.base.FS):
867
- fileSystem = ufoPathOrFileSystem
868
- else:
869
- fileSystem = fs.osfs.OSFS(ufoPathOrFileSystem)
870
-
871
- bogusFileMessage = "layercontents.plist is not in the correct format."
872
- # file isn't in the right format
873
- if not isinstance(value, list):
874
- return False, bogusFileMessage
875
- # work through each entry
876
- usedLayerNames = set()
877
- usedDirectories = set()
878
- contents = {}
879
- for entry in value:
880
- # layer entry in the incorrect format
881
- if not isinstance(entry, list):
882
- return False, bogusFileMessage
883
- if not len(entry) == 2:
884
- return False, bogusFileMessage
885
- for i in entry:
886
- if not isinstance(i, str):
887
- return False, bogusFileMessage
888
- layerName, directoryName = entry
889
- # check directory naming
890
- if directoryName != "glyphs":
891
- if not directoryName.startswith("glyphs."):
892
- return (
893
- False,
894
- "Invalid directory name (%s) in layercontents.plist."
895
- % directoryName,
896
- )
897
- if len(layerName) == 0:
898
- return False, "Empty layer name in layercontents.plist."
899
- # directory doesn't exist
900
- if not fileSystem.exists(directoryName):
901
- return False, "A glyphset does not exist at %s." % directoryName
902
- # default layer name
903
- if layerName == "public.default" and directoryName != "glyphs":
904
- return (
905
- False,
906
- "The name public.default is being used by a layer that is not the default.",
907
- )
908
- # check usage
909
- if layerName in usedLayerNames:
910
- return (
911
- False,
912
- "The layer name %s is used by more than one layer." % layerName,
913
- )
914
- usedLayerNames.add(layerName)
915
- if directoryName in usedDirectories:
916
- return (
917
- False,
918
- "The directory %s is used by more than one layer." % directoryName,
919
- )
920
- usedDirectories.add(directoryName)
921
- # store
922
- contents[layerName] = directoryName
923
- # missing default layer
924
- foundDefault = "glyphs" in contents.values()
925
- if not foundDefault:
926
- return False, "The required default glyph set is not in the UFO."
927
- return True, None
928
-
929
-
930
- # ------------
931
- # groups.plist
932
- # ------------
933
-
934
-
935
- def groupsValidator(value):
936
- """
937
- Check the validity of the groups.
938
- Version 3+ (though it's backwards compatible with UFO 1 and UFO 2).
939
-
940
- >>> groups = {"A" : ["A", "A"], "A2" : ["A"]}
941
- >>> groupsValidator(groups)
942
- (True, None)
943
-
944
- >>> groups = {"" : ["A"]}
945
- >>> valid, msg = groupsValidator(groups)
946
- >>> valid
947
- False
948
- >>> print(msg)
949
- A group has an empty name.
950
-
951
- >>> groups = {"public.awesome" : ["A"]}
952
- >>> groupsValidator(groups)
953
- (True, None)
954
-
955
- >>> groups = {"public.kern1." : ["A"]}
956
- >>> valid, msg = groupsValidator(groups)
957
- >>> valid
958
- False
959
- >>> print(msg)
960
- The group data contains a kerning group with an incomplete name.
961
- >>> groups = {"public.kern2." : ["A"]}
962
- >>> valid, msg = groupsValidator(groups)
963
- >>> valid
964
- False
965
- >>> print(msg)
966
- The group data contains a kerning group with an incomplete name.
967
-
968
- >>> groups = {"public.kern1.A" : ["A"], "public.kern2.A" : ["A"]}
969
- >>> groupsValidator(groups)
970
- (True, None)
971
-
972
- >>> groups = {"public.kern1.A1" : ["A"], "public.kern1.A2" : ["A"]}
973
- >>> valid, msg = groupsValidator(groups)
974
- >>> valid
975
- False
976
- >>> print(msg)
977
- The glyph "A" occurs in too many kerning groups.
978
- """
979
- bogusFormatMessage = "The group data is not in the correct format."
980
- if not isDictEnough(value):
981
- return False, bogusFormatMessage
982
- firstSideMapping = {}
983
- secondSideMapping = {}
984
- for groupName, glyphList in value.items():
985
- if not isinstance(groupName, (str)):
986
- return False, bogusFormatMessage
987
- if not isinstance(glyphList, (list, tuple)):
988
- return False, bogusFormatMessage
989
- if not groupName:
990
- return False, "A group has an empty name."
991
- if groupName.startswith("public."):
992
- if not groupName.startswith("public.kern1.") and not groupName.startswith(
993
- "public.kern2."
994
- ):
995
- # unknown public.* name. silently skip.
996
- continue
997
- else:
998
- if len("public.kernN.") == len(groupName):
999
- return (
1000
- False,
1001
- "The group data contains a kerning group with an incomplete name.",
1002
- )
1003
- if groupName.startswith("public.kern1."):
1004
- d = firstSideMapping
1005
- else:
1006
- d = secondSideMapping
1007
- for glyphName in glyphList:
1008
- if not isinstance(glyphName, str):
1009
- return (
1010
- False,
1011
- "The group data %s contains an invalid member." % groupName,
1012
- )
1013
- if glyphName in d:
1014
- return (
1015
- False,
1016
- 'The glyph "%s" occurs in too many kerning groups.' % glyphName,
1017
- )
1018
- d[glyphName] = groupName
1019
- return True, None
1020
-
1021
-
1022
- # -------------
1023
- # kerning.plist
1024
- # -------------
1025
-
1026
-
1027
- def kerningValidator(data):
1028
- """
1029
- Check the validity of the kerning data structure.
1030
- Version 3+ (though it's backwards compatible with UFO 1 and UFO 2).
1031
-
1032
- >>> kerning = {"A" : {"B" : 100}}
1033
- >>> kerningValidator(kerning)
1034
- (True, None)
1035
-
1036
- >>> kerning = {"A" : ["B"]}
1037
- >>> valid, msg = kerningValidator(kerning)
1038
- >>> valid
1039
- False
1040
- >>> print(msg)
1041
- The kerning data is not in the correct format.
1042
-
1043
- >>> kerning = {"A" : {"B" : "100"}}
1044
- >>> valid, msg = kerningValidator(kerning)
1045
- >>> valid
1046
- False
1047
- >>> print(msg)
1048
- The kerning data is not in the correct format.
1049
- """
1050
- bogusFormatMessage = "The kerning data is not in the correct format."
1051
- if not isinstance(data, Mapping):
1052
- return False, bogusFormatMessage
1053
- for first, secondDict in data.items():
1054
- if not isinstance(first, str):
1055
- return False, bogusFormatMessage
1056
- elif not isinstance(secondDict, Mapping):
1057
- return False, bogusFormatMessage
1058
- for second, value in secondDict.items():
1059
- if not isinstance(second, str):
1060
- return False, bogusFormatMessage
1061
- elif not isinstance(value, numberTypes):
1062
- return False, bogusFormatMessage
1063
- return True, None
1064
-
1065
-
1066
- # -------------
1067
- # lib.plist/lib
1068
- # -------------
1069
-
1070
- _bogusLibFormatMessage = "The lib data is not in the correct format: %s"
1071
-
1072
-
1073
- def fontLibValidator(value):
1074
- """
1075
- Check the validity of the lib.
1076
- Version 3+ (though it's backwards compatible with UFO 1 and UFO 2).
1077
-
1078
- >>> lib = {"foo" : "bar"}
1079
- >>> fontLibValidator(lib)
1080
- (True, None)
1081
-
1082
- >>> lib = {"public.awesome" : "hello"}
1083
- >>> fontLibValidator(lib)
1084
- (True, None)
1085
-
1086
- >>> lib = {"public.glyphOrder" : ["A", "C", "B"]}
1087
- >>> fontLibValidator(lib)
1088
- (True, None)
1089
-
1090
- >>> lib = "hello"
1091
- >>> valid, msg = fontLibValidator(lib)
1092
- >>> valid
1093
- False
1094
- >>> print(msg) # doctest: +ELLIPSIS
1095
- The lib data is not in the correct format: expected a dictionary, ...
1096
-
1097
- >>> lib = {1: "hello"}
1098
- >>> valid, msg = fontLibValidator(lib)
1099
- >>> valid
1100
- False
1101
- >>> print(msg)
1102
- The lib key is not properly formatted: expected str, found int: 1
1103
-
1104
- >>> lib = {"public.glyphOrder" : "hello"}
1105
- >>> valid, msg = fontLibValidator(lib)
1106
- >>> valid
1107
- False
1108
- >>> print(msg) # doctest: +ELLIPSIS
1109
- public.glyphOrder is not properly formatted: expected list or tuple,...
1110
-
1111
- >>> lib = {"public.glyphOrder" : ["A", 1, "B"]}
1112
- >>> valid, msg = fontLibValidator(lib)
1113
- >>> valid
1114
- False
1115
- >>> print(msg) # doctest: +ELLIPSIS
1116
- public.glyphOrder is not properly formatted: expected str,...
1117
- """
1118
- if not isDictEnough(value):
1119
- reason = "expected a dictionary, found %s" % type(value).__name__
1120
- return False, _bogusLibFormatMessage % reason
1121
- for key, value in value.items():
1122
- if not isinstance(key, str):
1123
- return False, (
1124
- "The lib key is not properly formatted: expected str, found %s: %r"
1125
- % (type(key).__name__, key)
1126
- )
1127
- # public.glyphOrder
1128
- if key == "public.glyphOrder":
1129
- bogusGlyphOrderMessage = "public.glyphOrder is not properly formatted: %s"
1130
- if not isinstance(value, (list, tuple)):
1131
- reason = "expected list or tuple, found %s" % type(value).__name__
1132
- return False, bogusGlyphOrderMessage % reason
1133
- for glyphName in value:
1134
- if not isinstance(glyphName, str):
1135
- reason = "expected str, found %s" % type(glyphName).__name__
1136
- return False, bogusGlyphOrderMessage % reason
1137
- return True, None
1138
-
1139
-
1140
- # --------
1141
- # GLIF lib
1142
- # --------
1143
-
1144
-
1145
- def glyphLibValidator(value):
1146
- """
1147
- Check the validity of the lib.
1148
- Version 3+ (though it's backwards compatible with UFO 1 and UFO 2).
1149
-
1150
- >>> lib = {"foo" : "bar"}
1151
- >>> glyphLibValidator(lib)
1152
- (True, None)
1153
-
1154
- >>> lib = {"public.awesome" : "hello"}
1155
- >>> glyphLibValidator(lib)
1156
- (True, None)
1157
-
1158
- >>> lib = {"public.markColor" : "1,0,0,0.5"}
1159
- >>> glyphLibValidator(lib)
1160
- (True, None)
1161
-
1162
- >>> lib = {"public.markColor" : 1}
1163
- >>> valid, msg = glyphLibValidator(lib)
1164
- >>> valid
1165
- False
1166
- >>> print(msg)
1167
- public.markColor is not properly formatted.
1168
- """
1169
- if not isDictEnough(value):
1170
- reason = "expected a dictionary, found %s" % type(value).__name__
1171
- return False, _bogusLibFormatMessage % reason
1172
- for key, value in value.items():
1173
- if not isinstance(key, str):
1174
- reason = "key (%s) should be a string" % key
1175
- return False, _bogusLibFormatMessage % reason
1176
- # public.markColor
1177
- if key == "public.markColor":
1178
- if not colorValidator(value):
1179
- return False, "public.markColor is not properly formatted."
1180
- return True, None
1181
-
1182
-
1183
- if __name__ == "__main__":
1184
- import doctest
1185
-
1186
- doctest.testmod()
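Editor's note: the doctests above exercise each validator in isolation. As a minimal sketch (not part of the deleted file), the group, kerning and lib validators can be chained when checking hand-built UFO data; the sample values below are invented for illustration, and the snippet assumes the validators defined above are importable in the current scope.

```python
# Sketch only: chain the validators from the file above over sample UFO data.
# The data values are illustrative; only the (valid, message) return protocol
# comes from the deleted module.
groups = {"public.kern1.O": ["O", "D", "Q"]}
kerning = {"public.kern1.O": {"A": -40}}
lib = {"public.glyphOrder": ["A", "D", "O", "Q"]}

for validator, data in [
    (groupsValidator, groups),
    (kerningValidator, kerning),
    (fontLibValidator, lib),
]:
    valid, message = validator(data)
    if not valid:
        raise ValueError(message)
```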
 
 
 
spaces/Deep1994/t5-paraphrase/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: T5 Paraphrase
3
- emoji: 👁
4
- colorFrom: red
5
- colorTo: yellow
6
- sdk: streamlit
7
- sdk_version: 1.2.0
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Dinoking/Guccio-AI-Designer/netdissect/nethook.py DELETED
@@ -1,266 +0,0 @@
1
- '''
2
- Utilities for instrumenting a torch model.
3
-
4
- InstrumentedModel will wrap a pytorch model and allow hooking
5
- arbitrary layers to monitor or modify their output directly.
6
-
7
- Modified by Erik Härkönen:
8
- - 29.11.2019: Unhooking bugfix
9
- - 25.01.2020: Offset edits, removed old API
10
- '''
11
-
12
- import torch, numpy, types
13
- from collections import OrderedDict
14
-
15
- class InstrumentedModel(torch.nn.Module):
16
- '''
17
- A wrapper for hooking, probing and intervening in pytorch Modules.
18
- Example usage:
19
-
20
- ```
21
- model = load_my_model()
22
- with InstrumentedModel(model) as inst:
23
- inst.retain_layer(layername)
24
- inst.edit_layer(layername, 0.5, target_features)
25
- inst.edit_layer(layername, offset=offset_tensor)
26
- inst(inputs)
27
- original_features = inst.retained_layer(layername)
28
- ```
29
- '''
30
-
31
- def __init__(self, model):
32
- super(InstrumentedModel, self).__init__()
33
- self.model = model
34
- self._retained = OrderedDict()
35
- self._ablation = {}
36
- self._replacement = {}
37
- self._offset = {}
38
- self._hooked_layer = {}
39
- self._old_forward = {}
40
-
41
- def __enter__(self):
42
- return self
43
-
44
- def __exit__(self, type, value, traceback):
45
- self.close()
46
-
47
- def forward(self, *inputs, **kwargs):
48
- return self.model(*inputs, **kwargs)
49
-
50
- def retain_layer(self, layername):
51
- '''
52
- Pass a fully-qualified layer name (e.g., module.submodule.conv3)
53
- to hook that layer and retain its output each time the model is run.
54
- A pair (layername, aka) can be provided, and the aka will be used
55
- as the key for the retained value instead of the layername.
56
- '''
57
- self.retain_layers([layername])
58
-
59
- def retain_layers(self, layernames):
60
- '''
61
- Retains a list of layers at once.
62
- '''
63
- self.add_hooks(layernames)
64
- for layername in layernames:
65
- aka = layername
66
- if not isinstance(aka, str):
67
- layername, aka = layername
68
- if aka not in self._retained:
69
- self._retained[aka] = None
70
-
71
- def retained_features(self):
72
- '''
73
- Returns a dict of all currently retained features.
74
- '''
75
- return OrderedDict(self._retained)
76
-
77
- def retained_layer(self, aka=None, clear=False):
78
- '''
79
- Retrieve retained data that was previously hooked by retain_layer.
80
- Call this after the model is run. If clear is set, then the
81
- retained value will be returned and also cleared.
82
- '''
83
- if aka is None:
84
- # Default to the first retained layer.
85
- aka = next(self._retained.keys().__iter__())
86
- result = self._retained[aka]
87
- if clear:
88
- self._retained[aka] = None
89
- return result
90
-
91
- def edit_layer(self, layername, ablation=None, replacement=None, offset=None):
92
- '''
93
- Pass a fully-qualified layer name (e.g., module.submodule.conv3)
94
- to hook that layer and modify its output each time the model is run.
95
- The output of the layer will be modified to be a convex combination
96
- of the replacement and x interpolated according to the ablation, i.e.:
97
- `output = x * (1 - a) + (r * a)`.
98
- Additionally or independently, an offset can be added to the output.
99
- '''
100
- if not isinstance(layername, str):
101
- layername, aka = layername
102
- else:
103
- aka = layername
104
-
105
- # The default ablation if a replacement is specified is 1.0.
106
- if ablation is None and replacement is not None:
107
- ablation = 1.0
108
- self.add_hooks([(layername, aka)])
109
- if ablation is not None:
110
- self._ablation[aka] = ablation
111
- if replacement is not None:
112
- self._replacement[aka] = replacement
113
- if offset is not None:
114
- self._offset[aka] = offset
115
- # If needed, could add an arbitrary postprocessing lambda here.
116
-
117
- def remove_edits(self, layername=None, remove_offset=True, remove_replacement=True):
118
- '''
119
- Removes edits at the specified layer, or removes edits at all layers
120
- if no layer name is specified.
121
- '''
122
- if layername is None:
123
- if remove_replacement:
124
- self._ablation.clear()
125
- self._replacement.clear()
126
- if remove_offset:
127
- self._offset.clear()
128
- return
129
-
130
- if not isinstance(layername, str):
131
- layername, aka = layername
132
- else:
133
- aka = layername
134
- if remove_replacement and aka in self._ablation:
135
- del self._ablation[aka]
136
- if remove_replacement and aka in self._replacement:
137
- del self._replacement[aka]
138
- if remove_offset and aka in self._offset:
139
- del self._offset[aka]
140
-
141
- def add_hooks(self, layernames):
142
- '''
143
- Sets up a set of layers to be hooked.
144
-
145
- Usually not called directly: use edit_layer or retain_layer instead.
146
- '''
147
- needed = set()
148
- aka_map = {}
149
- for name in layernames:
150
- aka = name
151
- if not isinstance(aka, str):
152
- name, aka = name
153
- if self._hooked_layer.get(aka, None) != name:
154
- aka_map[name] = aka
155
- needed.add(name)
156
- if not needed:
157
- return
158
- for name, layer in self.model.named_modules():
159
- if name in aka_map:
160
- needed.remove(name)
161
- aka = aka_map[name]
162
- self._hook_layer(layer, name, aka)
163
- for name in needed:
164
- raise ValueError('Layer %s not found in model' % name)
165
-
166
- def _hook_layer(self, layer, layername, aka):
167
- '''
168
- Internal method to replace a forward method with a closure that
169
- intercepts the call, and tracks the hook so that it can be reverted.
170
- '''
171
- if aka in self._hooked_layer:
172
- raise ValueError('Layer %s already hooked' % aka)
173
- if layername in self._old_forward:
174
- raise ValueError('Layer %s already hooked' % layername)
175
- self._hooked_layer[aka] = layername
176
- self._old_forward[layername] = (layer, aka,
177
- layer.__dict__.get('forward', None))
178
- editor = self
179
- original_forward = layer.forward
180
- def new_forward(self, *inputs, **kwargs):
181
- original_x = original_forward(*inputs, **kwargs)
182
- x = editor._postprocess_forward(original_x, aka)
183
- return x
184
- layer.forward = types.MethodType(new_forward, layer)
185
-
186
- def _unhook_layer(self, aka):
187
- '''
188
- Internal method to remove a hook, restoring the original forward method.
189
- '''
190
- if aka not in self._hooked_layer:
191
- return
192
- layername = self._hooked_layer[aka]
193
- layer, check, old_forward = self._old_forward[layername]
194
- assert check == aka
195
- if old_forward is None:
196
- if 'forward' in layer.__dict__:
197
- del layer.__dict__['forward']
198
- else:
199
- layer.forward = old_forward
200
- del self._old_forward[layername]
201
- del self._hooked_layer[aka]
202
- if aka in self._ablation:
203
- del self._ablation[aka]
204
- if aka in self._replacement:
205
- del self._replacement[aka]
206
- if aka in self._offset:
207
- del self._offset[aka]
208
- if aka in self._retained:
209
- del self._retained[aka]
210
-
211
- def _postprocess_forward(self, x, aka):
212
- '''
213
- The internal method called by the hooked layers after they are run.
214
- '''
215
- # Retain output before edits, if desired.
216
- if aka in self._retained:
217
- self._retained[aka] = x.detach()
218
-
219
- # Apply replacement edit
220
- a = make_matching_tensor(self._ablation, aka, x)
221
- if a is not None:
222
- x = x * (1 - a)
223
- v = make_matching_tensor(self._replacement, aka, x)
224
- if v is not None:
225
- x += (v * a)
226
-
227
- # Apply offset edit
228
- b = make_matching_tensor(self._offset, aka, x)
229
- if b is not None:
230
- x = x + b
231
-
232
- return x
233
-
234
- def close(self):
235
- '''
236
- Unhooks all hooked layers in the model.
237
- '''
238
- for aka in list(self._old_forward.keys()):
239
- self._unhook_layer(aka)
240
- assert len(self._old_forward) == 0
241
-
242
-
243
- def make_matching_tensor(valuedict, name, data):
244
- '''
245
- Converts `valuedict[name]` to be a tensor with the same dtype, device,
246
- and dimension count as `data`, and caches the converted tensor.
247
- '''
248
- v = valuedict.get(name, None)
249
- if v is None:
250
- return None
251
- if not isinstance(v, torch.Tensor):
252
- # Accept non-torch data.
253
- v = torch.from_numpy(numpy.array(v))
254
- valuedict[name] = v
255
- if not v.device == data.device or not v.dtype == data.dtype:
256
- # Ensure device and type matches.
257
- assert not v.requires_grad, '%s wrong device or type' % (name)
258
- v = v.to(device=data.device, dtype=data.dtype)
259
- valuedict[name] = v
260
- if len(v.shape) < len(data.shape):
261
- # Ensure dimensions are unsqueezed as needed.
262
- assert not v.requires_grad, '%s wrong dimensions' % (name)
263
- v = v.view((1,) + tuple(v.shape) +
264
- (1,) * (len(data.shape) - len(v.shape) - 1))
265
- valuedict[name] = v
266
- return v
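Editor's note: as a minimal sketch (not part of the deleted file), the retain/edit workflow described in the class docstring looks like this on a toy `torch.nn.Sequential`. It assumes `InstrumentedModel` from the file above is in scope; the layer name `'1'` simply addresses the ReLU submodule of the Sequential.

```python
import torch

# Toy model: submodules are named '0' (Linear) and '1' (ReLU) by Sequential.
net = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU())

with InstrumentedModel(net) as inst:
    inst.retain_layer('1')                              # capture the ReLU output
    inst.edit_layer('1', offset=torch.full([8], 0.5))   # add a constant offset edit
    out = inst(torch.randn(2, 8))                       # run the wrapped model
    relu_out = inst.retained_layer('1')                 # value retained before the edit
# Leaving the `with` block unhooks the layer again via close().
```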
 
 
 
 
spaces/DragGan/DragGan-Inversion/training/augment.py DELETED
@@ -1,562 +0,0 @@
1
- # Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
2
- #
3
- # NVIDIA CORPORATION and its licensors retain all intellectual property
4
- # and proprietary rights in and to this software, related documentation
5
- # and any modifications thereto. Any use, reproduction, disclosure or
6
- # distribution of this software and related documentation without an express
7
- # license agreement from NVIDIA CORPORATION is strictly prohibited.
8
-
9
- """Augmentation pipeline from the paper
10
- "Training Generative Adversarial Networks with Limited Data".
11
- Matches the original implementation by Karras et al. at
12
- https://github.com/NVlabs/stylegan2-ada/blob/main/training/augment.py"""
13
-
14
- import numpy as np
15
- import scipy.signal
16
- import torch
17
- from torch_utils import persistence
18
- from torch_utils import misc
19
- from torch_utils.ops import upfirdn2d
20
- from torch_utils.ops import grid_sample_gradfix
21
- from torch_utils.ops import conv2d_gradfix
22
-
23
- # ----------------------------------------------------------------------------
24
- # Coefficients of various wavelet decomposition low-pass filters.
25
-
26
- wavelets = {
27
- 'haar': [0.7071067811865476, 0.7071067811865476],
28
- 'db1': [0.7071067811865476, 0.7071067811865476],
29
- 'db2': [-0.12940952255092145, 0.22414386804185735, 0.836516303737469, 0.48296291314469025],
30
- 'db3': [0.035226291882100656, -0.08544127388224149, -0.13501102001039084, 0.4598775021193313, 0.8068915093133388, 0.3326705529509569],
31
- 'db4': [-0.010597401784997278, 0.032883011666982945, 0.030841381835986965, -0.18703481171888114, -0.02798376941698385, 0.6308807679295904, 0.7148465705525415, 0.23037781330885523],
32
- 'db5': [0.003335725285001549, -0.012580751999015526, -0.006241490213011705, 0.07757149384006515, -0.03224486958502952, -0.24229488706619015, 0.13842814590110342, 0.7243085284385744, 0.6038292697974729, 0.160102397974125],
33
- 'db6': [-0.00107730108499558, 0.004777257511010651, 0.0005538422009938016, -0.031582039318031156, 0.02752286553001629, 0.09750160558707936, -0.12976686756709563, -0.22626469396516913, 0.3152503517092432, 0.7511339080215775, 0.4946238903983854, 0.11154074335008017],
34
- 'db7': [0.0003537138000010399, -0.0018016407039998328, 0.00042957797300470274, 0.012550998556013784, -0.01657454163101562, -0.03802993693503463, 0.0806126091510659, 0.07130921926705004, -0.22403618499416572, -0.14390600392910627, 0.4697822874053586, 0.7291320908465551, 0.39653931948230575, 0.07785205408506236],
35
- 'db8': [-0.00011747678400228192, 0.0006754494059985568, -0.0003917403729959771, -0.00487035299301066, 0.008746094047015655, 0.013981027917015516, -0.04408825393106472, -0.01736930100202211, 0.128747426620186, 0.00047248457399797254, -0.2840155429624281, -0.015829105256023893, 0.5853546836548691, 0.6756307362980128, 0.3128715909144659, 0.05441584224308161],
36
- 'sym2': [-0.12940952255092145, 0.22414386804185735, 0.836516303737469, 0.48296291314469025],
37
- 'sym3': [0.035226291882100656, -0.08544127388224149, -0.13501102001039084, 0.4598775021193313, 0.8068915093133388, 0.3326705529509569],
38
- 'sym4': [-0.07576571478927333, -0.02963552764599851, 0.49761866763201545, 0.8037387518059161, 0.29785779560527736, -0.09921954357684722, -0.012603967262037833, 0.0322231006040427],
39
- 'sym5': [0.027333068345077982, 0.029519490925774643, -0.039134249302383094, 0.1993975339773936, 0.7234076904024206, 0.6339789634582119, 0.01660210576452232, -0.17532808990845047, -0.021101834024758855, 0.019538882735286728],
40
- 'sym6': [0.015404109327027373, 0.0034907120842174702, -0.11799011114819057, -0.048311742585633, 0.4910559419267466, 0.787641141030194, 0.3379294217276218, -0.07263752278646252, -0.021060292512300564, 0.04472490177066578, 0.0017677118642428036, -0.007800708325034148],
41
- 'sym7': [0.002681814568257878, -0.0010473848886829163, -0.01263630340325193, 0.03051551316596357, 0.0678926935013727, -0.049552834937127255, 0.017441255086855827, 0.5361019170917628, 0.767764317003164, 0.2886296317515146, -0.14004724044296152, -0.10780823770381774, 0.004010244871533663, 0.010268176708511255],
42
- 'sym8': [-0.0033824159510061256, -0.0005421323317911481, 0.03169508781149298, 0.007607487324917605, -0.1432942383508097, -0.061273359067658524, 0.4813596512583722, 0.7771857517005235, 0.3644418948353314, -0.05194583810770904, -0.027219029917056003, 0.049137179673607506, 0.003808752013890615, -0.01495225833704823, -0.0003029205147213668, 0.0018899503327594609],
43
- }
44
-
45
- # ----------------------------------------------------------------------------
46
- # Helpers for constructing transformation matrices.
47
-
48
-
49
- def matrix(*rows, device=None):
50
- assert all(len(row) == len(rows[0]) for row in rows)
51
- elems = [x for row in rows for x in row]
52
- ref = [x for x in elems if isinstance(x, torch.Tensor)]
53
- if len(ref) == 0:
54
- return misc.constant(np.asarray(rows), device=device)
55
- assert device is None or device == ref[0].device
56
- elems = [x if isinstance(x, torch.Tensor) else misc.constant(
57
- x, shape=ref[0].shape, device=ref[0].device) for x in elems]
58
- return torch.stack(elems, dim=-1).reshape(ref[0].shape + (len(rows), -1))
59
-
60
-
61
- def translate2d(tx, ty, **kwargs):
62
- return matrix(
63
- [1, 0, tx],
64
- [0, 1, ty],
65
- [0, 0, 1],
66
- **kwargs)
67
-
68
-
69
- def translate3d(tx, ty, tz, **kwargs):
70
- return matrix(
71
- [1, 0, 0, tx],
72
- [0, 1, 0, ty],
73
- [0, 0, 1, tz],
74
- [0, 0, 0, 1],
75
- **kwargs)
76
-
77
-
78
- def scale2d(sx, sy, **kwargs):
79
- return matrix(
80
- [sx, 0, 0],
81
- [0, sy, 0],
82
- [0, 0, 1],
83
- **kwargs)
84
-
85
-
86
- def scale3d(sx, sy, sz, **kwargs):
87
- return matrix(
88
- [sx, 0, 0, 0],
89
- [0, sy, 0, 0],
90
- [0, 0, sz, 0],
91
- [0, 0, 0, 1],
92
- **kwargs)
93
-
94
-
95
- def rotate2d(theta, **kwargs):
96
- return matrix(
97
- [torch.cos(theta), torch.sin(-theta), 0],
98
- [torch.sin(theta), torch.cos(theta), 0],
99
- [0, 0, 1],
100
- **kwargs)
101
-
102
-
103
- def rotate3d(v, theta, **kwargs):
104
- vx = v[..., 0]
105
- vy = v[..., 1]
106
- vz = v[..., 2]
107
- s = torch.sin(theta)
108
- c = torch.cos(theta)
109
- cc = 1 - c
110
- return matrix(
111
- [vx*vx*cc+c, vx*vy*cc-vz*s, vx*vz*cc+vy*s, 0],
112
- [vy*vx*cc+vz*s, vy*vy*cc+c, vy*vz*cc-vx*s, 0],
113
- [vz*vx*cc-vy*s, vz*vy*cc+vx*s, vz*vz*cc+c, 0],
114
- [0, 0, 0, 1],
115
- **kwargs)
116
-
117
-
118
- def translate2d_inv(tx, ty, **kwargs):
119
- return translate2d(-tx, -ty, **kwargs)
120
-
121
-
122
- def scale2d_inv(sx, sy, **kwargs):
123
- return scale2d(1 / sx, 1 / sy, **kwargs)
124
-
125
-
126
- def rotate2d_inv(theta, **kwargs):
127
- return rotate2d(-theta, **kwargs)
128
-
129
- # ----------------------------------------------------------------------------
130
- # Versatile image augmentation pipeline from the paper
131
- # "Training Generative Adversarial Networks with Limited Data".
132
- #
133
- # All augmentations are disabled by default; individual augmentations can
134
- # be enabled by setting their probability multipliers to 1.
135
-
136
-
137
- @persistence.persistent_class
138
- class AugmentPipe(torch.nn.Module):
139
- def __init__(self,
140
- xflip=0, rotate90=0, xint=0, xint_max=0.125,
141
- scale=0, rotate=0, aniso=0, xfrac=0, scale_std=0.2, rotate_max=1, aniso_std=0.2, xfrac_std=0.125,
142
- brightness=0, contrast=0, lumaflip=0, hue=0, saturation=0, brightness_std=0.2, contrast_std=0.5, hue_max=1, saturation_std=1,
143
- imgfilter=0, imgfilter_bands=[1, 1, 1, 1], imgfilter_std=1,
144
- noise=0, cutout=0, noise_std=0.1, cutout_size=0.5,
145
- ):
146
- super().__init__()
147
- # Overall multiplier for augmentation probability.
148
- self.register_buffer('p', torch.ones([]))
149
-
150
- # Pixel blitting.
151
- # Probability multiplier for x-flip.
152
- self.xflip = float(xflip)
153
- # Probability multiplier for 90 degree rotations.
154
- self.rotate90 = float(rotate90)
155
- # Probability multiplier for integer translation.
156
- self.xint = float(xint)
157
- # Range of integer translation, relative to image dimensions.
158
- self.xint_max = float(xint_max)
159
-
160
- # General geometric transformations.
161
- # Probability multiplier for isotropic scaling.
162
- self.scale = float(scale)
163
- # Probability multiplier for arbitrary rotation.
164
- self.rotate = float(rotate)
165
- # Probability multiplier for anisotropic scaling.
166
- self.aniso = float(aniso)
167
- # Probability multiplier for fractional translation.
168
- self.xfrac = float(xfrac)
169
- # Log2 standard deviation of isotropic scaling.
170
- self.scale_std = float(scale_std)
171
- # Range of arbitrary rotation, 1 = full circle.
172
- self.rotate_max = float(rotate_max)
173
- # Log2 standard deviation of anisotropic scaling.
174
- self.aniso_std = float(aniso_std)
175
- # Standard deviation of fractional translation, relative to image dimensions.
176
- self.xfrac_std = float(xfrac_std)
177
-
178
- # Color transformations.
179
- # Probability multiplier for brightness.
180
- self.brightness = float(brightness)
181
- # Probability multiplier for contrast.
182
- self.contrast = float(contrast)
183
- # Probability multiplier for luma flip.
184
- self.lumaflip = float(lumaflip)
185
- # Probability multiplier for hue rotation.
186
- self.hue = float(hue)
187
- # Probability multiplier for saturation.
188
- self.saturation = float(saturation)
189
- # Standard deviation of brightness.
190
- self.brightness_std = float(brightness_std)
191
- # Log2 standard deviation of contrast.
192
- self.contrast_std = float(contrast_std)
193
- # Range of hue rotation, 1 = full circle.
194
- self.hue_max = float(hue_max)
195
- # Log2 standard deviation of saturation.
196
- self.saturation_std = float(saturation_std)
197
-
198
- # Image-space filtering.
199
- # Probability multiplier for image-space filtering.
200
- self.imgfilter = float(imgfilter)
201
- # Probability multipliers for individual frequency bands.
202
- self.imgfilter_bands = list(imgfilter_bands)
203
- # Log2 standard deviation of image-space filter amplification.
204
- self.imgfilter_std = float(imgfilter_std)
205
-
206
- # Image-space corruptions.
207
- # Probability multiplier for additive RGB noise.
208
- self.noise = float(noise)
209
- # Probability multiplier for cutout.
210
- self.cutout = float(cutout)
211
- # Standard deviation of additive RGB noise.
212
- self.noise_std = float(noise_std)
213
- # Size of the cutout rectangle, relative to image dimensions.
214
- self.cutout_size = float(cutout_size)
215
-
216
- # Set up orthogonal lowpass filter for geometric augmentations.
217
- self.register_buffer(
218
- 'Hz_geom', upfirdn2d.setup_filter(wavelets['sym6']))
219
-
220
- # Construct filter bank for image-space filtering.
221
- Hz_lo = np.asarray(wavelets['sym2']) # H(z)
222
- Hz_hi = Hz_lo * ((-1) ** np.arange(Hz_lo.size)) # H(-z)
223
- Hz_lo2 = np.convolve(Hz_lo, Hz_lo[::-1]) / 2 # H(z) * H(z^-1) / 2
224
- Hz_hi2 = np.convolve(Hz_hi, Hz_hi[::-1]) / 2 # H(-z) * H(-z^-1) / 2
225
- Hz_fbank = np.eye(4, 1) # Bandpass(H(z), b_i)
226
- for i in range(1, Hz_fbank.shape[0]):
227
- Hz_fbank = np.dstack([Hz_fbank, np.zeros_like(Hz_fbank)]).reshape(
228
- Hz_fbank.shape[0], -1)[:, :-1]
229
- Hz_fbank = scipy.signal.convolve(Hz_fbank, [Hz_lo2])
230
- Hz_fbank[i, (Hz_fbank.shape[1] - Hz_hi2.size) //
231
- 2: (Hz_fbank.shape[1] + Hz_hi2.size) // 2] += Hz_hi2
232
- self.register_buffer('Hz_fbank', torch.as_tensor(
233
- Hz_fbank, dtype=torch.float32))
234
-
235
- def forward(self, images, debug_percentile=None):
236
- assert isinstance(images, torch.Tensor) and images.ndim == 4
237
- batch_size, num_channels, height, width = images.shape
238
- device = images.device
239
- if debug_percentile is not None:
240
- debug_percentile = torch.as_tensor(
241
- debug_percentile, dtype=torch.float32, device=device)
242
-
243
- # -------------------------------------
244
- # Select parameters for pixel blitting.
245
- # -------------------------------------
246
-
247
- # Initialize inverse homogeneous 2D transform: G_inv @ pixel_out ==> pixel_in
248
- I_3 = torch.eye(3, device=device)
249
- G_inv = I_3
250
-
251
- # Apply x-flip with probability (xflip * strength).
252
- if self.xflip > 0:
253
- i = torch.floor(torch.rand([batch_size], device=device) * 2)
254
- i = torch.where(torch.rand(
255
- [batch_size], device=device) < self.xflip * self.p, i, torch.zeros_like(i))
256
- if debug_percentile is not None:
257
- i = torch.full_like(i, torch.floor(debug_percentile * 2))
258
- G_inv = G_inv @ scale2d_inv(1 - 2 * i, 1)
259
-
260
- # Apply 90 degree rotations with probability (rotate90 * strength).
261
- if self.rotate90 > 0:
262
- i = torch.floor(torch.rand([batch_size], device=device) * 4)
263
- i = torch.where(torch.rand(
264
- [batch_size], device=device) < self.rotate90 * self.p, i, torch.zeros_like(i))
265
- if debug_percentile is not None:
266
- i = torch.full_like(i, torch.floor(debug_percentile * 4))
267
- G_inv = G_inv @ rotate2d_inv(-np.pi / 2 * i)
268
-
269
- # Apply integer translation with probability (xint * strength).
270
- if self.xint > 0:
271
- t = (torch.rand([batch_size, 2], device=device)
272
- * 2 - 1) * self.xint_max
273
- t = torch.where(torch.rand(
274
- [batch_size, 1], device=device) < self.xint * self.p, t, torch.zeros_like(t))
275
- if debug_percentile is not None:
276
- t = torch.full_like(
277
- t, (debug_percentile * 2 - 1) * self.xint_max)
278
- G_inv = G_inv @ translate2d_inv(torch.round(
279
- t[:, 0] * width), torch.round(t[:, 1] * height))
280
-
281
- # --------------------------------------------------------
282
- # Select parameters for general geometric transformations.
283
- # --------------------------------------------------------
284
-
285
- # Apply isotropic scaling with probability (scale * strength).
286
- if self.scale > 0:
287
- s = torch.exp2(torch.randn(
288
- [batch_size], device=device) * self.scale_std)
289
- s = torch.where(torch.rand(
290
- [batch_size], device=device) < self.scale * self.p, s, torch.ones_like(s))
291
- if debug_percentile is not None:
292
- s = torch.full_like(s, torch.exp2(torch.erfinv(
293
- debug_percentile * 2 - 1) * self.scale_std))
294
- G_inv = G_inv @ scale2d_inv(s, s)
295
-
296
- # Apply pre-rotation with probability p_rot.
297
- # P(pre OR post) = p
298
- p_rot = 1 - torch.sqrt((1 - self.rotate * self.p).clamp(0, 1))
299
- if self.rotate > 0:
300
- theta = (torch.rand([batch_size], device=device)
301
- * 2 - 1) * np.pi * self.rotate_max
302
- theta = torch.where(torch.rand(
303
- [batch_size], device=device) < p_rot, theta, torch.zeros_like(theta))
304
- if debug_percentile is not None:
305
- theta = torch.full_like(
306
- theta, (debug_percentile * 2 - 1) * np.pi * self.rotate_max)
307
- G_inv = G_inv @ rotate2d_inv(-theta) # Before anisotropic scaling.
308
-
309
- # Apply anisotropic scaling with probability (aniso * strength).
310
- if self.aniso > 0:
311
- s = torch.exp2(torch.randn(
312
- [batch_size], device=device) * self.aniso_std)
313
- s = torch.where(torch.rand(
314
- [batch_size], device=device) < self.aniso * self.p, s, torch.ones_like(s))
315
- if debug_percentile is not None:
316
- s = torch.full_like(s, torch.exp2(torch.erfinv(
317
- debug_percentile * 2 - 1) * self.aniso_std))
318
- G_inv = G_inv @ scale2d_inv(s, 1 / s)
319
-
320
- # Apply post-rotation with probability p_rot.
321
- if self.rotate > 0:
322
- theta = (torch.rand([batch_size], device=device)
323
- * 2 - 1) * np.pi * self.rotate_max
324
- theta = torch.where(torch.rand(
325
- [batch_size], device=device) < p_rot, theta, torch.zeros_like(theta))
326
- if debug_percentile is not None:
327
- theta = torch.zeros_like(theta)
328
- G_inv = G_inv @ rotate2d_inv(-theta) # After anisotropic scaling.
329
-
330
- # Apply fractional translation with probability (xfrac * strength).
331
- if self.xfrac > 0:
332
- t = torch.randn([batch_size, 2], device=device) * self.xfrac_std
333
- t = torch.where(torch.rand(
334
- [batch_size, 1], device=device) < self.xfrac * self.p, t, torch.zeros_like(t))
335
- if debug_percentile is not None:
336
- t = torch.full_like(t, torch.erfinv(
337
- debug_percentile * 2 - 1) * self.xfrac_std)
338
- G_inv = G_inv @ translate2d_inv(t[:, 0] * width, t[:, 1] * height)
339
-
340
- # ----------------------------------
341
- # Execute geometric transformations.
342
- # ----------------------------------
343
-
344
- # Execute if the transform is not identity.
345
- if G_inv is not I_3:
346
-
347
- # Calculate padding.
348
- cx = (width - 1) / 2
349
- cy = (height - 1) / 2
350
- cp = matrix([-cx, -cy, 1], [cx, -cy, 1], [cx, cy, 1],
351
- [-cx, cy, 1], device=device) # [idx, xyz]
352
- cp = G_inv @ cp.t() # [batch, xyz, idx]
353
- Hz_pad = self.Hz_geom.shape[0] // 4
354
- margin = cp[:, :2, :].permute(
355
- 1, 0, 2).flatten(1) # [xy, batch * idx]
356
- # [x0, y0, x1, y1]
357
- margin = torch.cat([-margin, margin]).max(dim=1).values
358
- margin = margin + \
359
- misc.constant([Hz_pad * 2 - cx, Hz_pad * 2 - cy]
360
- * 2, device=device)
361
- margin = margin.max(misc.constant([0, 0] * 2, device=device))
362
- margin = margin.min(misc.constant(
363
- [width-1, height-1] * 2, device=device))
364
- mx0, my0, mx1, my1 = margin.ceil().to(torch.int32)
365
-
366
- # Pad image and adjust origin.
367
- images = torch.nn.functional.pad(
368
- input=images, pad=[mx0, mx1, my0, my1], mode='reflect')
369
- G_inv = translate2d((mx0 - mx1) / 2, (my0 - my1) / 2) @ G_inv
370
-
371
- # Upsample.
372
- images = upfirdn2d.upsample2d(x=images, f=self.Hz_geom, up=2)
373
- G_inv = scale2d(
374
- 2, 2, device=device) @ G_inv @ scale2d_inv(2, 2, device=device)
375
- G_inv = translate2d(-0.5, -0.5,
376
- device=device) @ G_inv @ translate2d_inv(-0.5, -0.5, device=device)
377
-
378
- # Execute transformation.
379
- shape = [batch_size, num_channels,
380
- (height + Hz_pad * 2) * 2, (width + Hz_pad * 2) * 2]
381
- G_inv = scale2d(2 / images.shape[3], 2 / images.shape[2], device=device) @ G_inv @ scale2d_inv(
382
- 2 / shape[3], 2 / shape[2], device=device)
383
- grid = torch.nn.functional.affine_grid(
384
- theta=G_inv[:, :2, :], size=shape, align_corners=False)
385
- images = grid_sample_gradfix.grid_sample(images, grid)
386
-
387
- # Downsample and crop.
388
- images = upfirdn2d.downsample2d(
389
- x=images, f=self.Hz_geom, down=2, padding=-Hz_pad*2, flip_filter=True)
390
-
391
- # --------------------------------------------
392
- # Select parameters for color transformations.
393
- # --------------------------------------------
394
-
395
- # Initialize homogeneous 3D transformation matrix: C @ color_in ==> color_out
396
- I_4 = torch.eye(4, device=device)
397
- C = I_4
398
-
399
- # Apply brightness with probability (brightness * strength).
400
- if self.brightness > 0:
401
- b = torch.randn([batch_size], device=device) * self.brightness_std
402
- b = torch.where(torch.rand(
403
- [batch_size], device=device) < self.brightness * self.p, b, torch.zeros_like(b))
404
- if debug_percentile is not None:
405
- b = torch.full_like(b, torch.erfinv(
406
- debug_percentile * 2 - 1) * self.brightness_std)
407
- C = translate3d(b, b, b) @ C
408
-
409
- # Apply contrast with probability (contrast * strength).
410
- if self.contrast > 0:
411
- c = torch.exp2(torch.randn(
412
- [batch_size], device=device) * self.contrast_std)
413
- c = torch.where(torch.rand(
414
- [batch_size], device=device) < self.contrast * self.p, c, torch.ones_like(c))
415
- if debug_percentile is not None:
416
- c = torch.full_like(c, torch.exp2(torch.erfinv(
417
- debug_percentile * 2 - 1) * self.contrast_std))
418
- C = scale3d(c, c, c) @ C
419
-
420
- # Apply luma flip with probability (lumaflip * strength).
421
- # Luma axis.
422
- v = misc.constant(np.asarray([1, 1, 1, 0]) / np.sqrt(3), device=device)
423
- if self.lumaflip > 0:
424
- i = torch.floor(torch.rand([batch_size, 1, 1], device=device) * 2)
425
- i = torch.where(torch.rand(
426
- [batch_size, 1, 1], device=device) < self.lumaflip * self.p, i, torch.zeros_like(i))
427
- if debug_percentile is not None:
428
- i = torch.full_like(i, torch.floor(debug_percentile * 2))
429
- C = (I_4 - 2 * v.ger(v) * i) @ C # Householder reflection.
430
-
431
- # Apply hue rotation with probability (hue * strength).
432
- if self.hue > 0 and num_channels > 1:
433
- theta = (torch.rand([batch_size], device=device)
434
- * 2 - 1) * np.pi * self.hue_max
435
- theta = torch.where(torch.rand(
436
- [batch_size], device=device) < self.hue * self.p, theta, torch.zeros_like(theta))
437
- if debug_percentile is not None:
438
- theta = torch.full_like(
439
- theta, (debug_percentile * 2 - 1) * np.pi * self.hue_max)
440
- C = rotate3d(v, theta) @ C # Rotate around v.
441
-
442
- # Apply saturation with probability (saturation * strength).
443
- if self.saturation > 0 and num_channels > 1:
444
- s = torch.exp2(torch.randn(
445
- [batch_size, 1, 1], device=device) * self.saturation_std)
446
- s = torch.where(torch.rand(
447
- [batch_size, 1, 1], device=device) < self.saturation * self.p, s, torch.ones_like(s))
448
- if debug_percentile is not None:
449
- s = torch.full_like(s, torch.exp2(torch.erfinv(
450
- debug_percentile * 2 - 1) * self.saturation_std))
451
- C = (v.ger(v) + (I_4 - v.ger(v)) * s) @ C
452
-
453
- # ------------------------------
454
- # Execute color transformations.
455
- # ------------------------------
456
-
457
- # Execute if the transform is not identity.
458
- if C is not I_4:
459
- images = images.reshape([batch_size, num_channels, height * width])
460
- if num_channels == 3:
461
- images = C[:, :3, :3] @ images + C[:, :3, 3:]
462
- elif num_channels == 1:
463
- C = C[:, :3, :].mean(dim=1, keepdims=True)
464
- images = images * \
465
- C[:, :, :3].sum(dim=2, keepdims=True) + C[:, :, 3:]
466
- else:
467
- raise ValueError(
468
- 'Image must be RGB (3 channels) or L (1 channel)')
469
- images = images.reshape([batch_size, num_channels, height, width])
470
-
471
- # ----------------------
472
- # Image-space filtering.
473
- # ----------------------
474
-
475
- if self.imgfilter > 0:
476
- num_bands = self.Hz_fbank.shape[0]
477
- assert len(self.imgfilter_bands) == num_bands
478
- # Expected power spectrum (1/f).
479
- expected_power = misc.constant(
480
- np.array([10, 1, 1, 1]) / 13, device=device)
481
-
482
- # Apply amplification for each band with probability (imgfilter * strength * band_strength).
483
- # Global gain vector (identity).
484
- g = torch.ones([batch_size, num_bands], device=device)
485
- for i, band_strength in enumerate(self.imgfilter_bands):
486
- t_i = torch.exp2(torch.randn(
487
- [batch_size], device=device) * self.imgfilter_std)
488
- t_i = torch.where(torch.rand(
489
- [batch_size], device=device) < self.imgfilter * self.p * band_strength, t_i, torch.ones_like(t_i))
490
- if debug_percentile is not None:
491
- t_i = torch.full_like(t_i, torch.exp2(torch.erfinv(
492
- debug_percentile * 2 - 1) * self.imgfilter_std)) if band_strength > 0 else torch.ones_like(t_i)
493
- # Temporary gain vector.
494
- t = torch.ones([batch_size, num_bands], device=device)
495
- # Replace i'th element.
496
- t[:, i] = t_i
497
- # Normalize power.
498
- t = t / (expected_power * t.square()
499
- ).sum(dim=-1, keepdims=True).sqrt()
500
- # Accumulate into global gain.
501
- g = g * t
502
-
503
- # Construct combined amplification filter.
504
- # [batch, tap]
505
- Hz_prime = g @ self.Hz_fbank
506
- Hz_prime = Hz_prime.unsqueeze(1).repeat(
507
- [1, num_channels, 1]) # [batch, channels, tap]
508
- # [batch * channels, 1, tap]
509
- Hz_prime = Hz_prime.reshape([batch_size * num_channels, 1, -1])
510
-
511
- # Apply filter.
512
- p = self.Hz_fbank.shape[1] // 2
513
- images = images.reshape(
514
- [1, batch_size * num_channels, height, width])
515
- images = torch.nn.functional.pad(
516
- input=images, pad=[p, p, p, p], mode='reflect')
517
- images = conv2d_gradfix.conv2d(
518
- input=images, weight=Hz_prime.unsqueeze(2), groups=batch_size*num_channels)
519
- images = conv2d_gradfix.conv2d(
520
- input=images, weight=Hz_prime.unsqueeze(3), groups=batch_size*num_channels)
521
- images = images.reshape([batch_size, num_channels, height, width])
522
-
523
- # ------------------------
524
- # Image-space corruptions.
525
- # ------------------------
526
-
527
- # Apply additive RGB noise with probability (noise * strength).
528
- if self.noise > 0:
529
- sigma = torch.randn([batch_size, 1, 1, 1],
530
- device=device).abs() * self.noise_std
531
- sigma = torch.where(torch.rand(
532
- [batch_size, 1, 1, 1], device=device) < self.noise * self.p, sigma, torch.zeros_like(sigma))
533
- if debug_percentile is not None:
534
- sigma = torch.full_like(sigma, torch.erfinv(
535
- debug_percentile) * self.noise_std)
536
- images = images + \
537
- torch.randn([batch_size, num_channels, height,
538
- width], device=device) * sigma
539
-
540
- # Apply cutout with probability (cutout * strength).
541
- if self.cutout > 0:
542
- size = torch.full([batch_size, 2, 1, 1, 1],
543
- self.cutout_size, device=device)
544
- size = torch.where(torch.rand(
545
- [batch_size, 1, 1, 1, 1], device=device) < self.cutout * self.p, size, torch.zeros_like(size))
546
- center = torch.rand([batch_size, 2, 1, 1, 1], device=device)
547
- if debug_percentile is not None:
548
- size = torch.full_like(size, self.cutout_size)
549
- center = torch.full_like(center, debug_percentile)
550
- coord_x = torch.arange(width, device=device).reshape([1, 1, 1, -1])
551
- coord_y = torch.arange(
552
- height, device=device).reshape([1, 1, -1, 1])
553
- mask_x = (((coord_x + 0.5) / width -
554
- center[:, 0]).abs() >= size[:, 0] / 2)
555
- mask_y = (((coord_y + 0.5) / height -
556
- center[:, 1]).abs() >= size[:, 1] / 2)
557
- mask = torch.logical_or(mask_x, mask_y).to(torch.float32)
558
- images = images * mask
559
-
560
- return images
561
-
562
- # ----------------------------------------------------------------------------
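Editor's note: a minimal usage sketch (not part of the deleted file) for the pipeline above. Individual augmentations are enabled through their probability multipliers and the overall strength lives in the registered buffer `p`, which the training code is expected to adjust; the multiplier and strength values below are illustrative only, and the snippet assumes `AugmentPipe` and the `torch_utils` ops it imports are available.

```python
import torch

# Enable a few blitting and color augmentations; all others stay at their
# default multiplier of 0 and are therefore never applied.
augment_pipe = AugmentPipe(xflip=1, rotate90=1, xint=1, brightness=1, contrast=1)

# Overall augmentation probability (the `p` buffer registered in __init__).
augment_pipe.p.copy_(torch.as_tensor(0.6))

images = torch.randn(4, 3, 64, 64)    # NCHW float batch, RGB
augmented = augment_pipe(images)      # returns a batch of the same shape
```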