parquet-converter committed
Commit 62a5486 · 1 Parent(s): b295f2b

Update parquet files (step 11 of 249)

This view is limited to 50 files because it contains too many changes.

Files changed (50)
  1. spaces/0x1337/vector-inference/README.md +0 -12
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/BRAINWORX Bx Console WORK Keygen.md +0 -125
  3. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download WBCS Part 2 PDF for Free A Complete Guide to WBCS Mains Exam Papers.md +0 -35
  4. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dvtool 2.0 Beta 5 HOT Download.md +0 -218
  5. spaces/1gistliPinn/ChatGPT4/Examples/Ecology Exam Essay Questions.md +0 -6
  6. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/AetherSX2 best settings apk Tips and tricks for the best PS2 emulator on Android.md +0 -110
  7. spaces/1phancelerku/anime-remove-background/Become a Soccer Super Star with this Amazing Football MOD APK.md +0 -129
  8. spaces/1phancelerku/anime-remove-background/Download CSR Racing 2 MOD APK for iOS and Android Free Shopping and More.md +0 -92
  9. spaces/1phancelerku/anime-remove-background/Download Cars Movie for Free A Step-by-Step Guide.md +0 -281
  10. spaces/2ndelement/voicevox/test/test_acoustic_feature_extractor.py +0 -266
  11. spaces/801artistry/RVC801/go-applio.bat +0 -92
  12. spaces/A666sxr/Genshin_TTS/modules.py +0 -390
  13. spaces/AI-Chatbot-Master/Chatbots/README.md +0 -10
  14. spaces/AI-ZTH-03-23/2.Streamlit.GraphViz.Dynamic.Architecture.Diagram/app.py +0 -146
  15. spaces/AIConsultant/MusicGen/scripts/templates/results.html +0 -17
  16. spaces/AIFILMS/generate_human_motion/pyrender/pyrender/platforms/__init__.py +0 -6
  17. spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/conversation/[id]/stop-generating/$types.d.ts +0 -9
  18. spaces/Adapter/CoAdapter/ldm/modules/image_degradation/bsrgan.py +0 -730
  19. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/bracketparser2.js +0 -2
  20. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/Factory.js +0 -13
  21. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/RemoveChildMethods.js +0 -29
  22. spaces/Agusbs98/automatic-ecg-diagnosis/nets/layers.py +0 -29
  23. spaces/AixiaGreyatt/QQsign/bin/unidbg-fetch-qsign.bat +0 -89
  24. spaces/Aloento/9Nine-PITS/text/english.py +0 -122
  25. spaces/Alpaca233/SadTalker/src/face3d/models/networks.py +0 -521
  26. spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/utils/loading.py +0 -6
  27. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_v_pred.py +0 -540
  28. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_unclip.py +0 -137
  29. spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco.py +0 -45
  30. spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_x101_64x4d_fpn_mstrain_2x_coco.py +0 -14
  31. spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/robustness_eval.py +0 -250
  32. spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x512_20k_voc12aug.py +0 -2
  33. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/core/utils/__init__.py +0 -3
  34. spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/diffusionmodules/__init__.py +0 -0
  35. spaces/ArkanDash/rvc-models/infer_pack/models.py +0 -982
  36. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/train.py +0 -18
  37. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/README.md +0 -7
  38. spaces/Bart92/RVC_HF/configs/config.py +0 -265
  39. spaces/Benebene/Chat-question-answering/README.md +0 -12
  40. spaces/Benson/text-generation/Examples/Cielo Choque Seores De Clanes 3d Mod Apk Descargar.md +0 -58
  41. spaces/CVPR/Dual-Key_Backdoor_Attacks/bottom-up-attention-vqa/tools/create_dictionary.py +0 -71
  42. spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/compose_dataset.py +0 -358
  43. spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/butd/net.py +0 -73
  44. spaces/CVPR/LIVE/pybind11/tests/test_enum.py +0 -207
  45. spaces/CVPR/LIVE/thrust/CODE_OF_CONDUCT.md +0 -59
  46. spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/async/copy.h +0 -538
  47. spaces/CVPR/WALT/mmdet/datasets/pipelines/instaboost.py +0 -98
  48. spaces/CVPR/v-doc_abstractive_mac/main.py +0 -653
  49. spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/transformer.py +0 -974
  50. spaces/Cat125/text-generator-v2/README.md +0 -13
spaces/0x1337/vector-inference/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: Vector Inference
- emoji: 🏃
- colorFrom: pink
- colorTo: purple
- sdk: gradio
- app_file: app.py
- pinned: false
- license: wtfpl
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/BRAINWORX Bx Console WORK Keygen.md DELETED
@@ -1,125 +0,0 @@
1
- <br />
2
- <h1>BRAINWORX bx console keygen: A Comprehensive Review</h1>
3
- <p>If you are a music producer, engineer, or enthusiast who loves the sound and vibe of classic analog mixing consoles, you might have heard of <strong>BRAINWORX bx console</strong> plugins. These are software plugins that emulate the signal path, workflow, and sound of some of the most legendary consoles ever made, such as the Neve VXS, the SSL 4000 E and G, and the Focusrite Studio Console. These plugins offer a realistic and flexible way to add warmth, punch, depth, and character to your mixes, without having to spend a fortune on hardware gear.</p>
4
- <h2>BRAINWORX bx console keygen</h2><br /><p><b><b>Download</b> &#10042;&#10042;&#10042; <a href="https://byltly.com/2uKxub">https://byltly.com/2uKxub</a></b></p><br /><br />
5
- <p>However, there is a catch. These plugins are not cheap. Each one costs around $300, and if you want to get the whole bundle, you will have to shell out more than $2000. That is a lot of money for most people, especially if you are just starting out or working on a tight budget. So what can you do if you want to use these plugins but can't afford them? Well, one option is to use a <strong>keygen</strong>.</p>
6
- <p>A keygen is a software tool that can generate serial numbers or activation codes for software products that require them. By using a keygen, you can bypass the official registration process and unlock the full features and functionality of the software without paying anything. Sounds too good to be true, right? Well, it is not that simple. Using a keygen also comes with some risks and drawbacks, as well as some legal and ethical issues that you should be aware of before deciding to use one.</p>
7
- <p>In this article, I will provide you with an in-depth review of <strong>BRAINWORX bx console keygen</strong>, one of the most popular and widely used keygens for BRAINWORX bx console plugins. I will explain how it works, what it can do, how it compares to other similar tools and plugins, and what are some of the pros and cons of using it. I will also give you some alternative options for console emulation plugins that you might want to consider instead. By the end of this article, you should have a clear idea of whether BRAINWORX bx console keygen is worth using or not.</p>
8
- <h2>How does BRAINWORX bx console keygen work and what are its features?</h2>
9
- <p>BRAINWORX bx console keygen is a software tool that can generate serial numbers for different BRAINWORX bx console plugins. These serial numbers can then be used to activate the plugins on your computer and use them without any limitations or restrictions. The keygen works by exploiting a vulnerability in the plugin's registration system that allows it to generate valid serial numbers based on a specific algorithm.</p>
10
- <p></p>
11
- <h3> How to download and install the keygen</h3>
12
- <p>To use BRAINWORX bx console keygen, you will need to download and install it on your computer. There are many websites and forums that offer links to download the keygen, but you should be careful and avoid any suspicious or malicious sources that might contain viruses, malware, or spyware. One of the most reliable and trusted sources to download the keygen is <a href="">VST Crack</a>, a website that provides free downloads of various audio plugins and software tools.</p>
13
- <p>To download the keygen from VST Crack, you will need to follow these steps:</p>
14
- <ol>
15
- <li>Go to <a href="">https://vstcrack.net/brainworx-bx-console-keygen/</a> and click on the green "Download Now" button.</li>
16
- <li>You will be redirected to a page where you will have to complete a short survey or offer to unlock the download link. This is a security measure to prevent bots and spam. The survey or offer should not take more than a few minutes to complete.</li>
17
- <li>After completing the survey or offer, you will get access to the download link. Click on it and save the keygen file on your computer.</li>
18
- <li>Extract the keygen file using a program like WinRAR or 7-Zip. You should get a folder containing the keygen executable file and a readme file with instructions.</li>
19
- <li>Run the keygen executable file as an administrator. You might get a warning from your antivirus or firewall software, but you can ignore it as it is a false positive. The keygen is safe and does not contain any harmful code.</li>
20
- </ol>
21
- <p>Once you have installed the keygen, you are ready to generate serial numbers for different BRAINWORX bx console plugins.</p>
22
- <h3>How to generate serial numbers for different bx console plugins</h3>
23
- <p>BRAINWORX bx console keygen can generate serial numbers for 12 different bx console plugins. These are:</p>
24
- <ul>
25
- <li>BRAINWORX bx_console E</li>
26
- <li>BRAINWORX bx_console G</li>
27
- <li>BRAINWORX bx_console N</li>
28
- <li>BRAINWORX bx_console SSL 4000 E</li>
29
- <li>BRAINWORX bx_console SSL 4000 G</li>
30
- <li>BRAINWORX bx_console Focusrite SC</li>
31
- <li>BRAINWORX bx_console Neve VXS</li>
32
- <li>BRAINWORX bx_console Amek 9099</li>
33
- <li>BRAINWORX bx_console API 2500</li>
34
- <li>BRAINWORX bx_console API 550A</li>
35
- <li>BRAINWORX bx_console API 550B</li>
36
- <li>BRAINWORX bx_console API 560</li>
37
- </ul>
38
- <p>To generate serial numbers for these plugins, you will need to follow these steps:</p>
39
- <ol>
40
- <li>Open the keygen and select the plugin that you want to activate from the drop-down menu.</li>
41
- <li>Click on the "Generate" button and wait for a few seconds. The keygen will create a unique serial number for the selected plugin and display it in the text box below.</li>
42
- <li>Copy the serial number and paste it in a safe place. You will need it later to activate the plugin.</li>
43
- <li>Repeat steps 1-3 for any other plugins that you want to activate.</li>
44
- </ol>
45
- <h3>How to activate the plugins with the serial numbers</h3>
46
- <p>After generating serial numbers for the plugins that you want to use, you will need to activate them on your computer. To do this, you will need to follow these steps:</p>
47
- <ol>
48
- <li>Download and install the plugins from the official BRAINWORX website or any other source that you trust. Make sure that you download the latest version of the plugins and that they are compatible with your operating system and DAW.</li>
49
- <li>Open your DAW and load one of the plugins on a track or a bus. You should see a pop-up window asking you to enter your serial number.</li>
50
- <li>Paste the serial number that you generated with the keygen for that plugin and click on "Activate". The plugin should be activated and ready to use.</li>
51
- <li>Repeat steps 2-3 for any other plugins that you want to activate.</li>
52
- </ol>
53
- <h3>What are some of the features and options of the keygen</h3>
54
- <p>BRAINWORX bx console keygen is a simple and easy-to-use tool that does not have many features or options. However, there are some things that you can do with it to customize your experience and improve your workflow. These are:</p - You can change the language of the keygen interface by clicking on the flag icon on the top right corner. The keygen supports English, German, French, Spanish, Italian, Portuguese, Russian, Chinese, Japanese, and Korean languages. - You can check for updates and new versions of the keygen by clicking on the "Check for updates" button on the bottom left corner. The keygen will automatically download and install any available updates if you have an internet connection. - You can contact the developers of the keygen by clicking on the "Contact us" button on the bottom right corner. You can send them feedback, suggestions, bug reports, or any other inquiries that you might have. They will try to respond as soon as possible. <h2>How does BRAINWORX bx console keygen compare to other similar tools and plugins?</h2>
55
- <p>BRAINWORX bx console keygen is not the only tool that can generate serial numbers for audio plugins. There are many other keygens, cracks, patches, and hacks that claim to do the same thing. However, not all of them are reliable, safe, or effective. Some of them might not work at all, some of them might contain viruses or malware, and some of them might damage your system or compromise your security. Therefore, you should be careful and cautious when choosing a tool to use.</p>
56
- <p>One way to compare BRAINWORX bx console keygen with other similar tools is to look at their features, performance, compatibility, and reputation. Here are some of the criteria that you can use to evaluate different tools:</p>
57
- <ul>
58
- <li>Features: Does the tool offer any additional features or options that make it more convenient or useful? For example, does it support multiple languages, check for updates, or contact the developers?</li>
59
- <li>Performance: Does the tool work fast and smoothly without any errors or glitches? Does it generate valid serial numbers that activate the plugins without any issues? Does it consume a lot of resources or affect your system's performance?</li>
60
- <li>Compatibility: Does the tool work with different versions and formats of the plugins? Does it work with different operating systems and DAWs? Does it work with other plugins or software that you use?</li>
61
- <li>Reputation: Does the tool have a good reputation among users and experts? Does it have positive reviews and ratings? Does it have a lot of downloads and users? Does it have a reliable and trustworthy source?</li>
62
- </ul>
63
- <p>Based on these criteria, BRAINWORX bx console keygen is one of the best tools that you can use to generate serial numbers for BRAINWORX bx console plugins. It has a simple and user-friendly interface, a fast and stable performance, a high compatibility with different plugins and systems, and a good reputation among users and experts. It also has some features that make it more convenient and useful than other tools, such as language support, update check, and contact option.</p>
64
- <p>However, BRAINWORX bx console keygen is not perfect. It also has some drawbacks and limitations that you should be aware of before using it. These are:</p>
65
- <ul>
66
- <li>It is illegal and unethical to use a keygen to activate software products that you have not paid for. You are violating the terms and conditions of the software license agreement and infringing the intellectual property rights of the software developers. You could face legal consequences or penalties if you are caught using a keygen.</li>
67
- <li>It is risky and unsafe to use a keygen from an unknown or untrusted source. You could expose your system to viruses, malware, spyware, or other harmful code that could damage your data or compromise your security. You could also download fake or corrupted files that could cause errors or glitches in your system.</li>
68
- <li>It is unreliable and unpredictable to use a keygen for software products that are constantly updated or improved. You could encounter compatibility issues or activation problems if the software developers change or update their registration system or algorithm. You could also miss out on new features or bug fixes that are included in the latest versions of the software.</li>
69
- </ul>
70
- <h2>Conclusion</h2>
71
- <p>BRAINWORX bx console keygen is a software tool that can generate serial numbers for different BRAINWORX bx console plugins. These plugins are software plugins that emulate the sound and features of some of the most famous analog mixing consoles ever made. By using a keygen, you can activate these plugins without paying anything and use them without any limitations or restrictions.</p>
72
- <p>BRAINWORX bx console keygen is one of the best tools that you can use to generate serial numbers for BRAINWORX bx console plugins. It has a simple and user-friendly interface, a fast and stable performance, a high compatibility with different plugins and systems, and a good reputation among users and experts. It also has some features that make it more convenient and useful than other tools, such as language support, update check, and contact option.</p>
73
- <p>However, BRAINWORX bx console keygen is not perfect. It also has some drawbacks and limitations that you should be aware of before using it. These are:</p>
74
- <ul>
75
- <li>It is illegal and unethical to use a keygen to activate software products that you have not paid for. You are violating the terms and conditions of the software license agreement and infringing the intellectual property rights of the software developers. You could face legal consequences or penalties if you are caught using a keygen.</li>
76
- <li>It is risky and unsafe to use a keygen from an unknown or untrusted source. You could expose your system to viruses, malware, spyware, or other harmful code that could damage your data or compromise your security. You could also download fake or corrupted files that could cause errors or glitches in your system.</li>
77
- <li>It is unreliable and unpredictable to use a keygen for software products that are constantly updated or improved. You could encounter compatibility issues or activation problems if the software developers change or update their registration system or algorithm. You could also miss out on new features or bug fixes that are included in the latest versions of the software.</li>
78
- </ul>
79
- <p>Therefore, you should think carefully and weigh the pros and cons before deciding to use BRAINWORX bx console keygen. While it might seem tempting and convenient to use a keygen to get access to high-quality plugins for free, you might also face some serious risks and problems that could outweigh the benefits. You might also be violating the law and the ethics of the music industry by using a keygen.</p>
80
- <p>If you are looking for some alternative options for console emulation plugins that are legal, safe, and affordable, you might want to consider some of these:</p>
81
- <h2>Alternative options for console emulation plugins</h2>
82
- <p>BRAINWORX bx console plugins are not the only console emulation plugins that you can use to enhance your mixes. There are many other plugins that offer similar or different features and sound quality, depending on your preferences and needs. Some of these plugins are free, some of them are paid, and some of them offer both free and paid versions. Here are some of the most popular and recommended console emulation plugins that you might want to check out:</p>
83
- <h3>Waves SSL 4000 Collection</h3>
84
- <p><a href="">Waves SSL 4000 Collection</a> is a bundle of four plugins that emulate the sound and features of the SSL 4000 series consoles, one of the most iconic and widely used consoles in music history. The bundle includes:</p>
85
- <ul>
86
- <li>SSL E-Channel: A channel strip plugin that offers EQ, compression, gating, and filtering.</li>
87
- <li>SSL G-Channel: A channel strip plugin that offers EQ, compression, gating, filtering, and harmonic distortion.</li>
88
- <li>SSL G-Equalizer: A four-band equalizer plugin with a parametric LMF band.</li>
89
- <li>SSL G-Master Buss Compressor: A master buss compressor plugin that adds glue and punch to your mix.</li>
90
- </ul>
91
- <p>The Waves SSL 4000 Collection plugins are designed to faithfully recreate the sound and behavior of the original hardware units, with analog modeling and dynamic response. They also offer some additional features and options that enhance their flexibility and usability, such as sidechain filtering, stereo mode, analog noise control, input/output metering, and presets.</p>
92
- <p>The Waves SSL 4000 Collection plugins are compatible with most DAWs and operating systems. They cost $749 for the bundle, but they often go on sale for much lower prices. You can also try them for free for 7 days with a demo version.</p>
93
- <h3>Slate Digital Virtual Console Collection</h3>
94
- <p><a href="">Slate Digital Virtual Console Collection</a> is a bundle of two plugins that emulate the sound and features of six different analog consoles: SSL 4000 E, SSL 4000 G+, Neve 88RS, API Legacy Plus, Trident A-Range, and RCA BC6A. The bundle includes:</p>
95
- <ul>
96
- <li>VCC Channel: A channel strip plugin that offers drive, group selection, noise control, input/output metering, and presets.</li>
97
- <li>VCC Mixbuss: A mix buss plugin that offers drive, group selection, noise control, input/output metering, trim control, and presets.</li>
98
- </ul>
99
- <p>The Slate Digital Virtual Console Collection plugins are designed to emulate the sound and behavior of the original hardware units, with analog modeling and dynamic response. They also offer some additional features and options that enhance their flexibility and usability, such as group mode, oversampling, and calibration. They also allow you to mix and match different consoles and groups to create your own custom sound.</p>
100
- <p>The Slate Digital Virtual Console Collection plugins are compatible with most DAWs and operating systems. They cost $149 for the bundle, but they are also included in the Slate Digital All Access Pass, which gives you access to over 60 plugins and online courses for $14.99 per month or $149 per year. You can also try them for free for 15 days with a trial version.</p>
101
- <h3>Softube Console 1</h3>
102
- <p><a href="">Softube Console 1</a> is a hardware/software hybrid system that emulates the sound and features of different analog consoles. The system consists of:</p>
103
- <ul>
104
- <li>Console 1 Fader: A hardware controller that offers 10 touch-sensitive motorized faders, solo and mute buttons, layer mode, track selection, volume control, input/output metering, and presets.</li>
105
- <li>Console 1 MKII: A hardware controller that offers 18 dedicated knobs, LED display, solo and mute buttons, layer mode, track selection, drive control, input/output metering, and presets.</li>
106
- <li>Console 1 Software: A software plugin that offers EQ, compression, gate, transient shaper, saturation, high/low cut filters, input/output metering, and presets.</li>
107
- </ul>
108
- <p>The Softube Console 1 system is designed to emulate the sound and behavior of the original hardware units, with analog modeling and dynamic response. It also offers some additional features and options that enhance its flexibility and usability, such as parallel processing, sidechain filtering, stereo mode, analog noise control, and integration with other Softube plugins.</p>
109
- <p>The Softube Console 1 system is compatible with most DAWs and operating systems. It costs $1099 for the bundle of Console 1 Fader and Console 1 MKII controllers, or $499 for each controller separately. The Console 1 Software plugin is included with the controllers, but it can also be purchased separately for $199. The system also comes with four console emulation plugins: SSL SL 4000 E, Solid State Logic XL 9000 K-Series, British Class A For Console 1, and American Class A For Console 1. You can also buy other console emulation plugins from Softube or other developers that are compatible with the system.</p>
110
- <h2>FAQs</h2>
111
- <p>Here are some of the most frequently asked questions about BRAINWORX bx console keygen and their answers:</p>
112
- <h3>Is BRAINWORX bx console keygen safe to use?</h3>
113
- <p>BRAINWORX bx console keygen is safe to use if you download it from a reliable and trusted source like VST Crack. However, you should always scan any file that you download from the internet with a reputable antivirus or malware scanner before opening or running it. You should also backup your data and create a restore point on your system before installing or using any software tool that could potentially harm your system or compromise your security.</p>
114
- <h3>Is BRAINWORX bx console keygen legal to use?</h3>
115
- <p>BRAINWORX bx console keygen is not legal to use in most countries and jurisdictions. By using a keygen to activate software products that you have not paid for, you are violating the terms and conditions of the software license agreement and infringing the intellectual property rights of the software developers. You could face legal consequences or penalties if you are caught using a keygen. You could also be sued by the software developers or their representatives for damages or losses caused by your use of a keygen.</p>
116
- <h3>Does BRAINWORX bx console keygen work with all versions and formats of BRAINWORX bx console plugins?</h3>
117
- <p>BRAINWORX bx console keygen works with most versions and formats of BRAINWORX bx console plugins. However, it might not work with some newer or updated versions of the plugins that have changed or improved their registration system or algorithm. It might also not work with some formats or platforms that are not supported by the keygen. You should always check the compatibility and requirements of the plugins and the keygen before using them together.</p>
118
- <h3>Does BRAINWORX bx console keygen affect the sound quality or performance of BRAINWORX bx console plugins?</h3>
119
- <p>BRAINWORX bx console keygen does not affect the sound quality or performance of BRAINWORX bx console plugins. The keygen only generates serial numbers that activate the plugins on your computer. It does not modify or alter the code or functionality of the plugins in any way. The sound quality and performance of the plugins depend on their design and development by BRAINWORX, as well as your system's specifications and settings. The keygen does not affect these factors in any way.</p>
120
- <h3>Can I use BRAINWORX bx console keygen with other plugins or software that I use?</h3>
121
- <p>BRAINWORX bx console keygen can be used with other plugins or software that you use, as long as they are compatible and do not interfere with each other. However, you should be careful and avoid using too many plugins or software tools at the same time, as this could overload your system and cause crashes, errors, or glitches. You should also avoid using plugins or software tools that are illegal, unsafe, or unethical, as this could harm your system or compromise your security.</p>
122
- <h2></h2>
123
- <p>This concludes my article on BRAINWORX bx console keygen. I hope you found it informative and helpful. If you have any questions, comments, or feedback, please feel free to contact me. Thank you for reading and have a great day!</p> b2dd77e56b<br />
124
- <br />
125
- <br />
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download WBCS Part 2 PDF for Free A Complete Guide to WBCS Mains Exam Papers.md DELETED
@@ -1,35 +0,0 @@
-
- <h1>How to Download WBCS Part 2 PDF for Free: A Useful Study Material for WBCS Exam</h1>
- <p>If you are preparing for the West Bengal Civil Service (WBCS) exam, you might be looking for some useful study materials that can help you cover the syllabus and practice the questions. One such study material is the WBCS Part 2 PDF, which is a collection of previous year papers of WBCS Mains exam. In this article, we will show you how to download WBCS Part 2 PDF for free and what are its features and benefits.</p>
- <h2>crack wbcs part 2 pdf free download</h2><br /><p><b><b>Download Zip</b> --->>> <a href="https://byltly.com/2uKwKo">https://byltly.com/2uKwKo</a></b></p><br /><br />
- <h2>What is WBCS Part 2 PDF?</h2>
- <p>WBCS Part 2 PDF is a study material that contains the previous year papers of WBCS Mains exam from 2014 to 2020. It covers all the six compulsory papers of WBCS Mains exam, namely:</p>
- <ul>
- <li>Paper I: Bengali/Hindi/Urdu/Nepali/Santali</li>
- <li>Paper II: English</li>
- <li>Paper III: General Studies I</li>
- <li>Paper IV: General Studies II</li>
- <li>Paper V: Indian Constitution and Economy</li>
- <li>Paper VI: Arithmetic and Test of Reasoning</li>
- </ul>
- <p>Each paper consists of 200 marks and has a duration of 150 minutes. The papers are available in both English and Bengali languages. The papers are also accompanied by detailed solutions and explanations.</p>
- <h2>What are the features of WBCS Part 2 PDF?</h2>
- <p>WBCS Part 2 PDF has many features that make it a useful and reliable study material for WBCS exam. Some of the features are:</p>
- <ul>
- <li><b>Authentic and updated:</b> WBCS Part 2 PDF contains the official papers of WBCS Mains exam that are released by the West Bengal Public Service Commission (WBPSC). The papers are also updated with the latest changes and trends in the exam pattern and syllabus.</li>
- <li><b>Comprehensive and diverse:</b> WBCS Part 2 PDF covers all the topics and sub-topics of the WBCS Mains exam syllabus. It also provides a variety of questions from different difficulty levels and formats.</li>
- <li><b>Solved and explained:</b> WBCS Part 2 PDF provides detailed solutions and explanations for each question. It also provides tips and tricks to solve the questions faster and accurately.</li>
- <li><b>Free and downloadable:</b> WBCS Part 2 PDF is available for free download from various online sources. You can download it on your computer or mobile device and access it anytime and anywhere.</li>
- </ul>
- <h2>How to download WBCS Part 2 PDF for free?</h2>
- <p>If you want to download WBCS Part 2 PDF for free, you can do so from the following online sources:</p>
- <p></p>
- <ol>
- <li><a href="https://testbook.com/wbcs/previous-year-papers">Testbook.com</a>: This is a website that provides various study materials and mock tests for various competitive exams. You can download WBCS Part 2 PDF from this website by clicking on the "Download" button or by starting a free test.</li>
- <li><a href="https://www.wbcsmadeeasy.in/knowledge-and-download/free-study-materials-for-wbcs-exam/">WBCSMadeEasy.in</a>: This is a website that provides coaching and guidance for WBCS exam. You can download WBCS Part 2 PDF from this website by clicking on the "Download" link or by registering on the website.</li>
- <li><a href="https://www.studyiq.com/articles/west-bengals-gi-tag-part-2-wbcs-exam-free-pdf-download/">StudyIQ.com</a>: This is a website that provides articles and videos on various topics related to current affairs and general studies. You can download WBCS Part 2 PDF from this website by clicking on the "Download" link or by subscribing to their YouTube channel.</li>
- </ol>
- <h2>Conclusion</h2>
- <p>WBCS Part 2 PDF is a free and useful study material that can help you prepare for the WBCS Mains exam. It contains the previous</p> ddb901b051<br />
- <br />
- <br />
 
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Dvtool 2.0 Beta 5 HOT Download.md DELETED
@@ -1,218 +0,0 @@
1
- <br />
2
- <br> - What is DV Dongle? <br> - What is DVTool software? | | H2: Why do you need Dvtool 2.0 Beta 5? | - What are the features of Dvtool 2.0 Beta 5? <br> - What are the benefits of using Dvtool 2.0 Beta 5? <br> - How does Dvtool 2.0 Beta 5 improve your D-Star experience? | | H2: How to download and install Dvtool 2.0 Beta 5? | - Where to download Dvtool 2.0 Beta 5? <br> - How to install Dvtool 2.0 Beta 5 on Windows? <br> - How to install Dvtool 2.0 Beta 5 on Mac OS X? | | H2: How to use Dvtool 2.0 Beta 5? | - How to connect DV Dongle to your PC or Mac? <br> - How to configure Dvtool settings? <br> - How to access D-Star reflectors and repeaters? <br> - How to communicate with other D-Star users? | | H2: Tips and tricks for using Dvtool 2.0 Beta 5 | - How to update Dvtool software? <br> - How to troubleshoot common issues with Dvtool? <br> - How to optimize your audio quality with Dvtool? <br> - How to customize your D-Star profile with Dvtool? | | H2: Conclusion | Summary of the main points and call to action | | H3: FAQs | - What are the system requirements for using Dvtool 2.0 Beta 5? <br> - Is Dvtool 2.0 Beta 5 compatible with other versions of DV Dongle or DVAP? <br> - Is Dvtool 2.0 Beta 5 free or paid software? <br> - Where can I find more information or support for using Dvtool 2.0 Beta 5? <br> - What are some alternatives to using Dvtool 2.0 Beta 5? | Article <h1>Dvtool 2.0 Beta 5 Download: Everything You Need to Know</h1>
3
- <p>If you are a fan of digital voice communication in amateur radio, you have probably heard of D-Star, DV Dongle, and DVTool software. These are some of the tools that enable you to access the worldwide network of D-Star repeaters and reflectors from your PC or Mac.</p>
4
- <h2>Dvtool 2.0 Beta 5 Download</h2><br /><p><b><b>DOWNLOAD</b> ::: <a href="https://byltly.com/2uKzUe">https://byltly.com/2uKzUe</a></b></p><br /><br />
5
- <p>But did you know that there is a new version of DVTool software available for download? It's called Dvtool 2.0 Beta 5, and it offers some exciting features and improvements that will enhance your D-Star experience.</p>
6
- <p>In this article, we will tell you everything you need to know about Dvtool 2.0 Beta 5, including what it is, why you need it, how to download and install it, how to use it, and some tips and tricks for getting the most out of it.</p>
7
- <p>So, if you are ready to take your digital voice communication to the next level, read on!</p>
8
- <p></p>
9
- <h2>What is Dvtool?</h2>
10
- <p>Before we dive into the details of Dvtool 2.0 Beta 5, let's first review what Dvtool is and how it works with D-Star and DV Dongle.</p>
11
- <h3>What is D-Star?</h3>
12
- <p>D-Star stands for Digital Smart Technologies for Amateur Radio <p>D-Star is a digital voice and data protocol that was developed by the Japan Amateur Radio League (JARL) in the late 1990s. It allows amateur radio operators to communicate with each other over long distances using digital signals that are transmitted and received by D-Star compatible radios, repeaters, and reflectors.</p>
13
- <p>A D-Star repeater is a device that receives a D-Star signal from a radio and retransmits it to another radio or to a reflector. A D-Star reflector is a server that connects multiple repeaters and radios over the internet, creating a global network of D-Star users.</p>
14
- <p>D-Star offers several advantages over analog voice communication, such as clearer audio quality, less interference, more efficient use of bandwidth, and the ability to transmit data along with voice, such as GPS coordinates, text messages, images, and files.</p>
15
- <h3>What is DV Dongle?</h3>
16
- <p>A DV Dongle is a device that allows you to access the D-Star network from your PC or Mac without using a radio. It is a USB dongle that contains a digital signal processor (DSP) and a codec that converts analog audio signals to digital D-Star signals and vice versa.</p>
17
- <p>By connecting a DV Dongle to your PC or Mac and using a headset or microphone and speakers, you can communicate with other D-Star users over the internet. You can also use a DV Dongle to listen to D-Star transmissions and monitor the activity on different repeaters and reflectors.</p>
18
- <h3>What is DVTool software?</h3>
19
- <p>DVTool software is a program that allows you to control and configure your DV Dongle from your PC or Mac. It also provides a graphical user interface (GUI) that displays information about the D-Star network, such as the list of available repeaters and reflectors, the call signs of the users who are connected, and the status of your DV Dongle.</p>
20
- <p>DVTool software also enables you to connect your DV Dongle to any D-Star repeater or reflector that you choose, and to switch between them easily. You can also use DVTool software to adjust the audio settings of your DV Dongle, such as the volume, gain, and compression.</p>
21
- <h2>Why do you need Dvtool 2.0 Beta 5?</h2>
22
- <p>Now that you know what Dvtool is and how it works with D-Star and DV Dongle, you might be wondering why you need Dvtool 2.0 Beta 5. After all, there are already several versions of DVTool software available for download, such as DVTool 1.05, DVTool 2.0 Beta 1, DVTool 2.0 Beta 2, DVTool 2.0 Beta 3, and DVTool 2.0 Beta 4.</p>
23
- <p>Well, the answer is simple: Dvtool 2.0 Beta 5 is the latest and most advanced version of DVTool software that offers some new features and improvements that will make your D-Star experience even better. Here are some of them:</p>
24
- <h3>What are the features of Dvtool 2.0 Beta 5?</h3>
25
- <p>Some of the features of Dvtool 2.0 Beta 5 are:</p>
26
- <ul>
27
- <li>It supports both Windows and Mac OS X operating systems.</li>
28
- <li>It has a redesigned GUI that is more user-friendly and intuitive.</li>
29
- <li>It has a new audio engine that improves the sound quality and reduces the latency.</li>
30
- <li>It has a new echo test feature that allows you to test your audio settings before connecting to a repeater or reflector.</li>
31
- <li>It has a new auto-connect feature that automatically connects your DV Dongle to the last repeater or reflector that you used.</li>
32
- <li>It has a new auto-update feature that checks for new versions of DVTool software and downloads them automatically.</li>
33
- <li>It has a new logging feature that records your D-Star activity in a text file.</li>
34
- <li>It has a new help feature that provides online documentation and support for using DVTool software.</li>
35
- </ul>
36
- <h3>What are the benefits of using Dvtool 2.0 Beta 5?</h3>
37
- <p>Some of the benefits of using Dvtool 2.0 Beta 5 are:</p>
38
- <ul>
39
- <li>It allows you to access the D-Star network from your PC or Mac without using a radio.</li>
40
- <li>It allows you to communicate with other D-Star users around the world using digital voice and data.</li>
41
- <li>It allows you to enjoy clearer audio quality, less interference, more efficient use of bandwidth, and the ability to transmit data along with voice.</li>
42
- <li>It allows you to access a wider range of repeaters and reflectors that may not be available in your area or frequency.</li>
43
- <li>It allows you to monitor the activity on different repeaters and reflectors and discover new contacts and conversations.</li>
44
- <li>It allows you to adjust the audio settings of your DV Dongle to suit your preferences and environment.</li>
45
- <li>It allows you to update your DVTool software easily and automatically.</li>
46
- <li>It allows you to troubleshoot common issues with your DV Dongle and DVTool software.</li>
47
- <li>It allows you to customize your D-Star profile and display information about yourself and your station.</li>
48
- </ul>
49
- <h3>How does Dvtool 2.0 Beta 5 improve your D-Star experience?</h3>
50
- <p>By using Dvtool 2.0 Beta 5, you can improve your D-Star experience in several ways, such as:</p>
51
- <ul>
52
- <li>You can enjoy a smoother and more stable connection to the D-Star network, thanks to the improved audio engine and the redesigned GUI.</li>
53
- <li>You can test your audio settings before connecting to a repeater or reflector, thanks to the new echo test feature.</li>
54
- <li>You can save time and hassle by automatically connecting to the last repeater or reflector that you used, thanks to the new auto-connect feature.</li>
55
- <li>You can keep your DVTool software up to date and secure, thanks to the new auto-update feature.</li>
56
- <li>You can keep track of your D-Star activity and review it later, thanks to the new logging feature.</li>
57
- <li>You can get help and support for using DVTool software, thanks to the new help feature.</li>
58
- </ul>
59
- <h2>How to download and install Dvtool 2.0 Beta 5?</h2>
60
- <p>Now that you know why you need Dvtool 2.0 Beta 5 and what it can do for you, you might be wondering how to download and install it on your PC or Mac. Don't worry, it's very easy and straightforward. Just follow these steps:</p>
61
- <h3>Where to download Dvtool 2.0 Beta 5?</h3>
62
- <p>The official website for downloading Dvtool 2.0 Beta 5 is <a href="">http://www.dvdongle.com/DV_Dongle/Home.html</a>. This is where you can find the latest version of DVTool software for both Windows and Mac OS X operating systems.</p>
63
- <p>To download Dvtool 2.0 Beta 5, simply click on the link that corresponds to your operating system. For example, if you are using Windows, click on the link that says "DVTool-2.0beta5.exe". If you are using Mac OS X, click on the link that says "DVTool-2.0beta5.dmg".</p>
64
- <p>The download process will start automatically and may take a few minutes depending on your internet speed. Once the download is complete, you will have a file named "DVTool-2.0beta5.exe" or "DVTool-2.0beta5.dmg" in your downloads folder or wherever you saved it.</p>
65
- <h3>How to install Dvtool 2.0 Beta 5 on Windows?</h3>
66
- <p>To install Dvtool 2.0 Beta 5 on Windows, follow these steps:</p>
67
- <ol>
68
- <li>Double-click on the file named "DVTool-2.0beta5.exe" that you downloaded earlier.</li>
69
- <li>A window will pop up asking you if you want to run this file. Click on "Run".</li>
70
- <li>A window will pop up asking you if you want to allow this app to make changes to your device. Click on "Yes".</li>
71
- <li>A window will pop up showing you the setup wizard for DVTool software. Click on "Next".</li>
72
- <li>A window will pop up asking you to accept the license agreement for DVTool software. Read the agreement carefully and click on "I Agree".</li>
73
- <li>A window will pop up asking you to choose the destination folder for installing DVTool software. You can leave it as default or change it if you want. Click on "Next".</li>
74
- <li>A window will pop up asking you to confirm the installation settings. Click on "Install".</li>
75
- <li>The installation process will begin and may take a few minutes depending on your computer speed. A window will pop up showing you the progress of the installation.</li>
76
- <li>Once the installation is complete, a window will pop up asking you if you want to launch DVTool software now. Click on "Finish".</li>
77
- </ol>
78
- <p>Congratulations! You have successfully installed Dvtool 2.0 Beta 5 on your Windows PC. You are now ready to use it with your DV Dongle and access the D-Star network.</p>
79
- <h3>How to install Dvtool 2.0 Beta 5 on Mac OS X?</h3>
80
- <p>To install Dvtool 2.0 Beta 5 on Mac OS X, follow these steps:</p>
81
- <ol>
82
- <li>Double-click on the file named "DVTool-2.0beta5.dmg" that you downloaded earlier.</li>
83
- <li>A window will pop up showing you the DVTool software icon and a folder named "Applications". Drag and drop the DVTool software icon into the Applications folder.</li>
84
- <li>A window will pop up asking you to confirm that you want to copy DVTool software to the Applications folder. Click on "Authenticate".</li>
85
- <li>A window will pop up asking you to enter your administrator password. Enter your password and click on "OK".</li>
86
- <li>The copying process will begin and may take a few minutes depending on your computer speed. A window will pop up showing you the progress of the copying.</li>
87
- <li>Once the copying is complete, a window will pop up showing you that DVTool software is in your Applications folder. You can close this window and eject the DVTool software disk image.</li>
88
- </ol>
89
- <p>Congratulations! You have successfully installed Dvtool 2.0 Beta 5 on your Mac OS X. You are now ready to use it with your DV Dongle and access the D-Star network.</p>
90
- <h2>How to use Dvtool 2.0 Beta 5?</h2>
91
- <p>Now that you have downloaded and installed Dvtool 2.0 Beta 5 on your PC or Mac, you might be wondering how to use it with your DV Dongle and access the D-Star network. Don't worry, it's very easy and fun. Just follow these steps:</p>
92
- <h3>How to connect DV Dongle to your PC or Mac?</h3>
93
- <p>To connect your DV Dongle to your PC or Mac, follow these steps:</p>
94
- <ol>
95
- <li>Make sure that your PC or Mac is connected to the internet and has a working sound card, headset or microphone, and speakers.</li>
96
- <li>Plug your DV Dongle into a free USB port on your PC or Mac.</li>
97
- <li>Wait for a few seconds until your PC or Mac recognizes your DV Dongle and installs the necessary drivers.</li>
98
- <li>You should see a blue LED light on your DV Dongle indicating that it is powered on and ready to use.</li>
99
- </ol>
100
- <h3>How to configure Dvtool settings?</h3>
101
- <p>To configure your Dvtool settings, follow these steps:</p>
102
- <ol>
103
- <li>Launch the DVTool software from your desktop or applications folder.</li>
104
- <li>A window will pop up showing you the main interface of DVTool software.</li>
105
- <li>Click on the "Settings" button at the top right corner of the window.</li>
106
- <li>A window will pop up showing you the settings menu of DVTool software.</li>
107
- <li>You can adjust various settings here, such as:</li>
108
- <ul>
109
- <li>Your call sign: Enter your amateur radio call sign in the box provided. This is how other D-Star users will identify you on the network.</li>
110
- <li>Your name: Enter your name in the box provided. This is how other D-Star users will greet you on the network.</li>
111
- <li>Your location: Enter your city and country in the box provided. This is how other D-Star users will know where you are from on the network.</li>
112
- <li>Your message: Enter a short message in the box provided. This is what other D-Star users will see when they connect to you on the network.</li>
113
- <li>Your audio input device: Select the device that you are using to capture your voice, such as a headset or microphone, from the drop-down menu.</li>
114
- <li>Your audio output device: Select the device that you are using to play back other users' voices, such as speakers or headphones, from the drop-down menu.</li>
115
- <li>Your audio input level: Adjust the slider to set the volume of your voice input. You can also use the "Test" button to test your audio input level and hear how you sound.</li>
116
- <li>Your audio output level: Adjust the slider to set the volume of other users' voice output. You can also use the "Test" button to test your audio output level and hear how others sound.</li>
117
- </ul>
118
- <li>Once you are done with adjusting your settings, click on the "OK" button to save them and close the window.</li>
119
- </ol>
120
- <h3>How to access D-Star reflectors and repeaters?</h3>
121
- <p>To access D-Star reflectors and repeaters, follow these steps:</p>
122
- <ol>
123
- <li>On the main interface of DVTool software, click on the "Connect" button at the top left corner of the window.</li>
124
- <li>A window will pop up showing you the list of available D-Star reflectors and repeaters that you can connect to.</li>
125
- <li>You can use the search box to find a specific reflector or repeater by its name, call sign, or location.</li>
126
- <li>You can also use the filter buttons to narrow down the list by category, such as "All", "Favorites", "Local", "International", or "Hotspots".</li>
127
- <li>Once you find the reflector or repeater that you want to connect to, double-click on it or select it and click on the "Connect" button at the bottom of the window.</li>
128
- <li>A window will pop up showing you the status of your connection. You should see a green LED light on your DV Dongle indicating that it is connected to the reflector or repeater.</li>
129
- <li>You should also see a message on the main interface of DVTool software saying "Connected to [reflector or repeater name]".</li>
130
- <li>You can now communicate with other D-Star users who are connected to the same reflector or repeater as you.</li>
131
- </ol>
132
- <h3>How to communicate with other D-Star users?</h3>
133
- <p>To communicate with other D-Star users, follow these steps:</p>
134
- <ol>
135
- <li>Make sure that your DV Dongle is connected to a reflector or repeater that has other users online.</li>
136
- <li>Put on your headset or microphone and speakers and adjust your audio input and output levels as needed.</li>
137
- <li>Press and hold the "PTT" button on your DV Dongle or on your keyboard (usually the space bar) to transmit your voice.</li>
138
- <li>Speak clearly and politely into your microphone and introduce yourself with your call sign, name, and location.</li>
139
- <li>Release the "PTT" button when you are done speaking and wait for a response from other users.</li>
140
- <li>If you hear a response from another user, you can reply by pressing and holding the "PTT" button again and speaking into your microphone.</li>
141
- <li>If you don't hear a response from another user, you can try calling again or switch to another reflector or repeater that has more activity.</li>
142
- <li>You can also listen to other users' conversations and join them if they invite you or if they are open to new contacts.</li>
143
- </ol>
144
- <h2>Tips and tricks for using Dvtool 2.0 Beta 5</h2>
145
- <p>By following the steps above, you should be able to use Dvtool 2.0 Beta 5 with your DV Dongle and access the D-Star network without any problems. However, there are some tips and tricks that can help you get even more out of Dvtool 2.0 Beta 5 and make your D-Star experience more enjoyable and efficient. Here are some of them:</p>
146
- <h3>How to update Dvtool software?</h3>
147
- <p>To update your Dvtool 2.0 Beta 5 software, follow these steps:</p>
148
- <ol>
149
- <li>Launch the DVTool software from your desktop or applications folder.</li>
150
- <li>A window will pop up showing you the main interface of DVTool software.</li>
151
- <li>Click on the "Help" button at the top right corner of the window.</li>
152
- <li>A window will pop up showing you the help menu of DVTool software.</li>
153
- <li>Click on the "Check for Updates" option.</li>
154
- <li>A window will pop up showing you if there are any new versions of DVTool software available for download.</li>
155
- <li>If there are no new versions available, you will see a message saying "You have the latest version of DVTool". You can close this window and continue using DVTool software as usual.</li>
156
- <li>If there are new versions available, you will see a message saying "A new version of DVTool is available". You can click on the "Download" button to download the new version of DVTool software and install it following the same steps as before.</li>
157
- <li>Once the installation is complete, you will have the latest version of DVTool software on your PC or Mac. You can close this window and enjoy the new features and improvements of DVTool software.</li>
158
- </ol>
159
- <h3>How to troubleshoot common issues with Dvtool?</h3>
160
- <p>Sometimes, you may encounter some issues with your Dvtool 2.0 Beta 5 software or your DV Dongle that may affect your D-Star experience. Don't panic, most of these issues can be easily fixed by following some simple troubleshooting steps. Here are some of the common issues and how to fix them:</p>
161
- <ul>
162
- <li>Your DV Dongle is not recognized by your PC or Mac: This may happen if your USB port is faulty, your USB cable is loose, your DV Dongle is damaged, or your drivers are outdated. To fix this, try plugging your DV Dongle into a different USB port, using a different USB cable, checking your DV Dongle for any physical damage, or updating your drivers from the official website.</li>
163
- <li>Your DV Dongle is not connected to the D-Star network: This may happen if your internet connection is unstable, your firewall or antivirus is blocking the DVTool software, your reflector or repeater is offline, or your settings are incorrect. To fix this, try restarting your modem or router, disabling your firewall or antivirus temporarily, choosing a different reflector or repeater, or checking your settings for any errors.</li>
164
- <li>Your audio quality is poor or distorted: This may happen if your audio input or output device is faulty, your audio input or output level is too high or too low, your internet connection is slow, or your reflector or repeater is congested. To fix this, try using a different audio input or output device, adjusting your audio input or output level using the slider or the test button, improving your internet speed, or switching to a less busy reflector or repeater.</li>
165
- <li>Your D-Star profile is not displayed correctly: This may happen if you have not entered your call sign, name, location, or message in the settings menu, or if you have entered them incorrectly. To fix this, try entering or correcting your call sign, name, location, and message in the settings menu and saving them.</li>
166
- </ul>
167
- <p>If none of these steps work for you, you can always contact the DVTool software support team for further assistance. You can find their contact information on the official website.</p>
168
- <h3>How to optimize your audio quality with Dvtool?</h3>
169
- <p>One of the main advantages of using Dvtool 2.0 Beta 5 with your DV Dongle and accessing the D-Star network is that you can enjoy clearer audio quality than analog voice communication. However, there are some ways that you can optimize your audio quality even more and make it sound more natural and pleasant. Here are some of them:</p>
170
- <ul>
171
- <li>Use a good quality headset or microphone and speakers that are compatible with your PC or Mac and have a clear sound output and input.</li>
172
- <li>Position your headset or microphone and speakers in a way that minimizes background noise and feedback.</li>
173
- <li>Speak clearly and loudly enough into your microphone and avoid mumbling or whispering.</li>
174
- <li>Avoid speaking too fast or too slow and use proper pronunciation and grammar.</li>
175
- <li>Avoid using slang, jargon, acronyms, or abbreviations that may confuse other users.</li>
176
- <li>Avoid interrupting other users when they are speaking and wait for a pause before transmitting.</li>
177
- <li>Acknowledge other users when they call you by using their call sign and name.</li>
178
- <li>Be polite and respectful to other users and follow the etiquette and rules of the D-Star network.</li>
179
- </ul>
180
- <h3>How to customize your D-Star profile with Dvtool?</h3>
181
- <p>One of the fun aspects of using Dvtool 2.0 Beta 5 with your DV Dongle and accessing the D-Star network is that you can customize your D-Star profile and display information about yourself and your station to other users. This can help you make new contacts and friends on the network and show off your personality and interests. Here are some ways that you can customize your D-Star profile with Dvtool 2.0 Beta 5:</p>
182
- <ul>
183
- <li>You can enter your call sign, name, location, and message in the settings menu of DVTool software and save them. These are the basic information that other users will see when they connect to you on the network.</li>
184
- <li>You can also enter some optional information in the settings menu of DVTool software, such as your email address, website, QTH locator, and D-Star registration date. These are the additional information that other users can see if they click on your call sign on the main interface of DVTool software.</li>
185
- <li>You can also upload a picture of yourself or your station in the settings menu of DVTool software. This is the image that other users will see when they click on your call sign on the main interface of DVTool software.</li>
186
- <li>You can also change the color and font of your call sign, name, location, and message in the settings menu of DVTool software. This is how you can personalize your D-Star profile and make it stand out from the rest.</li>
187
- </ul>
188
- <h2>Conclusion</h2>
189
- <p>In conclusion, Dvtool 2.0 Beta 5 is a great software that allows you to use your DV Dongle and access the D-Star network from your PC or Mac without using a radio. It offers some new features and improvements that will enhance your D-Star experience, such as a redesigned GUI, a new audio engine, a new echo test feature, a new auto-connect feature, a new auto-update feature, a new logging feature, and a new help feature.</p>
190
- <p>It also allows you to communicate with other D-Star users around the world using digital voice and data, enjoy clearer audio quality, less interference, more efficient use of bandwidth, and the ability to transmit data along with voice, access a wider range of repeaters and reflectors that may not be available in your area or frequency, monitor the activity on different repeaters and reflectors and discover new contacts and conversations, adjust the audio settings of your DV Dongle to suit your preferences and environment, update your DVTool software easily and automatically, troubleshoot common issues with your DV Dongle and DVTool software, customize your D-Star profile and display information about yourself and your station, and optimize your audio quality with some tips and tricks.</p>
191
- <p>If you are interested in trying out Dvtool 2.0 Beta 5, you can download it from the official website <a href="http://www.dvdongle.com/DV_Dongle/Home.html">http://www.dvdongle.com/DV_Dongle/Home.html</a> and install it on your PC or Mac following the steps above. You will need a DV Dongle device to use it with. You can also find more information and support for using Dvtool 2.0 Beta 5 on the official website or by contacting the DVTool software support team.</p>
192
- <p>We hope that this article has helped you learn more about Dvtool 2.0 Beta 5 and how to use it with your DV Dongle and access the D-Star network. We hope that you will enjoy using Dvtool 2.0 Beta 5 and have fun communicating with other D-Star users around the world.</p>
193
- <h3>FAQs</h3>
194
- <p>Here are some frequently asked questions (FAQs) about Dvtool 2.0 Beta 5:</p>
195
- <ol>
196
- <li><b>What are the system requirements for using Dvtool 2.0 Beta 5?</b></li>
197
- <p>The system requirements for using Dvtool 2.0 Beta 5 are:</p>
198
- <ul>
199
- <li>A PC or Mac with an internet connection and a working sound card.</li>
200
- <li>A Windows or Mac OS X operating system.</li>
201
- <li>A DV Dongle device with a USB cable.</li>
202
- <li>A headset or microphone and speakers.</li>
203
- </ul>
204
- <li><b>Is Dvtool 2.0 Beta 5 compatible with other versions of DV Dongle or DVAP?</b></li>
205
- <p>Yes, Dvtool 2.0 Beta 5 is compatible with all versions of DV Dongle devices (DV Dongle Blue, DV Dongle Red, DV Dongle Orange) and also with DVAP devices (DV Access Point Dongle). However, some features may not work with older versions of these devices.</p>
206
- <li><b>Is Dvtool 2.0 Beta 5 free or paid software?</b></li>
207
- <p>Dvtool 2.0 Beta 5 is free software that you can download from the official website <a href="http://www.dvdongle.com/DV_Dongle/Home.html">http://www.dvdongle.com/DV_Dongle/Home.html</a>. However, you will need to purchase a DV Dongle device or a DVAP device to use it with, which are sold separately by different vendors.</p>
208
- <li><b>Where can I find more information or support for using Dvtool 2.0 Beta 5?</b></li>
209
- <p>You can find more information or support for using Dvtool 2.0 Beta 5 on the official website <a href="http://www.dvdongle.com/DV_Dongle/Home.html">http://www.dvdongle.com/DV_Dongle/Home.html</a>, where you can find the online documentation, the user manual, the FAQ section, and the contact information of the DVTool software support team. You can also join the DVTool software user group on Yahoo Groups <a href="https://groups.yahoo.com/neo/groups/dvdongle/info">https://groups.yahoo.com/neo/groups/dvdongle/info</a>, where you can interact with other users and share your feedback and suggestions.</p>
210
- <li><b>What are some alternatives to using Dvtool 2.0 Beta 5?</b></li>
211
- <p>If you are looking for some alternatives to using Dvtool 2.0 Beta 5 with your DV Dongle or DVAP device, you can try some of these options:</p>
212
- <ul>
213
- <li>You can use a D-Star compatible radio instead of a DV Dongle or DVAP device, which will allow you to access the D-Star network directly from your radio without using a PC or Mac.</li>
214
- <li>You can use different software instead of DVTool, such as WinDV <a href="http://www.dutch-star.eu/software/">http://www.dutch-star.eu/software/</a>, which is another program that allows you to use your DV Dongle or DVAP device with your PC or Mac.</li>
215
- <li>You can use a different digital voice protocol instead of D-Star, such as DMR <a href="https://www.dmr-marc.net/">https://www.dmr-marc.net/</a>, which is another digital voice and data protocol that is used by amateur radio operators.</li>
216
- </ul>
217
- <br />
218
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/Ecology Exam Essay Questions.md DELETED
@@ -1,6 +0,0 @@
1
- <h2>ecology exam essay questions</h2><br /><p><b><b>Download File</b> &#9745; <a href="https://imgfil.com/2uxYby">https://imgfil.com/2uxYby</a></b></p><br /><br />
2
- <br />
3
- As a consequence, plants will store nutrients at the most economically efficient rate for their individual needs, while animals will invest the most in the form of high-quality, energy-rich tissues. This ensures that their life is maximized; thus, the superior environment won out. 3. What is the purpose of the land-mass called the Earth? What is its role in the universe? How does the interaction of the Earth with the Sun shape the Earth and its atmosphere? 4. What is life and what is it made of? Why is organic carbon the most abundant form of carbon in the universe? What role does carbon play in the life of an organism? 5. What is the origin of energy? What are energy-releasing particles called? What is energy? 6. What is the origin of matter? What is matter? What is a material? How does matter travel through space ? What is an object ? 7. What is the origin of light ? What is a particle ? How does light travel? What is a wave ? 8. What is the origin of heat ? What is heat ? What is a temperature ? How is heat transported in a system ? What is the difference between a heat transfer and a heat flow? 9. How does an engine work ? How do the atmospheric pressure of air and the buoyancy of water contribute to the function of a gas cylinder ? 10. How does a universe expand ? How do atoms combine to form molecules ? How do molecules combine to form proteins ? 11. What is an electron ? How does an electron travel through space ? 12. What is DNA ? Why is DNA the genetic material of most organisms? What is genetic coding ? What is a gene ? 13. What is a protein ? How do proteins work ? What is the difference between a protein and an enzyme ? How does a cell divide and differentiate ? 14. What is the difference between a cell and a multi-cellular organism ? What is a multi-cellular organism ? How does a multi-cellular organism grow and develop ? 15. How does a plant develop ? How does a plant die ? How do the cells of a plant communicate ? 16. How do animals develop ? How does an animal die ? What is the difference between a plant cell and an animal cell ? How do plant cells communicate ? How does the 4fefd39f24<br />
4
- <br />
5
- <br />
6
- <p></p>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/AetherSX2 best settings apk Tips and tricks for the best PS2 emulator on Android.md DELETED
@@ -1,110 +0,0 @@
1
- <br />
2
- <h1>How to Play PS2 Games on Android with AetherSX2 Emulator</h1>
3
- <p>If you are a fan of PlayStation 2 games and want to relive your childhood memories on your Android smartphone, you are in luck. There is a new PS2 emulator for Android that lets you play PS2 games with amazing graphics and performance. It's called AetherSX2, and it's the best PS2 emulator for Android by far.</p>
4
- <p>In this article, we will show you how to download, install, configure, and play PS2 games on Android with AetherSX2 emulator. We will also give you some tips and tricks to optimize the emulator and make your gaming experience more enjoyable. And we will recommend some of the best PS2 games that you can play on AetherSX2 emulator.</p>
5
- <h2>aethersx2 best settings apk</h2><br /><p><b><b>DOWNLOAD</b> &rArr;&rArr;&rArr; <a href="https://urlin.us/2uSZYV">https://urlin.us/2uSZYV</a></b></p><br /><br />
6
- <p>So, without further ado, let's get started!</p>
7
- <h2>What is AetherSX2 Emulator?</h2>
8
- <p>AetherSX2 is a PS2 emulator for Android that was released in late 2021 by a developer named Tahlreth. It is based on the PCSX2 emulator, which is a well-known and reliable PS2 emulator for PC. The developer got permission from the PCSX2 team to use their code and licensed it under the LGPL license.</p>
9
- <p>AetherSX2 emulator is a major breakthrough for PS2 emulation on Android devices. It supports a wide range of PS2 games and offers various features such as internal resolution scaling, save states, multiple control schemes, widescreen patches, and more. It also supports both Vulkan and OpenGL graphics renderers, which can improve the performance and compatibility of different games.</p>
10
- <p>AetherSX2 emulator is free to download and use, unlike some other PS2 emulators that charge money or show ads. You can get it from the Google Play Store or from the official website. You can also join the fan-run Discord server to get updates, support, and feedback from other users.</p>
11
- <h2>How to Download and Install AetherSX2 Emulator?</h2>
12
- <p>Downloading and installing AetherSX2 emulator is very easy. Just follow these simple steps:</p>
13
- <ol>
14
- <li>Go to the Google Play Store and search for "AetherSX2" or use this link to download it.</li>
15
- <li>Alternatively, you can go to the official website and download the APK file from there. Make sure you enable "Unknown sources" in your device settings before installing it.</li>
16
- <li>Once you have downloaded the app, open it and grant it the necessary permissions.</li>
17
- <li>You will also need a PS2 BIOS file to run the emulator. You can dump it from your own PS2 console or find it online (but be careful of legal issues). Place the BIOS file in your device storage (preferably in a folder named "BIOS").</li>
18
- <li>Launch the app and tap on "Select BIOS" in the main menu. Navigate to the folder where you placed the BIOS file and select it.</li>
19
- <li>You are now ready to use the emulator!</li>
20
- </ol>
21
- <h2>How to Configure AetherSX2 Emulator for Best Performance?</h2>
22
- <p>AetherSX2 emulator has many settings that you can tweak to optimize its performance and compatibility for different games. However, there is no one-size-fits-all solution, as different games may require different settings. You may need to experiment with various options until you find the best settings for your device and game. Here are some general tips and recommendations that may help you:</p>
23
- <p>aethersx2 ps2 emulator android apk download<br />
24
- aethersx2 settings for high-end devices<br />
25
- aethersx2 vs damonps2 comparison<br />
26
- aethersx2 compatible games list<br />
27
- aethersx2 how to use cheats<br />
28
- aethersx2 vulkan vs opengl performance<br />
29
- aethersx2 best settings for god of war<br />
30
- aethersx2 bios file download<br />
31
- aethersx2 controller setup guide<br />
32
- aethersx2 speed hacks tutorial<br />
33
- aethersx2 widescreen patch apk<br />
34
- aethersx2 best settings for kingdom hearts<br />
35
- aethersx2 how to fix black screen<br />
36
- aethersx2 save state location<br />
37
- aethersx2 best settings for final fantasy x<br />
38
- aethersx2 how to increase fps<br />
39
- aethersx2 memory card format<br />
40
- aethersx2 best settings for shadow of the colossus<br />
41
- aethersx2 how to play multiplayer<br />
42
- aethersx2 iso file download<br />
43
- aethersx2 best settings for metal gear solid 3<br />
44
- aethersx2 how to change language<br />
45
- aethersx2 custom resolution apk<br />
46
- aethersx2 best settings for gran turismo 4<br />
47
- aethersx2 how to use gamepad<br />
48
- aethersx2 texture filtering apk<br />
49
- aethersx2 best settings for resident evil 4<br />
50
- aethersx2 how to load roms<br />
51
- aethersx2 anti aliasing apk<br />
52
- aethersx2 best settings for dragon ball z budokai tenkaichi 3<br />
53
- aethersx2 how to enable sound<br />
54
- aethersx2 frame skipping apk<br />
55
- aethersx2 best settings for silent hill 3<br />
56
- aethersx2 how to fix lag<br />
57
- aethersx2 shader effects apk<br />
58
- aethersx2 best settings for devil may cry 3<br />
59
- aethersx2 how to update app<br />
60
- aethersx2 force feedback apk<br />
61
- aethersx2 best settings for persona 4<br />
62
- aethersx2 how to use mouse and keyboard</p>
63
- <ul>
64
- <li>Choose the graphics renderer that works best for your device and game. Vulkan is usually faster and more compatible, but OpenGL may offer better quality and stability for some games.</li>
65
- <li>Adjust the internal resolution scaling according to your device's capabilities and screen size. Higher resolutions will make the games look sharper and clearer, but they will also consume more resources and cause slowdowns or crashes. Lower resolutions will improve the performance and compatibility, but they will also make the games look blurry and pixelated.</li>
66
- <li>Enable or disable the speed hacks depending on the game's requirements and your device's power. Speed hacks are optimizations that can boost the emulation speed, but they can also cause glitches or errors in some games. You can try the default speed hacks or customize them individually.</li>
67
- <li>Enable or disable the widescreen patches if you want to play the games in 16:9 aspect ratio instead of the original 4:3. Widescreen patches can make the games look more immersive and modern, but they can also cause graphical issues or distortions in some games.</li>
68
- <li>Configure the controls according to your preference and comfort. You can use the on-screen touch controls, a physical controller, or a keyboard and mouse. You can also customize the layout, size, opacity, and sensitivity of the touch controls.</li>
69
- </ul>
70
- <p>You can access the settings menu by tapping on the gear icon in the main menu or by pressing the back button while playing a game. You can also change the settings for each game individually by long-pressing on the game cover and selecting "Game settings".</p>
71
- <h2>How to Load and Play PS2 Games on AetherSX2 Emulator?</h2>
72
- <p>Loading and playing PS2 games on AetherSX2 emulator is also very easy. Just follow these simple steps:</p>
73
- <ol>
74
- <li>You will need PS2 game files (also known as ISOs or ROMs) to play them on the emulator. You can dump them from your own PS2 discs or find them online (but be careful of legal issues). Place the game files in your device storage (preferably in a folder named "Games").</li>
75
- <li>Launch the app and tap on "Select Game" in the main menu. Navigate to the folder where you placed the game files and select one.</li>
76
- <li>The game will start loading and you will see a loading screen with some information about the game and its compatibility status. You can also see some tips and suggestions for optimizing the game's performance.</li>
77
- <li>Once the game is loaded, you can start playing it with your chosen control scheme. You can also access some options by tapping on the screen or pressing the menu button while playing. You can save or load your progress using save states, change the graphics renderer, adjust the volume, take screenshots, or exit the game.</li>
78
- </ol>
79
- <h2>What are the Best PS2 Games to Play on AetherSX2 Emulator?</h2>
80
- <p>AetherSX2 emulator supports a large number of PS2 games, but not all of them are fully playable or compatible. Some games may have minor issues such as graphical glitches, audio problems, or slow loading times. Some games may have major issues such as crashes, freezes, or black screens. And some games may not work at all.</p>
81
- <p>The compatibility status of each game is indicated by a color code in the loading screen: green means playable, yellow means ingame, orange means menu/intro, red means loadable, and black means nothing.</p>
82
- <p>You can check the compatibility list on the official website to see which games are supported by the emulator and how well they run. You can also report any issues or bugs that you encounter while playing a game on the Discord server or on GitHub.</p>
83
- <p>Here are some of the best PS2 games that you can play on AetherSX2 emulator with good performance and compatibility:</p>
84
- <table>
85
- <tr><th>Game</th><th>Genre</th><th>Description</th></tr>
86
- <tr><td>God of War</td><td>Action-adventure</td><td>A hack-and-slash game that follows Kratos, a Spartan warrior who seeks revenge against Ares, the god of war.</td></tr>
87
- <tr><td>Shadow of the Colossus</td><td>Action-adventure</td><td>A unique game that involves exploring a vast land and defeating giant creatures called colossi to revive a dead girl.</td></tr>
88
- <tr><td>Grand Theft Auto: San Andreas</td><td>Action-adventure</td><td>A sandbox game that lets you roam around a fictional state of San Andreas and engage in various activities such as driving, shooting, fighting, and more.</td></tr>
89
- <tr><td>Final Fantasy X</td><td>Role-playing</td><td>A classic JRPG that follows Tidus, a young athlete who is transported to a fantasy world called Spira and joins a group of adventurers to defeat a monstrous threat called Sin.</td></tr>
90
- <tr><td>Metal Gear Solid 3: Snake Eater</td><td>Stealth-action</td><td>A prequel to the Metal Gear series that features Naked Snake, a special agent who infiltrates a Soviet jungle to rescue a scientist and stop a nuclear war.</td></tr>
91
- <tr><td>Kingdom Hearts</td><td>Action-role-playing</td><td>A crossover game that combines characters and worlds from Disney and Final Fantasy franchises. It follows Sora, a young boy who wields a magical weapon called the Keyblade and teams up with Donald Duck and Goofy to fight against the Heartless.</td></tr>
92
- </table>
93
- <p>Of course, there are many more PS2 games that you can try on AetherSX2 emulator, but these are some of the most popular and well-received ones. You can also check out some online forums and reviews to find more recommendations and suggestions.</p>
94
- <h1>Conclusion</h1>
95
- <p>AetherSX2 emulator is an amazing app that lets you play PS2 games on Android devices with high quality and performance. It is easy to download, install, configure, and use. It supports a large number of PS2 games and offers various features and options to enhance your gaming experience. It is also free and open-source, unlike some other PS2 emulators that charge money or show ads.</p>
96
- <p>If you are a fan of PS2 games and want to relive your childhood memories on your Android smartphone, you should definitely give AetherSX2 emulator a try. You will be amazed by how well it runs your favorite PS2 games and how much fun you will have playing them.</p>
97
- <p>So, what are you waiting for? Download AetherSX2 emulator now and enjoy playing PS2 games on Android!</p>
98
- <h1>FAQs</h1>
99
- <h3>Q: Is AetherSX2 emulator legal?</h3>
100
- <p>A: AetherSX2 emulator itself is legal, as it is based on the PCSX2 emulator, which is licensed under the LGPL license. However, downloading or distributing PS2 BIOS or game files may be illegal in some countries or regions, depending on the copyright laws and regulations. You should only use your own PS2 BIOS or game files that you have legally obtained.</p>
101
- <h3>Q: Is AetherSX2 emulator safe?</h3>
102
- <p>A: AetherSX2 emulator is safe to use, as long as you download it from the official sources (Google Play Store or official website). It does not contain any malware, viruses, or spyware. It also does not collect any personal or sensitive data from your device.</p>
103
- <h3>Q: How can I update AetherSX2 emulator?</h3>
104
- <p>A: You can update AetherSX2 emulator by checking for updates in the app itself or by visiting the Google Play Store or the official website. You can also join the Discord server to get notified of any new updates or releases.</p>
105
- <h3>Q: How can I support AetherSX2 emulator?</h3>
106
- <p>A: You can support AetherSX2 emulator by giving it a positive rating and review on the Google Play Store or by sharing it with your friends and family. You can also donate to the developer via PayPal or Patreon to show your appreciation and help him improve the emulator.</p>
107
- <h3>Q: How can I contact the AetherSX2 developer?</h3>
108
- <p>A: You can contact the AetherSX2 developer by joining the Discord server or by sending an email to [email protected]. You can also follow the developer on Twitter or Instagram for more updates and news.</p>
109
- <br />
110
- <br />
spaces/1phancelerku/anime-remove-background/Become a Soccer Super Star with this Amazing Football MOD APK.md DELETED
@@ -1,129 +0,0 @@
1
- <br />
2
- <br>
3
- <br>
4
- <br>
5
- <br>
6
- <br>
7
- <br>
8
- <br>
9
- <br>
10
- <br>
11
- <br>
12
- <br>
13
- <br>
14
- <br>
15
- <br>
16
- <br>
17
- <br>
18
- <br>
19
- <br>
20
- <code>
21
- <h1>Soccer Super Star Football Mod APK: A Fun and Simple Soccer Game</h1>
22
- <p>Do you love soccer? Do you want to play a soccer game that is fun, simple, and realistic? If yes, then you should try <strong>Soccer Super Star Football Mod APK</strong>, a soccer game that lets you swipe to shoot and score amazing goals. In this article, we will tell you everything you need to know about this game, including how to download and install it, how to play it, tips and tricks, pros and cons, and FAQs. Let's get started!</p>
23
- <h2>soccer super star football mod apk</h2><br /><p><b><b>Download</b> &#9734;&#9734;&#9734;&#9734;&#9734; <a href="https://jinyurl.com/2uNKm4">https://jinyurl.com/2uNKm4</a></b></p><br /><br />
24
- <h2>Introduction</h2>
25
- <p>Soccer Super Star Football Mod APK is a soccer game that is developed by Real Free Soccer. It is available for Android devices and can be downloaded for free from various websites. The game has over 10 million downloads and a 4.4-star rating on Google Play Store. It is one of the most popular soccer games on the market, thanks to its simple and intuitive gameplay, realistic graphics and physics, various teams and modes to choose from, and unlimited rewind feature.</p>
26
- <p>Why should you download Soccer Super Star Football Mod APK? Well, if you are a fan of soccer, you will love this game. It is easy to play, but hard to master. You can swipe to shoot and score goals from different angles and distances. You can also use the unlimited rewind feature to correct your mistakes and try again. You can choose from different teams and modes, such as career mode, tournament mode, challenge mode, and training mode. You can also unlock new players and stadiums as you progress in the game. The game is also offline-friendly, meaning you can play it without an internet connection.</p>
27
- <p>What are the features of the mod version of Soccer Super Star Football? The mod version gives you some extra benefits that the original version does not. For example, you can enjoy unlimited rewind, which allows you to undo your shots and try again as many times as you want. You can also get unlimited coins, which you can use to buy new players and stadiums. The mod version also removes ads, which can be annoying and distracting in the original version.</p>
28
- <h2>How to Download and Install Soccer Super Star Football Mod APK</h2>
29
- <p>If you want to download and install Soccer Super Star Football Mod APK on your Android device, you need to follow these simple steps:</p>
30
- <ol>
31
- <li>Download the APK file from a trusted source. You can find many websites that offer the mod version of Soccer Super Star Football for free. However, be careful not to download from shady or malicious sites that may harm your device or steal your data. We recommend you to download from [this link], which is safe and reliable.</li>
32
- <li>Enable unknown sources on your device. Since you are downloading an APK file from a third-party source, you need to enable unknown sources on your device. This will allow you to install apps that are not from Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.</li>
33
- <li>Install the APK file. Once you have downloaded the APK file, locate it in your file manager and tap on it. You will see a pop-up window asking for your permission to install the app. Tap on Install and wait for the installation process to finish.</li>
34
- <li>Launch the game and enjoy. After the installation is done, you can launch the game from your app drawer or home screen. You will see the game icon with the word "Mod" on it. Tap on it and start playing Soccer Super Star Football Mod APK.</li>
35
- </ol> <h2>How to Play Soccer Super Star Football Mod APK</h2>
36
- <p>Playing Soccer Super Star Football Mod APK is very easy and fun. You just need to swipe your finger on the screen to shoot and score goals. Here are some tips on how to play the game:</p>
37
- <h3>Choose your team and mode</h3>
38
- <p>Before you start playing, you need to choose your team and mode. You can choose from different teams, such as Brazil, Argentina, Germany, France, Spain, England, and more. Each team has different stats and ratings, so choose wisely. You can also choose from different modes, such as career mode, tournament mode, challenge mode, and training mode. Each mode has different objectives and rewards, so choose according to your preference.</p>
39
- <p>soccer star 2023 super football games mod apk<br />
40
- soccer super star football hack apk download<br />
41
- soccer super star football unlimited money mod apk<br />
42
- soccer super star football mod apk latest version<br />
43
- soccer super star football mod apk android 1<br />
44
- soccer super star football mod apk free rewards<br />
45
- soccer super star football mod apk offline<br />
46
- soccer super star football mod apk unlimited gems<br />
47
- soccer super star football mod apk revdl<br />
48
- soccer super star football mod apk no ads<br />
49
- soccer super star football mod apk unlimited energy<br />
50
- soccer super star football mod apk rexdl<br />
51
- soccer super star football mod apk premium<br />
52
- soccer super star football mod apk vip unlocked<br />
53
- soccer super star football mod apk 1.18.1<br />
54
- soccer super star football mod apk 2023<br />
55
- soccer super star football mod apk unlimited coins<br />
56
- soccer super star football mod apk happymod<br />
57
- soccer super star football mod apk online<br />
58
- soccer super star football mod apk pro<br />
59
- soccer super star football mod apk full version<br />
60
- soccer super star football mod apk obb<br />
61
- soccer super star football mod apk cheat<br />
62
- soccer super star football mod apk mega<br />
63
- soccer super star football mod apk update<br />
64
- soccer super star football mod apk 2022<br />
65
- soccer super star football mod apk unlimited everything<br />
66
- soccer super star football mod apk apkpure<br />
67
- soccer super star football mod apk cracked<br />
68
- soccer super star football mod apk data<br />
69
- soccer super star football mod apk free download<br />
70
- soccer super star football mod apk unlimited lives<br />
71
- soccer super star football mod apk old version<br />
72
- soccer super star football mod apk new version<br />
73
- soccer super star football mod apk all unlocked<br />
74
- soccer super star football mod apk for pc<br />
75
- soccer super star football mod apk unlimited stars<br />
76
- soccer super star football mod apk original<br />
77
- soccer super star football mod apk real money<br />
78
- soccer super star football mod apk no root</p>
79
- <h3>Swipe to shoot and score</h3>
80
- <p>Once you have chosen your team and mode, you can start playing. You will see a soccer ball on the screen, and you need to swipe your finger on it to shoot and score. You can swipe in different directions and angles to control the direction and curve of the ball. You can also swipe with different speed and force to control the power and height of the ball. You will see a target on the goal, and you need to aim for it to score. The target will change its position and size depending on the difficulty level of the game.</p>
81
- <h3>Use unlimited rewind to correct your mistakes</h3>
82
- <p>One of the best features of Soccer Super Star Football Mod APK is the unlimited rewind feature. This feature allows you to undo your shots and try again as many times as you want. This is very useful if you miss a shot or make a mistake. You can use this feature by tapping on the rewind button on the top left corner of the screen. You will see a timeline of your shots, and you can drag it back to any point you want. You can then swipe again to shoot and score.</p>
83
- <h3>Unlock new players and stadiums</h3>
84
- <p>As you play Soccer Super Star Football Mod APK, you can unlock new players and stadiums. You can unlock new players by spending coins or reaching certain levels. Each player has different skills and abilities, such as speed, power, accuracy, stamina, and more. You can also unlock new stadiums by spending coins or reaching certain levels. Each stadium has different themes and atmospheres, such as day, night, rain, snow, and more.</p>
85
- <h2>Tips and Tricks for Soccer Super Star Football Mod APK</h2>
86
- <p>If you want to master Soccer Super Star Football Mod APK, you need to know some tips and tricks that will help you improve your game. Here are some of them:</p>
87
- <h3>Aim for the corners and curves</h3>
88
- <p>One of the best ways to score goals in Soccer Super Star Football Mod APK is to aim for the corners and curves of the goal. This will make it harder for the goalkeeper to save your shots. You can do this by swiping your finger in a diagonal or curved motion on the screen. This will make the ball spin and curve in the air.</p>
89
- <h3>Use power-ups wisely</h3>
90
- <p>Soccer Super Star Football Mod APK also has some power-ups that you can use to boost your game. These power-ups include fireball, slow motion, magnet, freeze, and more. Each power-up has a different effect on the ball or the game. For example, fireball makes the ball burn and fly faster; slow motion makes the game slow down for a few seconds; magnet makes the ball attract to the target; freeze makes the goalkeeper freeze for a few seconds; and more. You can use these power-ups by tapping on them on the bottom right corner of the screen. However, be careful not to use them too often or too randomly, as they have limited uses and may not always work in your favor.</p>
91
- <h3>Watch ads to get free rewards</h3>
92
- <p>If you want to get more coins or power-ups in Soccer Super Star Football Mod APK, you can watch ads to get free rewards. You can do this by tapping on the watch ad button on the top right corner of the screen. You will see an ad pop up on your screen, and you need to watch it for a few seconds. After that, you will get some coins or power-ups as a reward. You can do this as many times as you want, but be aware that some ads may be longer or shorter than others.</p>
93
- <h3>Practice your skills in training mode</h3>
94
- <p>If you want to practice your skills in Soccer Super Star Football Mod APK, you can play in training mode. This mode allows you to play without any pressure or objectives. You can just swipe and shoot as many times as you want without worrying about time or score. You can also change the difficulty level of the game by tapping on the settings button on the top left corner of the screen. You can also change the team and stadium by tapping on the buttons on the bottom left corner of the screen. Training mode is a great way to improve your skills and have fun.</p>
95
- <h2>Pros and Cons of Soccer Super Star Football Mod APK</h2>
96
- <p>Soccer Super Star Football Mod APK is a great soccer game, but it also has some pros and cons that you should know before playing it. Here are some of them:</p>
97
- <h4>Pros</h4>
98
- <ul>
99
- <li>Simple and intuitive gameplay. You just need to swipe your finger on the screen to shoot and score goals. The game is easy to play, but hard to master.</li>
100
- <li>Realistic graphics and physics. The game has high-quality graphics and realistic physics that make the game more immersive and enjoyable. You can see the ball spin and curve in the air, the goalkeeper react and save your shots, and the crowd cheer and boo.</li>
101
- <li>Various teams and modes to choose from. You can choose from different teams, such as Brazil, Argentina, Germany, France, Spain, England, and more. Each team has different stats and ratings, so choose wisely. You can also choose from different modes, such as career mode, tournament mode, challenge mode, and training mode. Each mode has different objectives and rewards, so choose according to your preference.</li>
102
- <li>Unlimited rewind feature. This feature allows you to undo your shots and try again as many times as you want. This is very useful if you miss a shot or make a mistake. You can use this feature by tapping on the rewind button on the top left corner of the screen.</li>
103
- </ul>
104
- <h4>Cons</h4>
105
- <ul>
106
- <li>Repetitive gameplay after a while. The game can get repetitive and boring after a while, as you play the same scenarios and challenges over and over again. The game lacks variety and innovation in its gameplay.</li>
107
- <li>Ads can be annoying. The game has ads that pop up on your screen every now and then. These ads can be annoying and distracting, especially when you are in the middle of a match or a shot. You can remove ads by downloading the mod version of the game or by paying a small fee.</li>
108
- <li>Some bugs and glitches may occur. The game is not perfect, and it may have some bugs and glitches that affect its performance and quality. For example, some users have reported that the game crashes or freezes sometimes, or that the ball goes through the goalkeeper or the goalpost.</li>
109
- </ul>
110
- <h2>Conclusion</h2>
111
- <p>Soccer Super Star Football Mod APK is a fun and simple soccer game that lets you swipe to shoot and score amazing goals. It has simple and intuitive gameplay, realistic graphics and physics, various teams and modes to choose from, and unlimited rewind feature. However, it also has some cons, such as repetitive gameplay after a while, ads can be annoying, and some bugs and glitches may occur. Overall, Soccer Super Star Football Mod APK is a great soccer game that you should try if you love soccer or want to have some fun.</p>
112
- <p>Do you want to download Soccer Super Star Football Mod APK? If yes, then follow the steps we mentioned above to download and install it on your Android device. If no, then what are you waiting for? Download it now and enjoy playing soccer like never before!</p>
113
- <h2>FAQs</h2>
114
- <p>Here are some frequently asked questions about Soccer Super Star Football Mod APK:</p>
115
- <ol>
116
- <li><strong>Is Soccer Super Star Football Mod APK safe to download?</strong></li>
117
- <p>Yes, as long as you download it from a trusted source. You can find many websites that offer the mod version of Soccer Super Star Football for free. However, be careful not to download from shady or malicious sites that may harm your device or steal your data. We recommend you to download from [this link], which is safe and reliable.</p>
118
- <li><strong>What is the difference between Soccer Super Star Football Mod APK and the original version?</strong></li>
119
- <p>The mod version gives you some extra benefits that the original version does not. For example, you can enjoy unlimited rewind, which allows you to undo your shots and try again as many times as you want. You can also get unlimited coins, which you can use to buy new players and stadiums. The mod version also removes ads, which can be annoying and distracting in the original version.</p>
120
- <li><strong>How can I get more coins in Soccer Super Star Football Mod APK?</strong></li>
121
- <p>You can get more coins by winning matches, completing achievements, or watching ads. You can also use the mod version of the game, which gives you unlimited coins. You can use coins to buy new players and stadiums, or to upgrade your skills and power-ups.</p>
122
- <li><strong>How can I unlock new players and stadiums in Soccer Super Star Football Mod APK?</strong></li>
123
- <p>You can unlock new players and stadiums by spending coins or reaching certain levels. Each player and stadium has a different price and level requirement. You can see the details by tapping on the shop button on the bottom right corner of the screen. You can also use the mod version of the game, which gives you all the players and stadiums unlocked.</p>
124
- <li><strong>Can I play Soccer Super Star Football Mod APK offline?</strong></li>
125
- <p>Yes, you can play Soccer Super Star Football Mod APK offline without an internet connection. However, you will not be able to access some features, such as watching ads, getting rewards, or updating the game. You will also not be able to play in tournament mode or challenge mode, which require an internet connection.</p>
126
- </ol>
127
- <p>I hope this article has helped you learn more about Soccer Super Star Football Mod APK. If you have any questions or feedback, please leave a comment below. Thank you for reading!</p>
128
- <br />
129
- <br />
spaces/1phancelerku/anime-remove-background/Download CSR Racing 2 MOD APK for iOS and Android Free Shopping and More.md DELETED
@@ -1,92 +0,0 @@
1
- <br />
2
- <br>
3
- <h1>CSR Racing 2 Mod APK iOS 2022: How to Download and Install It</h1>
4
- <p>If you are a fan of car racing games, you must have heard of <strong>CSR Racing 2</strong>. It is one of the most popular and realistic racing games on mobile devices. It offers you a chance to race with some of the most amazing cars in the world, customize them to your liking, compete with other players online, join crews, chat with friends, and much more.</p>
5
- <h2>csr racing 2 mod apk ios 2022</h2><br /><p><b><b>DOWNLOAD</b> &middot; <a href="https://jinyurl.com/2uNOrk">https://jinyurl.com/2uNOrk</a></b></p><br /><br />
6
- <p>But what if you want to enjoy all these features without spending any money or waiting for hours to refill your fuel? What if you want to unlock all the cars and upgrades without grinding for hours? What if you want to have unlimited resources to enjoy the game to the fullest?</p>
7
- <p>Well, there is a way to do that. It is called <strong>CSR Racing 2 Mod APK</strong>. It is a modified version of the original game that gives you access to all the features and resources that you want. You can download and install it on your iOS device easily and safely. In this article, we will tell you everything you need to know about CSR Racing 2 Mod APK on iOS devices, including its features, benefits, compatibility, security, and installation process. So, let's get started!</p>
8
- <h2>What is CSR Racing 2 and why is it popular?</h2>
9
- <p>CSR Racing 2 is a racing game developed by NaturalMotionGames Ltd and published by Zynga. It was released in 2016 for Android and iOS devices. It is the sequel to the popular CSR Racing game that was released in 2012.</p>
10
- <p>csr racing 2 mod apk ios 2022 unlimited money<br />
11
- csr racing 2 mod apk ios 2022 all cars unlocked<br />
12
- csr racing 2 mod apk ios 2022 download free<br />
13
- csr racing 2 mod apk ios 2022 latest version<br />
14
- csr racing 2 mod apk ios 2022 no jailbreak<br />
15
- csr racing 2 mod apk ios 2022 hack<br />
16
- csr racing 2 mod apk ios 2022 cheats<br />
17
- csr racing 2 mod apk ios 2022 online<br />
18
- csr racing 2 mod apk ios 2022 gameplay<br />
19
- csr racing 2 mod apk ios 2022 review<br />
20
- csr racing 2 mod apk ios 2022 update<br />
21
- csr racing 2 mod apk ios 2022 features<br />
22
- csr racing 2 mod apk ios 2022 tips and tricks<br />
23
- csr racing 2 mod apk ios 2022 best cars<br />
24
- csr racing 2 mod apk ios 2022 graphics<br />
25
- csr racing 2 mod apk ios 2022 multiplayer<br />
26
- csr racing 2 mod apk ios 2022 offline<br />
27
- csr racing 2 mod apk ios 2022 installation guide<br />
28
- csr racing 2 mod apk ios 2022 requirements<br />
29
- csr racing 2 mod apk ios 2022 support<br />
30
- csr racing 2 mod apk ios 2022 how to play<br />
31
- csr racing 2 mod apk ios 2022 tutorial<br />
32
- csr racing 2 mod apk ios 2022 customisation<br />
33
- csr racing 2 mod apk ios 2022 races<br />
34
- csr racing 2 mod apk ios 2022 events<br />
35
- csr racing 2 mod apk ios 2022 challenges<br />
36
- csr racing 2 mod apk ios 2022 rewards<br />
37
- csr racing 2 mod apk ios 2022 codes<br />
38
- csr racing 2 mod apk ios 2022 generator<br />
39
- csr racing 2 mod apk ios 2022 premium<br />
40
- csr racing 2 mod apk ios 2022 pro<br />
41
- csr racing 2 mod apk ios 2022 elite<br />
42
- csr racing 2 mod apk ios 2022 legends<br />
43
- csr racing 2 mod apk ios 2022 supercars<br />
44
- csr racing 2 mod apk ios 2022 hypercars<br />
45
- csr racing 2 mod apk ios 2021 vs. CSR Racing Mod Apk iOS in the year of the release of the game.<br />
46
- CSR Racing Mod Apk iOS in the year of the release of the game vs. CSR Racing Mod Apk iOS in the year of the release of the game.</p>
47
- <h3>A realistic and immersive racing game</h3>
48
- <p>One of the main reasons why CSR Racing 2 is so popular is because of its <strong>realistic and immersive</strong> graphics, physics, sound effects, and gameplay. The game uses 3D rendering techniques to create stunning visuals that make you feel like you are actually driving the cars. The game also features realistic car physics that simulate the behavior of the cars on different terrains and conditions. The game also has amazing sound effects that match the engine sounds, tire screeches, collisions, and other noises of the cars. The game also has a variety of gameplay modes and events that keep you entertained and challenged.</p>
49
- <p>The game allows you to choose from over 200 licensed cars from some of the most famous brands in the world, such as Ferrari, Lamborghini, Bugatti, McLaren, Pagani, Koenigsegg, and more. You can also customize your cars with different paint jobs, decals, rims, spoilers, nitrous, and other parts. You can also tune your cars to improve their performance and stats.</p>
50
- <h3>A competitive and social racing game</h3>
51
- <p>Another reason why CSR Racing 2 is so popular is because of its <strong>competitive and social</strong> features. The game has an online multiplayer mode where you can race with other players from around the world in real-time. You can also join or create crews with your friends or other players and compete with other crews for rewards and glory. You can also chat with your crew members and other players in the game. You can also challenge other players to duels or accept challenges from them.</p>
52
- <p>The game also has a reward system that gives you money, keys, gold, fuel, and other items for completing races, events, achievements, and rankings. You can use these items to buy new cars, upgrade your existing cars, refill your fuel, or enter special events. The game also has a ranking system that ranks you based on your performance and achievements in the game. You can climb up the ranks and earn more rewards and recognition.</p>
53
- <h2>What is CSR Racing 2 Mod APK and what are its features?</h2>
54
- <p>CSR Racing 2 Mod APK is a modified version of the original CSR Racing 2 game that gives you access to all the features and resources that you want in the game. It is not an official version of the game, but it is created by third-party developers who modify the original game files to unlock or add new features.</p>
55
- <h3>A modified version of CSR Racing 2 with unlimited resources</h3>
56
- <p>One of the main benefits of using CSR Racing 2 Mod APK is that it gives you <strong>unlimited resources</strong> in the game. This means that you can have unlimited money, keys, gold, fuel, and other items in the game without spending any real money or waiting for hours to refill your fuel. You can use these resources to buy any car you want, upgrade it to the max level, enter any event you want, or refill your fuel anytime you want.</p>
57
- <p>Another benefit of using CSR Racing 2 Mod APK is that it gives you access to some <strong>new features</strong> that are not available in the original game. For example, some CSR Racing 2 Mod APK versions allow you to unlock all the cars in the game without having to complete any requirements or missions. Some versions also allow you to use nitrous anytime you want without having to wait for it to recharge. Some versions also allow you to disable ads or enable cheats in the game.</p>
58
- <h3>A safe and easy way to enjoy CSR Racing 2 without restrictions</h3>
59
- <p>Another benefit of using CSR Racing 2 Mod APK is that it is a <strong>safe and easy</strong> way to enjoy CSR Racing 2 without any restrictions or limitations. You don't have to worry about any viruses, malware, or spyware that might harm your device or compromise your privacy. You also don't have to worry about any bans or suspensions from the game developers or publishers. You can download and install CSR Racing 2 Mod APK on your iOS device easily and safely using a third-party app store called Panda Helper. Panda Helper is a trusted and reliable app store that offers thousands of modded and hacked apps and games for iOS devices. You can download and install Panda Helper on your iOS device without jailbreaking it or using a computer.</p>
60
- <h2>How to download and install CSR Racing 2 Mod APK on iOS devices?</h2>
61
- <p>If you want to download and install CSR Racing 2 Mod APK on your iOS device, you need to follow these simple steps:</p>
62
- <h3>A step-by-step guide to download and install CSR Racing 2 Mod APK on iOS devices</h3>
63
- <p>Here is a step-by-step guide to download and install CSR Racing 2 Mod APK on iOS devices using Panda Helper:</p>
64
- <ol>
65
- <li>Open Safari browser on your iOS device and go to the official website of Panda Helper: <a href="https://www.pandahelp.vip/">https://www.pandahelp.vip/</a></li>
66
- <li>Tap on the "Download Free Version" button and then tap on "Install" when prompted.</li>
67
- <li>Wait for the installation to finish and then go to Settings > General > Profiles & Device Management and trust the profile of Panda Helper.</li>
68
- <li>Launch Panda Helper from your home screen and search for "CSR Racing 2 Mod" in the search bar.</li>
69
- <li>Tap on the "Get" button next to the CSR Racing 2 Mod app and then tap on "Install" when prompted.</li>
70
- <li>Wait for the installation to finish and then go to Settings > General > Profiles & Device Management and trust the profile of CSR Racing 2 Mod.</li>
71
- <li>Launch CSR Racing 2 Mod from your home screen and enjoy the game with unlimited resources and features.</li>
72
- </ol>
73
- <h3>A table to summarize the steps to download and install CSR Racing 2 Mod APK on iOS devices</h3>
74
- <p>Here is a table to summarize the steps to download and install CSR Racing 2 Mod APK on iOS devices using Panda Helper:</p>
75
- | Step number | Action | Screenshot | Explanation | |-------------|--------|------------|-------------| | 1 | Open Safari browser on your iOS device and go to the official website of Panda Helper: <a href="">https://www.pandahelp.vip/</a> | <img src="" alt="Panda Helper website" width="300"> | Panda Helper is a third-party app store that offers modded and hacked apps and games for iOS devices. | | 2 | Tap on the "Download Free Version" button and then tap on "Install" when prompted. | <img src="" alt="Panda Helper download" width="300"> | This will download and install Panda Helper on your iOS device. | | 3 | Wait for the installation to finish and then go to Settings > General > Profiles & Device Management and trust the profile of Panda Helper. | <img src="" alt="Panda Helper trust" width="300"> | This will allow you to run Panda Helper on your iOS device without any issues. | | 4 | Launch Panda Helper from your home screen and search for "CSR Racing 2 Mod" in the search bar. | <img src="" alt="Panda Helper search" width="300"> | This will show you the CSR Racing 2 Mod app that you can download and install on your iOS device. | | 5 | Tap on the "Get" button next to the CSR Racing 2 Mod app and then tap on "Install" when prompted. | <img src="" alt="CSR Racing 2 Mod download" width="300"> | This will download and install CSR Racing 2 Mod on your iOS device. | | 6 | Wait for the installation to finish and then go to Settings > General > Profiles & Device Management and trust the profile of CSR Racing 2 Mod. | <img src="" alt="CSR Racing 2 Mod trust" width="300"> | This will allow you to run CSR Racing 2 Mod on your iOS device without any issues. | | 7 | Launch CSR Racing 2 Mod from your home screen and enjoy the game with unlimited resources and features. | <img src="" alt="CSR Racing 2 Mod launch" width="300"> | This will let you play CSR Racing 2 with unlimited money, keys, gold, fuel, nitrous, cars, upgrades, etc. | <h2>Conclusion</h2>
76
- <p>In conclusion, CSR Racing 2 is a great racing game that offers you a realistic and immersive experience of driving some of the most amazing cars in the world. It also lets you compete and socialize with other players online, join crews, chat with friends, and earn rewards and rankings. However, if you want to enjoy all these features without any limitations or restrictions, you can try CSR Racing 2 Mod APK on your iOS device. CSR Racing 2 Mod APK is a modified version of the original game that gives you unlimited resources and features in the game. You can download and install it on your iOS device easily and safely using Panda Helper, a third-party app store that offers modded and hacked apps and games for iOS devices. You can follow the step-by-step guide and the table above to download and install CSR Racing 2 Mod APK on your iOS device. We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave them in the comments section below. Thank you for reading and happy racing!</p>
77
- <h4>FAQs</h4>
78
- <p>Here are some frequently asked questions about CSR Racing 2 Mod APK on iOS devices with brief answers:</p>
79
- <ul>
80
- <li><strong>Q: Is CSR Racing 2 Mod APK safe to use?</strong></li>
81
- <li>A: Yes, CSR Racing 2 Mod APK is safe to use as long as you download it from a trusted source like Panda Helper. It does not contain any viruses, malware, or spyware that might harm your device or compromise your privacy.</li>
82
- <li><strong>Q: Is CSR Racing 2 Mod APK compatible with my iOS device?</strong></li>
83
- <li>A: Yes, CSR Racing 2 Mod APK is compatible with most iOS devices that can run the original CSR Racing 2 game. However, you may need to update your iOS version or free up some storage space on your device before installing it.</li>
84
- <li><strong>Q: Will I get banned or suspended from CSR Racing 2 if I use CSR Racing 2 Mod APK?</strong></li>
85
- <li>A: No, you will not get banned or suspended from CSR Racing 2 if you use CSR Racing 2 Mod APK. However, you should use it at your own risk and discretion, as the game developers or publishers may not approve of it.</li>
86
- <li><strong>Q: Can I play online multiplayer mode with CSR Racing 2 Mod APK?</strong></li>
87
- <li>A: Yes, you can play online multiplayer mode with CSR Racing 2 Mod APK. However, you may face some issues or glitches while playing with other players who are using the original game or a different version of the mod.</li>
88
- <li><strong>Q: Can I update CSR Racing 2 Mod APK to the latest version?</strong></li>
89
- <li>A: Yes, you can update CSR Racing 2 Mod APK to the latest version by following the same steps as downloading and installing it. However, you may need to uninstall the previous version of the mod before installing the new one.</li>
90
- </ul>
91
- <br />
92
- <br />
spaces/1phancelerku/anime-remove-background/Download Cars Movie for Free A Step-by-Step Guide.md DELETED
@@ -1,281 +0,0 @@
1
-
2
- <h1>How to Download Cars Movie Legally and Safely</h1>
3
- <p>Cars is a 2006 animated comedy film produced by Pixar Animation Studios and distributed by Walt Disney Pictures. It tells the story of a hotshot race car named Lightning McQueen who gets stranded in a small town called Radiator Springs and learns the true meaning of friendship and family. The film features the voices of Owen Wilson, Paul Newman, Bonnie Hunt, Larry the Cable Guy, and many others.</p>
4
- <h2>how to download cars movie</h2><br /><p><b><b>Download File</b> &#127379; <a href="https://jinyurl.com/2uNRAF">https://jinyurl.com/2uNRAF</a></b></p><br /><br />
5
- <p>If you are a fan of Cars or want to watch it for the first time, you might be wondering how to download it to your computer or mobile device. There are many ways to download movies online, but not all of them are legal or safe. In this article, we will show you how to download Cars movie legally and safely from different sources, such as streaming services, torrent sites, and free movie sites. We will also give you some tips on how to avoid viruses, malware, ads, and pop-ups when downloading movies.</p>
6
- <h2>Introduction</h2>
7
- <h3>What is Cars Movie and Why You Should Watch It</h3>
8
- <p>Cars is a Pixar film that was released in 2006 and became one of the most successful animated movies of all time. It won the Golden Globe Award for Best Animated Feature Film and was nominated for two Academy Awards for Best Animated Feature and Best Original Song. It also spawned two sequels, Cars 2 (2011) and Cars 3 (2017), as well as several spin-offs, shorts, video games, merchandise, and theme park attractions.</p>
9
- <p>Cars is a movie that appeals to both children and adults, as it combines humor, adventure, romance, drama, and action. It also features stunning animation, memorable characters, catchy songs, and a heartwarming message about finding your true self and your true friends. If you love cars, racing, or animation, you will definitely enjoy watching Cars.</p>
10
- <p>how to download cars movie for free<br />
11
- how to download cars movie in hindi<br />
12
- how to download cars movie from netflix<br />
13
- how to download cars movie on ipad<br />
14
- how to download cars movie in hd<br />
15
- how to download cars movie on laptop<br />
16
- how to download cars movie on android<br />
17
- how to download cars movie from youtube<br />
18
- how to download cars movie with subtitles<br />
19
- how to download cars movie in tamil<br />
20
- how to download cars movie from amazon prime<br />
21
- how to download cars movie on iphone<br />
22
- how to download cars movie in telugu<br />
23
- how to download cars movie from disney plus<br />
24
- how to download cars movie in utorrent<br />
25
- how to download cars movie on pc<br />
26
- how to download cars movie in malayalam<br />
27
- how to download cars movie from google drive<br />
28
- how to download cars movie with english subtitles<br />
29
- how to download cars movie in urdu<br />
30
- how to download cars movie from hotstar<br />
31
- how to download cars movie on mac<br />
32
- how to download cars movie in kannada<br />
33
- how to download cars movie from facebook<br />
34
- how to download cars movie with hindi audio<br />
35
- how to download cars movie on firestick<br />
36
- how to download cars movie in bengali<br />
37
- how to download cars movie from voot<br />
38
- how to download cars movie with dual audio<br />
39
- how to download cars movie on chromebook<br />
40
- how to download cars movie in punjabi<br />
41
- how to download cars movie from zee5<br />
42
- how to download cars movie with tamil audio<br />
43
- how to download cars movie on roku<br />
44
- how to download cars movie in marathi<br />
45
- how to download cars movie from sony liv<br />
46
- how to download cars movie with telugu audio<br />
47
- how to download cars movie on smart tv<br />
48
- how to download cars movie in gujarati<br />
49
- how to download cars movie from mx player</p>
50
- <h3>The Risks of Downloading Movies Illegally</h3>
51
- <p>Before we show you how to download Cars movie legally and safely, we want to warn you about the risks of downloading movies illegally. Illegal downloading is the act of obtaining or sharing copyrighted material without the permission of the owner or the law. This includes movies, music, games, software, books, and any other digital content.</p>
52
- <p>Downloading movies illegally can have serious consequences for you and your device. Some of the risks are:</p>
53
- <ul>
54
- <li>You can face legal action from the copyright owner or the authorities. Depending on your country's laws, you can be fined, sued, or even jailed for piracy.</li>
55
- <li>You can expose your device to viruses, malware, spyware, ransomware, or other harmful programs that can damage your data, steal your information, or lock your device until you pay a ransom.</li>
56
- <li>You can compromise your online security and privacy by revealing your IP address, location, browsing history, or personal information to hackers, trackers, or advertisers.</li>
57
- <li>You can waste your time, bandwidth, and storage space by downloading low-quality, incomplete, or fake files that do not match what you are looking for.</li>
58
- </ul>
59
- <p>As you can see, downloading movies illegally is not worth the risk. That is why we recommend you to use legal and safe methods to download Cars movie, which we will explain in the next sections.</p>
60
- <h2>How to Download Cars Movie from Streaming Services</h2>
61
- <p>One of the best ways to download Cars movie legally and safely is to use a streaming service. A streaming service is a platform that allows you to watch movies, TV shows, and other content online or offline by paying a monthly or annual fee. Some of the most popular streaming services are Netflix, Amazon Prime Video, Hulu, Disney+, HBO Max, and Apple TV+.</p>
62
- <p>Streaming services offer many benefits for movie lovers, such as:</p>
63
- <ul>
64
- <li>You can access a large library of movies and shows in different genres, languages, and regions.</li>
65
- <li>You can watch movies and shows in high-definition (HD), ultra-high-definition (UHD), or 4K resolution, depending on your device and internet speed.</li>
66
- <li>You can download movies and shows to your device and watch them offline without using data or Wi-Fi.</li>
67
- <li>You can watch movies and shows on multiple devices, such as computers, smartphones, tablets, smart TVs, gaming consoles, or streaming devices.</li>
68
- <li>You can create multiple profiles for different users and customize your preferences, recommendations, and watch history.</li>
69
- <li>You can enjoy exclusive content that is only available on the streaming service.</li>
70
- </ul>
71
- <p>However, streaming services also have some drawbacks, such as:</p>
72
- <ul>
73
- <li>You have to pay a monthly or annual fee to use the service. The fee may vary depending on the plan you choose, the number of screens you can watch on simultaneously, and the availability of certain features.</li>
74
- <li>You need a stable and fast internet connection to stream or download movies and shows. If your internet is slow or unreliable, you may experience buffering, lagging, or poor quality.</li>
75
- <li>You may not find the movie or show you want to watch on the streaming service. Streaming services have different catalogs that change over time due to licensing agreements with studios and distributors.</li>
76
- <li>You may face geo-restrictions that prevent you from accessing certain content based on your location. Streaming services have different libraries for different countries due to legal and cultural reasons.</li>
77
- </ul>
78
- <p>In this section, we will focus on two of the most popular streaming services that offer Cars movie: Netflix and Amazon Prime Video. We will show you how to download Cars movie from each of them and compare their pros and cons.</p>
79
- <h3>Netflix</h3>
80
- <p>Netflix is the world's leading streaming service with over 200 million subscribers in more than 190 countries. It offers a wide range of movies and shows in various genres and languages. It also produces original content that is exclusive to Netflix, such as Stranger Things, The Crown, The Witcher, Black Mirror, and more.</p>
81
- <h4>Steps to Download Cars Movie from Netflix</h4>
82
- <p>To download Cars movie from Netflix, you need to follow these steps:</p>
83
- <ol>
84
- <li>Sign up for a Netflix account if you don't have one. You can choose from three plans: Basic ($8.99 per month), Standard ($13.99 per month), or Premium ($17.99 per month). The Basic plan allows you to watch on one screen at a time in standard definition (SD), the Standard plan allows you to watch on two screens at a time in high definition (HD), and the Premium plan allows you to watch on four screens at a time in HD or 4K.</li>
85
- <li>Download the Netflix app on your device. You can download it from the App Store for iOS devices, the Google Play Store for Android devices, or the Microsoft Store for Windows devices. You can also access Netflix from your web browser, but you cannot download movies or shows from there.</li>
86
- <li>Open the Netflix app and sign in with your account. You can browse the content by categories, genres, recommendations, or search for a specific title.</li>
87
- <li>Find Cars movie on Netflix. You can use the search function or look for it in the Animation, Comedy, or Family categories. You can also check if Cars movie is available on Netflix in your country by using a website like unogs.com or flixwatch.co.</li>
88
- <li>Tap on the download icon next to the play button. The download icon looks like a downward arrow with a horizontal line below it. If you don't see the download icon, it means that the movie is not available for download.</li>
89
- <li>Wait for the movie to download to your device. You can check the progress of the download by tapping on the downloads icon at the bottom of the screen. The downloads icon looks like a downward arrow with a circle around it.</li>
90
- <li>Enjoy watching Cars movie offline. You can find the downloaded movie in the downloads section of the app. You can watch it as many times as you want without using data or Wi-Fi.</li>
91
- </ol>
92
- <h4>Pros and Cons of Netflix</h4>
93
- <p>Netflix is a great streaming service for downloading Cars movie, but it also has some pros and cons that you should consider:</p>
94
- <table>
95
- <tr>
96
- <th>Pros</th>
97
- <th>Cons</th>
98
- </tr>
99
- <tr>
100
- <td>- Netflix has a large and diverse library of movies and shows, including original and exclusive content.</td>
101
- <td>- Netflix requires a subscription fee to use the service, which may not be affordable for some users.</td>
102
- </tr>
103
- <tr>
104
- <td>- Netflix allows you to download movies and shows to your device and watch them offline without using data or Wi-Fi.</td>
105
- <td>- Netflix limits the number of devices and screens you can watch on simultaneously, depending on your plan.</td>
106
- </tr>
107
- <tr>
108
- <td>- Netflix offers high-quality video and audio, as well as subtitles and dubbing options for different languages.</td>
109
- <td>- Netflix does not have all the movies and shows you may want to watch, as some of them may be unavailable or removed due to licensing agreements.</td>
110
- </tr>
111
- <tr>
112
- <td>- Netflix is compatible with most devices and platforms, such as computers, smartphones, tablets, smart TVs, gaming consoles, or streaming devices.</td>
113
- <td>- Netflix may have geo-restrictions that prevent you from accessing certain content based on your location, unless you use a VPN service.</td>
114
- </tr>
115
- </table> <h3>Amazon Prime Video</h3>
116
- <p>Amazon Prime Video is another popular streaming service that offers a variety of movies and shows, including original and exclusive content. It is part of the Amazon Prime membership, which also includes free shipping, music streaming, e-books, and more. You can also rent or buy movies and shows that are not included in the Prime Video catalog.</p>
117
- <h4>Steps to Download Cars Movie from Amazon Prime Video</h4>
118
- <p>To download Cars movie from Amazon Prime Video, you need to follow these steps:</p>
119
- <ol>
120
- <li>Sign up for an Amazon Prime account if you don't have one. You can get a 30-day free trial and then pay $12.99 per month or $119 per year. You can also sign up for a Prime Video-only account for $8.99 per month.</li>
121
- <li>Download the Prime Video app on your device. You can download it from the App Store for iOS devices, the Google Play Store for Android devices, or the Microsoft Store for Windows devices. You can also access Prime Video from your web browser, but you cannot download movies or shows from there.</li>
122
- <li>Open the Prime Video app and sign in with your account. You can browse the content by categories, genres, recommendations, or search for a specific title.</li>
123
- <li>Find Cars movie on Prime Video. You can use the search function or look for it in the Animation, Comedy, or Family categories. You can also check if Cars movie is available on Prime Video in your country by using a website like justwatch.com or reelgood.com.</li>
124
- <li>Tap on the download icon next to the play button. The download icon looks like a downward arrow with a horizontal line below it. If you don't see the download icon, it means that the movie is not available for download.</li>
125
- <li>Wait for the movie to download to your device. You can check the progress of the download by tapping on the downloads icon at the bottom of the screen. The downloads icon looks like a downward arrow with a circle around it.</li>
126
- <li>Enjoy watching Cars movie offline. You can find the downloaded movie in the downloads section of the app. You can watch it as many times as you want without using data or Wi-Fi.</li>
127
- </ol>
128
- <h4>Pros and Cons of Amazon Prime Video</h4>
129
- <p>Amazon Prime Video is another great streaming service for downloading Cars movie, but it also has some pros and cons that you should consider:</p>
130
- <table>
131
- <tr>
132
- <th>Pros</th>
133
- <th>Cons</th>
134
- </tr>
135
- <tr>
136
- <td>- Amazon Prime Video has a large and diverse library of movies and shows, including original and exclusive content.</td>
137
- <td>- Amazon Prime Video requires a subscription fee to use the service, which may not be affordable for some users.</td>
138
- </tr>
139
- <tr>
140
- <td>- Amazon Prime Video allows you to download movies and shows to your device and watch them offline without using data or Wi-Fi.</td>
141
- <td>- Amazon Prime Video limits the number of devices and titles you can download at a time, depending on your location and account type.</td>
142
- </tr>
143
- <tr>
144
- <td>- Amazon Prime Video offers high-quality video and audio, as well as subtitles and dubbing options for different languages.</td>
145
- <td>- Amazon Prime Video does not have all the movies and shows you may want to watch, as some of them may be unavailable or removed due to licensing agreements.</td>
146
- </tr>
147
- <tr>
148
- <td>- Amazon Prime Video is compatible with most devices and platforms, such as computers, smartphones, tablets, smart TVs, gaming consoles, or streaming devices.</td>
149
- <td>- Amazon Prime Video may have geo-restrictions that prevent you from accessing certain content based on your location, unless you use a VPN service.</td>
150
- </tr>
151
- </table> <h2>How to Download Cars Movie from Torrent Sites</h2>
152
- <p>Another way to download Cars movie is to use a torrent site. A torrent site is a website that hosts torrent files, which are small files that contain information about the content you want to download, such as the name, size, type, and location of the files. You can use a torrent site to find and download movies, music, games, software, books, and any other digital content.</p>
153
- <h3>What are Torrents and How They Work</h3>
154
- <p>Torrents are a peer-to-peer (P2P) file-sharing technology that allows users to download and share files with each other without relying on a central server. Instead, users connect to each other directly and form a network of peers. Each peer has a copy of the torrent file and a part of the content file. When you download a torrent, you are downloading small pieces of the content file from different peers. When you upload a torrent, you are sharing the pieces of the content file that you have with other peers.</p>
155
- <p>Torrents work by using a BitTorrent protocol, which is a set of rules and commands that enable the communication and coordination between peers. The BitTorrent protocol uses trackers, which are servers that help peers find each other and exchange information. The BitTorrent protocol also uses seeds and leechers, which are terms that describe the status of peers in the network. A seed is a peer that has the complete content file and is uploading it to other peers. A leecher is a peer that does not have the complete content file and is downloading it from other peers.</p>
156
- <h3>How to Use a BitTorrent Client to Download Movies</h3>
157
- <p>To use torrents to download movies, you need to use a BitTorrent client, which is a software program that allows you to open, download, and upload torrent files. There are many BitTorrent clients available for different devices and platforms, such as uTorrent, BitTorrent, qBittorrent, Transmission, Vuze, Deluge, and more.</p>
158
- <h4>Steps to Download Cars Movie from a Torrent Site</h4>
159
- <p>To download Cars movie from a torrent site, you need to follow these steps:</p>
160
- <ol>
161
- <li>Choose a BitTorrent client that suits your device and preferences. You can compare the features, performance, security, and reviews of different BitTorrent clients online. You can also check if the BitTorrent client is compatible with your device and operating system.</li>
162
- <li>Download and install the BitTorrent client on your device. You can download it from the official website of the BitTorrent client or from a trusted source. You can also customize the settings of the BitTorrent client according to your needs.</li>
163
- <li>Choose a torrent site that has Cars movie available for download. You can search for torrent sites online or use a website like torrentz2.eu or torrentfunk.com to find torrent sites that have Cars movie. You can also check the reputation, popularity, and safety of torrent sites online.</li>
164
- <li>Find Cars movie on the torrent site. You can use the search function or browse by categories or genres. You can also check the details of the torrent file, such as the name, size, type, quality, seeds, leechers, comments, and ratings.</li>
165
- <li>Download the torrent file or copy the magnet link of Cars movie. The torrent file is a small file that contains information about the content file. The magnet link is a URL that contains information about the content file and allows you to download it without using a torrent file.</li>
166
- <li>Open the torrent file or paste the magnet link in your BitTorrent client. The BitTorrent client will start downloading Cars movie from different peers in the network. You can check the progress of the download by looking at the speed, time remaining, percentage completed, and amount downloaded.</li>
167
- <li>Wait for the movie to download to your device. You can choose where to save the movie on your device or let the BitTorrent client choose for you. You can also pause or resume the download at any time.</li>
168
- <li>Enjoy watching Cars movie offline. You can find the downloaded movie in the folder you chose or in the default folder of your BitTorrent client. You can watch it as many times as you want without using data or Wi-Fi.</li>
169
- </ol>
170
- <h4>Pros and Cons of Torrents</h4>
171
- <p>Torrents are a convenient and fast way to download movies, but they also have some pros and cons that you should consider:</p>
172
- <table>
173
- <tr>
174
- <th>Pros</th>
175
- <th>Cons</th>
176
- </tr>
177
- <tr>
178
- <td>- Torrents allow you to download movies for free without paying any subscription fee or registration fee.</td>
179
- <td>- Torrents are illegal in many countries and regions due to copyright infringement and piracy laws.</td>
180
- </tr>
181
- <tr>
182
- <td>- Torrents offer a wide range of movies and shows in different genres, languages, and regions that may not be available on streaming services.</td>
183
- <td>- Torrents expose your device to viruses, malware, spyware, ransomware, or other harmful programs that can damage your data, steal your information, or lock your device until you pay a ransom.</td>
184
- </tr>
185
- <tr>
186
- <td>- Torrents provide high-quality video and audio, as well as subtitles and dubbing options for different languages.</td>
187
- <td>- Torrents compromise your online security and privacy by revealing your IP address, location, browsing history, or personal information to hackers, trackers, or advertisers.</td>
188
- </tr>
189
- <tr>
190
- <td>- Torrents are compatible with most devices and platforms, such as computers, smartphones, tablets, smart TVs, gaming consoles, or streaming devices.</td>
191
- <td>- Torrents depend on the availability and generosity of peers in the network. If there are not enough seeds or too many leechers, the download speed and quality may be low or the download may fail.</td>
192
- </tr>
193
- </table>
194
- <h3>How to Protect Yourself from Viruses and Malware When Using Torrents</h3>
195
- <p>As we mentioned before, torrents can be risky for your device and your online safety. However, there are some ways to protect yourself from viruses and malware when using torrents. Here are some tips:</p>
196
- <h4>Use a VPN Service</h4>
197
- <p>A VPN service is a virtual private network that encrypts your internet traffic and hides your IP address and location from anyone who tries to monitor or track you online. A VPN service can help you avoid geo-restrictions, censorship, surveillance, and legal action when using torrents. It can also prevent hackers, trackers, or advertisers from accessing your data or information.</p>
198
- <p>To use a VPN service, you need to sign up for a VPN account and download and install the VPN app on your device. You can choose from many VPN services available online, such as NordVPN, ExpressVPN, Surfshark, CyberGhost, or IPVanish. You can also compare the features, performance, security, and reviews of different VPN services online.</p>
199
- <p>Once you have the VPN app on your device, you need to connect to a VPN server of your choice. The VPN server will assign you a new IP address and location that will mask your real ones. You can then use the torrent site and the BitTorrent client as usual. The VPN service will encrypt your internet traffic and protect you from viruses and malware.</p>
200
- <h4>Scan the Downloaded File with an Antivirus Program</h4>
201
- <p>An antivirus program is a software program that detects and removes viruses and malware from your device. An antivirus program can help you prevent or fix any damage caused by viruses and malware when using torrents. It can also alert you of any suspicious or malicious files or programs on your device.</p>
202
- <p>To use an antivirus program, you need to download and install the antivirus program on your device. You can choose from many antivirus programs available online, such as Avast, AVG, Kaspersky, McAfee, or Norton. You can also compare the features, performance, security, and reviews of different antivirus programs online.</p>
203
- <p>Once you have the antivirus program on your device, you need to scan the downloaded file with the antivirus program before opening it. The antivirus program will scan the file and detect any viruses or malware that may be hidden in it. If the file is clean, you can open it and watch Cars movie. If the file is infected, you can delete it and look for another torrent.</p>
204
- <h2>How to Download Cars Movie from Free Movie Sites</h2>
205
- <p>A third way to download Cars movie is to use a free movie site. A free movie site is a website that allows you to watch movies online or offline without paying any fee or registration. You can use a free movie site to find and download movies in different genres, languages, and regions.</p>
206
- <h3>What are Free Movie Sites and How They Work</h3>
207
- <p>Free movie sites are websites that host or link to movies that are uploaded by users or third parties. Free movie sites do not have the legal rights or licenses to distribute the movies they offer. They rely on advertising revenue or donations to maintain their servers and domains.</p>
208
- <p>Free movie sites work by using streaming or downloading technology. Streaming technology allows you to watch movies online without downloading them to your device. You can watch movies in real time as they are transmitted from the server to your device. Downloading technology allows you to download movies to your device and watch them offline without using data or Wi-Fi. You can download movies as whole files or as small pieces that are joined together.</p>
209
- <h3>How to Find and Use a Free Movie Site to Download Movies</h3>
210
- <p>To use a free movie site to download movies, you need to follow these steps:</p>
211
- <ol>
212
- <li>Choose a free movie site that has Cars movie available for download. You can search for free movie sites online or use a website like alluc.co or yidio.com to find free movie sites that have Cars movie. You can also check the reputation, popularity, and safety of free movie sites online.</li>
213
- <li>Find Cars movie on the free movie site. You can use the search function or browse by categories or genres. You can also check the details of the movie, such as the name, size, type, quality, source, and ratings.</li>
214
- <li>Download Cars movie from the free movie site. Depending on the free movie site, you may have different options to download Cars movie. Some of the options are:</li>
215
- <ul>
216
- <li>Click on the download button or link that leads you to the movie file. The download button or link may look like a downward arrow, a disk icon, or a text that says "download".</li>
217
- <li>Right-click on the video player and select "save video as" or "download video". The video player may look like a rectangle with a play button in the center.</li>
218
- <li>Copy the video URL from the address bar or the video player and paste it in a video downloader website or software. The video URL may look like a long string of letters and numbers that starts with "http" or "https".</li>
219
- </ul>
220
- <li>Wait for the movie to download to your device. You can check the progress of the download by looking at the speed, time remaining, percentage completed, and amount downloaded.</li>
221
- <li>Enjoy watching Cars movie offline. You can find the downloaded movie in the folder you chose or in the default folder of your browser or downloader. You can watch it as many times as you want without using data or Wi-Fi.</li>
222
- </ol>
223
- <h4>Pros and Cons of Free Movie Sites</h4>
224
- <p>Free movie sites are an easy and cheap way to download movies, but they also have some pros and cons that you should consider:</p>
225
- <table>
226
- <tr>
227
- <th>Pros</th>
228
- <th>Cons</th>
229
- </tr>
230
- <tr>
231
- <td>- Free movie sites allow you to download movies for free without paying any subscription fee or registration fee.</td>
232
- <td>- Free movie sites are illegal in many countries and regions due to copyright infringement and piracy laws.</td>
233
- </tr>
234
- <tr>
235
- <td>- Free movie sites offer a wide range of movies and shows in different genres, languages, and regions that may not be available on streaming services.</td>
236
- <td>- Free movie sites expose your device to viruses, malware, spyware, ransomware, or other harmful programs that can damage your data, steal your information, or lock your device until you pay a ransom.</td>
237
- </tr>
238
- <tr>
239
- <td>- Free movie sites provide high-quality video and audio, as well as subtitles and dubbing options for different languages.</td>
240
- <td>- Free movie sites compromise your online security and privacy by revealing your IP address, location, browsing history, or personal information to hackers, trackers, or advertisers.</td>
241
- </tr>
242
- <tr>
243
- <td>- Free movie sites are compatible with most devices and platforms, such as computers, smartphones, tablets, smart TVs, gaming consoles, or streaming devices.</td>
244
- <td>- Free movie sites depend on the availability and reliability of the servers and links that host or link to the movies. If the server or link is down, broken, or removed, the download may fail or the movie may not play.</td>
245
- </tr>
246
- </table>
247
- <h3>How to Avoid Ads and Pop-ups When Using Free Movie Sites</h3>
248
- <p>As we mentioned before, free movie sites rely on advertising revenue to maintain their servers and domains. However, the ads and pop-ups that appear on free movie sites can be annoying, intrusive, or even dangerous for your device and your online safety. However, there are some ways to avoid ads and pop-ups when using free movie sites. Here are some tips:</p>
249
- <h4>Use an Ad Blocker Extension</h4>
250
- <p>An ad blocker extension is a browser extension that blocks or removes ads and pop-ups from websites. An ad blocker extension can help you improve your browsing experience, save your bandwidth and battery life, and protect you from malicious ads and pop-ups.</p>
251
- <p>To use an ad blocker extension, you need to download and install the ad blocker extension on your browser. You can choose from many ad blocker extensions available online, such as Adblock Plus, uBlock Origin, AdGuard, or Ghostery. You can also compare the features, performance, security, and reviews of different ad blocker extensions online.</p>
252
- <p>Once you have the ad blocker extension on your browser, you need to enable it and customize its settings according to your preferences. You can also whitelist some websites that you want to support or that do not have annoying or harmful ads and pop-ups.</p>
253
- <h4>Use a Pop-up Blocker Extension</h4>
254
- <p>A pop-up blocker extension is a browser extension that blocks or removes pop-ups from websites. A pop-up is a new window that opens automatically when you visit a website or click on a link. A pop-up blocker extension can help you avoid unwanted or malicious pop-ups that may redirect you to other websites, download unwanted files or programs, or display inappropriate or misleading content.</p>
255
- <p>To use a pop-up blocker extension, you need to download and install the pop-up blocker extension on your browser. You can choose from many pop-up blocker extensions available online, such as Popper Blocker, Poper Blocker, Popup Blocker Pro, or Smart Popup Blocker. You can also compare the features, performance, security, and reviews of different pop-up blocker extensions online.</p>
256
- <p>Once you have the pop-up blocker extension on your browser, you need to enable it and customize its settings according to your preferences. You can also whitelist some websites that you want to allow pop-ups from or that do not have unwanted or malicious pop-ups.</p>
257
- <h2>Conclusion</h2>
258
- <h3>Summary of the Main Points</h3>
259
- <p>In this article, we have shown you how to download Cars movie legally and safely from different sources, such as streaming services, torrent sites, and free movie sites. We have also given you some tips on how to protect yourself from viruses and malware when using torrents and how to avoid ads and pop-ups when using free movie sites.</p>
260
- <p>Cars is a Pixar film that was released in 2006 and became one of the most successful animated movies of all time. It tells the story of a hotshot race car named Lightning McQueen who gets stranded in a small town called Radiator Springs and learns the true meaning of friendship and family. If you are a fan of Cars or want to watch it for the first time, you might be wondering how to download it to your computer or mobile device.</p>
261
- <h3>Recommendations for the Best Way to Download Cars Movie</h3>
262
- <p>Based on our analysis, we recommend you to use streaming services as the best way to download Cars movie legally and safely. Streaming services offer many benefits for movie lovers, such as high-quality video and audio, offline viewing, multiple device compatibility, large and diverse library, and exclusive content. Streaming services also have fewer drawbacks than torrent sites or free movie sites, such as subscription fee, internet speed, content availability, and geo-restrictions.</p>
263
- <p>Among the streaming services that offer Cars movie, we suggest you to use Netflix or Amazon Prime Video. Both of them have similar features and advantages, such as HD or 4K resolution, subtitles and dubbing options, multiple profiles and screens, and original and exclusive content. However, Netflix has a larger and more diverse library than Amazon Prime Video, while Amazon Prime Video has a cheaper and more comprehensive membership than Netflix.</p>
264
- <p>Therefore, you can choose the streaming service that suits your preferences and budget. You can also try both of them for free for a limited time and compare their performance and quality. You can follow the steps we provided in this article to download Cars movie from Netflix or Amazon Prime Video.</p>
265
- <h2>FAQs</h2>
266
- <p>Here are some frequently asked questions about downloading Cars movie:</p>
267
- <ol>
268
- <li>Is downloading Cars movie illegal?</li>
269
- <p>Downloading Cars movie is not illegal if you use a legal and safe method, such as streaming services. However, downloading Cars movie is illegal if you use an illegal and unsafe method, such as torrent sites or free movie sites. You can face legal action from the copyright owner or the authorities if you download Cars movie illegally.</p>
270
- <li>Is downloading Cars movie safe?</li>
271
- <p>Downloading Cars movie is safe if you use a legal and safe method, such as streaming services. However, downloading Cars movie is not safe if you use an illegal and unsafe method, such as torrent sites or free movie sites. You can expose your device to viruses, malware, spyware, ransomware, or other harmful programs if you download Cars movie from an unsafe source.</p>
272
- <li>How long does it take to download Cars movie?</li>
273
- <p>The time it takes to download Cars movie depends on several factors, such as the size of the file, the speed of your internet connection, the number of seeds or peers in the network, and the method you use to download it. Generally, streaming services have faster download speeds than torrent sites or free movie sites. However, streaming services also have larger file sizes than torrent sites or free movie sites. Therefore, you can expect to download Cars movie in a few minutes to a few hours depending on your situation.</p>
274
- <li>How much space does Cars movie take on my device?</li>
275
- <p>The space that Cars movie takes on your device depends on the quality of the video and audio, the length of the movie, and the format of the file. Generally, streaming services have higher quality video and audio than torrent sites or free movie sites. However, streaming services also have larger file sizes than torrent sites or free movie sites. Therefore, you can expect Cars movie to take from a few hundred megabytes to a few gigabytes of space on your device depending on your choice.</p>
276
- <li>Can I watch Cars movie on any device?</li>
277
- <p>You can watch Cars movie on any device that supports the method you use to download it. For example, if you use a streaming service, you can watch Cars movie on any device that has the streaming app installed or can access the streaming website. If you use a torrent site or a free movie site, you can watch Cars movie on any device that has a video player that can open the file format of the movie.</p>
278
- </ol>
279
- <p>I hope this article has helped you learn how to download Cars movie legally and safely. If you have any questions or comments, please feel free to leave them below. Thank you for reading and happy watching!</p> 401be4b1e0<br />
280
- <br />
281
- <br />

spaces/2ndelement/voicevox/test/test_acoustic_feature_extractor.py DELETED
@@ -1,266 +0,0 @@
1
- import os
2
- from pathlib import Path
3
- from typing import List, Type
4
- from unittest import TestCase
5
-
6
- from voicevox_engine.acoustic_feature_extractor import (
7
- BasePhoneme,
8
- JvsPhoneme,
9
- OjtPhoneme,
10
- )
11
-
12
-
13
- class TestBasePhoneme(TestCase):
14
- def setUp(self):
15
- super().setUp()
16
- self.str_hello_hiho = "sil k o N n i ch i w a pau h i h o d e s U sil"
17
- self.base_hello_hiho = [
18
- BasePhoneme(s, i, i + 1) for i, s in enumerate(self.str_hello_hiho.split())
19
- ]
20
- self.lab_str = """
21
- 0.00 1.00 pau
22
- 1.00 2.00 k
23
- 2.00 3.00 o
24
- 3.00 4.00 N
25
- 4.00 5.00 n
26
- 5.00 6.00 i
27
- 6.00 7.00 ch
28
- 7.00 8.00 i
29
- 8.00 9.00 w
30
- 9.00 10.00 a
31
- 10.00 11.00 pau
32
- 11.00 12.00 h
33
- 12.00 13.00 i
34
- 13.00 14.00 h
35
- 14.00 15.00 o
36
- 15.00 16.00 d
37
- 16.00 17.00 e
38
- 17.00 18.00 s
39
- 18.00 19.00 U
40
- 19.00 20.00 pau
41
- """.replace(
42
- " ", ""
43
- )[
44
- 1:-1
45
- ] # In the part wrapped in the triple double-quotes, replace every space and strip the leading and trailing "\n"
46
-
47
- def test_repr_(self):
48
- self.assertEqual(
49
- self.base_hello_hiho[1].__repr__(), "Phoneme(phoneme='k', start=1, end=2)"
50
- )
51
- self.assertEqual(
52
- self.base_hello_hiho[10].__repr__(),
53
- "Phoneme(phoneme='pau', start=10, end=11)",
54
- )
55
-
56
- def test_convert(self):
57
- with self.assertRaises(NotImplementedError):
58
- BasePhoneme.convert(self.base_hello_hiho)
59
-
60
- def test_duration(self):
61
- self.assertEqual(self.base_hello_hiho[1].duration, 1)
62
-
63
- def test_parse(self):
64
- parse_str_1 = "0 1 pau"
65
- parse_str_2 = "32.67543 33.48933 e"
66
- parsed_base_1 = BasePhoneme.parse(parse_str_1)
67
- parsed_base_2 = BasePhoneme.parse(parse_str_2)
68
- self.assertEqual(parsed_base_1.phoneme, "pau")
69
- self.assertEqual(parsed_base_1.start, 0.0)
70
- self.assertEqual(parsed_base_1.end, 1.0)
71
- self.assertEqual(parsed_base_2.phoneme, "e")
72
- self.assertEqual(parsed_base_2.start, 32.68)
73
- self.assertEqual(parsed_base_2.end, 33.49)
74
-
75
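- # Shared helper: save the phoneme list to a .lab file, check the written text against lab_str, reload it, and delete the temporary file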
- def lab_test_base(
76
- self,
77
- file_path: str,
78
- phonemes: List["BasePhoneme"],
79
- phoneme_class: Type["BasePhoneme"],
80
- ):
81
- phoneme_class.save_lab_list(phonemes, Path(file_path))
82
- with open(file_path, mode="r") as f:
83
- self.assertEqual(f.read(), self.lab_str)
84
- result_phoneme = phoneme_class.load_lab_list(Path(file_path))
85
- self.assertEqual(result_phoneme, phonemes)
86
- os.remove(file_path)
87
-
88
-
89
- class TestJvsPhoneme(TestBasePhoneme):
90
- def setUp(self):
91
- super().setUp()
92
- base_hello_hiho = [
93
- JvsPhoneme(s, i, i + 1) for i, s in enumerate(self.str_hello_hiho.split())
94
- ]
95
- self.jvs_hello_hiho = JvsPhoneme.convert(base_hello_hiho)
96
-
97
- def test_phoneme_list(self):
98
- self.assertEqual(JvsPhoneme.phoneme_list[1], "I")
99
- self.assertEqual(JvsPhoneme.phoneme_list[14], "gy")
100
- self.assertEqual(JvsPhoneme.phoneme_list[26], "p")
101
- self.assertEqual(JvsPhoneme.phoneme_list[38], "z")
102
-
103
- def test_const(self):
104
- self.assertEqual(JvsPhoneme.num_phoneme, 39)
105
- self.assertEqual(JvsPhoneme.space_phoneme, "pau")
106
-
107
- def test_convert(self):
108
- converted_str_hello_hiho = " ".join([p.phoneme for p in self.jvs_hello_hiho])
109
- self.assertEqual(
110
- converted_str_hello_hiho, "pau k o N n i ch i w a pau h i h o d e s U pau"
111
- )
112
-
113
- def test_equal(self):
114
- # Compare with the second element of jvs_hello_hiho, "k"
115
- true_jvs_phoneme = JvsPhoneme("k", 1, 2)
116
- # Also compare against an OjtPhoneme; equality is implemented in BasePhoneme, so the comparison is True
117
- true_ojt_phoneme = OjtPhoneme("k", 1, 2)
118
-
119
- false_jvs_phoneme_1 = JvsPhoneme("a", 1, 2)
120
- false_jvs_phoneme_2 = JvsPhoneme("k", 2, 3)
121
- self.assertTrue(self.jvs_hello_hiho[1] == true_jvs_phoneme)
122
- self.assertTrue(self.jvs_hello_hiho[1] == true_ojt_phoneme)
123
- self.assertFalse(self.jvs_hello_hiho[1] == false_jvs_phoneme_1)
124
- self.assertFalse(self.jvs_hello_hiho[1] == false_jvs_phoneme_2)
125
-
126
- def test_verify(self):
127
- for phoneme in self.jvs_hello_hiho:
128
- phoneme.verify()
129
-
130
- def test_phoneme_id(self):
131
- jvs_str_hello_hiho = " ".join([str(p.phoneme_id) for p in self.jvs_hello_hiho])
132
- self.assertEqual(
133
- jvs_str_hello_hiho, "0 19 25 2 23 17 7 17 36 4 0 15 17 15 25 9 11 30 3 0"
134
- )
135
-
136
- def test_onehot(self):
137
- phoneme_id_list = [
138
- 0,
139
- 19,
140
- 25,
141
- 2,
142
- 23,
143
- 17,
144
- 7,
145
- 17,
146
- 36,
147
- 4,
148
- 0,
149
- 15,
150
- 17,
151
- 15,
152
- 25,
153
- 9,
154
- 11,
155
- 30,
156
- 3,
157
- 0,
158
- ]
159
- for i, phoneme in enumerate(self.jvs_hello_hiho):
160
- for j in range(JvsPhoneme.num_phoneme):
161
- if phoneme_id_list[i] == j:
162
- self.assertEqual(phoneme.onehot[j], True)
163
- else:
164
- self.assertEqual(phoneme.onehot[j], False)
165
-
166
- def test_parse(self):
167
- parse_str_1 = "0 1 pau"
168
- parse_str_2 = "15.32654 16.39454 a"
169
- parsed_jvs_1 = JvsPhoneme.parse(parse_str_1)
170
- parsed_jvs_2 = JvsPhoneme.parse(parse_str_2)
171
- self.assertEqual(parsed_jvs_1.phoneme_id, 0)
172
- self.assertEqual(parsed_jvs_2.phoneme_id, 4)
173
-
174
- def test_lab_list(self):
175
- self.lab_test_base("./jvs_lab_test", self.jvs_hello_hiho, JvsPhoneme)
176
-
177
-
178
- class TestOjtPhoneme(TestBasePhoneme):
179
- def setUp(self):
180
- super().setUp()
181
- self.str_hello_hiho = "sil k o N n i ch i w a pau h i h o d e s U sil"
182
- base_hello_hiho = [
183
- OjtPhoneme(s, i, i + 1) for i, s in enumerate(self.str_hello_hiho.split())
184
- ]
185
- self.ojt_hello_hiho = OjtPhoneme.convert(base_hello_hiho)
186
-
187
- def test_phoneme_list(self):
188
- self.assertEqual(OjtPhoneme.phoneme_list[1], "A")
189
- self.assertEqual(OjtPhoneme.phoneme_list[14], "e")
190
- self.assertEqual(OjtPhoneme.phoneme_list[26], "m")
191
- self.assertEqual(OjtPhoneme.phoneme_list[38], "ts")
192
- self.assertEqual(OjtPhoneme.phoneme_list[41], "v")
193
-
194
- def test_const(self):
195
- self.assertEqual(OjtPhoneme.num_phoneme, 45)
196
- self.assertEqual(OjtPhoneme.space_phoneme, "pau")
197
-
198
- def test_convert(self):
199
- ojt_str_hello_hiho = " ".join([p.phoneme for p in self.ojt_hello_hiho])
200
- self.assertEqual(
201
- ojt_str_hello_hiho, "pau k o N n i ch i w a pau h i h o d e s U pau"
202
- )
203
-
204
- def test_equal(self):
205
- # Compare with the tenth element of ojt_hello_hiho, "a"
206
- true_ojt_phoneme = OjtPhoneme("a", 9, 10)
207
- # Also compare against a JvsPhoneme; equality is implemented in BasePhoneme, so the comparison is True
208
- true_jvs_phoneme = JvsPhoneme("a", 9, 10)
209
-
210
- false_ojt_phoneme_1 = OjtPhoneme("k", 9, 10)
211
- false_ojt_phoneme_2 = OjtPhoneme("a", 10, 11)
212
- self.assertTrue(self.ojt_hello_hiho[9] == true_ojt_phoneme)
213
- self.assertTrue(self.ojt_hello_hiho[9] == true_jvs_phoneme)
214
- self.assertFalse(self.ojt_hello_hiho[9] == false_ojt_phoneme_1)
215
- self.assertFalse(self.ojt_hello_hiho[9] == false_ojt_phoneme_2)
216
-
217
- def test_verify(self):
218
- for phoneme in self.ojt_hello_hiho:
219
- phoneme.verify()
220
-
221
- def test_phoneme_id(self):
222
- ojt_str_hello_hiho = " ".join([str(p.phoneme_id) for p in self.ojt_hello_hiho])
223
- self.assertEqual(
224
- ojt_str_hello_hiho, "0 23 30 4 28 21 10 21 42 7 0 19 21 19 30 12 14 35 6 0"
225
- )
226
-
227
- def test_onehot(self):
228
- phoneme_id_list = [
229
- 0,
230
- 23,
231
- 30,
232
- 4,
233
- 28,
234
- 21,
235
- 10,
236
- 21,
237
- 42,
238
- 7,
239
- 0,
240
- 19,
241
- 21,
242
- 19,
243
- 30,
244
- 12,
245
- 14,
246
- 35,
247
- 6,
248
- 0,
249
- ]
250
- for i, phoneme in enumerate(self.ojt_hello_hiho):
251
- for j in range(OjtPhoneme.num_phoneme):
252
- if phoneme_id_list[i] == j:
253
- self.assertEqual(phoneme.onehot[j], True)
254
- else:
255
- self.assertEqual(phoneme.onehot[j], False)
256
-
257
- def test_parse(self):
258
- parse_str_1 = "0 1 pau"
259
- parse_str_2 = "32.67543 33.48933 e"
260
- parsed_ojt_1 = OjtPhoneme.parse(parse_str_1)
261
- parsed_ojt_2 = OjtPhoneme.parse(parse_str_2)
262
- self.assertEqual(parsed_ojt_1.phoneme_id, 0)
263
- self.assertEqual(parsed_ojt_2.phoneme_id, 14)
264
-
265
- def test_lab_list(self):
266
- self.lab_test_base("./ojt_lab_test", self.ojt_hello_hiho, OjtPhoneme)

spaces/801artistry/RVC801/go-applio.bat DELETED
@@ -1,92 +0,0 @@
1
- @echo off
2
- setlocal
3
- title Start Applio
4
-
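- :: Launcher menu: prints the ASCII banner from the ::: lines below, then runs the chosen mode with the bundled runtime\python.exe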
5
- :::
6
- ::: _ _
7
- ::: /\ | (_)
8
- ::: / \ _ __ _ __ | |_ ___
9
- ::: / /\ \ | '_ \| '_ \| | |/ _ \
10
- ::: / ____ \| |_) | |_) | | | (_) |
11
- ::: /_/ \_\ .__/| .__/|_|_|\___/
12
- ::: | | | |
13
- ::: |_| |_|
14
- :::
15
- :::
16
-
17
- :menu
18
- for /f "delims=: tokens=*" %%A in ('findstr /b ":::" "%~f0"') do @echo(%%A
19
-
20
- echo [1] Start Applio
21
- echo [2] Start Applio (DML)
22
- echo [3] Start Realtime GUI (DML)
23
- echo [4] Start Realtime GUI (V0)
24
- echo [5] Start Realtime GUI (V1)
25
- echo.
26
-
27
- set /p choice=Select an option:
28
- set choice=%choice: =%
29
-
30
- cls
31
- echo WARNING: It's recommended to disable antivirus or firewall, as errors might occur when starting the ssl.
32
- pause
33
-
34
- if "%choice%"=="1" (
35
- cls
36
- echo WARNING: At this point, it's recommended to disable antivirus or firewall, as errors might occur when downloading pretrained models.
37
- pause>nul
38
- echo Starting Applio...
39
- echo.
40
- runtime\python.exe infer-web.py --pycmd runtime\python.exe --port 7897
41
- pause
42
- cls
43
- goto menu
44
- )
45
-
46
- if "%choice%"=="2" (
47
- cls
48
- echo Starting Applio ^(DML^)...
49
- echo.
50
- runtime\python.exe infer-web.py --pycmd runtime\python.exe --port 7897 --dml
51
- pause
52
- cls
53
- goto menu
54
- )
55
-
56
- if "%choice%"=="3" (
57
- cls
58
- echo Starting Realtime GUI ^(DML^)...
59
- echo.
60
- runtime\python.exe gui_v1.py --pycmd runtime\python.exe --dml
61
- pause
62
- cls
63
- goto menu
64
- )
65
-
66
- if "%choice%"=="4" (
67
- cls
68
- echo Starting Realtime GUI ^(V0^)...
69
- echo.
70
- runtime\python.exe gui_v0.py
71
- pause
72
- cls
73
- goto menu
74
- )
75
-
76
- if "%choice%"=="5" (
77
- cls
78
- echo Starting Realtime GUI ^(V1^)...
79
- echo.
80
- runtime\python.exe gui_v1.py
81
- pause
82
- cls
83
- goto menu
84
- )
85
-
86
- cls
87
- echo Invalid option. Please enter a number from 1 to 5.
88
- echo.
89
- echo Press 'Enter' to access the main menu...
90
- pause>nul
91
- cls
92
- goto menu

spaces/A666sxr/Genshin_TTS/modules.py DELETED
@@ -1,390 +0,0 @@
1
- import copy
2
- import math
3
- import numpy as np
4
- import scipy
5
- import torch
6
- from torch import nn
7
- from torch.nn import functional as F
8
-
9
- from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
10
- from torch.nn.utils import weight_norm, remove_weight_norm
11
-
12
- import commons
13
- from commons import init_weights, get_padding
14
- from transforms import piecewise_rational_quadratic_transform
15
-
16
-
17
- LRELU_SLOPE = 0.1
18
-
19
-
20
- class LayerNorm(nn.Module):
21
- def __init__(self, channels, eps=1e-5):
22
- super().__init__()
23
- self.channels = channels
24
- self.eps = eps
25
-
26
- self.gamma = nn.Parameter(torch.ones(channels))
27
- self.beta = nn.Parameter(torch.zeros(channels))
28
-
29
- def forward(self, x):
30
- x = x.transpose(1, -1)
31
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
32
- return x.transpose(1, -1)
33
-
34
-
35
- class ConvReluNorm(nn.Module):
36
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
37
- super().__init__()
38
- self.in_channels = in_channels
39
- self.hidden_channels = hidden_channels
40
- self.out_channels = out_channels
41
- self.kernel_size = kernel_size
42
- self.n_layers = n_layers
43
- self.p_dropout = p_dropout
44
- assert n_layers > 1, "Number of layers should be larger than 1."
45
-
46
- self.conv_layers = nn.ModuleList()
47
- self.norm_layers = nn.ModuleList()
48
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
49
- self.norm_layers.append(LayerNorm(hidden_channels))
50
- self.relu_drop = nn.Sequential(
51
- nn.ReLU(),
52
- nn.Dropout(p_dropout))
53
- for _ in range(n_layers-1):
54
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
55
- self.norm_layers.append(LayerNorm(hidden_channels))
56
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
57
- self.proj.weight.data.zero_()
58
- self.proj.bias.data.zero_()
59
-
60
- def forward(self, x, x_mask):
61
- x_org = x
62
- for i in range(self.n_layers):
63
- x = self.conv_layers[i](x * x_mask)
64
- x = self.norm_layers[i](x)
65
- x = self.relu_drop(x)
66
- x = x_org + self.proj(x)
67
- return x * x_mask
68
-
69
-
70
- class DDSConv(nn.Module):
71
- """
72
- Dilated and Depth-Separable Convolution
73
- """
74
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
75
- super().__init__()
76
- self.channels = channels
77
- self.kernel_size = kernel_size
78
- self.n_layers = n_layers
79
- self.p_dropout = p_dropout
80
-
81
- self.drop = nn.Dropout(p_dropout)
82
- self.convs_sep = nn.ModuleList()
83
- self.convs_1x1 = nn.ModuleList()
84
- self.norms_1 = nn.ModuleList()
85
- self.norms_2 = nn.ModuleList()
86
- for i in range(n_layers):
87
- dilation = kernel_size ** i
88
- padding = (kernel_size * dilation - dilation) // 2
89
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
90
- groups=channels, dilation=dilation, padding=padding
91
- ))
92
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
93
- self.norms_1.append(LayerNorm(channels))
94
- self.norms_2.append(LayerNorm(channels))
95
-
96
- def forward(self, x, x_mask, g=None):
97
- if g is not None:
98
- x = x + g
99
- for i in range(self.n_layers):
100
- y = self.convs_sep[i](x * x_mask)
101
- y = self.norms_1[i](y)
102
- y = F.gelu(y)
103
- y = self.convs_1x1[i](y)
104
- y = self.norms_2[i](y)
105
- y = F.gelu(y)
106
- y = self.drop(y)
107
- x = x + y
108
- return x * x_mask
109
-
110
-
111
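- # Non-causal WaveNet-style stack: dilated 1D convolutions with gated activations, skip connections, optional global conditioning via gin_channels, and weight normalization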
- class WN(torch.nn.Module):
112
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
113
- super(WN, self).__init__()
114
- assert(kernel_size % 2 == 1)
115
- self.hidden_channels = hidden_channels
116
- self.kernel_size = kernel_size
117
- self.dilation_rate = dilation_rate
118
- self.n_layers = n_layers
119
- self.gin_channels = gin_channels
120
- self.p_dropout = p_dropout
121
-
122
- self.in_layers = torch.nn.ModuleList()
123
- self.res_skip_layers = torch.nn.ModuleList()
124
- self.drop = nn.Dropout(p_dropout)
125
-
126
- if gin_channels != 0:
127
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
128
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
129
-
130
- for i in range(n_layers):
131
- dilation = dilation_rate ** i
132
- padding = int((kernel_size * dilation - dilation) / 2)
133
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
134
- dilation=dilation, padding=padding)
135
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
136
- self.in_layers.append(in_layer)
137
-
138
- # last one is not necessary
139
- if i < n_layers - 1:
140
- res_skip_channels = 2 * hidden_channels
141
- else:
142
- res_skip_channels = hidden_channels
143
-
144
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
145
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
146
- self.res_skip_layers.append(res_skip_layer)
147
-
148
- def forward(self, x, x_mask, g=None, **kwargs):
149
- output = torch.zeros_like(x)
150
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
151
-
152
- if g is not None:
153
- g = self.cond_layer(g)
154
-
155
- for i in range(self.n_layers):
156
- x_in = self.in_layers[i](x)
157
- if g is not None:
158
- cond_offset = i * 2 * self.hidden_channels
159
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
160
- else:
161
- g_l = torch.zeros_like(x_in)
162
-
163
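- # WaveNet gated activation: tanh and sigmoid halves of (x_in + g_l) are multiplied via the fused helper in commons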
- acts = commons.fused_add_tanh_sigmoid_multiply(
164
- x_in,
165
- g_l,
166
- n_channels_tensor)
167
- acts = self.drop(acts)
168
-
169
- res_skip_acts = self.res_skip_layers[i](acts)
170
- if i < self.n_layers - 1:
171
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
172
- x = (x + res_acts) * x_mask
173
- output = output + res_skip_acts[:,self.hidden_channels:,:]
174
- else:
175
- output = output + res_skip_acts
176
- return output * x_mask
177
-
178
- def remove_weight_norm(self):
179
- if self.gin_channels != 0:
180
- torch.nn.utils.remove_weight_norm(self.cond_layer)
181
- for l in self.in_layers:
182
- torch.nn.utils.remove_weight_norm(l)
183
- for l in self.res_skip_layers:
184
- torch.nn.utils.remove_weight_norm(l)
185
-
186
-
187
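- # Residual block: three pairs of (dilated conv, plain conv) with leaky ReLU and additive skip connections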
- class ResBlock1(torch.nn.Module):
188
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
189
- super(ResBlock1, self).__init__()
190
- self.convs1 = nn.ModuleList([
191
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
192
- padding=get_padding(kernel_size, dilation[0]))),
193
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
194
- padding=get_padding(kernel_size, dilation[1]))),
195
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
196
- padding=get_padding(kernel_size, dilation[2])))
197
- ])
198
- self.convs1.apply(init_weights)
199
-
200
- self.convs2 = nn.ModuleList([
201
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
202
- padding=get_padding(kernel_size, 1))),
203
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
204
- padding=get_padding(kernel_size, 1))),
205
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
206
- padding=get_padding(kernel_size, 1)))
207
- ])
208
- self.convs2.apply(init_weights)
209
-
210
- def forward(self, x, x_mask=None):
211
- for c1, c2 in zip(self.convs1, self.convs2):
212
- xt = F.leaky_relu(x, LRELU_SLOPE)
213
- if x_mask is not None:
214
- xt = xt * x_mask
215
- xt = c1(xt)
216
- xt = F.leaky_relu(xt, LRELU_SLOPE)
217
- if x_mask is not None:
218
- xt = xt * x_mask
219
- xt = c2(xt)
220
- x = xt + x
221
- if x_mask is not None:
222
- x = x * x_mask
223
- return x
224
-
225
- def remove_weight_norm(self):
226
- for l in self.convs1:
227
- remove_weight_norm(l)
228
- for l in self.convs2:
229
- remove_weight_norm(l)
230
-
231
-
232
- class ResBlock2(torch.nn.Module):
233
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
234
- super(ResBlock2, self).__init__()
235
- self.convs = nn.ModuleList([
236
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
237
- padding=get_padding(kernel_size, dilation[0]))),
238
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
239
- padding=get_padding(kernel_size, dilation[1])))
240
- ])
241
- self.convs.apply(init_weights)
242
-
243
- def forward(self, x, x_mask=None):
244
- for c in self.convs:
245
- xt = F.leaky_relu(x, LRELU_SLOPE)
246
- if x_mask is not None:
247
- xt = xt * x_mask
248
- xt = c(xt)
249
- x = xt + x
250
- if x_mask is not None:
251
- x = x * x_mask
252
- return x
253
-
254
- def remove_weight_norm(self):
255
- for l in self.convs:
256
- remove_weight_norm(l)
257
-
258
-
259
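- # Invertible elementwise log flow: forward returns log(x) (clamped at 1e-5) and its log-determinant; reverse applies exp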
- class Log(nn.Module):
260
- def forward(self, x, x_mask, reverse=False, **kwargs):
261
- if not reverse:
262
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
263
- logdet = torch.sum(-y, [1, 2])
264
- return y, logdet
265
- else:
266
- x = torch.exp(x) * x_mask
267
- return x
268
-
269
-
270
- class Flip(nn.Module):
271
- def forward(self, x, *args, reverse=False, **kwargs):
272
- x = torch.flip(x, [1])
273
- if not reverse:
274
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
275
- return x, logdet
276
- else:
277
- return x
278
-
279
-
280
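- # Learnable per-channel affine flow: y = m + exp(logs) * x, with log-determinant sum(logs) over the masked region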
- class ElementwiseAffine(nn.Module):
281
- def __init__(self, channels):
282
- super().__init__()
283
- self.channels = channels
284
- self.m = nn.Parameter(torch.zeros(channels,1))
285
- self.logs = nn.Parameter(torch.zeros(channels,1))
286
-
287
- def forward(self, x, x_mask, reverse=False, **kwargs):
288
- if not reverse:
289
- y = self.m + torch.exp(self.logs) * x
290
- y = y * x_mask
291
- logdet = torch.sum(self.logs * x_mask, [1,2])
292
- return y, logdet
293
- else:
294
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
295
- return x
296
-
297
-
298
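- # Affine coupling flow: the first half of the channels passes through unchanged and predicts a shift m (and log-scale logs unless mean_only) for the second half, keeping the transform invertible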
- class ResidualCouplingLayer(nn.Module):
299
- def __init__(self,
300
- channels,
301
- hidden_channels,
302
- kernel_size,
303
- dilation_rate,
304
- n_layers,
305
- p_dropout=0,
306
- gin_channels=0,
307
- mean_only=False):
308
- assert channels % 2 == 0, "channels should be divisible by 2"
309
- super().__init__()
310
- self.channels = channels
311
- self.hidden_channels = hidden_channels
312
- self.kernel_size = kernel_size
313
- self.dilation_rate = dilation_rate
314
- self.n_layers = n_layers
315
- self.half_channels = channels // 2
316
- self.mean_only = mean_only
317
-
318
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
319
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
320
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
321
- self.post.weight.data.zero_()
322
- self.post.bias.data.zero_()
323
-
324
- def forward(self, x, x_mask, g=None, reverse=False):
325
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
326
- h = self.pre(x0) * x_mask
327
- h = self.enc(h, x_mask, g=g)
328
- stats = self.post(h) * x_mask
329
- if not self.mean_only:
330
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
331
- else:
332
- m = stats
333
- logs = torch.zeros_like(m)
334
-
335
- if not reverse:
336
- x1 = m + x1 * torch.exp(logs) * x_mask
337
- x = torch.cat([x0, x1], 1)
338
- logdet = torch.sum(logs, [1,2])
339
- return x, logdet
340
- else:
341
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
342
- x = torch.cat([x0, x1], 1)
343
- return x
344
-
345
-
346
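- # Spline coupling flow: a DDSConv stack over x0 predicts rational-quadratic spline parameters (widths, heights, derivatives) that transform x1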
- class ConvFlow(nn.Module):
347
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
348
- super().__init__()
349
- self.in_channels = in_channels
350
- self.filter_channels = filter_channels
351
- self.kernel_size = kernel_size
352
- self.n_layers = n_layers
353
- self.num_bins = num_bins
354
- self.tail_bound = tail_bound
355
- self.half_channels = in_channels // 2
356
-
357
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
358
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
359
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
360
- self.proj.weight.data.zero_()
361
- self.proj.bias.data.zero_()
362
-
363
- def forward(self, x, x_mask, g=None, reverse=False):
364
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
365
- h = self.pre(x0)
366
- h = self.convs(h, x_mask, g=g)
367
- h = self.proj(h) * x_mask
368
-
369
- b, c, t = x0.shape
370
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
371
-
372
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
373
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
374
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
375
-
376
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
377
- unnormalized_widths,
378
- unnormalized_heights,
379
- unnormalized_derivatives,
380
- inverse=reverse,
381
- tails='linear',
382
- tail_bound=self.tail_bound
383
- )
384
-
385
- x = torch.cat([x0, x1], 1) * x_mask
386
- logdet = torch.sum(logabsdet * x_mask, [1,2])
387
- if not reverse:
388
- return x, logdet
389
- else:
390
- return x
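The flow blocks above (ElementwiseAffine, ResidualCouplingLayer, ConvFlow) all follow the same invertible-coupling pattern: one half of the channels conditions a transform applied to the other half, so the inverse is cheap to compute. The sketch below only illustrates that idea with a plain convolutional stand-in for the WN encoder used in this file; the class name, shapes, and hidden size are assumptions for illustration, not this repo's API.

```python
import torch
import torch.nn as nn

class ToyCoupling(nn.Module):
    """Mean-only affine coupling: x0 predicts a shift that is added to x1."""
    def __init__(self, channels, hidden=64):
        super().__init__()
        self.half = channels // 2
        # small conv stack standing in for the WN encoder used by ResidualCouplingLayer
        self.net = nn.Sequential(
            nn.Conv1d(self.half, hidden, 1),
            nn.ReLU(),
            nn.Conv1d(hidden, self.half, 1),
        )

    def forward(self, x, reverse=False):
        x0, x1 = torch.split(x, [self.half, self.half], dim=1)
        m = self.net(x0)
        x1 = x1 - m if reverse else x1 + m
        return torch.cat([x0, x1], dim=1)

flow = ToyCoupling(8)
x = torch.randn(2, 8, 16)          # (batch, channels, frames)
y = flow(x)                        # forward direction
x_rec = flow(y, reverse=True)      # inverse recovers the input exactly
print(torch.allclose(x, x_rec))    # True
```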
spaces/AI-Chatbot-Master/Chatbots/README.md DELETED
@@ -1,10 +0,0 @@
1
- ---
2
- title: Chatbots
3
- emoji: 📚
4
- colorFrom: yellow
5
- colorTo: red
6
- sdk: docker
7
- pinned: false
8
- ---
9
-
10
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AI-ZTH-03-23/2.Streamlit.GraphViz.Dynamic.Architecture.Diagram/app.py DELETED
@@ -1,146 +0,0 @@
1
- import streamlit as st
2
- from graphviz import Digraph
3
-
4
-
5
- st.markdown("""
6
- Prompt:
7
- Create an interactive streamlit graph builder using the graphviz diagram model language and the streamlit feature: st.graphviz_chart(figure_or_dot, use_container_width=False) to show an azure cloud architecture model including the top ten architecture components for python full stack development for web, api, ml, models, datasets torch, transformers, streamlit, azure docker and kubernetes pods for scaling
8
-
9
- """)
10
-
11
- # Dot demo:
12
- import streamlit as st
13
-
14
- # Define the default graphviz DOT string
15
- default_dot = """
16
- digraph G {
17
- rankdir=LR
18
- node [shape=box]
19
- WebApp -> API
20
- API -> Models
21
- API -> Datasets
22
- Models -> Torch
23
- Models -> Transformers
24
- WebApp -> Streamlit
25
- Streamlit -> Azure
26
- Azure -> Docker
27
- Azure -> Kubernetes
28
- }
29
- """
30
-
31
- # Define the list of top 10 components
32
- components = [
33
- "WebApp",
34
- "API",
35
- "Models",
36
- "Datasets",
37
- "Torch",
38
- "Transformers",
39
- "Streamlit",
40
- "Azure",
41
- "Docker",
42
- "Kubernetes",
43
- ]
44
-
45
- # Define a dictionary to map component names to DOT node IDs
46
- node_ids = {
47
- component: component.lower()
48
- for component in components
49
- }
50
-
51
- def build_dot_string(selected_components):
52
- """Builds a DOT string representing the selected components"""
53
- selected_nodes = [node_ids[component] for component in selected_components]
54
- dot = """
55
- digraph G {
56
- rankdir=LR
57
- node [shape=box]
58
- """
59
- for node in selected_nodes:
60
- dot += f"{node} [color=blue]\n"
61
- for i in range(len(selected_nodes)):
62
- for j in range(i+1, len(selected_nodes)):
63
- dot += f"{selected_nodes[i]} -> {selected_nodes[j]}\n"
64
- dot += "}"
65
- return dot
66
-
67
- def main():
68
- st.title("Azure Cloud Architecture Builder")
69
-
70
- # Select the components
71
- st.sidebar.title("Select components")
72
- selected_components = st.sidebar.multiselect(
73
- "Select the top 10 components",
74
- components,
75
- default=components[:3]
76
- )
77
-
78
- # Build the DOT string
79
- dot = build_dot_string(selected_components)
80
-
81
- # Render the graphviz chart
82
- st.graphviz_chart(dot, use_container_width=True)
83
-
84
- if __name__ == "__main__":
85
- main()
86
-
87
-
88
-
89
- # Initialize the graph
90
- graph = Digraph(comment='Architectural Model')
91
-
92
- # Add nodes to the graph
93
- graph.node('data_layer', 'Data Layer')
94
- graph.node('acr', 'Azure Container Registry')
95
- graph.node('aks', 'Azure Kubernetes\n& Docker Container Pod\nwith Scalability')
96
- graph.node('snowflake', 'Snowflake Instance')
97
- graph.node('cosmos', 'Azure Cosmos\nDatabase')
98
- graph.node('api', 'API Standard\n(using Uvicorn)')
99
- graph.node('soar', 'SOAR Component\n(on Linux Python\nSlimbuster Docker)')
100
-
101
- # Add edges to the graph
102
- graph.edge('data_layer', 'acr')
103
- graph.edge('acr', 'aks')
104
- graph.edge('aks', 'snowflake')
105
- graph.edge('aks', 'cosmos')
106
- graph.edge('aks', 'api')
107
- graph.edge('aks', 'soar')
108
-
109
- # Define the Streamlit app
110
- def app():
111
- st.title('Architectural Model')
112
-
113
- # Draw the graph
114
- st.graphviz_chart(graph.source)
115
-
116
- # Add buttons to customize the graph
117
- if st.button('Hide Data Layer'):
118
- graph.node('data_layer', style='invisible')
119
-
120
- if st.button('Hide Snowflake Instance'):
121
- graph.node('snowflake', style='invisible')
122
-
123
- if st.button('Hide SOAR Component'):
124
- graph.node('soar', style='invisible')
125
-
126
-
127
-
128
- st.markdown("""
129
- # QA Model Spaces:
130
- QA use cases include QA, Semantic Document and FAQ Search.
131
- 1. Streamlit Question Answering w Hugging Face: https://huggingface.co/spaces/awacke1/Question-answering
132
- 2. Seq2Seq:
133
- - https://huggingface.co/spaces/awacke1/4-Seq2SeqQAT5
134
- - https://huggingface.co/spaces/awacke1/AW-04-GR-Seq-2-Seq-QA-Auto-Gen
135
- 3. BioGPT: https://huggingface.co/spaces/awacke1/microsoft-BioGPT-Large-PubMedQA
136
- 4. NLP QA Context: https://huggingface.co/spaces/awacke1/NLPContextQATransformersRobertaBaseSquad2
137
- - https://huggingface.co/spaces/awacke1/SOTA-Plan
138
- 5. https://huggingface.co/spaces/awacke1/Question-answering
139
- 6. QA MLM: https://huggingface.co/spaces/awacke1/SOTA-MedEntity
140
- """)
141
-
142
-
143
-
144
- # Run the Streamlit app
145
- if __name__ == '__main__':
146
- app()
spaces/AIConsultant/MusicGen/scripts/templates/results.html DELETED
@@ -1,17 +0,0 @@
1
- {% extends "base.html" %}
2
- {% block content %}
3
-
4
- <h1>Results for survey #{{signature}}</h1>
5
- <p>Checkout <a href="{{url_for('survey', signature=signature)}}">the survey page</a> for details on the models.</p>
6
- <p>The following users voted:
7
- {% for user in users %}
8
- <span class="special">{{user}}</span>
9
- {% endfor %}
10
-
11
- {% for model in models %}
12
- <h3>{{model['sig']}} ({{model['samples']}} samples)</h3>
13
- <p>Ratings: {{model['mean_rating']}} ± {{model['std_rating']}}</p>
14
-
15
- {% endfor %}
16
-
17
- {% endblock %}
spaces/AIFILMS/generate_human_motion/pyrender/pyrender/platforms/__init__.py DELETED
@@ -1,6 +0,0 @@
1
- """Platforms for generating offscreen OpenGL contexts for rendering.
2
-
3
- Author: Matthew Matl
4
- """
5
-
6
- from .base import Platform
spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/conversation/[id]/stop-generating/$types.d.ts DELETED
@@ -1,9 +0,0 @@
1
- import type * as Kit from '@sveltejs/kit';
2
-
3
- type Expand<T> = T extends infer O ? { [K in keyof O]: O[K] } : never;
4
- type RouteParams = { id: string }
5
- type RouteId = '/conversation/[id]/stop-generating';
6
-
7
- export type EntryGenerator = () => Promise<Array<RouteParams>> | Array<RouteParams>;
8
- export type RequestHandler = Kit.RequestHandler<RouteParams, RouteId>;
9
- export type RequestEvent = Kit.RequestEvent<RouteParams, RouteId>;
spaces/Adapter/CoAdapter/ldm/modules/image_degradation/bsrgan.py DELETED
@@ -1,730 +0,0 @@
1
- # -*- coding: utf-8 -*-
2
- """
3
- # --------------------------------------------
4
- # Super-Resolution
5
- # --------------------------------------------
6
- #
7
- # Kai Zhang ([email protected])
8
- # https://github.com/cszn
9
- # From 2019/03--2021/08
10
- # --------------------------------------------
11
- """
12
-
13
- import numpy as np
14
- import cv2
15
- import torch
16
-
17
- from functools import partial
18
- import random
19
- from scipy import ndimage
20
- import scipy
21
- import scipy.stats as ss
22
- from scipy.interpolate import interp2d
23
- from scipy.linalg import orth
24
- import albumentations
25
-
26
- import ldm.modules.image_degradation.utils_image as util
27
-
28
-
29
- def modcrop_np(img, sf):
30
- '''
31
- Args:
32
- img: numpy image, WxH or WxHxC
33
- sf: scale factor
34
- Return:
35
- cropped image
36
- '''
37
- w, h = img.shape[:2]
38
- im = np.copy(img)
39
- return im[:w - w % sf, :h - h % sf, ...]
40
-
41
-
42
- """
43
- # --------------------------------------------
44
- # anisotropic Gaussian kernels
45
- # --------------------------------------------
46
- """
47
-
48
-
49
- def analytic_kernel(k):
50
- """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)"""
51
- k_size = k.shape[0]
52
- # Calculate the big kernels size
53
- big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2))
54
- # Loop over the small kernel to fill the big one
55
- for r in range(k_size):
56
- for c in range(k_size):
57
- big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k
58
- # Crop the edges of the big kernel to ignore very small values and increase run time of SR
59
- crop = k_size // 2
60
- cropped_big_k = big_k[crop:-crop, crop:-crop]
61
- # Normalize to 1
62
- return cropped_big_k / cropped_big_k.sum()
63
-
64
-
65
- def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6):
66
- """ generate an anisotropic Gaussian kernel
67
- Args:
68
- ksize : e.g., 15, kernel size
69
- theta : [0, pi], rotation angle range
70
- l1 : [0.1,50], scaling of eigenvalues
71
- l2 : [0.1,l1], scaling of eigenvalues
72
- If l1 = l2, will get an isotropic Gaussian kernel.
73
- Returns:
74
- k : kernel
75
- """
76
-
77
- v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.]))
78
- V = np.array([[v[0], v[1]], [v[1], -v[0]]])
79
- D = np.array([[l1, 0], [0, l2]])
80
- Sigma = np.dot(np.dot(V, D), np.linalg.inv(V))
81
- k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize)
82
-
83
- return k
84
-
85
-
86
- def gm_blur_kernel(mean, cov, size=15):
87
- center = size / 2.0 + 0.5
88
- k = np.zeros([size, size])
89
- for y in range(size):
90
- for x in range(size):
91
- cy = y - center + 1
92
- cx = x - center + 1
93
- k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov)
94
-
95
- k = k / np.sum(k)
96
- return k
97
-
98
-
99
- def shift_pixel(x, sf, upper_left=True):
100
- """shift pixel for super-resolution with different scale factors
101
- Args:
102
- x: WxHxC or WxH
103
- sf: scale factor
104
- upper_left: shift direction
105
- """
106
- h, w = x.shape[:2]
107
- shift = (sf - 1) * 0.5
108
- xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0)
109
- if upper_left:
110
- x1 = xv + shift
111
- y1 = yv + shift
112
- else:
113
- x1 = xv - shift
114
- y1 = yv - shift
115
-
116
- x1 = np.clip(x1, 0, w - 1)
117
- y1 = np.clip(y1, 0, h - 1)
118
-
119
- if x.ndim == 2:
120
- x = interp2d(xv, yv, x)(x1, y1)
121
- if x.ndim == 3:
122
- for i in range(x.shape[-1]):
123
- x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1)
124
-
125
- return x
126
-
127
-
128
- def blur(x, k):
129
- '''
130
- x: image, NxcxHxW
131
- k: kernel, Nx1xhxw
132
- '''
133
- n, c = x.shape[:2]
134
- p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2
135
- x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate')
136
- k = k.repeat(1, c, 1, 1)
137
- k = k.view(-1, 1, k.shape[2], k.shape[3])
138
- x = x.view(1, -1, x.shape[2], x.shape[3])
139
- x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c)
140
- x = x.view(n, c, x.shape[2], x.shape[3])
141
-
142
- return x
143
-
144
-
145
- def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0):
146
- """"
147
- # modified version of https://github.com/assafshocher/BlindSR_dataset_generator
148
- # Kai Zhang
149
- # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var
150
- # max_var = 2.5 * sf
151
- """
152
- # Set random eigen-vals (lambdas) and angle (theta) for COV matrix
153
- lambda_1 = min_var + np.random.rand() * (max_var - min_var)
154
- lambda_2 = min_var + np.random.rand() * (max_var - min_var)
155
- theta = np.random.rand() * np.pi # random theta
156
- noise = -noise_level + np.random.rand(*k_size) * noise_level * 2
157
-
158
- # Set COV matrix using Lambdas and Theta
159
- LAMBDA = np.diag([lambda_1, lambda_2])
160
- Q = np.array([[np.cos(theta), -np.sin(theta)],
161
- [np.sin(theta), np.cos(theta)]])
162
- SIGMA = Q @ LAMBDA @ Q.T
163
- INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :]
164
-
165
- # Set expectation position (shifting kernel for aligned image)
166
- MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2)
167
- MU = MU[None, None, :, None]
168
-
169
- # Create meshgrid for Gaussian
170
- [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1]))
171
- Z = np.stack([X, Y], 2)[:, :, :, None]
172
-
173
- # Calculate Gaussian for every pixel of the kernel
174
- ZZ = Z - MU
175
- ZZ_t = ZZ.transpose(0, 1, 3, 2)
176
- raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise)
177
-
178
- # shift the kernel so it will be centered
179
- # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor)
180
-
181
- # Normalize the kernel and return
182
- # kernel = raw_kernel_centered / np.sum(raw_kernel_centered)
183
- kernel = raw_kernel / np.sum(raw_kernel)
184
- return kernel
185
-
186
-
187
- def fspecial_gaussian(hsize, sigma):
188
- hsize = [hsize, hsize]
189
- siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0]
190
- std = sigma
191
- [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1))
192
- arg = -(x * x + y * y) / (2 * std * std)
193
- h = np.exp(arg)
194
- h[h < np.finfo(float).eps * h.max()] = 0
195
- sumh = h.sum()
196
- if sumh != 0:
197
- h = h / sumh
198
- return h
199
-
200
-
201
- def fspecial_laplacian(alpha):
202
- alpha = max([0, min([alpha, 1])])
203
- h1 = alpha / (alpha + 1)
204
- h2 = (1 - alpha) / (alpha + 1)
205
- h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]]
206
- h = np.array(h)
207
- return h
208
-
209
-
210
- def fspecial(filter_type, *args, **kwargs):
211
- '''
212
- python code from:
213
- https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py
214
- '''
215
- if filter_type == 'gaussian':
216
- return fspecial_gaussian(*args, **kwargs)
217
- if filter_type == 'laplacian':
218
- return fspecial_laplacian(*args, **kwargs)
219
-
220
-
221
- """
222
- # --------------------------------------------
223
- # degradation models
224
- # --------------------------------------------
225
- """
226
-
227
-
228
- def bicubic_degradation(x, sf=3):
229
- '''
230
- Args:
231
- x: HxWxC image, [0, 1]
232
- sf: down-scale factor
233
- Return:
234
- bicubicly downsampled LR image
235
- '''
236
- x = util.imresize_np(x, scale=1 / sf)
237
- return x
238
-
239
-
240
- def srmd_degradation(x, k, sf=3):
241
- ''' blur + bicubic downsampling
242
- Args:
243
- x: HxWxC image, [0, 1]
244
- k: hxw, double
245
- sf: down-scale factor
246
- Return:
247
- downsampled LR image
248
- Reference:
249
- @inproceedings{zhang2018learning,
250
- title={Learning a single convolutional super-resolution network for multiple degradations},
251
- author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},
252
- booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
253
- pages={3262--3271},
254
- year={2018}
255
- }
256
- '''
257
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 'mirror'
258
- x = bicubic_degradation(x, sf=sf)
259
- return x
260
-
261
-
262
- def dpsr_degradation(x, k, sf=3):
263
- ''' bicubic downsampling + blur
264
- Args:
265
- x: HxWxC image, [0, 1]
266
- k: hxw, double
267
- sf: down-scale factor
268
- Return:
269
- downsampled LR image
270
- Reference:
271
- @inproceedings{zhang2019deep,
272
- title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels},
273
- author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei},
274
- booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
275
- pages={1671--1681},
276
- year={2019}
277
- }
278
- '''
279
- x = bicubic_degradation(x, sf=sf)
280
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap')
281
- return x
282
-
283
-
284
- def classical_degradation(x, k, sf=3):
285
- ''' blur + downsampling
286
- Args:
287
- x: HxWxC image, [0, 1]/[0, 255]
288
- k: hxw, double
289
- sf: down-scale factor
290
- Return:
291
- downsampled LR image
292
- '''
293
- x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap')
294
- # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2))
295
- st = 0
296
- return x[st::sf, st::sf, ...]
297
-
298
-
299
- def add_sharpening(img, weight=0.5, radius=50, threshold=10):
300
- """USM sharpening. borrowed from real-ESRGAN
301
- Input image: I; Blurry image: B.
302
- 1. K = I + weight * (I - B)
303
- 2. Mask = 1 if abs(I - B) > threshold, else: 0
304
- 3. Blur mask:
305
- 4. Out = Mask * K + (1 - Mask) * I
306
- Args:
307
- img (Numpy array): Input image, HWC, BGR; float32, [0, 1].
308
- weight (float): Sharp weight. Default: 1.
309
- radius (float): Kernel size of Gaussian blur. Default: 50.
310
- threshold (int):
311
- """
312
- if radius % 2 == 0:
313
- radius += 1
314
- blur = cv2.GaussianBlur(img, (radius, radius), 0)
315
- residual = img - blur
316
- mask = np.abs(residual) * 255 > threshold
317
- mask = mask.astype('float32')
318
- soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0)
319
-
320
- K = img + weight * residual
321
- K = np.clip(K, 0, 1)
322
- return soft_mask * K + (1 - soft_mask) * img
323
-
324
-
325
- def add_blur(img, sf=4):
326
- wd2 = 4.0 + sf
327
- wd = 2.0 + 0.2 * sf
328
- if random.random() < 0.5:
329
- l1 = wd2 * random.random()
330
- l2 = wd2 * random.random()
331
- k = anisotropic_Gaussian(ksize=2 * random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2)
332
- else:
333
- k = fspecial('gaussian', 2 * random.randint(2, 11) + 3, wd * random.random())
334
- img = ndimage.filters.convolve(img, np.expand_dims(k, axis=2), mode='mirror')
335
-
336
- return img
337
-
338
-
339
- def add_resize(img, sf=4):
340
- rnum = np.random.rand()
341
- if rnum > 0.8: # up
342
- sf1 = random.uniform(1, 2)
343
- elif rnum < 0.7: # down
344
- sf1 = random.uniform(0.5 / sf, 1)
345
- else:
346
- sf1 = 1.0
347
- img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3]))
348
- img = np.clip(img, 0.0, 1.0)
349
-
350
- return img
351
-
352
-
353
- # def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):
354
- # noise_level = random.randint(noise_level1, noise_level2)
355
- # rnum = np.random.rand()
356
- # if rnum > 0.6: # add color Gaussian noise
357
- # img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
358
- # elif rnum < 0.4: # add grayscale Gaussian noise
359
- # img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
360
- # else: # add noise
361
- # L = noise_level2 / 255.
362
- # D = np.diag(np.random.rand(3))
363
- # U = orth(np.random.rand(3, 3))
364
- # conv = np.dot(np.dot(np.transpose(U), D), U)
365
- # img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
366
- # img = np.clip(img, 0.0, 1.0)
367
- # return img
368
-
369
- def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):
370
- noise_level = random.randint(noise_level1, noise_level2)
371
- rnum = np.random.rand()
372
- if rnum > 0.6: # add color Gaussian noise
373
- img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
374
- elif rnum < 0.4: # add grayscale Gaussian noise
375
- img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
376
- else: # add noise
377
- L = noise_level2 / 255.
378
- D = np.diag(np.random.rand(3))
379
- U = orth(np.random.rand(3, 3))
380
- conv = np.dot(np.dot(np.transpose(U), D), U)
381
- img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
382
- img = np.clip(img, 0.0, 1.0)
383
- return img
384
-
385
-
386
- def add_speckle_noise(img, noise_level1=2, noise_level2=25):
387
- noise_level = random.randint(noise_level1, noise_level2)
388
- img = np.clip(img, 0.0, 1.0)
389
- rnum = random.random()
390
- if rnum > 0.6:
391
- img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32)
392
- elif rnum < 0.4:
393
- img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32)
394
- else:
395
- L = noise_level2 / 255.
396
- D = np.diag(np.random.rand(3))
397
- U = orth(np.random.rand(3, 3))
398
- conv = np.dot(np.dot(np.transpose(U), D), U)
399
- img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32)
400
- img = np.clip(img, 0.0, 1.0)
401
- return img
402
-
403
-
404
- def add_Poisson_noise(img):
405
- img = np.clip((img * 255.0).round(), 0, 255) / 255.
406
- vals = 10 ** (2 * random.random() + 2.0) # [2, 4]
407
- if random.random() < 0.5:
408
- img = np.random.poisson(img * vals).astype(np.float32) / vals
409
- else:
410
- img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114])
411
- img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255.
412
- noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray
413
- img += noise_gray[:, :, np.newaxis]
414
- img = np.clip(img, 0.0, 1.0)
415
- return img
416
-
417
-
418
- def add_JPEG_noise(img):
419
- quality_factor = random.randint(30, 95)
420
- img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR)
421
- result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor])
422
- img = cv2.imdecode(encimg, 1)
423
- img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB)
424
- return img
425
-
426
-
427
- def random_crop(lq, hq, sf=4, lq_patchsize=64):
428
- h, w = lq.shape[:2]
429
- rnd_h = random.randint(0, h - lq_patchsize)
430
- rnd_w = random.randint(0, w - lq_patchsize)
431
- lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :]
432
-
433
- rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf)
434
- hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :]
435
- return lq, hq
436
-
437
-
438
- def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None):
439
- """
440
- This is the degradation model of BSRGAN from the paper
441
- "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
442
- ----------
443
- img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf)
444
- sf: scale factor
445
- isp_model: camera ISP model
446
- Returns
447
- -------
448
- img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]
449
- hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]
450
- """
451
- isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25
452
- sf_ori = sf
453
-
454
- h1, w1 = img.shape[:2]
455
- img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop
456
- h, w = img.shape[:2]
457
-
458
- if h < lq_patchsize * sf or w < lq_patchsize * sf:
459
- raise ValueError(f'img size ({h1}X{w1}) is too small!')
460
-
461
- hq = img.copy()
462
-
463
- if sf == 4 and random.random() < scale2_prob: # downsample1
464
- if np.random.rand() < 0.5:
465
- img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])),
466
- interpolation=random.choice([1, 2, 3]))
467
- else:
468
- img = util.imresize_np(img, 1 / 2, True)
469
- img = np.clip(img, 0.0, 1.0)
470
- sf = 2
471
-
472
- shuffle_order = random.sample(range(7), 7)
473
- idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)
474
- if idx1 > idx2: # keep downsample3 last
475
- shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]
476
-
477
- for i in shuffle_order:
478
-
479
- if i == 0:
480
- img = add_blur(img, sf=sf)
481
-
482
- elif i == 1:
483
- img = add_blur(img, sf=sf)
484
-
485
- elif i == 2:
486
- a, b = img.shape[1], img.shape[0]
487
- # downsample2
488
- if random.random() < 0.75:
489
- sf1 = random.uniform(1, 2 * sf)
490
- img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])),
491
- interpolation=random.choice([1, 2, 3]))
492
- else:
493
- k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))
494
- k_shifted = shift_pixel(k, sf)
495
- k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel
496
- img = ndimage.filters.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror')
497
- img = img[0::sf, 0::sf, ...] # nearest downsampling
498
- img = np.clip(img, 0.0, 1.0)
499
-
500
- elif i == 3:
501
- # downsample3
502
- img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))
503
- img = np.clip(img, 0.0, 1.0)
504
-
505
- elif i == 4:
506
- # add Gaussian noise
507
- img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25)
508
-
509
- elif i == 5:
510
- # add JPEG noise
511
- if random.random() < jpeg_prob:
512
- img = add_JPEG_noise(img)
513
-
514
- elif i == 6:
515
- # add processed camera sensor noise
516
- if random.random() < isp_prob and isp_model is not None:
517
- with torch.no_grad():
518
- img, hq = isp_model.forward(img.copy(), hq)
519
-
520
- # add final JPEG compression noise
521
- img = add_JPEG_noise(img)
522
-
523
- # random crop
524
- img, hq = random_crop(img, hq, sf_ori, lq_patchsize)
525
-
526
- return img, hq
527
-
528
-
529
- # todo no isp_model?
530
- def degradation_bsrgan_variant(image, sf=4, isp_model=None):
531
- """
532
- This is the degradation model of BSRGAN from the paper
533
- "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution"
534
- ----------
535
- sf: scale factor
536
- isp_model: camera ISP model
537
- Returns
538
- -------
539
- img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]
540
- hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]
541
- """
542
- image = util.uint2single(image)
543
- isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25
544
- sf_ori = sf
545
-
546
- h1, w1 = image.shape[:2]
547
- image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop
548
- h, w = image.shape[:2]
549
-
550
- hq = image.copy()
551
-
552
- if sf == 4 and random.random() < scale2_prob: # downsample1
553
- if np.random.rand() < 0.5:
554
- image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])),
555
- interpolation=random.choice([1, 2, 3]))
556
- else:
557
- image = util.imresize_np(image, 1 / 2, True)
558
- image = np.clip(image, 0.0, 1.0)
559
- sf = 2
560
-
561
- shuffle_order = random.sample(range(7), 7)
562
- idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3)
563
- if idx1 > idx2: # keep downsample3 last
564
- shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1]
565
-
566
- for i in shuffle_order:
567
-
568
- if i == 0:
569
- image = add_blur(image, sf=sf)
570
-
571
- elif i == 1:
572
- image = add_blur(image, sf=sf)
573
-
574
- elif i == 2:
575
- a, b = image.shape[1], image.shape[0]
576
- # downsample2
577
- if random.random() < 0.75:
578
- sf1 = random.uniform(1, 2 * sf)
579
- image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])),
580
- interpolation=random.choice([1, 2, 3]))
581
- else:
582
- k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf))
583
- k_shifted = shift_pixel(k, sf)
584
- k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel
585
- image = ndimage.filters.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror')
586
- image = image[0::sf, 0::sf, ...] # nearest downsampling
587
- image = np.clip(image, 0.0, 1.0)
588
-
589
- elif i == 3:
590
- # downsample3
591
- image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3]))
592
- image = np.clip(image, 0.0, 1.0)
593
-
594
- elif i == 4:
595
- # add Gaussian noise
596
- image = add_Gaussian_noise(image, noise_level1=2, noise_level2=25)
597
-
598
- elif i == 5:
599
- # add JPEG noise
600
- if random.random() < jpeg_prob:
601
- image = add_JPEG_noise(image)
602
-
603
- # elif i == 6:
604
- # # add processed camera sensor noise
605
- # if random.random() < isp_prob and isp_model is not None:
606
- # with torch.no_grad():
607
- # img, hq = isp_model.forward(img.copy(), hq)
608
-
609
- # add final JPEG compression noise
610
- image = add_JPEG_noise(image)
611
- image = util.single2uint(image)
612
- example = {"image":image}
613
- return example
614
-
615
-
616
- # TODO incase there is a pickle error one needs to replace a += x with a = a + x in add_speckle_noise etc...
617
- def degradation_bsrgan_plus(img, sf=4, shuffle_prob=0.5, use_sharp=True, lq_patchsize=64, isp_model=None):
618
- """
619
- This is an extended degradation model by combining
620
- the degradation models of BSRGAN and Real-ESRGAN
621
- ----------
622
- img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf)
623
- sf: scale factor
624
- use_shuffle: the degradation shuffle
625
- use_sharp: sharpening the img
626
- Returns
627
- -------
628
- img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1]
629
- hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1]
630
- """
631
-
632
- h1, w1 = img.shape[:2]
633
- img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop
634
- h, w = img.shape[:2]
635
-
636
- if h < lq_patchsize * sf or w < lq_patchsize * sf:
637
- raise ValueError(f'img size ({h1}X{w1}) is too small!')
638
-
639
- if use_sharp:
640
- img = add_sharpening(img)
641
- hq = img.copy()
642
-
643
- if random.random() < shuffle_prob:
644
- shuffle_order = random.sample(range(13), 13)
645
- else:
646
- shuffle_order = list(range(13))
647
- # local shuffle for noise, JPEG is always the last one
648
- shuffle_order[2:6] = random.sample(shuffle_order[2:6], len(range(2, 6)))
649
- shuffle_order[9:13] = random.sample(shuffle_order[9:13], len(range(9, 13)))
650
-
651
- poisson_prob, speckle_prob, isp_prob = 0.1, 0.1, 0.1
652
-
653
- for i in shuffle_order:
654
- if i == 0:
655
- img = add_blur(img, sf=sf)
656
- elif i == 1:
657
- img = add_resize(img, sf=sf)
658
- elif i == 2:
659
- img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25)
660
- elif i == 3:
661
- if random.random() < poisson_prob:
662
- img = add_Poisson_noise(img)
663
- elif i == 4:
664
- if random.random() < speckle_prob:
665
- img = add_speckle_noise(img)
666
- elif i == 5:
667
- if random.random() < isp_prob and isp_model is not None:
668
- with torch.no_grad():
669
- img, hq = isp_model.forward(img.copy(), hq)
670
- elif i == 6:
671
- img = add_JPEG_noise(img)
672
- elif i == 7:
673
- img = add_blur(img, sf=sf)
674
- elif i == 8:
675
- img = add_resize(img, sf=sf)
676
- elif i == 9:
677
- img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25)
678
- elif i == 10:
679
- if random.random() < poisson_prob:
680
- img = add_Poisson_noise(img)
681
- elif i == 11:
682
- if random.random() < speckle_prob:
683
- img = add_speckle_noise(img)
684
- elif i == 12:
685
- if random.random() < isp_prob and isp_model is not None:
686
- with torch.no_grad():
687
- img, hq = isp_model.forward(img.copy(), hq)
688
- else:
689
- print('check the shuffle!')
690
-
691
- # resize to desired size
692
- img = cv2.resize(img, (int(1 / sf * hq.shape[1]), int(1 / sf * hq.shape[0])),
693
- interpolation=random.choice([1, 2, 3]))
694
-
695
- # add final JPEG compression noise
696
- img = add_JPEG_noise(img)
697
-
698
- # random crop
699
- img, hq = random_crop(img, hq, sf, lq_patchsize)
700
-
701
- return img, hq
702
-
703
-
704
- if __name__ == '__main__':
705
- print("hey")
706
- img = util.imread_uint('utils/test.png', 3)
707
- print(img)
708
- img = util.uint2single(img)
709
- print(img)
710
- img = img[:448, :448]
711
- h = img.shape[0] // 4
712
- print("resizing to", h)
713
- sf = 4
714
- deg_fn = partial(degradation_bsrgan_variant, sf=sf)
715
- for i in range(20):
716
- print(i)
717
- img_lq = deg_fn(img)["image"]
718
- print(img_lq)
719
- img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img)["image"]
720
- print(img_lq.shape)
721
- print("bicubic", img_lq_bicubic.shape)
722
- print(img.shape)
723
- lq_nearest = cv2.resize(img_lq, (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),
724
- interpolation=0)
725
- lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])),
726
- interpolation=0)
727
- img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img)], axis=1)
728
- util.imsave(img_concat, str(i) + '.png')
729
-
730
-
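The degradation functions above are normally exercised through the `__main__` block, but a shorter driver is sketched here. It assumes this Space's `ldm` package is importable and that `degradation_bsrgan` receives a float HxWxC image in [0, 1] at least `lq_patchsize * sf` pixels on each side; the random array below is only a stand-in for a real high-resolution image.

```python
import numpy as np
from ldm.modules.image_degradation.bsrgan import degradation_bsrgan

# stand-in HR image in [0, 1]; any sufficiently large HxWxC float array works
hq = np.random.rand(512, 512, 3).astype(np.float32)
lq, hq_patch = degradation_bsrgan(hq, sf=4, lq_patchsize=72)
print(lq.shape, hq_patch.shape)  # expected: (72, 72, 3) (288, 288, 3)
```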
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/bracketparser2.js DELETED
@@ -1,2 +0,0 @@
1
- import BracketParser from './logic/bracketparser/bracketparser2/BracketParser.js';
2
- export default BracketParser;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sides/Factory.js DELETED
@@ -1,13 +0,0 @@
1
- import Sides from './Sides.js';
2
- import ObjectFactory from '../ObjectFactory.js';
3
- import SetValue from '../../../plugins/utils/object/SetValue.js';
4
-
5
- ObjectFactory.register('sides', function (config) {
6
- var gameObject = new Sides(this.scene, config);
7
- this.scene.add.existing(gameObject);
8
- return gameObject;
9
- });
10
-
11
- SetValue(window, 'RexPlugins.UI.Sides', Sides);
12
-
13
- export default Sides;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/RemoveChildMethods.js DELETED
@@ -1,29 +0,0 @@
1
- import RemoveChild from '../basesizer/utils/RemoveChild.js';
2
- import ClearChildren from '../basesizer/utils/ClearChildren.js';
3
-
4
- const RemoveItem = Phaser.Utils.Array.Remove;
5
-
6
- export default {
7
- remove(gameObject, destroyChild) {
8
- if (this.getParentSizer(gameObject) !== this) {
9
- return this;
10
- }
11
-
12
- RemoveItem(this.sizerChildren, gameObject);
13
- RemoveChild.call(this, gameObject, destroyChild);
14
- return this;
15
- },
16
-
17
- removeAll(destroyChild) {
18
- for (var i = this.sizerChildren.length - 1; i >= 0; i--) {
19
- this.remove(this.sizerChildren[i], destroyChild);
20
- }
21
- return this;
22
- },
23
-
24
- clear(destroyChild) {
25
- this.sizerChildren.length = 0;
26
- ClearChildren.call(this, destroyChild);
27
- return this;
28
- }
29
- }
spaces/Agusbs98/automatic-ecg-diagnosis/nets/layers.py DELETED
@@ -1,29 +0,0 @@
1
-
2
- import os, sys
3
- from libs import *
4
-
5
- class DSConv1d(nn.Module):
6
- def __init__(self,
7
- in_channels, out_channels,
8
- kernel_size, padding = 0, stride = 1,
9
- ):
10
- super(DSConv1d, self).__init__()
11
- self.dw_conv = nn.Conv1d(
12
- in_channels, in_channels,
13
- kernel_size = kernel_size, padding = padding, stride = stride,
14
- groups = in_channels,
15
- bias = False,
16
- )
17
- self.pw_conv = nn.Conv1d(
18
- in_channels, out_channels,
19
- kernel_size = 1,
20
- bias = False,
21
- )
22
-
23
- def forward(self,
24
- input,
25
- ):
26
- output = self.dw_conv(input)
27
- output = self.pw_conv(output)
28
-
29
- return output
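DSConv1d above factors a dense 1-D convolution into a per-channel (depthwise) convolution followed by a 1x1 (pointwise) mixing convolution. Assuming `nn` in this file resolves to `torch.nn` through the wildcard import from `libs`, the parameter saving can be checked directly with stock PyTorch layers:

```python
import torch.nn as nn

cin, cout, k = 64, 128, 9
dense = nn.Conv1d(cin, cout, kernel_size=k, bias=False)
depthwise = nn.Conv1d(cin, cin, kernel_size=k, groups=cin, bias=False)
pointwise = nn.Conv1d(cin, cout, kernel_size=1, bias=False)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(dense))                         # 64 * 128 * 9 = 73728
print(count(depthwise) + count(pointwise))  # 64 * 9 + 64 * 128 = 8768
```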
spaces/AixiaGreyatt/QQsign/bin/unidbg-fetch-qsign.bat DELETED
@@ -1,89 +0,0 @@
1
- @rem
2
- @rem Copyright 2015 the original author or authors.
3
- @rem
4
- @rem Licensed under the Apache License, Version 2.0 (the "License");
5
- @rem you may not use this file except in compliance with the License.
6
- @rem You may obtain a copy of the License at
7
- @rem
8
- @rem https://www.apache.org/licenses/LICENSE-2.0
9
- @rem
10
- @rem Unless required by applicable law or agreed to in writing, software
11
- @rem distributed under the License is distributed on an "AS IS" BASIS,
12
- @rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- @rem See the License for the specific language governing permissions and
14
- @rem limitations under the License.
15
- @rem
16
-
17
- @if "%DEBUG%" == "" @echo off
18
- @rem ##########################################################################
19
- @rem
20
- @rem unidbg-fetch-qsign startup script for Windows
21
- @rem
22
- @rem ##########################################################################
23
-
24
- @rem Set local scope for the variables with windows NT shell
25
- if "%OS%"=="Windows_NT" setlocal
26
-
27
- set DIRNAME=%~dp0
28
- if "%DIRNAME%" == "" set DIRNAME=.
29
- set APP_BASE_NAME=%~n0
30
- set APP_HOME=%DIRNAME%..
31
-
32
- @rem Resolve any "." and ".." in APP_HOME to make it shorter.
33
- for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi
34
-
35
- @rem Add default JVM options here. You can also use JAVA_OPTS and UNIDBG_FETCH_QSIGN_OPTS to pass JVM options to this script.
36
- set DEFAULT_JVM_OPTS=
37
-
38
- @rem Find java.exe
39
- if defined JAVA_HOME goto findJavaFromJavaHome
40
-
41
- set JAVA_EXE=java.exe
42
- %JAVA_EXE% -version >NUL 2>&1
43
- if "%ERRORLEVEL%" == "0" goto execute
44
-
45
- echo.
46
- echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH.
47
- echo.
48
- echo Please set the JAVA_HOME variable in your environment to match the
49
- echo location of your Java installation.
50
-
51
- goto fail
52
-
53
- :findJavaFromJavaHome
54
- set JAVA_HOME=%JAVA_HOME:"=%
55
- set JAVA_EXE=%JAVA_HOME%/bin/java.exe
56
-
57
- if exist "%JAVA_EXE%" goto execute
58
-
59
- echo.
60
- echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME%
61
- echo.
62
- echo Please set the JAVA_HOME variable in your environment to match the
63
- echo location of your Java installation.
64
-
65
- goto fail
66
-
67
- :execute
68
- @rem Setup the command line
69
-
70
- set CLASSPATH=%APP_HOME%\lib\unidbg-fetch-qsign-1.1.9.jar;%APP_HOME%\lib\unidbg-android-105.jar;%APP_HOME%\lib\ktor-server-content-negotiation-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-json-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-status-pages-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-netty-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-host-common-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-server-core-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-kotlinx-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-serialization-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-events-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-websockets-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-cio-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-http-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-network-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-utils-jvm-2.3.1.jar;%APP_HOME%\lib\ktor-io-jvm-2.3.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk8-1.8.22.jar;%APP_HOME%\lib\kotlinx-serialization-json-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-protobuf-jvm-1.5.1.jar;%APP_HOME%\lib\kotlinx-serialization-core-jvm-1.5.1.jar;%APP_HOME%\lib\logback-classic-1.2.11.jar;%APP_HOME%\lib\kotlinx-coroutines-jdk8-1.7.1.jar;%APP_HOME%\lib\kotlinx-coroutines-core-jvm-1.7.1.jar;%APP_HOME%\lib\kotlin-stdlib-jdk7-1.8.22.jar;%APP_HOME%\lib\kotlin-reflect-1.8.10.jar;%APP_HOME%\lib\kotlin-stdlib-1.8.22.jar;%APP_HOME%\lib\slf4j-api-1.7.36.jar;%APP_HOME%\lib\kotlin-stdlib-common-1.8.22.jar;%APP_HOME%\lib\config-1.4.2.jar;%APP_HOME%\lib\jansi-2.4.0.jar;%APP_HOME%\lib\netty-codec-http2-4.1.92.Final.jar;%APP_HOME%\lib\alpn-api-1.1.3.v20160715.jar;%APP_HOME%\lib\netty-transport-native-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-epoll-4.1.92.Final.jar;%APP_HOME%\lib\logback-core-1.2.11.jar;%APP_HOME%\lib\annotations-23.0.0.jar;%APP_HOME%\lib\netty-codec-http-4.1.92.Final.jar;%APP_HOME%\lib\netty-handler-4.1.92.Final.jar;%APP_HOME%\lib\netty-codec-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-kqueue-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-classes-epoll-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-native-unix-common-4.1.92.Final.jar;%APP_HOME%\lib\netty-transport-4.1.92.Final.jar;%APP_HOME%\lib\netty-buffer-4.1.92.Final.jar;%APP_HOME%\lib\netty-resolver-4.1.92.Final.jar;%APP_HOME%\lib\netty-common-4.1.92.Final.jar
71
-
72
-
73
- @rem Execute unidbg-fetch-qsign
74
- "%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %UNIDBG_FETCH_QSIGN_OPTS% -classpath "%CLASSPATH%" MainKt %*
75
-
76
- :end
77
- @rem End local scope for the variables with windows NT shell
78
- if "%ERRORLEVEL%"=="0" goto mainEnd
79
-
80
- :fail
81
- rem Set variable UNIDBG_FETCH_QSIGN_EXIT_CONSOLE if you need the _script_ return code instead of
82
- rem the _cmd.exe /c_ return code!
83
- if not "" == "%UNIDBG_FETCH_QSIGN_EXIT_CONSOLE%" exit 1
84
- exit /b 1
85
-
86
- :mainEnd
87
- if "%OS%"=="Windows_NT" endlocal
88
-
89
- :omega
spaces/Aloento/9Nine-PITS/text/english.py DELETED
@@ -1,122 +0,0 @@
1
- """ from https://github.com/keithito/tacotron """
2
- import re
3
-
4
- import eng_to_ipa as ipa
5
- from g2p_en import G2p
6
- from unidecode import unidecode
7
-
8
- from text.frontend import normalize_numbers
9
-
10
- '''
11
- Cleaners are transformations that run over the input text at both training and eval time.
12
-
13
- Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners"
14
- hyperparameter. Some cleaners are English-specific. You'll typically want to use:
15
- 1. "english_cleaners" for English text
16
- 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using
17
- the Unidecode library (https://pypi.python.org/pypi/Unidecode)
18
- 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update
19
- the symbols in symbols.py to match your data).
20
- '''
21
-
22
- # Regular expression matching whitespace:
23
- g2p = G2p()
24
-
25
- # List of (regular expression, replacement) pairs for abbreviations:
26
- _abbreviations = [(re.compile('\\b%s\\.' % x[0], re.IGNORECASE), x[1]) for x in [
27
- ('mrs', 'misess'),
28
- ('mr', 'mister'),
29
- ('dr', 'doctor'),
30
- ('st', 'saint'),
31
- ('co', 'company'),
32
- ('jr', 'junior'),
33
- ('maj', 'major'),
34
- ('gen', 'general'),
35
- ('drs', 'doctors'),
36
- ('rev', 'reverend'),
37
- ('lt', 'lieutenant'),
38
- ('hon', 'honorable'),
39
- ('sgt', 'sergeant'),
40
- ('capt', 'captain'),
41
- ('esq', 'esquire'),
42
- ('ltd', 'limited'),
43
- ('col', 'colonel'),
44
- ('ft', 'fort'),
45
- ]]
46
-
47
- # List of (ipa, ipa2) pairs
48
- _ipa_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [
49
- ('r', 'ɹ'),
50
- ('ʤ', 'dʒ'),
51
- ('ʧ', 'tʃ')
52
- ]]
53
-
54
-
55
- def expand_abbreviations(text):
56
- for regex, replacement in _abbreviations:
57
- text = re.sub(regex, replacement, text)
58
- return text
59
-
60
-
61
- def collapse_whitespace(text):
62
- return re.sub(r'\s+', ' ', text)
63
-
64
-
65
- def mark_dark_l(text):
66
- return re.sub(r'l([^aeiouæɑɔəɛɪʊ ]*(?: |$))', lambda x: 'ɫ' + x.group(1), text)
67
-
68
-
69
- def english_to_ipa(text):
70
- text = text.replace("-", " ")
71
- text = unidecode(text).lower()
72
- text = expand_abbreviations(text)
73
- text = normalize_numbers(text)
74
-
75
- phonemes = ipa.convert(text)
76
- phonemes = unrecognized_words_to_ipa(phonemes)
77
- phonemes = collapse_whitespace(phonemes)
78
-
79
- text = phonemes
80
- text = mark_dark_l(text)
81
-
82
- for regex, replacement in _ipa_to_ipa2:
83
- text = re.sub(regex, replacement, text)
84
-
85
- return text.replace('...', '…')
86
-
87
-
88
- def convert_to_ipa(phones):
89
- eipa = ""
90
- symbols = {"a": "ə", "ey": "eɪ", "aa": "ɑ", "ae": "æ", "ah": "ə", "ao": "ɔ",
91
- "aw": "aʊ", "ay": "aɪ", "ch": "ʧ", "dh": "ð", "eh": "ɛ", "er": "ər",
92
- "hh": "h", "ih": "ɪ", "jh": "ʤ", "ng": "ŋ", "ow": "oʊ", "oy": "ɔɪ",
93
- "sh": "ʃ", "th": "θ", "uh": "ʊ", "uw": "u", "zh": "ʒ", "iy": "i", "y": "j"}
94
-
95
- for ph in phones:
96
- ph = ph.lower()
97
-
98
- try:
99
- if ph[-1] in "01234":
100
- eipa += symbols[ph[:-1]]
101
- else:
102
- eipa += symbols[ph]
103
- except:
104
- eipa += ph
105
-
106
- return eipa
107
-
108
-
109
- def unrecognized_words_to_ipa(text):
110
- matches = re.findall(r'\s([\w|\']+\*)', text)
111
-
112
- for word in matches:
113
- ipa = convert_to_ipa(g2p(word))
114
- text = text.replace(word, ipa)
115
-
116
- matches = re.findall(r'^([\w|\']+\*)', text)
117
-
118
- for word in matches:
119
- ipa = convert_to_ipa(g2p(word))
120
- text = text.replace(word, ipa)
121
-
122
- return text
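A hedged usage sketch for the cleaner above: assuming this module is importable as `text.english` inside the Space and the `eng_to_ipa`, `g2p_en`, and `unidecode` packages are installed, abbreviations and digits are expanded before conversion, and words `eng_to_ipa` cannot resolve are retried through g2p.

```python
from text.english import english_to_ipa  # assumed import path within this Space

# "Dr." and "2" are expanded before IPA conversion; unknown words fall back to g2p
print(english_to_ipa("Dr. Smith bought 2 cats."))
# yields an IPA string along the lines of "dɑktər smɪθ bɔt tu kæts."
```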
spaces/Alpaca233/SadTalker/src/face3d/models/networks.py DELETED
@@ -1,521 +0,0 @@
1
- """This script defines deep neural networks for Deep3DFaceRecon_pytorch
2
- """
3
-
4
- import os
5
- import numpy as np
6
- import torch.nn.functional as F
7
- from torch.nn import init
8
- import functools
9
- from torch.optim import lr_scheduler
10
- import torch
11
- from torch import Tensor
12
- import torch.nn as nn
13
- try:
14
- from torch.hub import load_state_dict_from_url
15
- except ImportError:
16
- from torch.utils.model_zoo import load_url as load_state_dict_from_url
17
- from typing import Type, Any, Callable, Union, List, Optional
18
- from .arcface_torch.backbones import get_model
19
- from kornia.geometry import warp_affine
20
-
21
- def resize_n_crop(image, M, dsize=112):
22
- # image: (b, c, h, w)
23
- # M : (b, 2, 3)
24
- return warp_affine(image, M, dsize=(dsize, dsize), align_corners=True)
25
-
26
- def filter_state_dict(state_dict, remove_name='fc'):
27
- new_state_dict = {}
28
- for key in state_dict:
29
- if remove_name in key:
30
- continue
31
- new_state_dict[key] = state_dict[key]
32
- return new_state_dict
33
-
34
- def get_scheduler(optimizer, opt):
35
- """Return a learning rate scheduler
36
-
37
- Parameters:
38
- optimizer -- the optimizer of the network
39
- opt (option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions.
40
- opt.lr_policy is the name of learning rate policy: linear | step | plateau | cosine
41
-
42
- For other schedulers (step, plateau, and cosine), we use the default PyTorch schedulers.
43
- See https://pytorch.org/docs/stable/optim.html for more details.
44
- """
45
- if opt.lr_policy == 'linear':
46
- def lambda_rule(epoch):
47
- lr_l = 1.0 - max(0, epoch + opt.epoch_count - opt.n_epochs) / float(opt.n_epochs + 1)
48
- return lr_l
49
- scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule)
50
- elif opt.lr_policy == 'step':
51
- scheduler = lr_scheduler.StepLR(optimizer, step_size=opt.lr_decay_epochs, gamma=0.2)
52
- elif opt.lr_policy == 'plateau':
53
- scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, threshold=0.01, patience=5)
54
- elif opt.lr_policy == 'cosine':
55
- scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=opt.n_epochs, eta_min=0)
56
- else:
57
- raise NotImplementedError('learning rate policy [%s] is not implemented' % opt.lr_policy)
58
- return scheduler
59
-
60
-
61
- def define_net_recon(net_recon, use_last_fc=False, init_path=None):
62
- return ReconNetWrapper(net_recon, use_last_fc=use_last_fc, init_path=init_path)
63
-
64
- def define_net_recog(net_recog, pretrained_path=None):
65
- net = RecogNetWrapper(net_recog=net_recog, pretrained_path=pretrained_path)
66
- net.eval()
67
- return net
68
-
69
- class ReconNetWrapper(nn.Module):
70
- fc_dim=257
71
- def __init__(self, net_recon, use_last_fc=False, init_path=None):
72
- super(ReconNetWrapper, self).__init__()
73
- self.use_last_fc = use_last_fc
74
- if net_recon not in func_dict:
75
- raise NotImplementedError('network [%s] is not implemented' % net_recon)
76
- func, last_dim = func_dict[net_recon]
77
- backbone = func(use_last_fc=use_last_fc, num_classes=self.fc_dim)
78
- if init_path and os.path.isfile(init_path):
79
- state_dict = filter_state_dict(torch.load(init_path, map_location='cpu'))
80
- backbone.load_state_dict(state_dict)
81
- print("loading init net_recon %s from %s" %(net_recon, init_path))
82
- self.backbone = backbone
83
- if not use_last_fc:
84
- self.final_layers = nn.ModuleList([
85
- conv1x1(last_dim, 80, bias=True), # id layer
86
- conv1x1(last_dim, 64, bias=True), # exp layer
87
- conv1x1(last_dim, 80, bias=True), # tex layer
88
- conv1x1(last_dim, 3, bias=True), # angle layer
89
- conv1x1(last_dim, 27, bias=True), # gamma layer
90
- conv1x1(last_dim, 2, bias=True), # tx, ty
91
- conv1x1(last_dim, 1, bias=True) # tz
92
- ])
93
- for m in self.final_layers:
94
- nn.init.constant_(m.weight, 0.)
95
- nn.init.constant_(m.bias, 0.)
96
-
97
- def forward(self, x):
98
- x = self.backbone(x)
99
- if not self.use_last_fc:
100
- output = []
101
- for layer in self.final_layers:
102
- output.append(layer(x))
103
- x = torch.flatten(torch.cat(output, dim=1), 1)
104
- return x
105
-
106
-
107
- class RecogNetWrapper(nn.Module):
108
- def __init__(self, net_recog, pretrained_path=None, input_size=112):
109
- super(RecogNetWrapper, self).__init__()
110
- net = get_model(name=net_recog, fp16=False)
111
- if pretrained_path:
112
- state_dict = torch.load(pretrained_path, map_location='cpu')
113
- net.load_state_dict(state_dict)
114
- print("loading pretrained net_recog %s from %s" %(net_recog, pretrained_path))
115
- for param in net.parameters():
116
- param.requires_grad = False
117
- self.net = net
118
- self.preprocess = lambda x: 2 * x - 1
119
- self.input_size=input_size
120
-
121
- def forward(self, image, M):
122
- image = self.preprocess(resize_n_crop(image, M, self.input_size))
123
- id_feature = F.normalize(self.net(image), dim=-1, p=2)
124
- return id_feature
125
-
126
-
127
- # adapted from https://github.com/pytorch/vision/edit/master/torchvision/models/resnet.py
128
- __all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101',
129
- 'resnet152', 'resnext50_32x4d', 'resnext101_32x8d',
130
- 'wide_resnet50_2', 'wide_resnet101_2']
131
-
132
-
133
- model_urls = {
134
- 'resnet18': 'https://download.pytorch.org/models/resnet18-f37072fd.pth',
135
- 'resnet34': 'https://download.pytorch.org/models/resnet34-b627a593.pth',
136
- 'resnet50': 'https://download.pytorch.org/models/resnet50-0676ba61.pth',
137
- 'resnet101': 'https://download.pytorch.org/models/resnet101-63fe2227.pth',
138
- 'resnet152': 'https://download.pytorch.org/models/resnet152-394f9c45.pth',
139
- 'resnext50_32x4d': 'https://download.pytorch.org/models/resnext50_32x4d-7cdf4587.pth',
140
- 'resnext101_32x8d': 'https://download.pytorch.org/models/resnext101_32x8d-8ba56ff5.pth',
141
- 'wide_resnet50_2': 'https://download.pytorch.org/models/wide_resnet50_2-95faca4d.pth',
142
- 'wide_resnet101_2': 'https://download.pytorch.org/models/wide_resnet101_2-32ee1156.pth',
143
- }
144
-
145
-
146
- def conv3x3(in_planes: int, out_planes: int, stride: int = 1, groups: int = 1, dilation: int = 1) -> nn.Conv2d:
147
- """3x3 convolution with padding"""
148
- return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
149
- padding=dilation, groups=groups, bias=False, dilation=dilation)
150
-
151
-
152
- def conv1x1(in_planes: int, out_planes: int, stride: int = 1, bias: bool = False) -> nn.Conv2d:
153
- """1x1 convolution"""
154
- return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride, bias=bias)
155
-
156
-
157
- class BasicBlock(nn.Module):
158
- expansion: int = 1
159
-
160
- def __init__(
161
- self,
162
- inplanes: int,
163
- planes: int,
164
- stride: int = 1,
165
- downsample: Optional[nn.Module] = None,
166
- groups: int = 1,
167
- base_width: int = 64,
168
- dilation: int = 1,
169
- norm_layer: Optional[Callable[..., nn.Module]] = None
170
- ) -> None:
171
- super(BasicBlock, self).__init__()
172
- if norm_layer is None:
173
- norm_layer = nn.BatchNorm2d
174
- if groups != 1 or base_width != 64:
175
- raise ValueError('BasicBlock only supports groups=1 and base_width=64')
176
- if dilation > 1:
177
- raise NotImplementedError("Dilation > 1 not supported in BasicBlock")
178
- # Both self.conv1 and self.downsample layers downsample the input when stride != 1
179
- self.conv1 = conv3x3(inplanes, planes, stride)
180
- self.bn1 = norm_layer(planes)
181
- self.relu = nn.ReLU(inplace=True)
182
- self.conv2 = conv3x3(planes, planes)
183
- self.bn2 = norm_layer(planes)
184
- self.downsample = downsample
185
- self.stride = stride
186
-
187
- def forward(self, x: Tensor) -> Tensor:
188
- identity = x
189
-
190
- out = self.conv1(x)
191
- out = self.bn1(out)
192
- out = self.relu(out)
193
-
194
- out = self.conv2(out)
195
- out = self.bn2(out)
196
-
197
- if self.downsample is not None:
198
- identity = self.downsample(x)
199
-
200
- out += identity
201
- out = self.relu(out)
202
-
203
- return out
204
-
205
-
206
- class Bottleneck(nn.Module):
207
- # Bottleneck in torchvision places the stride for downsampling at 3x3 convolution(self.conv2)
208
- # while original implementation places the stride at the first 1x1 convolution(self.conv1)
209
- # according to "Deep residual learning for image recognition"https://arxiv.org/abs/1512.03385.
210
- # This variant is also known as ResNet V1.5 and improves accuracy according to
211
- # https://ngc.nvidia.com/catalog/model-scripts/nvidia:resnet_50_v1_5_for_pytorch.
212
-
213
- expansion: int = 4
214
-
215
- def __init__(
216
- self,
217
- inplanes: int,
218
- planes: int,
219
- stride: int = 1,
220
- downsample: Optional[nn.Module] = None,
221
- groups: int = 1,
222
- base_width: int = 64,
223
- dilation: int = 1,
224
- norm_layer: Optional[Callable[..., nn.Module]] = None
225
- ) -> None:
226
- super(Bottleneck, self).__init__()
227
- if norm_layer is None:
228
- norm_layer = nn.BatchNorm2d
229
- width = int(planes * (base_width / 64.)) * groups
230
- # Both self.conv2 and self.downsample layers downsample the input when stride != 1
231
- self.conv1 = conv1x1(inplanes, width)
232
- self.bn1 = norm_layer(width)
233
- self.conv2 = conv3x3(width, width, stride, groups, dilation)
234
- self.bn2 = norm_layer(width)
235
- self.conv3 = conv1x1(width, planes * self.expansion)
236
- self.bn3 = norm_layer(planes * self.expansion)
237
- self.relu = nn.ReLU(inplace=True)
238
- self.downsample = downsample
239
- self.stride = stride
240
-
241
- def forward(self, x: Tensor) -> Tensor:
242
- identity = x
243
-
244
- out = self.conv1(x)
245
- out = self.bn1(out)
246
- out = self.relu(out)
247
-
248
- out = self.conv2(out)
249
- out = self.bn2(out)
250
- out = self.relu(out)
251
-
252
- out = self.conv3(out)
253
- out = self.bn3(out)
254
-
255
- if self.downsample is not None:
256
- identity = self.downsample(x)
257
-
258
- out += identity
259
- out = self.relu(out)
260
-
261
- return out
262
-
263
-
264
- class ResNet(nn.Module):
265
-
266
- def __init__(
267
- self,
268
- block: Type[Union[BasicBlock, Bottleneck]],
269
- layers: List[int],
270
- num_classes: int = 1000,
271
- zero_init_residual: bool = False,
272
- use_last_fc: bool = False,
273
- groups: int = 1,
274
- width_per_group: int = 64,
275
- replace_stride_with_dilation: Optional[List[bool]] = None,
276
- norm_layer: Optional[Callable[..., nn.Module]] = None
277
- ) -> None:
278
- super(ResNet, self).__init__()
279
- if norm_layer is None:
280
- norm_layer = nn.BatchNorm2d
281
- self._norm_layer = norm_layer
282
-
283
- self.inplanes = 64
284
- self.dilation = 1
285
- if replace_stride_with_dilation is None:
286
- # each element in the tuple indicates if we should replace
287
- # the 2x2 stride with a dilated convolution instead
288
- replace_stride_with_dilation = [False, False, False]
289
- if len(replace_stride_with_dilation) != 3:
290
- raise ValueError("replace_stride_with_dilation should be None "
291
- "or a 3-element tuple, got {}".format(replace_stride_with_dilation))
292
- self.use_last_fc = use_last_fc
293
- self.groups = groups
294
- self.base_width = width_per_group
295
- self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=7, stride=2, padding=3,
296
- bias=False)
297
- self.bn1 = norm_layer(self.inplanes)
298
- self.relu = nn.ReLU(inplace=True)
299
- self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
300
- self.layer1 = self._make_layer(block, 64, layers[0])
301
- self.layer2 = self._make_layer(block, 128, layers[1], stride=2,
302
- dilate=replace_stride_with_dilation[0])
303
- self.layer3 = self._make_layer(block, 256, layers[2], stride=2,
304
- dilate=replace_stride_with_dilation[1])
305
- self.layer4 = self._make_layer(block, 512, layers[3], stride=2,
306
- dilate=replace_stride_with_dilation[2])
307
- self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
308
-
309
- if self.use_last_fc:
310
- self.fc = nn.Linear(512 * block.expansion, num_classes)
311
-
312
- for m in self.modules():
313
- if isinstance(m, nn.Conv2d):
314
- nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
315
- elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)):
316
- nn.init.constant_(m.weight, 1)
317
- nn.init.constant_(m.bias, 0)
318
-
319
-
320
-
321
- # Zero-initialize the last BN in each residual branch,
322
- # so that the residual branch starts with zeros, and each residual block behaves like an identity.
323
- # This improves the model by 0.2~0.3% according to https://arxiv.org/abs/1706.02677
324
- if zero_init_residual:
325
- for m in self.modules():
326
- if isinstance(m, Bottleneck):
327
- nn.init.constant_(m.bn3.weight, 0) # type: ignore[arg-type]
328
- elif isinstance(m, BasicBlock):
329
- nn.init.constant_(m.bn2.weight, 0) # type: ignore[arg-type]
330
-
331
- def _make_layer(self, block: Type[Union[BasicBlock, Bottleneck]], planes: int, blocks: int,
332
- stride: int = 1, dilate: bool = False) -> nn.Sequential:
333
- norm_layer = self._norm_layer
334
- downsample = None
335
- previous_dilation = self.dilation
336
- if dilate:
337
- self.dilation *= stride
338
- stride = 1
339
- if stride != 1 or self.inplanes != planes * block.expansion:
340
- downsample = nn.Sequential(
341
- conv1x1(self.inplanes, planes * block.expansion, stride),
342
- norm_layer(planes * block.expansion),
343
- )
344
-
345
- layers = []
346
- layers.append(block(self.inplanes, planes, stride, downsample, self.groups,
347
- self.base_width, previous_dilation, norm_layer))
348
- self.inplanes = planes * block.expansion
349
- for _ in range(1, blocks):
350
- layers.append(block(self.inplanes, planes, groups=self.groups,
351
- base_width=self.base_width, dilation=self.dilation,
352
- norm_layer=norm_layer))
353
-
354
- return nn.Sequential(*layers)
355
-
356
- def _forward_impl(self, x: Tensor) -> Tensor:
357
- # See note [TorchScript super()]
358
- x = self.conv1(x)
359
- x = self.bn1(x)
360
- x = self.relu(x)
361
- x = self.maxpool(x)
362
-
363
- x = self.layer1(x)
364
- x = self.layer2(x)
365
- x = self.layer3(x)
366
- x = self.layer4(x)
367
-
368
- x = self.avgpool(x)
369
- if self.use_last_fc:
370
- x = torch.flatten(x, 1)
371
- x = self.fc(x)
372
- return x
373
-
374
- def forward(self, x: Tensor) -> Tensor:
375
- return self._forward_impl(x)
376
-
377
-
378
- def _resnet(
379
- arch: str,
380
- block: Type[Union[BasicBlock, Bottleneck]],
381
- layers: List[int],
382
- pretrained: bool,
383
- progress: bool,
384
- **kwargs: Any
385
- ) -> ResNet:
386
- model = ResNet(block, layers, **kwargs)
387
- if pretrained:
388
- state_dict = load_state_dict_from_url(model_urls[arch],
389
- progress=progress)
390
- model.load_state_dict(state_dict)
391
- return model
392
-
393
-
394
- def resnet18(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
395
- r"""ResNet-18 model from
396
- `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_.
397
-
398
- Args:
399
- pretrained (bool): If True, returns a model pre-trained on ImageNet
400
- progress (bool): If True, displays a progress bar of the download to stderr
401
- """
402
- return _resnet('resnet18', BasicBlock, [2, 2, 2, 2], pretrained, progress,
403
- **kwargs)
404
-
405
-
406
- def resnet34(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
407
- r"""ResNet-34 model from
408
- `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_.
409
-
410
- Args:
411
- pretrained (bool): If True, returns a model pre-trained on ImageNet
412
- progress (bool): If True, displays a progress bar of the download to stderr
413
- """
414
- return _resnet('resnet34', BasicBlock, [3, 4, 6, 3], pretrained, progress,
415
- **kwargs)
416
-
417
-
418
- def resnet50(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
419
- r"""ResNet-50 model from
420
- `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_.
421
-
422
- Args:
423
- pretrained (bool): If True, returns a model pre-trained on ImageNet
424
- progress (bool): If True, displays a progress bar of the download to stderr
425
- """
426
- return _resnet('resnet50', Bottleneck, [3, 4, 6, 3], pretrained, progress,
427
- **kwargs)
428
-
429
-
430
- def resnet101(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
431
- r"""ResNet-101 model from
432
- `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_.
433
-
434
- Args:
435
- pretrained (bool): If True, returns a model pre-trained on ImageNet
436
- progress (bool): If True, displays a progress bar of the download to stderr
437
- """
438
- return _resnet('resnet101', Bottleneck, [3, 4, 23, 3], pretrained, progress,
439
- **kwargs)
440
-
441
-
442
- def resnet152(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
443
- r"""ResNet-152 model from
444
- `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_.
445
-
446
- Args:
447
- pretrained (bool): If True, returns a model pre-trained on ImageNet
448
- progress (bool): If True, displays a progress bar of the download to stderr
449
- """
450
- return _resnet('resnet152', Bottleneck, [3, 8, 36, 3], pretrained, progress,
451
- **kwargs)
452
-
453
-
454
- def resnext50_32x4d(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
455
- r"""ResNeXt-50 32x4d model from
456
- `"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/pdf/1611.05431.pdf>`_.
457
-
458
- Args:
459
- pretrained (bool): If True, returns a model pre-trained on ImageNet
460
- progress (bool): If True, displays a progress bar of the download to stderr
461
- """
462
- kwargs['groups'] = 32
463
- kwargs['width_per_group'] = 4
464
- return _resnet('resnext50_32x4d', Bottleneck, [3, 4, 6, 3],
465
- pretrained, progress, **kwargs)
466
-
467
-
468
- def resnext101_32x8d(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
469
- r"""ResNeXt-101 32x8d model from
470
- `"Aggregated Residual Transformation for Deep Neural Networks" <https://arxiv.org/pdf/1611.05431.pdf>`_.
471
-
472
- Args:
473
- pretrained (bool): If True, returns a model pre-trained on ImageNet
474
- progress (bool): If True, displays a progress bar of the download to stderr
475
- """
476
- kwargs['groups'] = 32
477
- kwargs['width_per_group'] = 8
478
- return _resnet('resnext101_32x8d', Bottleneck, [3, 4, 23, 3],
479
- pretrained, progress, **kwargs)
480
-
481
-
482
- def wide_resnet50_2(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
483
- r"""Wide ResNet-50-2 model from
484
- `"Wide Residual Networks" <https://arxiv.org/pdf/1605.07146.pdf>`_.
485
-
486
- The model is the same as ResNet except for the bottleneck number of channels
487
- which is twice larger in every block. The number of channels in outer 1x1
488
- convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048
489
- channels, and in Wide ResNet-50-2 has 2048-1024-2048.
490
-
491
- Args:
492
- pretrained (bool): If True, returns a model pre-trained on ImageNet
493
- progress (bool): If True, displays a progress bar of the download to stderr
494
- """
495
- kwargs['width_per_group'] = 64 * 2
496
- return _resnet('wide_resnet50_2', Bottleneck, [3, 4, 6, 3],
497
- pretrained, progress, **kwargs)
498
-
499
-
500
- def wide_resnet101_2(pretrained: bool = False, progress: bool = True, **kwargs: Any) -> ResNet:
501
- r"""Wide ResNet-101-2 model from
502
- `"Wide Residual Networks" <https://arxiv.org/pdf/1605.07146.pdf>`_.
503
-
504
- The model is the same as ResNet except for the bottleneck number of channels
505
- which is twice larger in every block. The number of channels in outer 1x1
506
- convolutions is the same, e.g. last block in ResNet-50 has 2048-512-2048
507
- channels, and in Wide ResNet-50-2 has 2048-1024-2048.
508
-
509
- Args:
510
- pretrained (bool): If True, returns a model pre-trained on ImageNet
511
- progress (bool): If True, displays a progress bar of the download to stderr
512
- """
513
- kwargs['width_per_group'] = 64 * 2
514
- return _resnet('wide_resnet101_2', Bottleneck, [3, 4, 23, 3],
515
- pretrained, progress, **kwargs)
516
-
517
-
518
- func_dict = {
519
- 'resnet18': (resnet18, 512),
520
- 'resnet50': (resnet50, 2048)
521
- }
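Note (not part of the commit): a minimal usage sketch for the deleted ResNet module above, assuming it is importable so that func_dict and the torchvision-style factories are in scope. It only shows what already follows from the code: with use_last_fc=False the forward pass ends at the adaptive average pool, and the channel count matches the width listed in func_dict.

import torch

# func_dict maps an architecture name to (constructor, output feature width).
def build_backbone(name="resnet18", pretrained=False):
    constructor, out_dim = func_dict[name]
    # use_last_fc=False keeps the network headless (no final nn.Linear).
    return constructor(pretrained=pretrained, use_last_fc=False), out_dim

net, out_dim = build_backbone("resnet18")
feat = net(torch.randn(1, 3, 224, 224))
print(feat.shape)   # torch.Size([1, 512, 1, 1]); 512 == out_dim for resnet18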
spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/utils/loading.py DELETED
@@ -1,6 +0,0 @@
1
- import yaml
2
-
3
-
4
- def load_yaml(path):
5
- with open(path, "rt") as f:
6
- return yaml.safe_load(f)
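A one-line usage sketch for the helper above; the import path and config filename are hypothetical:

from src.utils.loading import load_yaml   # hypothetical path mirroring the deleted file location

cfg = load_yaml("configs/style_transfer.yaml")   # returns the parsed YAML as plain dicts/lists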
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_v_pred.py DELETED
@@ -1,540 +0,0 @@
1
- # coding=utf-8
2
- # Copyright 2023 HuggingFace Inc.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
-
16
- import gc
17
- import time
18
- import unittest
19
-
20
- import numpy as np
21
- import torch
22
- from huggingface_hub import hf_hub_download
23
- from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer
24
-
25
- from diffusers import (
26
- AutoencoderKL,
27
- DDIMScheduler,
28
- DPMSolverMultistepScheduler,
29
- EulerDiscreteScheduler,
30
- StableDiffusionPipeline,
31
- UNet2DConditionModel,
32
- )
33
- from diffusers.models.attention_processor import AttnProcessor
34
- from diffusers.utils import load_numpy, slow, torch_device
35
- from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
36
-
37
-
38
- enable_full_determinism()
39
-
40
-
41
- class StableDiffusion2VPredictionPipelineFastTests(unittest.TestCase):
42
- def tearDown(self):
43
- # clean up the VRAM after each test
44
- super().tearDown()
45
- gc.collect()
46
- torch.cuda.empty_cache()
47
-
48
- @property
49
- def dummy_cond_unet(self):
50
- torch.manual_seed(0)
51
- model = UNet2DConditionModel(
52
- block_out_channels=(32, 64),
53
- layers_per_block=2,
54
- sample_size=32,
55
- in_channels=4,
56
- out_channels=4,
57
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
58
- up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
59
- cross_attention_dim=32,
60
- # SD2-specific config below
61
- attention_head_dim=(2, 4),
62
- use_linear_projection=True,
63
- )
64
- return model
65
-
66
- @property
67
- def dummy_vae(self):
68
- torch.manual_seed(0)
69
- model = AutoencoderKL(
70
- block_out_channels=[32, 64],
71
- in_channels=3,
72
- out_channels=3,
73
- down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
74
- up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
75
- latent_channels=4,
76
- sample_size=128,
77
- )
78
- return model
79
-
80
- @property
81
- def dummy_text_encoder(self):
82
- torch.manual_seed(0)
83
- config = CLIPTextConfig(
84
- bos_token_id=0,
85
- eos_token_id=2,
86
- hidden_size=32,
87
- intermediate_size=37,
88
- layer_norm_eps=1e-05,
89
- num_attention_heads=4,
90
- num_hidden_layers=5,
91
- pad_token_id=1,
92
- vocab_size=1000,
93
- # SD2-specific config below
94
- hidden_act="gelu",
95
- projection_dim=64,
96
- )
97
- return CLIPTextModel(config)
98
-
99
- def test_stable_diffusion_v_pred_ddim(self):
100
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
101
- unet = self.dummy_cond_unet
102
- scheduler = DDIMScheduler(
103
- beta_start=0.00085,
104
- beta_end=0.012,
105
- beta_schedule="scaled_linear",
106
- clip_sample=False,
107
- set_alpha_to_one=False,
108
- prediction_type="v_prediction",
109
- )
110
-
111
- vae = self.dummy_vae
112
- bert = self.dummy_text_encoder
113
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
114
-
115
- # make sure here that pndm scheduler skips prk
116
- sd_pipe = StableDiffusionPipeline(
117
- unet=unet,
118
- scheduler=scheduler,
119
- vae=vae,
120
- text_encoder=bert,
121
- tokenizer=tokenizer,
122
- safety_checker=None,
123
- feature_extractor=None,
124
- requires_safety_checker=False,
125
- )
126
- sd_pipe = sd_pipe.to(device)
127
- sd_pipe.set_progress_bar_config(disable=None)
128
-
129
- prompt = "A painting of a squirrel eating a burger"
130
-
131
- generator = torch.Generator(device=device).manual_seed(0)
132
- output = sd_pipe([prompt], generator=generator, guidance_scale=6.0, num_inference_steps=2, output_type="np")
133
- image = output.images
134
-
135
- generator = torch.Generator(device=device).manual_seed(0)
136
- image_from_tuple = sd_pipe(
137
- [prompt],
138
- generator=generator,
139
- guidance_scale=6.0,
140
- num_inference_steps=2,
141
- output_type="np",
142
- return_dict=False,
143
- )[0]
144
-
145
- image_slice = image[0, -3:, -3:, -1]
146
- image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1]
147
-
148
- assert image.shape == (1, 64, 64, 3)
149
- expected_slice = np.array([0.6569, 0.6525, 0.5142, 0.4968, 0.4923, 0.4601, 0.4996, 0.5041, 0.4544])
150
-
151
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
152
- assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2
153
-
154
- def test_stable_diffusion_v_pred_k_euler(self):
155
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
156
- unet = self.dummy_cond_unet
157
- scheduler = EulerDiscreteScheduler(
158
- beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", prediction_type="v_prediction"
159
- )
160
- vae = self.dummy_vae
161
- bert = self.dummy_text_encoder
162
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
163
-
164
- # make sure here that pndm scheduler skips prk
165
- sd_pipe = StableDiffusionPipeline(
166
- unet=unet,
167
- scheduler=scheduler,
168
- vae=vae,
169
- text_encoder=bert,
170
- tokenizer=tokenizer,
171
- safety_checker=None,
172
- feature_extractor=None,
173
- requires_safety_checker=False,
174
- )
175
- sd_pipe = sd_pipe.to(device)
176
- sd_pipe.set_progress_bar_config(disable=None)
177
-
178
- prompt = "A painting of a squirrel eating a burger"
179
- generator = torch.Generator(device=device).manual_seed(0)
180
- output = sd_pipe([prompt], generator=generator, guidance_scale=6.0, num_inference_steps=2, output_type="np")
181
-
182
- image = output.images
183
-
184
- generator = torch.Generator(device=device).manual_seed(0)
185
- image_from_tuple = sd_pipe(
186
- [prompt],
187
- generator=generator,
188
- guidance_scale=6.0,
189
- num_inference_steps=2,
190
- output_type="np",
191
- return_dict=False,
192
- )[0]
193
-
194
- image_slice = image[0, -3:, -3:, -1]
195
- image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1]
196
-
197
- assert image.shape == (1, 64, 64, 3)
198
- expected_slice = np.array([0.5644, 0.6514, 0.5190, 0.5663, 0.5287, 0.4953, 0.5430, 0.5243, 0.4778])
199
-
200
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
201
- assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2
202
-
203
- @unittest.skipIf(torch_device != "cuda", "This test requires a GPU")
204
- def test_stable_diffusion_v_pred_fp16(self):
205
- """Test that stable diffusion v-prediction works with fp16"""
206
- unet = self.dummy_cond_unet
207
- scheduler = DDIMScheduler(
208
- beta_start=0.00085,
209
- beta_end=0.012,
210
- beta_schedule="scaled_linear",
211
- clip_sample=False,
212
- set_alpha_to_one=False,
213
- prediction_type="v_prediction",
214
- )
215
- vae = self.dummy_vae
216
- bert = self.dummy_text_encoder
217
- tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
218
-
219
- # put models in fp16
220
- unet = unet.half()
221
- vae = vae.half()
222
- bert = bert.half()
223
-
224
- # make sure here that pndm scheduler skips prk
225
- sd_pipe = StableDiffusionPipeline(
226
- unet=unet,
227
- scheduler=scheduler,
228
- vae=vae,
229
- text_encoder=bert,
230
- tokenizer=tokenizer,
231
- safety_checker=None,
232
- feature_extractor=None,
233
- requires_safety_checker=False,
234
- )
235
- sd_pipe = sd_pipe.to(torch_device)
236
- sd_pipe.set_progress_bar_config(disable=None)
237
-
238
- prompt = "A painting of a squirrel eating a burger"
239
- generator = torch.manual_seed(0)
240
- image = sd_pipe([prompt], generator=generator, num_inference_steps=2, output_type="np").images
241
-
242
- assert image.shape == (1, 64, 64, 3)
243
-
244
-
245
- @slow
246
- @require_torch_gpu
247
- class StableDiffusion2VPredictionPipelineIntegrationTests(unittest.TestCase):
248
- def tearDown(self):
249
- # clean up the VRAM after each test
250
- super().tearDown()
251
- gc.collect()
252
- torch.cuda.empty_cache()
253
-
254
- def test_stable_diffusion_v_pred_default(self):
255
- sd_pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2")
256
- sd_pipe = sd_pipe.to(torch_device)
257
- sd_pipe.enable_attention_slicing()
258
- sd_pipe.set_progress_bar_config(disable=None)
259
-
260
- prompt = "A painting of a squirrel eating a burger"
261
- generator = torch.manual_seed(0)
262
- output = sd_pipe([prompt], generator=generator, guidance_scale=7.5, num_inference_steps=20, output_type="np")
263
-
264
- image = output.images
265
- image_slice = image[0, 253:256, 253:256, -1]
266
-
267
- assert image.shape == (1, 768, 768, 3)
268
- expected_slice = np.array([0.1868, 0.1922, 0.1527, 0.1921, 0.1908, 0.1624, 0.1779, 0.1652, 0.1734])
269
-
270
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
271
-
272
- def test_stable_diffusion_v_pred_upcast_attention(self):
273
- sd_pipe = StableDiffusionPipeline.from_pretrained(
274
- "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
275
- )
276
- sd_pipe = sd_pipe.to(torch_device)
277
- sd_pipe.enable_attention_slicing()
278
- sd_pipe.set_progress_bar_config(disable=None)
279
-
280
- prompt = "A painting of a squirrel eating a burger"
281
- generator = torch.manual_seed(0)
282
- output = sd_pipe([prompt], generator=generator, guidance_scale=7.5, num_inference_steps=20, output_type="np")
283
-
284
- image = output.images
285
- image_slice = image[0, 253:256, 253:256, -1]
286
-
287
- assert image.shape == (1, 768, 768, 3)
288
- expected_slice = np.array([0.4209, 0.4087, 0.4097, 0.4209, 0.3860, 0.4329, 0.4280, 0.4324, 0.4187])
289
-
290
- assert np.abs(image_slice.flatten() - expected_slice).max() < 5e-2
291
-
292
- def test_stable_diffusion_v_pred_euler(self):
293
- scheduler = EulerDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-2", subfolder="scheduler")
294
- sd_pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2", scheduler=scheduler)
295
- sd_pipe = sd_pipe.to(torch_device)
296
- sd_pipe.enable_attention_slicing()
297
- sd_pipe.set_progress_bar_config(disable=None)
298
-
299
- prompt = "A painting of a squirrel eating a burger"
300
- generator = torch.manual_seed(0)
301
-
302
- output = sd_pipe([prompt], generator=generator, num_inference_steps=5, output_type="numpy")
303
- image = output.images
304
-
305
- image_slice = image[0, 253:256, 253:256, -1]
306
-
307
- assert image.shape == (1, 768, 768, 3)
308
- expected_slice = np.array([0.1781, 0.1695, 0.1661, 0.1705, 0.1588, 0.1699, 0.2005, 0.1589, 0.1677])
309
-
310
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
311
-
312
- def test_stable_diffusion_v_pred_dpm(self):
313
- """
314
- TODO: update this test after making DPM compatible with V-prediction!
315
- """
316
- scheduler = DPMSolverMultistepScheduler.from_pretrained(
317
- "stabilityai/stable-diffusion-2", subfolder="scheduler"
318
- )
319
- sd_pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2", scheduler=scheduler)
320
- sd_pipe = sd_pipe.to(torch_device)
321
- sd_pipe.enable_attention_slicing()
322
- sd_pipe.set_progress_bar_config(disable=None)
323
-
324
- prompt = "a photograph of an astronaut riding a horse"
325
- generator = torch.manual_seed(0)
326
- image = sd_pipe(
327
- [prompt], generator=generator, guidance_scale=7.5, num_inference_steps=5, output_type="numpy"
328
- ).images
329
-
330
- image_slice = image[0, 253:256, 253:256, -1]
331
- assert image.shape == (1, 768, 768, 3)
332
- expected_slice = np.array([0.3303, 0.3184, 0.3291, 0.3300, 0.3256, 0.3113, 0.2965, 0.3134, 0.3192])
333
-
334
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
335
-
336
- def test_stable_diffusion_attention_slicing_v_pred(self):
337
- torch.cuda.reset_peak_memory_stats()
338
- model_id = "stabilityai/stable-diffusion-2"
339
- pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
340
- pipe.to(torch_device)
341
- pipe.set_progress_bar_config(disable=None)
342
-
343
- prompt = "a photograph of an astronaut riding a horse"
344
-
345
- # make attention efficient
346
- pipe.enable_attention_slicing()
347
- generator = torch.manual_seed(0)
348
- output_chunked = pipe(
349
- [prompt], generator=generator, guidance_scale=7.5, num_inference_steps=10, output_type="numpy"
350
- )
351
- image_chunked = output_chunked.images
352
-
353
- mem_bytes = torch.cuda.max_memory_allocated()
354
- torch.cuda.reset_peak_memory_stats()
355
- # make sure that less than 5.5 GB is allocated
356
- assert mem_bytes < 5.5 * 10**9
357
-
358
- # disable slicing
359
- pipe.disable_attention_slicing()
360
- generator = torch.manual_seed(0)
361
- output = pipe([prompt], generator=generator, guidance_scale=7.5, num_inference_steps=10, output_type="numpy")
362
- image = output.images
363
-
364
- # make sure that more than 5.5 GB is allocated
365
- mem_bytes = torch.cuda.max_memory_allocated()
366
- assert mem_bytes > 5.5 * 10**9
367
- assert np.abs(image_chunked.flatten() - image.flatten()).max() < 1e-3
368
-
369
- def test_stable_diffusion_text2img_pipeline_v_pred_default(self):
370
- expected_image = load_numpy(
371
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/"
372
- "sd2-text2img/astronaut_riding_a_horse_v_pred.npy"
373
- )
374
-
375
- pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2")
376
- pipe.to(torch_device)
377
- pipe.enable_attention_slicing()
378
- pipe.set_progress_bar_config(disable=None)
379
-
380
- prompt = "astronaut riding a horse"
381
-
382
- generator = torch.manual_seed(0)
383
- output = pipe(prompt=prompt, guidance_scale=7.5, generator=generator, output_type="np")
384
- image = output.images[0]
385
-
386
- assert image.shape == (768, 768, 3)
387
- assert np.abs(expected_image - image).max() < 9e-1
388
-
389
- def test_stable_diffusion_text2img_pipeline_unflawed(self):
390
- expected_image = load_numpy(
391
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/"
392
- "sd2-text2img/lion_galaxy.npy"
393
- )
394
-
395
- pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
396
- pipe.scheduler = DDIMScheduler.from_config(
397
- pipe.scheduler.config, timestep_spacing="trailing", rescale_betas_zero_snr=True
398
- )
399
- pipe.to(torch_device)
400
- pipe.enable_attention_slicing()
401
- pipe.set_progress_bar_config(disable=None)
402
-
403
- prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k"
404
-
405
- generator = torch.manual_seed(0)
406
- output = pipe(prompt=prompt, guidance_scale=7.5, guidance_rescale=0.7, generator=generator, output_type="np")
407
- image = output.images[0]
408
-
409
- assert image.shape == (768, 768, 3)
410
- assert np.abs(expected_image - image).max() < 5e-1
411
-
412
- def test_stable_diffusion_text2img_pipeline_v_pred_fp16(self):
413
- expected_image = load_numpy(
414
- "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/"
415
- "sd2-text2img/astronaut_riding_a_horse_v_pred_fp16.npy"
416
- )
417
-
418
- pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2", torch_dtype=torch.float16)
419
- pipe.to(torch_device)
420
- pipe.set_progress_bar_config(disable=None)
421
-
422
- prompt = "astronaut riding a horse"
423
-
424
- generator = torch.manual_seed(0)
425
- output = pipe(prompt=prompt, guidance_scale=7.5, generator=generator, output_type="np")
426
- image = output.images[0]
427
-
428
- assert image.shape == (768, 768, 3)
429
- assert np.abs(expected_image - image).max() < 7.5e-1
430
-
431
- def test_download_local(self):
432
- filename = hf_hub_download("stabilityai/stable-diffusion-2-1", filename="v2-1_768-ema-pruned.safetensors")
433
-
434
- pipe = StableDiffusionPipeline.from_single_file(filename, torch_dtype=torch.float16)
435
- pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
436
- pipe.to("cuda")
437
-
438
- image_out = pipe("test", num_inference_steps=1, output_type="np").images[0]
439
-
440
- assert image_out.shape == (768, 768, 3)
441
-
442
- def test_download_ckpt_diff_format_is_same(self):
443
- single_file_path = (
444
- "https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-ema-pruned.safetensors"
445
- )
446
-
447
- pipe_single = StableDiffusionPipeline.from_single_file(single_file_path)
448
- pipe_single.scheduler = DDIMScheduler.from_config(pipe_single.scheduler.config)
449
- pipe_single.unet.set_attn_processor(AttnProcessor())
450
- pipe_single.to("cuda")
451
-
452
- generator = torch.Generator(device="cpu").manual_seed(0)
453
- image_ckpt = pipe_single("a turtle", num_inference_steps=5, generator=generator, output_type="np").images[0]
454
-
455
- pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")
456
- pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
457
- pipe.unet.set_attn_processor(AttnProcessor())
458
- pipe.to("cuda")
459
-
460
- generator = torch.Generator(device="cpu").manual_seed(0)
461
- image = pipe("a turtle", num_inference_steps=5, generator=generator, output_type="np").images[0]
462
-
463
- assert np.max(np.abs(image - image_ckpt)) < 1e-3
464
-
465
- def test_stable_diffusion_text2img_intermediate_state_v_pred(self):
466
- number_of_steps = 0
467
-
468
- def test_callback_fn(step: int, timestep: int, latents: torch.FloatTensor) -> None:
469
- test_callback_fn.has_been_called = True
470
- nonlocal number_of_steps
471
- number_of_steps += 1
472
- if step == 0:
473
- latents = latents.detach().cpu().numpy()
474
- assert latents.shape == (1, 4, 96, 96)
475
- latents_slice = latents[0, -3:, -3:, -1]
476
- expected_slice = np.array([0.7749, 0.0325, 0.5088, 0.1619, 0.3372, 0.3667, -0.5186, 0.6860, 1.4326])
477
-
478
- assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2
479
- elif step == 19:
480
- latents = latents.detach().cpu().numpy()
481
- assert latents.shape == (1, 4, 96, 96)
482
- latents_slice = latents[0, -3:, -3:, -1]
483
- expected_slice = np.array([1.3887, 1.0273, 1.7266, 0.0726, 0.6611, 0.1598, -1.0547, 0.1522, 0.0227])
484
-
485
- assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2
486
-
487
- test_callback_fn.has_been_called = False
488
-
489
- pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2", torch_dtype=torch.float16)
490
- pipe = pipe.to(torch_device)
491
- pipe.set_progress_bar_config(disable=None)
492
- pipe.enable_attention_slicing()
493
-
494
- prompt = "Andromeda galaxy in a bottle"
495
-
496
- generator = torch.manual_seed(0)
497
- pipe(
498
- prompt=prompt,
499
- num_inference_steps=20,
500
- guidance_scale=7.5,
501
- generator=generator,
502
- callback=test_callback_fn,
503
- callback_steps=1,
504
- )
505
- assert test_callback_fn.has_been_called
506
- assert number_of_steps == 20
507
-
508
- def test_stable_diffusion_low_cpu_mem_usage_v_pred(self):
509
- pipeline_id = "stabilityai/stable-diffusion-2"
510
-
511
- start_time = time.time()
512
- pipeline_low_cpu_mem_usage = StableDiffusionPipeline.from_pretrained(pipeline_id, torch_dtype=torch.float16)
513
- pipeline_low_cpu_mem_usage.to(torch_device)
514
- low_cpu_mem_usage_time = time.time() - start_time
515
-
516
- start_time = time.time()
517
- _ = StableDiffusionPipeline.from_pretrained(pipeline_id, torch_dtype=torch.float16, low_cpu_mem_usage=False)
518
- normal_load_time = time.time() - start_time
519
-
520
- assert 2 * low_cpu_mem_usage_time < normal_load_time
521
-
522
- def test_stable_diffusion_pipeline_with_sequential_cpu_offloading_v_pred(self):
523
- torch.cuda.empty_cache()
524
- torch.cuda.reset_max_memory_allocated()
525
- torch.cuda.reset_peak_memory_stats()
526
-
527
- pipeline_id = "stabilityai/stable-diffusion-2"
528
- prompt = "Andromeda galaxy in a bottle"
529
-
530
- pipeline = StableDiffusionPipeline.from_pretrained(pipeline_id, torch_dtype=torch.float16)
531
- pipeline = pipeline.to(torch_device)
532
- pipeline.enable_attention_slicing(1)
533
- pipeline.enable_sequential_cpu_offload()
534
-
535
- generator = torch.manual_seed(0)
536
- _ = pipeline(prompt, generator=generator, num_inference_steps=5)
537
-
538
- mem_bytes = torch.cuda.max_memory_allocated()
539
- # make sure that less than 2.8 GB is allocated
540
- assert mem_bytes < 2.8 * 10**9
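Outside the test harness, the setup these deleted tests exercise reduces to loading a v-prediction checkpoint and sampling from it. A minimal sketch using only calls that appear in the tests above; the model id, prompt, and CUDA device are taken from those tests and assumed to be available:

import torch
from diffusers import DDIMScheduler, StableDiffusionPipeline

# stabilityai/stable-diffusion-2 ships a v-prediction UNet; its scheduler config
# carries prediction_type="v_prediction", which from_config preserves.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
)
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")
pipe.enable_attention_slicing()

generator = torch.manual_seed(0)
image = pipe(
    "A painting of a squirrel eating a burger",
    num_inference_steps=20,
    guidance_scale=7.5,
    generator=generator,
).images[0]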
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_unclip.py DELETED
@@ -1,137 +0,0 @@
1
- import torch
2
-
3
- from diffusers import UnCLIPScheduler
4
-
5
- from .test_schedulers import SchedulerCommonTest
6
-
7
-
8
- # UnCLIPScheduler is a modified DDPMScheduler with a subset of the configuration.
9
- class UnCLIPSchedulerTest(SchedulerCommonTest):
10
- scheduler_classes = (UnCLIPScheduler,)
11
-
12
- def get_scheduler_config(self, **kwargs):
13
- config = {
14
- "num_train_timesteps": 1000,
15
- "variance_type": "fixed_small_log",
16
- "clip_sample": True,
17
- "clip_sample_range": 1.0,
18
- "prediction_type": "epsilon",
19
- }
20
-
21
- config.update(**kwargs)
22
- return config
23
-
24
- def test_timesteps(self):
25
- for timesteps in [1, 5, 100, 1000]:
26
- self.check_over_configs(num_train_timesteps=timesteps)
27
-
28
- def test_variance_type(self):
29
- for variance in ["fixed_small_log", "learned_range"]:
30
- self.check_over_configs(variance_type=variance)
31
-
32
- def test_clip_sample(self):
33
- for clip_sample in [True, False]:
34
- self.check_over_configs(clip_sample=clip_sample)
35
-
36
- def test_clip_sample_range(self):
37
- for clip_sample_range in [1, 5, 10, 20]:
38
- self.check_over_configs(clip_sample_range=clip_sample_range)
39
-
40
- def test_prediction_type(self):
41
- for prediction_type in ["epsilon", "sample"]:
42
- self.check_over_configs(prediction_type=prediction_type)
43
-
44
- def test_time_indices(self):
45
- for time_step in [0, 500, 999]:
46
- for prev_timestep in [None, 5, 100, 250, 500, 750]:
47
- if prev_timestep is not None and prev_timestep >= time_step:
48
- continue
49
-
50
- self.check_over_forward(time_step=time_step, prev_timestep=prev_timestep)
51
-
52
- def test_variance_fixed_small_log(self):
53
- scheduler_class = self.scheduler_classes[0]
54
- scheduler_config = self.get_scheduler_config(variance_type="fixed_small_log")
55
- scheduler = scheduler_class(**scheduler_config)
56
-
57
- assert torch.sum(torch.abs(scheduler._get_variance(0) - 1.0000e-10)) < 1e-5
58
- assert torch.sum(torch.abs(scheduler._get_variance(487) - 0.0549625)) < 1e-5
59
- assert torch.sum(torch.abs(scheduler._get_variance(999) - 0.9994987)) < 1e-5
60
-
61
- def test_variance_learned_range(self):
62
- scheduler_class = self.scheduler_classes[0]
63
- scheduler_config = self.get_scheduler_config(variance_type="learned_range")
64
- scheduler = scheduler_class(**scheduler_config)
65
-
66
- predicted_variance = 0.5
67
-
68
- assert scheduler._get_variance(1, predicted_variance=predicted_variance) - -10.1712790 < 1e-5
69
- assert scheduler._get_variance(487, predicted_variance=predicted_variance) - -5.7998052 < 1e-5
70
- assert scheduler._get_variance(999, predicted_variance=predicted_variance) - -0.0010011 < 1e-5
71
-
72
- def test_full_loop(self):
73
- scheduler_class = self.scheduler_classes[0]
74
- scheduler_config = self.get_scheduler_config()
75
- scheduler = scheduler_class(**scheduler_config)
76
-
77
- timesteps = scheduler.timesteps
78
-
79
- model = self.dummy_model()
80
- sample = self.dummy_sample_deter
81
- generator = torch.manual_seed(0)
82
-
83
- for i, t in enumerate(timesteps):
84
- # 1. predict noise residual
85
- residual = model(sample, t)
86
-
87
- # 2. predict previous mean of sample x_t-1
88
- pred_prev_sample = scheduler.step(residual, t, sample, generator=generator).prev_sample
89
-
90
- sample = pred_prev_sample
91
-
92
- result_sum = torch.sum(torch.abs(sample))
93
- result_mean = torch.mean(torch.abs(sample))
94
-
95
- assert abs(result_sum.item() - 252.2682495) < 1e-2
96
- assert abs(result_mean.item() - 0.3284743) < 1e-3
97
-
98
- def test_full_loop_skip_timesteps(self):
99
- scheduler_class = self.scheduler_classes[0]
100
- scheduler_config = self.get_scheduler_config()
101
- scheduler = scheduler_class(**scheduler_config)
102
-
103
- scheduler.set_timesteps(25)
104
-
105
- timesteps = scheduler.timesteps
106
-
107
- model = self.dummy_model()
108
- sample = self.dummy_sample_deter
109
- generator = torch.manual_seed(0)
110
-
111
- for i, t in enumerate(timesteps):
112
- # 1. predict noise residual
113
- residual = model(sample, t)
114
-
115
- if i + 1 == timesteps.shape[0]:
116
- prev_timestep = None
117
- else:
118
- prev_timestep = timesteps[i + 1]
119
-
120
- # 2. predict previous mean of sample x_t-1
121
- pred_prev_sample = scheduler.step(
122
- residual, t, sample, prev_timestep=prev_timestep, generator=generator
123
- ).prev_sample
124
-
125
- sample = pred_prev_sample
126
-
127
- result_sum = torch.sum(torch.abs(sample))
128
- result_mean = torch.mean(torch.abs(sample))
129
-
130
- assert abs(result_sum.item() - 258.2044983) < 1e-2
131
- assert abs(result_mean.item() - 0.3362038) < 1e-3
132
-
133
- def test_trained_betas(self):
134
- pass
135
-
136
- def test_add_noise_device(self):
137
- pass
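The full-loop tests above boil down to a plain ancestral-sampling loop over scheduler.timesteps. A stripped-down sketch with a stand-in tensor in place of the dummy model provided by SchedulerCommonTest; shapes and the zero residual are illustrative only:

import torch
from diffusers import UnCLIPScheduler

scheduler = UnCLIPScheduler(
    num_train_timesteps=1000, variance_type="fixed_small_log", clip_sample=True
)
sample = torch.randn(1, 3, 8, 8)          # stand-in for dummy_sample_deter
generator = torch.manual_seed(0)

for t in scheduler.timesteps:
    residual = torch.zeros_like(sample)   # stand-in for model(sample, t)
    sample = scheduler.step(residual, t, sample, generator=generator).prev_sample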
spaces/Andy1621/uniformer_image_detection/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco.py DELETED
@@ -1,45 +0,0 @@
1
- _base_ = './mask_rcnn_r50_fpn_1x_coco.py'
2
- model = dict(
3
- pretrained='open-mmlab://detectron2/resnet50_caffe',
4
- backbone=dict(norm_cfg=dict(requires_grad=False), style='caffe'))
5
- # use caffe img_norm
6
- img_norm_cfg = dict(
7
- mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
8
- train_pipeline = [
9
- dict(type='LoadImageFromFile'),
10
- dict(
11
- type='LoadAnnotations',
12
- with_bbox=True,
13
- with_mask=True,
14
- poly2mask=False),
15
- dict(
16
- type='Resize',
17
- img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736),
18
- (1333, 768), (1333, 800)],
19
- multiscale_mode='value',
20
- keep_ratio=True),
21
- dict(type='RandomFlip', flip_ratio=0.5),
22
- dict(type='Normalize', **img_norm_cfg),
23
- dict(type='Pad', size_divisor=32),
24
- dict(type='DefaultFormatBundle'),
25
- dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
26
- ]
27
- test_pipeline = [
28
- dict(type='LoadImageFromFile'),
29
- dict(
30
- type='MultiScaleFlipAug',
31
- img_scale=(1333, 800),
32
- flip=False,
33
- transforms=[
34
- dict(type='Resize', keep_ratio=True),
35
- dict(type='RandomFlip'),
36
- dict(type='Normalize', **img_norm_cfg),
37
- dict(type='Pad', size_divisor=32),
38
- dict(type='ImageToTensor', keys=['img']),
39
- dict(type='Collect', keys=['img']),
40
- ])
41
- ]
42
- data = dict(
43
- train=dict(pipeline=train_pipeline),
44
- val=dict(pipeline=test_pipeline),
45
- test=dict(pipeline=test_pipeline))
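Configs like the one above are not run directly; they are resolved, including the _base_ inheritance, by mmcv's Config loader before MMDetection's train/test tools consume them. A small sketch assuming the standard mmcv Config API and a repository-relative path (both are assumptions, not part of the diff):

from mmcv import Config

# Hypothetical path inside an MMDetection checkout.
cfg = Config.fromfile(
    "configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco.py"
)
print(cfg.model.backbone.style)            # 'caffe', as set above
print(cfg.data.train.pipeline[3]["type"])  # 'RandomFlip', per the train_pipeline list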
spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_x101_64x4d_fpn_mstrain_2x_coco.py DELETED
@@ -1,14 +0,0 @@
1
- _base_ = './vfnet_r50_fpn_mstrain_2x_coco.py'
2
- model = dict(
3
- pretrained='open-mmlab://resnext101_64x4d',
4
- backbone=dict(
5
- type='ResNeXt',
6
- depth=101,
7
- groups=64,
8
- base_width=4,
9
- num_stages=4,
10
- out_indices=(0, 1, 2, 3),
11
- frozen_stages=1,
12
- norm_cfg=dict(type='BN', requires_grad=True),
13
- norm_eval=True,
14
- style='pytorch'))
spaces/Andy1621/uniformer_image_detection/tools/analysis_tools/robustness_eval.py DELETED
@@ -1,250 +0,0 @@
1
- import os.path as osp
2
- from argparse import ArgumentParser
3
-
4
- import mmcv
5
- import numpy as np
6
-
7
-
8
- def print_coco_results(results):
9
-
10
- def _print(result, ap=1, iouThr=None, areaRng='all', maxDets=100):
11
- titleStr = 'Average Precision' if ap == 1 else 'Average Recall'
12
- typeStr = '(AP)' if ap == 1 else '(AR)'
13
- iouStr = '0.50:0.95' \
14
- if iouThr is None else f'{iouThr:0.2f}'
15
- iStr = f' {titleStr:<18} {typeStr} @[ IoU={iouStr:<9} | '
16
- iStr += f'area={areaRng:>6s} | maxDets={maxDets:>3d} ] = {result:0.3f}'
17
- print(iStr)
18
-
19
- stats = np.zeros((12, ))
20
- stats[0] = _print(results[0], 1)
21
- stats[1] = _print(results[1], 1, iouThr=.5)
22
- stats[2] = _print(results[2], 1, iouThr=.75)
23
- stats[3] = _print(results[3], 1, areaRng='small')
24
- stats[4] = _print(results[4], 1, areaRng='medium')
25
- stats[5] = _print(results[5], 1, areaRng='large')
26
- stats[6] = _print(results[6], 0, maxDets=1)
27
- stats[7] = _print(results[7], 0, maxDets=10)
28
- stats[8] = _print(results[8], 0)
29
- stats[9] = _print(results[9], 0, areaRng='small')
30
- stats[10] = _print(results[10], 0, areaRng='medium')
31
- stats[11] = _print(results[11], 0, areaRng='large')
32
-
33
-
34
- def get_coco_style_results(filename,
35
- task='bbox',
36
- metric=None,
37
- prints='mPC',
38
- aggregate='benchmark'):
39
-
40
- assert aggregate in ['benchmark', 'all']
41
-
42
- if prints == 'all':
43
- prints = ['P', 'mPC', 'rPC']
44
- elif isinstance(prints, str):
45
- prints = [prints]
46
- for p in prints:
47
- assert p in ['P', 'mPC', 'rPC']
48
-
49
- if metric is None:
50
- metrics = [
51
- 'AP', 'AP50', 'AP75', 'APs', 'APm', 'APl', 'AR1', 'AR10', 'AR100',
52
- 'ARs', 'ARm', 'ARl'
53
- ]
54
- elif isinstance(metric, list):
55
- metrics = metric
56
- else:
57
- metrics = [metric]
58
-
59
- for metric_name in metrics:
60
- assert metric_name in [
61
- 'AP', 'AP50', 'AP75', 'APs', 'APm', 'APl', 'AR1', 'AR10', 'AR100',
62
- 'ARs', 'ARm', 'ARl'
63
- ]
64
-
65
- eval_output = mmcv.load(filename)
66
-
67
- num_distortions = len(list(eval_output.keys()))
68
- results = np.zeros((num_distortions, 6, len(metrics)), dtype='float32')
69
-
70
- for corr_i, distortion in enumerate(eval_output):
71
- for severity in eval_output[distortion]:
72
- for metric_j, metric_name in enumerate(metrics):
73
- mAP = eval_output[distortion][severity][task][metric_name]
74
- results[corr_i, severity, metric_j] = mAP
75
-
76
- P = results[0, 0, :]
77
- if aggregate == 'benchmark':
78
- mPC = np.mean(results[:15, 1:, :], axis=(0, 1))
79
- else:
80
- mPC = np.mean(results[:, 1:, :], axis=(0, 1))
81
- rPC = mPC / P
82
-
83
- print(f'\nmodel: {osp.basename(filename)}')
84
- if metric is None:
85
- if 'P' in prints:
86
- print(f'Performance on Clean Data [P] ({task})')
87
- print_coco_results(P)
88
- if 'mPC' in prints:
89
- print(f'Mean Performance under Corruption [mPC] ({task})')
90
- print_coco_results(mPC)
91
- if 'rPC' in prints:
92
- print(f'Relative Performance under Corruption [rPC] ({task})')
93
- print_coco_results(rPC)
94
- else:
95
- if 'P' in prints:
96
- print(f'Performance on Clean Data [P] ({task})')
97
- for metric_i, metric_name in enumerate(metrics):
98
- print(f'{metric_name:5} = {P[metric_i]:0.3f}')
99
- if 'mPC' in prints:
100
- print(f'Mean Performance under Corruption [mPC] ({task})')
101
- for metric_i, metric_name in enumerate(metrics):
102
- print(f'{metric_name:5} = {mPC[metric_i]:0.3f}')
103
- if 'rPC' in prints:
104
- print(f'Relative Performance under Corruption [rPC] ({task})')
105
- for metric_i, metric_name in enumerate(metrics):
106
- print(f'{metric_name:5} => {rPC[metric_i] * 100:0.1f} %')
107
-
108
- return results
109
-
110
-
111
- def get_voc_style_results(filename, prints='mPC', aggregate='benchmark'):
112
-
113
- assert aggregate in ['benchmark', 'all']
114
-
115
- if prints == 'all':
116
- prints = ['P', 'mPC', 'rPC']
117
- elif isinstance(prints, str):
118
- prints = [prints]
119
- for p in prints:
120
- assert p in ['P', 'mPC', 'rPC']
121
-
122
- eval_output = mmcv.load(filename)
123
-
124
- num_distortions = len(list(eval_output.keys()))
125
- results = np.zeros((num_distortions, 6, 20), dtype='float32')
126
-
127
- for i, distortion in enumerate(eval_output):
128
- for severity in eval_output[distortion]:
129
- mAP = [
130
- eval_output[distortion][severity][j]['ap']
131
- for j in range(len(eval_output[distortion][severity]))
132
- ]
133
- results[i, severity, :] = mAP
134
-
135
- P = results[0, 0, :]
136
- if aggregate == 'benchmark':
137
- mPC = np.mean(results[:15, 1:, :], axis=(0, 1))
138
- else:
139
- mPC = np.mean(results[:, 1:, :], axis=(0, 1))
140
- rPC = mPC / P
141
-
142
- print(f'\nmodel: {osp.basename(filename)}')
143
- if 'P' in prints:
144
- print(f'Performance on Clean Data [P] in AP50 = {np.mean(P):0.3f}')
145
- if 'mPC' in prints:
146
- print('Mean Performance under Corruption [mPC] in AP50 = '
147
- f'{np.mean(mPC):0.3f}')
148
- if 'rPC' in prints:
149
- print('Relative Performance under Corruption [rPC] in % = '
150
- f'{np.mean(rPC) * 100:0.1f}')
151
-
152
- return np.mean(results, axis=2, keepdims=True)
153
-
154
-
155
- def get_results(filename,
156
- dataset='coco',
157
- task='bbox',
158
- metric=None,
159
- prints='mPC',
160
- aggregate='benchmark'):
161
- assert dataset in ['coco', 'voc', 'cityscapes']
162
-
163
- if dataset in ['coco', 'cityscapes']:
164
- results = get_coco_style_results(
165
- filename,
166
- task=task,
167
- metric=metric,
168
- prints=prints,
169
- aggregate=aggregate)
170
- elif dataset == 'voc':
171
- if task != 'bbox':
172
- print('Only bbox analysis is supported for Pascal VOC')
173
- print('Will report bbox results\n')
174
- if metric not in [None, ['AP'], ['AP50']]:
175
- print('Only the AP50 metric is supported for Pascal VOC')
176
- print('Will report AP50 metric\n')
177
- results = get_voc_style_results(
178
- filename, prints=prints, aggregate=aggregate)
179
-
180
- return results
181
-
182
-
183
- def get_distortions_from_file(filename):
184
-
185
- eval_output = mmcv.load(filename)
186
-
187
- return get_distortions_from_results(eval_output)
188
-
189
-
190
- def get_distortions_from_results(eval_output):
191
- distortions = []
192
- for i, distortion in enumerate(eval_output):
193
- distortions.append(distortion.replace('_', ' '))
194
- return distortions
195
-
196
-
197
- def main():
198
- parser = ArgumentParser(description='Corruption Result Analysis')
199
- parser.add_argument('filename', help='result file path')
200
- parser.add_argument(
201
- '--dataset',
202
- type=str,
203
- choices=['coco', 'voc', 'cityscapes'],
204
- default='coco',
205
- help='dataset type')
206
- parser.add_argument(
207
- '--task',
208
- type=str,
209
- nargs='+',
210
- choices=['bbox', 'segm'],
211
- default=['bbox'],
212
- help='task to report')
213
- parser.add_argument(
214
- '--metric',
215
- nargs='+',
216
- choices=[
217
- None, 'AP', 'AP50', 'AP75', 'APs', 'APm', 'APl', 'AR1', 'AR10',
218
- 'AR100', 'ARs', 'ARm', 'ARl'
219
- ],
220
- default=None,
221
- help='metric to report')
222
- parser.add_argument(
223
- '--prints',
224
- type=str,
225
- nargs='+',
226
- choices=['P', 'mPC', 'rPC'],
227
- default='mPC',
228
- help='corruption benchmark metric to print')
229
- parser.add_argument(
230
- '--aggregate',
231
- type=str,
232
- choices=['all', 'benchmark'],
233
- default='benchmark',
234
- help='aggregate all results or only those \
235
- for benchmark corruptions')
236
-
237
- args = parser.parse_args()
238
-
239
- for task in args.task:
240
- get_results(
241
- args.filename,
242
- dataset=args.dataset,
243
- task=task,
244
- metric=args.metric,
245
- prints=args.prints,
246
- aggregate=args.aggregate)
247
-
248
-
249
- if __name__ == '__main__':
250
- main()
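The aggregation in get_coco_style_results and get_voc_style_results follows the robustness-benchmark convention: P is clean performance (severity 0), mPC averages severities 1-5 over the benchmark corruptions, and rPC is their ratio. A toy numpy sketch of just that arithmetic, with random placeholder values standing in for real evaluation output:

import numpy as np

# results[corruption, severity, metric], shaped like the arrays built above;
# the values are random placeholders, not real benchmark numbers.
rng = np.random.default_rng(0)
results = rng.uniform(0.2, 0.5, size=(15, 6, 12)).astype("float32")

P = results[0, 0, :]                              # severity 0 == clean data
mPC = np.mean(results[:15, 1:, :], axis=(0, 1))   # mean over corruptions x severities 1..5
rPC = mPC / P
print(f"AP: P={P[0]:.3f}  mPC={mPC[0]:.3f}  rPC={rPC[0] * 100:.1f}%")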
spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_512x512_20k_voc12aug.py DELETED
@@ -1,2 +0,0 @@
1
- _base_ = './ann_r50-d8_512x512_20k_voc12aug.py'
2
- model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmseg/core/utils/__init__.py DELETED
@@ -1,3 +0,0 @@
1
- from .misc import add_prefix
2
-
3
- __all__ = ['add_prefix']
spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/diffusionmodules/__init__.py DELETED
File without changes
spaces/ArkanDash/rvc-models/infer_pack/models.py DELETED
@@ -1,982 +0,0 @@
1
- import math, pdb, os
2
- from time import time as ttime
3
- import torch
4
- from torch import nn
5
- from torch.nn import functional as F
6
- from infer_pack import modules
7
- from infer_pack import attentions
8
- from infer_pack import commons
9
- from infer_pack.commons import init_weights, get_padding
10
- from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
11
- from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
12
- from infer_pack.commons import init_weights
13
- import numpy as np
14
- from infer_pack import commons
15
-
16
-
17
- class TextEncoder256(nn.Module):
18
- def __init__(
19
- self,
20
- out_channels,
21
- hidden_channels,
22
- filter_channels,
23
- n_heads,
24
- n_layers,
25
- kernel_size,
26
- p_dropout,
27
- f0=True,
28
- ):
29
- super().__init__()
30
- self.out_channels = out_channels
31
- self.hidden_channels = hidden_channels
32
- self.filter_channels = filter_channels
33
- self.n_heads = n_heads
34
- self.n_layers = n_layers
35
- self.kernel_size = kernel_size
36
- self.p_dropout = p_dropout
37
- self.emb_phone = nn.Linear(256, hidden_channels)
38
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
39
- if f0 == True:
40
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
41
- self.encoder = attentions.Encoder(
42
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
43
- )
44
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
45
-
46
- def forward(self, phone, pitch, lengths):
47
- if pitch == None:
48
- x = self.emb_phone(phone)
49
- else:
50
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
51
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
52
- x = self.lrelu(x)
53
- x = torch.transpose(x, 1, -1) # [b, h, t]
54
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
55
- x.dtype
56
- )
57
- x = self.encoder(x * x_mask, x_mask)
58
- stats = self.proj(x) * x_mask
59
-
60
- m, logs = torch.split(stats, self.out_channels, dim=1)
61
- return m, logs, x_mask
62
-
63
-
64
- class TextEncoder256Sim(nn.Module):
65
- def __init__(
66
- self,
67
- out_channels,
68
- hidden_channels,
69
- filter_channels,
70
- n_heads,
71
- n_layers,
72
- kernel_size,
73
- p_dropout,
74
- f0=True,
75
- ):
76
- super().__init__()
77
- self.out_channels = out_channels
78
- self.hidden_channels = hidden_channels
79
- self.filter_channels = filter_channels
80
- self.n_heads = n_heads
81
- self.n_layers = n_layers
82
- self.kernel_size = kernel_size
83
- self.p_dropout = p_dropout
84
- self.emb_phone = nn.Linear(256, hidden_channels)
85
- self.lrelu = nn.LeakyReLU(0.1, inplace=True)
86
- if f0 == True:
87
- self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256
88
- self.encoder = attentions.Encoder(
89
- hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
90
- )
91
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
92
-
93
- def forward(self, phone, pitch, lengths):
94
- if pitch == None:
95
- x = self.emb_phone(phone)
96
- else:
97
- x = self.emb_phone(phone) + self.emb_pitch(pitch)
98
- x = x * math.sqrt(self.hidden_channels) # [b, t, h]
99
- x = self.lrelu(x)
100
- x = torch.transpose(x, 1, -1) # [b, h, t]
101
- x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
102
- x.dtype
103
- )
104
- x = self.encoder(x * x_mask, x_mask)
105
- x = self.proj(x) * x_mask
106
- return x, x_mask
107
-
108
-
109
- class ResidualCouplingBlock(nn.Module):
110
- def __init__(
111
- self,
112
- channels,
113
- hidden_channels,
114
- kernel_size,
115
- dilation_rate,
116
- n_layers,
117
- n_flows=4,
118
- gin_channels=0,
119
- ):
120
- super().__init__()
121
- self.channels = channels
122
- self.hidden_channels = hidden_channels
123
- self.kernel_size = kernel_size
124
- self.dilation_rate = dilation_rate
125
- self.n_layers = n_layers
126
- self.n_flows = n_flows
127
- self.gin_channels = gin_channels
128
-
129
- self.flows = nn.ModuleList()
130
- for i in range(n_flows):
131
- self.flows.append(
132
- modules.ResidualCouplingLayer(
133
- channels,
134
- hidden_channels,
135
- kernel_size,
136
- dilation_rate,
137
- n_layers,
138
- gin_channels=gin_channels,
139
- mean_only=True,
140
- )
141
- )
142
- self.flows.append(modules.Flip())
143
-
144
- def forward(self, x, x_mask, g=None, reverse=False):
145
- if not reverse:
146
- for flow in self.flows:
147
- x, _ = flow(x, x_mask, g=g, reverse=reverse)
148
- else:
149
- for flow in reversed(self.flows):
150
- x = flow(x, x_mask, g=g, reverse=reverse)
151
- return x
152
-
153
- def remove_weight_norm(self):
154
- for i in range(self.n_flows):
155
- self.flows[i * 2].remove_weight_norm()
156
-
157
-
158
- class PosteriorEncoder(nn.Module):
159
- def __init__(
160
- self,
161
- in_channels,
162
- out_channels,
163
- hidden_channels,
164
- kernel_size,
165
- dilation_rate,
166
- n_layers,
167
- gin_channels=0,
168
- ):
169
- super().__init__()
170
- self.in_channels = in_channels
171
- self.out_channels = out_channels
172
- self.hidden_channels = hidden_channels
173
- self.kernel_size = kernel_size
174
- self.dilation_rate = dilation_rate
175
- self.n_layers = n_layers
176
- self.gin_channels = gin_channels
177
-
178
- self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
179
- self.enc = modules.WN(
180
- hidden_channels,
181
- kernel_size,
182
- dilation_rate,
183
- n_layers,
184
- gin_channels=gin_channels,
185
- )
186
- self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
187
-
188
- def forward(self, x, x_lengths, g=None):
189
- x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
190
- x.dtype
191
- )
192
- x = self.pre(x) * x_mask
193
- x = self.enc(x, x_mask, g=g)
194
- stats = self.proj(x) * x_mask
195
- m, logs = torch.split(stats, self.out_channels, dim=1)
196
- z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
197
- return z, m, logs, x_mask
198
-
199
- def remove_weight_norm(self):
200
- self.enc.remove_weight_norm()
201
-
202
-
203
- class Generator(torch.nn.Module):
204
- def __init__(
205
- self,
206
- initial_channel,
207
- resblock,
208
- resblock_kernel_sizes,
209
- resblock_dilation_sizes,
210
- upsample_rates,
211
- upsample_initial_channel,
212
- upsample_kernel_sizes,
213
- gin_channels=0,
214
- ):
215
- super(Generator, self).__init__()
216
- self.num_kernels = len(resblock_kernel_sizes)
217
- self.num_upsamples = len(upsample_rates)
218
- self.conv_pre = Conv1d(
219
- initial_channel, upsample_initial_channel, 7, 1, padding=3
220
- )
221
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
222
-
223
- self.ups = nn.ModuleList()
224
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
225
- self.ups.append(
226
- weight_norm(
227
- ConvTranspose1d(
228
- upsample_initial_channel // (2**i),
229
- upsample_initial_channel // (2 ** (i + 1)),
230
- k,
231
- u,
232
- padding=(k - u) // 2,
233
- )
234
- )
235
- )
236
-
237
- self.resblocks = nn.ModuleList()
238
- for i in range(len(self.ups)):
239
- ch = upsample_initial_channel // (2 ** (i + 1))
240
- for j, (k, d) in enumerate(
241
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
242
- ):
243
- self.resblocks.append(resblock(ch, k, d))
244
-
245
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
246
- self.ups.apply(init_weights)
247
-
248
- if gin_channels != 0:
249
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
250
-
251
- def forward(self, x, g=None):
252
- x = self.conv_pre(x)
253
- if g is not None:
254
- x = x + self.cond(g)
255
-
256
- for i in range(self.num_upsamples):
257
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
258
- x = self.ups[i](x)
259
- xs = None
260
- for j in range(self.num_kernels):
261
- if xs is None:
262
- xs = self.resblocks[i * self.num_kernels + j](x)
263
- else:
264
- xs += self.resblocks[i * self.num_kernels + j](x)
265
- x = xs / self.num_kernels
266
- x = F.leaky_relu(x)
267
- x = self.conv_post(x)
268
- x = torch.tanh(x)
269
-
270
- return x
271
-
272
- def remove_weight_norm(self):
273
- for l in self.ups:
274
- remove_weight_norm(l)
275
- for l in self.resblocks:
276
- l.remove_weight_norm()
277
-
278
-
279
- class SineGen(torch.nn.Module):
280
- """Definition of sine generator
281
- SineGen(samp_rate, harmonic_num = 0,
282
- sine_amp = 0.1, noise_std = 0.003,
283
- voiced_threshold = 0,
284
- flag_for_pulse=False)
285
- samp_rate: sampling rate in Hz
286
- harmonic_num: number of harmonic overtones (default 0)
287
- sine_amp: amplitude of sine waveform (default 0.1)
288
- noise_std: std of Gaussian noise (default 0.003)
289
- voiced_threshold: F0 threshold for U/V classification (default 0)
290
- flag_for_pulse: this SineGen is used inside PulseGen (default False)
291
- Note: when flag_for_pulse is True, the first time step of a voiced
292
- segment is always sin(np.pi) or cos(0)
293
- """
294
-
295
- def __init__(
296
- self,
297
- samp_rate,
298
- harmonic_num=0,
299
- sine_amp=0.1,
300
- noise_std=0.003,
301
- voiced_threshold=0,
302
- flag_for_pulse=False,
303
- ):
304
- super(SineGen, self).__init__()
305
- self.sine_amp = sine_amp
306
- self.noise_std = noise_std
307
- self.harmonic_num = harmonic_num
308
- self.dim = self.harmonic_num + 1
309
- self.sampling_rate = samp_rate
310
- self.voiced_threshold = voiced_threshold
311
-
312
- def _f02uv(self, f0):
313
- # generate uv signal
314
- uv = torch.ones_like(f0)
315
- uv = uv * (f0 > self.voiced_threshold)
316
- return uv
317
-
318
- def forward(self, f0, upp):
319
- """sine_tensor, uv = forward(f0)
320
- input F0: tensor(batchsize=1, length, dim=1)
321
- f0 for unvoiced steps should be 0
322
- output sine_tensor: tensor(batchsize=1, length, dim)
323
- output uv: tensor(batchsize=1, length, 1)
324
- """
325
- with torch.no_grad():
326
- f0 = f0[:, None].transpose(1, 2)
327
- f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
328
- # fundamental component
329
- f0_buf[:, :, 0] = f0[:, :, 0]
330
- for idx in np.arange(self.harmonic_num):
331
- f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
332
- idx + 2
333
- ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
334
- rad_values = (f0_buf / self.sampling_rate) % 1  # the % 1 means the n_har products cannot be optimized away in post-processing
335
- rand_ini = torch.rand(
336
- f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
337
- )
338
- rand_ini[:, 0] = 0
339
- rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
340
- tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  # applying % 1 here would mean the later cumsum can no longer be optimized
341
- tmp_over_one *= upp
342
- tmp_over_one = F.interpolate(
343
- tmp_over_one.transpose(2, 1),
344
- scale_factor=upp,
345
- mode="linear",
346
- align_corners=True,
347
- ).transpose(2, 1)
348
- rad_values = F.interpolate(
349
- rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
350
- ).transpose(
351
- 2, 1
352
- ) #######
353
- tmp_over_one %= 1
354
- tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
355
- cumsum_shift = torch.zeros_like(rad_values)
356
- cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
357
- sine_waves = torch.sin(
358
- torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
359
- )
360
- sine_waves = sine_waves * self.sine_amp
361
- uv = self._f02uv(f0)
362
- uv = F.interpolate(
363
- uv.transpose(2, 1), scale_factor=upp, mode="nearest"
364
- ).transpose(2, 1)
365
- noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
366
- noise = noise_amp * torch.randn_like(sine_waves)
367
- sine_waves = sine_waves * uv + noise
368
- return sine_waves, uv, noise
369
-
370
-
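A small smoke test for the generator described in the docstring above; the sample rate, F0 value, and `upp` factor below are illustrative assumptions, not values taken from this repository:

```python
import torch

sine_gen = SineGen(samp_rate=40000, harmonic_num=0)
f0 = torch.full((1, 100), 220.0)         # (batch, frames): a constant 220 Hz, all voiced
sine, uv, noise = sine_gen(f0, upp=400)  # upp = samples generated per F0 frame
print(sine.shape, uv.shape)              # both torch.Size([1, 40000, 1])
```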
371
- class SourceModuleHnNSF(torch.nn.Module):
372
- """SourceModule for hn-nsf
373
- SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
374
- add_noise_std=0.003, voiced_threshod=0)
375
- sampling_rate: sampling_rate in Hz
376
- harmonic_num: number of harmonic above F0 (default: 0)
377
- sine_amp: amplitude of sine source signal (default: 0.1)
378
- add_noise_std: std of additive Gaussian noise (default: 0.003)
379
- note that amplitude of noise in unvoiced is decided
380
- by sine_amp
381
- voiced_threshold: threshold to set U/V given F0 (default: 0)
382
- Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
383
- F0_sampled (batchsize, length, 1)
384
- Sine_source (batchsize, length, 1)
385
- noise_source (batchsize, length, 1)
386
- uv (batchsize, length, 1)
387
- """
388
-
389
- def __init__(
390
- self,
391
- sampling_rate,
392
- harmonic_num=0,
393
- sine_amp=0.1,
394
- add_noise_std=0.003,
395
- voiced_threshod=0,
396
- is_half=True,
397
- ):
398
- super(SourceModuleHnNSF, self).__init__()
399
-
400
- self.sine_amp = sine_amp
401
- self.noise_std = add_noise_std
402
- self.is_half = is_half
403
- # to produce sine waveforms
404
- self.l_sin_gen = SineGen(
405
- sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
406
- )
407
-
408
- # to merge source harmonics into a single excitation
409
- self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
410
- self.l_tanh = torch.nn.Tanh()
411
-
412
- def forward(self, x, upp=None):
413
- sine_wavs, uv, _ = self.l_sin_gen(x, upp)
414
- if self.is_half:
415
- sine_wavs = sine_wavs.half()
416
- sine_merge = self.l_tanh(self.l_linear(sine_wavs))
417
- return sine_merge, None, None # noise, uv
418
-
419
-
420
- class GeneratorNSF(torch.nn.Module):
421
- def __init__(
422
- self,
423
- initial_channel,
424
- resblock,
425
- resblock_kernel_sizes,
426
- resblock_dilation_sizes,
427
- upsample_rates,
428
- upsample_initial_channel,
429
- upsample_kernel_sizes,
430
- gin_channels,
431
- sr,
432
- is_half=False,
433
- ):
434
- super(GeneratorNSF, self).__init__()
435
- self.num_kernels = len(resblock_kernel_sizes)
436
- self.num_upsamples = len(upsample_rates)
437
-
438
- self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
439
- self.m_source = SourceModuleHnNSF(
440
- sampling_rate=sr, harmonic_num=0, is_half=is_half
441
- )
442
- self.noise_convs = nn.ModuleList()
443
- self.conv_pre = Conv1d(
444
- initial_channel, upsample_initial_channel, 7, 1, padding=3
445
- )
446
- resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
447
-
448
- self.ups = nn.ModuleList()
449
- for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
450
- c_cur = upsample_initial_channel // (2 ** (i + 1))
451
- self.ups.append(
452
- weight_norm(
453
- ConvTranspose1d(
454
- upsample_initial_channel // (2**i),
455
- upsample_initial_channel // (2 ** (i + 1)),
456
- k,
457
- u,
458
- padding=(k - u) // 2,
459
- )
460
- )
461
- )
462
- if i + 1 < len(upsample_rates):
463
- stride_f0 = np.prod(upsample_rates[i + 1 :])
464
- self.noise_convs.append(
465
- Conv1d(
466
- 1,
467
- c_cur,
468
- kernel_size=stride_f0 * 2,
469
- stride=stride_f0,
470
- padding=stride_f0 // 2,
471
- )
472
- )
473
- else:
474
- self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
475
-
476
- self.resblocks = nn.ModuleList()
477
- for i in range(len(self.ups)):
478
- ch = upsample_initial_channel // (2 ** (i + 1))
479
- for j, (k, d) in enumerate(
480
- zip(resblock_kernel_sizes, resblock_dilation_sizes)
481
- ):
482
- self.resblocks.append(resblock(ch, k, d))
483
-
484
- self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
485
- self.ups.apply(init_weights)
486
-
487
- if gin_channels != 0:
488
- self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
489
-
490
- self.upp = np.prod(upsample_rates)
491
-
492
- def forward(self, x, f0, g=None):
493
- har_source, noi_source, uv = self.m_source(f0, self.upp)
494
- har_source = har_source.transpose(1, 2)
495
- x = self.conv_pre(x)
496
- if g is not None:
497
- x = x + self.cond(g)
498
-
499
- for i in range(self.num_upsamples):
500
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
501
- x = self.ups[i](x)
502
- x_source = self.noise_convs[i](har_source)
503
- x = x + x_source
504
- xs = None
505
- for j in range(self.num_kernels):
506
- if xs is None:
507
- xs = self.resblocks[i * self.num_kernels + j](x)
508
- else:
509
- xs += self.resblocks[i * self.num_kernels + j](x)
510
- x = xs / self.num_kernels
511
- x = F.leaky_relu(x)
512
- x = self.conv_post(x)
513
- x = torch.tanh(x)
514
- return x
515
-
516
- def remove_weight_norm(self):
517
- for l in self.ups:
518
- remove_weight_norm(l)
519
- for l in self.resblocks:
520
- l.remove_weight_norm()
521
-
522
-
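The strides of `noise_convs` are what keep the full-rate harmonic source aligned with each upsampling stage: stage `i` downsamples it by the product of the remaining `upsample_rates`, and the last stage uses a 1x1 conv. A worked sketch of that arithmetic with hypothetical rates:

```python
import numpy as np

upsample_rates = [10, 6, 2, 2]    # hypothetical configuration, for illustration only
upp = np.prod(upsample_rates)     # 240 source samples per input frame
for i in range(len(upsample_rates)):
    stride_f0 = np.prod(upsample_rates[i + 1:]) if i + 1 < len(upsample_rates) else 1
    print(i, stride_f0)           # 0 -> 24, 1 -> 4, 2 -> 2, 3 -> 1
```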
523
- sr2sr = {
524
- "32k": 32000,
525
- "40k": 40000,
526
- "48k": 48000,
527
- }
528
-
529
-
530
- class SynthesizerTrnMs256NSFsid(nn.Module):
531
- def __init__(
532
- self,
533
- spec_channels,
534
- segment_size,
535
- inter_channels,
536
- hidden_channels,
537
- filter_channels,
538
- n_heads,
539
- n_layers,
540
- kernel_size,
541
- p_dropout,
542
- resblock,
543
- resblock_kernel_sizes,
544
- resblock_dilation_sizes,
545
- upsample_rates,
546
- upsample_initial_channel,
547
- upsample_kernel_sizes,
548
- spk_embed_dim,
549
- gin_channels,
550
- sr,
551
- **kwargs
552
- ):
553
- super().__init__()
554
- if type(sr) == type("strr"):
555
- sr = sr2sr[sr]
556
- self.spec_channels = spec_channels
557
- self.inter_channels = inter_channels
558
- self.hidden_channels = hidden_channels
559
- self.filter_channels = filter_channels
560
- self.n_heads = n_heads
561
- self.n_layers = n_layers
562
- self.kernel_size = kernel_size
563
- self.p_dropout = p_dropout
564
- self.resblock = resblock
565
- self.resblock_kernel_sizes = resblock_kernel_sizes
566
- self.resblock_dilation_sizes = resblock_dilation_sizes
567
- self.upsample_rates = upsample_rates
568
- self.upsample_initial_channel = upsample_initial_channel
569
- self.upsample_kernel_sizes = upsample_kernel_sizes
570
- self.segment_size = segment_size
571
- self.gin_channels = gin_channels
572
- # self.hop_length = hop_length#
573
- self.spk_embed_dim = spk_embed_dim
574
- self.enc_p = TextEncoder256(
575
- inter_channels,
576
- hidden_channels,
577
- filter_channels,
578
- n_heads,
579
- n_layers,
580
- kernel_size,
581
- p_dropout,
582
- )
583
- self.dec = GeneratorNSF(
584
- inter_channels,
585
- resblock,
586
- resblock_kernel_sizes,
587
- resblock_dilation_sizes,
588
- upsample_rates,
589
- upsample_initial_channel,
590
- upsample_kernel_sizes,
591
- gin_channels=gin_channels,
592
- sr=sr,
593
- is_half=kwargs["is_half"],
594
- )
595
- self.enc_q = PosteriorEncoder(
596
- spec_channels,
597
- inter_channels,
598
- hidden_channels,
599
- 5,
600
- 1,
601
- 16,
602
- gin_channels=gin_channels,
603
- )
604
- self.flow = ResidualCouplingBlock(
605
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
606
- )
607
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
608
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
609
-
610
- def remove_weight_norm(self):
611
- self.dec.remove_weight_norm()
612
- self.flow.remove_weight_norm()
613
- self.enc_q.remove_weight_norm()
614
-
615
- def forward(
616
- self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
617
- ):  # ds is the speaker id, shape [bs, 1]
618
- # print(1,pitch.shape)#[bs,t]
619
- g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis, broadcast over t
620
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
621
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
622
- z_p = self.flow(z, y_mask, g=g)
623
- z_slice, ids_slice = commons.rand_slice_segments(
624
- z, y_lengths, self.segment_size
625
- )
626
- # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
627
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
628
- # print(-2,pitchf.shape,z_slice.shape)
629
- o = self.dec(z_slice, pitchf, g=g)
630
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
631
-
632
- def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
633
- g = self.emb_g(sid).unsqueeze(-1)
634
- m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
635
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
636
- z = self.flow(z_p, x_mask, g=g, reverse=True)
637
- o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
638
- return o, x_mask, (z, z_p, m_p, logs_p)
639
-
640
-
641
- class SynthesizerTrnMs256NSFsid_nono(nn.Module):
642
- def __init__(
643
- self,
644
- spec_channels,
645
- segment_size,
646
- inter_channels,
647
- hidden_channels,
648
- filter_channels,
649
- n_heads,
650
- n_layers,
651
- kernel_size,
652
- p_dropout,
653
- resblock,
654
- resblock_kernel_sizes,
655
- resblock_dilation_sizes,
656
- upsample_rates,
657
- upsample_initial_channel,
658
- upsample_kernel_sizes,
659
- spk_embed_dim,
660
- gin_channels,
661
- sr=None,
662
- **kwargs
663
- ):
664
- super().__init__()
665
- self.spec_channels = spec_channels
666
- self.inter_channels = inter_channels
667
- self.hidden_channels = hidden_channels
668
- self.filter_channels = filter_channels
669
- self.n_heads = n_heads
670
- self.n_layers = n_layers
671
- self.kernel_size = kernel_size
672
- self.p_dropout = p_dropout
673
- self.resblock = resblock
674
- self.resblock_kernel_sizes = resblock_kernel_sizes
675
- self.resblock_dilation_sizes = resblock_dilation_sizes
676
- self.upsample_rates = upsample_rates
677
- self.upsample_initial_channel = upsample_initial_channel
678
- self.upsample_kernel_sizes = upsample_kernel_sizes
679
- self.segment_size = segment_size
680
- self.gin_channels = gin_channels
681
- # self.hop_length = hop_length#
682
- self.spk_embed_dim = spk_embed_dim
683
- self.enc_p = TextEncoder256(
684
- inter_channels,
685
- hidden_channels,
686
- filter_channels,
687
- n_heads,
688
- n_layers,
689
- kernel_size,
690
- p_dropout,
691
- f0=False,
692
- )
693
- self.dec = Generator(
694
- inter_channels,
695
- resblock,
696
- resblock_kernel_sizes,
697
- resblock_dilation_sizes,
698
- upsample_rates,
699
- upsample_initial_channel,
700
- upsample_kernel_sizes,
701
- gin_channels=gin_channels,
702
- )
703
- self.enc_q = PosteriorEncoder(
704
- spec_channels,
705
- inter_channels,
706
- hidden_channels,
707
- 5,
708
- 1,
709
- 16,
710
- gin_channels=gin_channels,
711
- )
712
- self.flow = ResidualCouplingBlock(
713
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
714
- )
715
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
716
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
717
-
718
- def remove_weight_norm(self):
719
- self.dec.remove_weight_norm()
720
- self.flow.remove_weight_norm()
721
- self.enc_q.remove_weight_norm()
722
-
723
- def forward(self, phone, phone_lengths, y, y_lengths, ds):  # ds is the speaker id, shape [bs, 1]
724
- g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis, broadcast over t
725
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
726
- z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
727
- z_p = self.flow(z, y_mask, g=g)
728
- z_slice, ids_slice = commons.rand_slice_segments(
729
- z, y_lengths, self.segment_size
730
- )
731
- o = self.dec(z_slice, g=g)
732
- return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
733
-
734
- def infer(self, phone, phone_lengths, sid, max_len=None):
735
- g = self.emb_g(sid).unsqueeze(-1)
736
- m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
737
- z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
738
- z = self.flow(z_p, x_mask, g=g, reverse=True)
739
- o = self.dec((z * x_mask)[:, :, :max_len], g=g)
740
- return o, x_mask, (z, z_p, m_p, logs_p)
741
-
742
-
743
- class SynthesizerTrnMs256NSFsid_sim(nn.Module):
744
- """
745
- Synthesizer for Training
746
- """
747
-
748
- def __init__(
749
- self,
750
- spec_channels,
751
- segment_size,
752
- inter_channels,
753
- hidden_channels,
754
- filter_channels,
755
- n_heads,
756
- n_layers,
757
- kernel_size,
758
- p_dropout,
759
- resblock,
760
- resblock_kernel_sizes,
761
- resblock_dilation_sizes,
762
- upsample_rates,
763
- upsample_initial_channel,
764
- upsample_kernel_sizes,
765
- spk_embed_dim,
766
- # hop_length,
767
- gin_channels=0,
768
- use_sdp=True,
769
- **kwargs
770
- ):
771
- super().__init__()
772
- self.spec_channels = spec_channels
773
- self.inter_channels = inter_channels
774
- self.hidden_channels = hidden_channels
775
- self.filter_channels = filter_channels
776
- self.n_heads = n_heads
777
- self.n_layers = n_layers
778
- self.kernel_size = kernel_size
779
- self.p_dropout = p_dropout
780
- self.resblock = resblock
781
- self.resblock_kernel_sizes = resblock_kernel_sizes
782
- self.resblock_dilation_sizes = resblock_dilation_sizes
783
- self.upsample_rates = upsample_rates
784
- self.upsample_initial_channel = upsample_initial_channel
785
- self.upsample_kernel_sizes = upsample_kernel_sizes
786
- self.segment_size = segment_size
787
- self.gin_channels = gin_channels
788
- # self.hop_length = hop_length#
789
- self.spk_embed_dim = spk_embed_dim
790
- self.enc_p = TextEncoder256Sim(
791
- inter_channels,
792
- hidden_channels,
793
- filter_channels,
794
- n_heads,
795
- n_layers,
796
- kernel_size,
797
- p_dropout,
798
- )
799
- self.dec = GeneratorNSF(
800
- inter_channels,
801
- resblock,
802
- resblock_kernel_sizes,
803
- resblock_dilation_sizes,
804
- upsample_rates,
805
- upsample_initial_channel,
806
- upsample_kernel_sizes,
807
- gin_channels=gin_channels,
808
- is_half=kwargs["is_half"],
809
- )
810
-
811
- self.flow = ResidualCouplingBlock(
812
- inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
813
- )
814
- self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
815
- print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
816
-
817
- def remove_weight_norm(self):
818
- self.dec.remove_weight_norm()
819
- self.flow.remove_weight_norm()
820
- self.enc_q.remove_weight_norm()
821
-
822
- def forward(
823
- self, phone, phone_lengths, pitch, pitchf, y_lengths, ds
824
- ):  # y (the spec) is no longer needed here
825
- g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis, broadcast over t
826
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
827
- x = self.flow(x, x_mask, g=g, reverse=True)
828
- z_slice, ids_slice = commons.rand_slice_segments(
829
- x, y_lengths, self.segment_size
830
- )
831
-
832
- pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
833
- o = self.dec(z_slice, pitchf, g=g)
834
- return o, ids_slice
835
-
836
- def infer(
837
- self, phone, phone_lengths, pitch, pitchf, ds, max_len=None
838
- ):  # y (the spec) is no longer needed here
839
- g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]; the trailing 1 is the time axis, broadcast over t
840
- x, x_mask = self.enc_p(phone, pitch, phone_lengths)
841
- x = self.flow(x, x_mask, g=g, reverse=True)
842
- o = self.dec((x * x_mask)[:, :, :max_len], pitchf, g=g)
843
- return o, o
844
-
845
-
846
- class MultiPeriodDiscriminator(torch.nn.Module):
847
- def __init__(self, use_spectral_norm=False):
848
- super(MultiPeriodDiscriminator, self).__init__()
849
- periods = [2, 3, 5, 7, 11, 17]
850
- # periods = [3, 5, 7, 11, 17, 23, 37]
851
-
852
- discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
853
- discs = discs + [
854
- DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
855
- ]
856
- self.discriminators = nn.ModuleList(discs)
857
-
858
- def forward(self, y, y_hat):
859
- y_d_rs = [] #
860
- y_d_gs = []
861
- fmap_rs = []
862
- fmap_gs = []
863
- for i, d in enumerate(self.discriminators):
864
- y_d_r, fmap_r = d(y)
865
- y_d_g, fmap_g = d(y_hat)
866
- # for j in range(len(fmap_r)):
867
- # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
868
- y_d_rs.append(y_d_r)
869
- y_d_gs.append(y_d_g)
870
- fmap_rs.append(fmap_r)
871
- fmap_gs.append(fmap_g)
872
-
873
- return y_d_rs, y_d_gs, fmap_rs, fmap_gs
874
-
875
-
876
- class DiscriminatorS(torch.nn.Module):
877
- def __init__(self, use_spectral_norm=False):
878
- super(DiscriminatorS, self).__init__()
879
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
880
- self.convs = nn.ModuleList(
881
- [
882
- norm_f(Conv1d(1, 16, 15, 1, padding=7)),
883
- norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
884
- norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
885
- norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
886
- norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
887
- norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
888
- ]
889
- )
890
- self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
891
-
892
- def forward(self, x):
893
- fmap = []
894
-
895
- for l in self.convs:
896
- x = l(x)
897
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
898
- fmap.append(x)
899
- x = self.conv_post(x)
900
- fmap.append(x)
901
- x = torch.flatten(x, 1, -1)
902
-
903
- return x, fmap
904
-
905
-
906
- class DiscriminatorP(torch.nn.Module):
907
- def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
908
- super(DiscriminatorP, self).__init__()
909
- self.period = period
910
- self.use_spectral_norm = use_spectral_norm
911
- norm_f = weight_norm if use_spectral_norm == False else spectral_norm
912
- self.convs = nn.ModuleList(
913
- [
914
- norm_f(
915
- Conv2d(
916
- 1,
917
- 32,
918
- (kernel_size, 1),
919
- (stride, 1),
920
- padding=(get_padding(kernel_size, 1), 0),
921
- )
922
- ),
923
- norm_f(
924
- Conv2d(
925
- 32,
926
- 128,
927
- (kernel_size, 1),
928
- (stride, 1),
929
- padding=(get_padding(kernel_size, 1), 0),
930
- )
931
- ),
932
- norm_f(
933
- Conv2d(
934
- 128,
935
- 512,
936
- (kernel_size, 1),
937
- (stride, 1),
938
- padding=(get_padding(kernel_size, 1), 0),
939
- )
940
- ),
941
- norm_f(
942
- Conv2d(
943
- 512,
944
- 1024,
945
- (kernel_size, 1),
946
- (stride, 1),
947
- padding=(get_padding(kernel_size, 1), 0),
948
- )
949
- ),
950
- norm_f(
951
- Conv2d(
952
- 1024,
953
- 1024,
954
- (kernel_size, 1),
955
- 1,
956
- padding=(get_padding(kernel_size, 1), 0),
957
- )
958
- ),
959
- ]
960
- )
961
- self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
962
-
963
- def forward(self, x):
964
- fmap = []
965
-
966
- # 1d to 2d
967
- b, c, t = x.shape
968
- if t % self.period != 0: # pad first
969
- n_pad = self.period - (t % self.period)
970
- x = F.pad(x, (0, n_pad), "reflect")
971
- t = t + n_pad
972
- x = x.view(b, c, t // self.period, self.period)
973
-
974
- for l in self.convs:
975
- x = l(x)
976
- x = F.leaky_relu(x, modules.LRELU_SLOPE)
977
- fmap.append(x)
978
- x = self.conv_post(x)
979
- fmap.append(x)
980
- x = torch.flatten(x, 1, -1)
981
-
982
- return x, fmap
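For reference, the 1-D-to-2-D fold in `DiscriminatorP.forward` can be checked in isolation; a short sketch with an arbitrary period and length:

```python
import torch
import torch.nn.functional as F

period = 5
x = torch.randn(1, 1, 22003)             # (batch, channels, samples)
n_pad = period - (x.shape[-1] % period)  # 2 samples of reflect padding
x = F.pad(x, (0, n_pad), "reflect")
x = x.view(1, 1, x.shape[-1] // period, period)
print(x.shape)                           # torch.Size([1, 1, 4401, 5])
```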
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/common/train.py DELETED
@@ -1,18 +0,0 @@
- # Common training-related configs that are designed for "tools/lazyconfig_train_net.py"
- # You can use your own instead, together with your own train_net.py
- train = dict(
-     output_dir="./output",
-     init_checkpoint="",
-     max_iter=90000,
-     amp=dict(enabled=False),  # options for Automatic Mixed Precision
-     ddp=dict(  # options for DistributedDataParallel
-         broadcast_buffers=False,
-         find_unused_parameters=False,
-         fp16_compression=False,
-     ),
-     checkpointer=dict(period=5000, max_to_keep=100),  # options for PeriodicCheckpointer
-     eval_period=5000,
-     log_period=20,
-     device="cuda"
-     # ...
- )
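A config like the one above is consumed lazily by the training tools rather than imported directly; a rough caller-side sketch (the path and override values here are hypothetical):

```python
from detectron2.config import LazyConfig

cfg = LazyConfig.load("configs/common/train.py")
cfg.train.max_iter = 10000                                         # programmatic override
cfg = LazyConfig.apply_overrides(cfg, ["train.eval_period=1000"])  # CLI-style override
print(cfg.train.output_dir)                                        # ./output
```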
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/README.md DELETED
@@ -1,7 +0,0 @@
-
-
- To add a new Op:
-
- 1. Create a new directory
- 2. Implement new ops there
- 3. Declare its Python interface in `vision.cpp`.
spaces/Bart92/RVC_HF/configs/config.py DELETED
@@ -1,265 +0,0 @@
1
- import argparse
2
- import os
3
- import sys
4
- import json
5
- from multiprocessing import cpu_count
6
-
7
- import torch
8
-
9
- try:
10
- import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import
11
- if torch.xpu.is_available():
12
- from infer.modules.ipex import ipex_init
13
- ipex_init()
14
- except Exception:
15
- pass
16
-
17
- import logging
18
-
19
- logger = logging.getLogger(__name__)
20
-
21
-
22
- version_config_list = [
23
- "v1/32k.json",
24
- "v1/40k.json",
25
- "v1/48k.json",
26
- "v2/48k.json",
27
- "v2/32k.json",
28
- ]
29
-
30
-
31
- def singleton_variable(func):
32
- def wrapper(*args, **kwargs):
33
- if not wrapper.instance:
34
- wrapper.instance = func(*args, **kwargs)
35
- return wrapper.instance
36
-
37
- wrapper.instance = None
38
- return wrapper
39
-
40
-
41
- @singleton_variable
42
- class Config:
43
- def __init__(self):
44
- self.device = "cuda:0"
45
- self.is_half = True
46
- self.n_cpu = 0
47
- self.gpu_name = None
48
- self.json_config = self.load_config_json()
49
- self.gpu_mem = None
50
- (
51
- self.python_cmd,
52
- self.listen_port,
53
- self.iscolab,
54
- self.noparallel,
55
- self.noautoopen,
56
- self.paperspace,
57
- self.is_cli,
58
- self.grtheme,
59
- self.dml,
60
- ) = self.arg_parse()
61
- self.instead = ""
62
- self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config()
63
-
64
- @staticmethod
65
- def load_config_json() -> dict:
66
- d = {}
67
- for config_file in version_config_list:
68
- with open(f"configs/{config_file}", "r") as f:
69
- d[config_file] = json.load(f)
70
- return d
71
-
72
- @staticmethod
73
- def arg_parse() -> tuple:
74
- exe = sys.executable or "python"
75
- parser = argparse.ArgumentParser()
76
- parser.add_argument("--port", type=int, default=7865, help="Listen port")
77
- parser.add_argument("--pycmd", type=str, default=exe, help="Python command")
78
- parser.add_argument("--colab", action="store_true", help="Launch in colab")
79
- parser.add_argument(
80
- "--noparallel", action="store_true", help="Disable parallel processing"
81
- )
82
- parser.add_argument(
83
- "--noautoopen",
84
- action="store_true",
85
- help="Do not open in browser automatically",
86
- )
87
- parser.add_argument(
88
- "--paperspace",
89
- action="store_true",
90
- help="Note that this argument just shares a gradio link for the web UI. Thus can be used on other non-local CLI systems.",
91
- )
92
- parser.add_argument(
93
- "--is_cli",
94
- action="store_true",
95
- help="Use the CLI instead of setting up a gradio UI. This flag will launch an RVC text interface where you can execute functions from infer-web.py!",
96
- )
97
-
98
- parser.add_argument(
99
- "-t",
100
- "--theme",
101
- help = "Theme for Gradio. Format - `JohnSmith9982/small_and_pretty` (no backticks)",
102
- default = "JohnSmith9982/small_and_pretty",
103
- type = str
104
- )
105
-
106
- parser.add_argument(
107
- "--dml",
108
- action="store_true",
109
- help="Use DirectML backend instead of CUDA."
110
- )
111
-
112
- cmd_opts = parser.parse_args()
113
-
114
- cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865
115
-
116
- return (
117
- cmd_opts.pycmd,
118
- cmd_opts.port,
119
- cmd_opts.colab,
120
- cmd_opts.noparallel,
121
- cmd_opts.noautoopen,
122
- cmd_opts.paperspace,
123
- cmd_opts.is_cli,
124
- cmd_opts.theme,
125
- cmd_opts.dml,
126
- )
127
-
128
- # has_mps is only available in nightly pytorch (for now) and macOS 12.3+.
129
- # check `getattr` and try it for compatibility
130
- @staticmethod
131
- def has_mps() -> bool:
132
- if not torch.backends.mps.is_available():
133
- return False
134
- try:
135
- torch.zeros(1).to(torch.device("mps"))
136
- return True
137
- except Exception:
138
- return False
139
-
140
- @staticmethod
141
- def has_xpu() -> bool:
142
- if hasattr(torch, "xpu") and torch.xpu.is_available():
143
- return True
144
- else:
145
- return False
146
-
147
- def use_fp32_config(self):
148
- for config_file in version_config_list:
149
- self.json_config[config_file]["train"]["fp16_run"] = False
150
-
151
- def device_config(self) -> tuple:
152
- if torch.cuda.is_available():
153
- if self.has_xpu():
154
- self.device = self.instead = "xpu:0"
155
- self.is_half = True
156
- i_device = int(self.device.split(":")[-1])
157
- self.gpu_name = torch.cuda.get_device_name(i_device)
158
- if (
159
- ("16" in self.gpu_name and "V100" not in self.gpu_name.upper())
160
- or "P40" in self.gpu_name.upper()
161
- or "P10" in self.gpu_name.upper()
162
- or "1060" in self.gpu_name
163
- or "1070" in self.gpu_name
164
- or "1080" in self.gpu_name
165
- ):
166
- logger.info("Found GPU %s, force to fp32", self.gpu_name)
167
- self.is_half = False
168
- self.use_fp32_config()
169
- else:
170
- logger.info("Found GPU %s", self.gpu_name)
171
- self.gpu_mem = int(
172
- torch.cuda.get_device_properties(i_device).total_memory
173
- / 1024
174
- / 1024
175
- / 1024
176
- + 0.4
177
- )
178
- if self.gpu_mem <= 4:
179
- with open("infer/modules/train/preprocess.py", "r") as f:
180
- strr = f.read().replace("3.7", "3.0")
181
- with open("infer/modules/train/preprocess.py", "w") as f:
182
- f.write(strr)
183
- elif self.has_mps():
184
- logger.info("No supported Nvidia GPU found")
185
- self.device = self.instead = "mps"
186
- self.is_half = False
187
- self.use_fp32_config()
188
- else:
189
- logger.info("No supported Nvidia GPU found")
190
- self.device = self.instead = "cpu"
191
- self.is_half = False
192
- self.use_fp32_config()
193
-
194
- if self.n_cpu == 0:
195
- self.n_cpu = cpu_count()
196
-
197
- if self.is_half:
198
- # settings for ~6 GB of VRAM
199
- x_pad = 3
200
- x_query = 10
201
- x_center = 60
202
- x_max = 65
203
- else:
204
- # settings for ~5 GB of VRAM
205
- x_pad = 1
206
- x_query = 6
207
- x_center = 38
208
- x_max = 41
209
-
210
- if self.gpu_mem is not None and self.gpu_mem <= 4:
211
- x_pad = 1
212
- x_query = 5
213
- x_center = 30
214
- x_max = 32
215
- if self.dml:
216
- logger.info("Use DirectML instead")
217
- if (
218
- os.path.exists(
219
- "runtime\Lib\site-packages\onnxruntime\capi\DirectML.dll"
220
- )
221
- == False
222
- ):
223
- try:
224
- os.rename(
225
- "runtime\Lib\site-packages\onnxruntime",
226
- "runtime\Lib\site-packages\onnxruntime-cuda",
227
- )
228
- except:
229
- pass
230
- try:
231
- os.rename(
232
- "runtime\Lib\site-packages\onnxruntime-dml",
233
- "runtime\Lib\site-packages\onnxruntime",
234
- )
235
- except:
236
- pass
237
- # if self.device != "cpu":
238
- import torch_directml
239
-
240
- self.device = torch_directml.device(torch_directml.default_device())
241
- self.is_half = False
242
- else:
243
- if self.instead:
244
- logger.info(f"Use {self.instead} instead")
245
- if (
246
- os.path.exists(
247
- "runtime\Lib\site-packages\onnxruntime\capi\onnxruntime_providers_cuda.dll"
248
- )
249
- == False
250
- ):
251
- try:
252
- os.rename(
253
- "runtime\Lib\site-packages\onnxruntime",
254
- "runtime\Lib\site-packages\onnxruntime-dml",
255
- )
256
- except:
257
- pass
258
- try:
259
- os.rename(
260
- "runtime\Lib\site-packages\onnxruntime-cuda",
261
- "runtime\Lib\site-packages\onnxruntime",
262
- )
263
- except:
264
- pass
265
- return x_pad, x_query, x_center, x_max
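Because `Config` is wrapped by the `singleton_variable` decorator above, repeated construction returns one shared instance; a minimal usage sketch (the import path is an assumption based on this file's location):

```python
from configs.config import Config

a = Config()   # first call: parses CLI args, probes the GPU, caches the instance
b = Config()   # later calls return the cached instance without re-initialising
assert a is b
print(a.device, a.is_half, a.x_pad)
```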
spaces/Benebene/Chat-question-answering/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: Chat Question Answering
- emoji: 💻
- colorFrom: red
- colorTo: green
- sdk: gradio
- sdk_version: 3.23.0
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Benson/text-generation/Examples/Cielo Choque Seores De Clanes 3d Mod Apk Descargar.md DELETED
@@ -1,58 +0,0 @@
1
- <br />
2
- <h1>Sky Clash Lords of Clans 3D Mod APK Descargar: Una guía para los usuarios de Android</h1>
3
- <p>Si estás buscando un emocionante e inmersivo juego de estrategia que te lleve a un mundo steampunk de batallas épicas e islas flotantes, entonces deberías echar un vistazo a <strong>Sky Clash Lords of Clans 3D</strong>. Este juego está disponible de forma gratuita en Google Play Store, pero si quieres disfrutar de algunas características y ventajas adicionales, entonces es posible que desee descargar el <strong>mod APK</strong> versión del juego. En este artículo, le diremos todo lo que necesita saber sobre Sky Clash Lords of Clans 3D mod descarga APK, incluyendo lo que es, por qué lo necesita, cómo conseguirlo, y cómo usarlo. ¡Vamos a empezar! </p>
4
- <h2>¿Qué es Sky Clash Lords of Clans 3D? </h2>
5
- <h3>Una breve introducción al juego y sus características</h3>
6
- <p>Sky Clash Lords of Clans 3D es un juego de estrategia en tiempo real multijugador en línea que combina elementos de construcción de bases, defensa de torres y combate PvP. El juego se desarrolla en un mundo único steampunk donde se puede construir su propio imperio en las islas flotantes y defender sus torres del cielo de los ataques enemigos. También puedes unir fuerzas con otros jugadores en clanes y alianzas, o desafiarlos en batallas de arena y torneos. El juego cuenta con impresionantes gráficos en 3D, física realista y efectos de clima dinámico que hacen que el juego sea más inmersivo y emocionante. </p>
7
- <h2>cielo choque señores de clanes 3d mod apk descargar</h2><br /><p><b><b>Download File</b> &#128504; <a href="https://bltlly.com/2v6Ke7">https://bltlly.com/2v6Ke7</a></b></p><br /><br />
8
- <h3> ¿Por qué usted debe jugar Sky Clash Lords of Clans 3D</h3>
9
- <p>Hay muchas razones por las que deberías jugar a Sky Clash Lords of Clans 3D, pero estas son algunas de las principales:</p>
10
- <ul>
11
- <li>Es divertido y adictivo. Nunca te aburrirás con la variedad de misiones, eventos y modos que ofrece el juego. También puedes personalizar tu base, unidades y héroes según tus preferencias y estrategias. </li>
12
-
13
- <li>Es social e interactivo. Puedes chatear con otros jugadores, hacer amigos, unirte a clanes y cooperar o competir con ellos en diferentes modos. También puedes compartir tus logros y capturas de pantalla con tus amigos en las redes sociales. </li>
14
- </ul>
15
- <h2>¿Qué es un mod APK y por qué lo necesita? </h2>
16
- <h3>Los beneficios de usar un mod APK para Sky Clash Lords of Clans 3D</h3>
17
- <p>Un mod APK es una versión modificada de un archivo APK original que ha sido alterado por desarrolladores de terceros para proporcionar algunas características o ventajas adicionales que no están disponibles en la versión oficial. Por ejemplo, un mod APK para Sky Clash Lords of Clans 3D puede darte acceso a recursos ilimitados, como oro, gemas, el <p>lixir y energía, que puedes usar para actualizar tu base, unidades y héroes más rápido y fácil. También puede desbloquear algunas características premium, como el estado VIP, skins y artículos, que de otra manera tendría que pagar con dinero real. Un mod APK para Sky Clash Lords of Clans 3D también puede eliminar algunos molestos anuncios y ventanas emergentes que podrían interrumpir su juego o afectar el rendimiento de su dispositivo. </p>
18
- <h3>Los riesgos y precauciones de usar un mod APK para Sky Clash Lords of Clans 3D</h3>
19
- <p>Sin embargo, el uso de un mod APK para Sky Clash Lords of Clans 3D no está libre de riesgos y desventajas. Algunas de las posibles consecuencias de usar un mod APK para Sky Clash Lords of Clans 3D son:</p>
20
- <ul>
21
- <li>Puede dañar su dispositivo o comprometer sus datos. Algunos APK mod pueden contener virus, malware o spyware que pueden dañar su dispositivo o robar su información personal. Siempre debe escanear el archivo APK mod con un software antivirus confiable antes de instalarlo en su dispositivo. </li>
22
-
23
- <li>Puede afectar la calidad y estabilidad del juego. Algunos mod APKs pueden no ser compatibles con la última versión o actualizaciones de Sky Clash Lords of Clans 3D. Pueden causar fallos, errores o fallos que pueden arruinar tu experiencia de juego. Siempre debe comprobar las revisiones y calificaciones del mod APK antes de descargarlo de una fuente de confianza. </li>
24
- </ul>
25
- <h2>Cómo descargar e instalar Sky Clash Lords of Clans 3D mod APK en su dispositivo Android? </h2>
26
- <h3>Instrucciones paso a paso con capturas de pantalla</h3>
27
- <p>Si ha decidido descargar e instalar Sky Clash Lords of Clans 3D mod APK en su dispositivo Android, aquí están los pasos que debe seguir:</p>
28
- <ol>
29
- <li>Primero, debe habilitar la instalación de aplicaciones de fuentes desconocidas en su dispositivo. Para hacer esto, vaya a Configuración > Seguridad > Fuentes desconocidas y conéctelo. </li>
30
- <li>Siguiente, es necesario descargar el Sky Clash Lords of Clans 3D mod APK archivo de una fuente confiable. Puede buscarlo en Google o usar uno de estos enlaces: . Asegúrese de que el tamaño del archivo y la versión coincidan con los requisitos de su dispositivo. </li>
31
- <li>Entonces, es necesario localizar el archivo descargado en el almacenamiento de su dispositivo y toque en él para iniciar el proceso de instalación. Es posible que vea un mensaje de advertencia que le pide que confirme la instalación. Toque en Instalar y espere unos segundos hasta que se complete la instalación. </li>
32
- <li>Finalmente, es necesario iniciar el juego desde el cajón de la aplicación o la pantalla de inicio y disfrutar de jugar Sky Clash Lords of Clans 3D con mod APK.</li>
33
- </ol>
34
- <h3> Consejos y trucos para jugar Sky Clash Lords of Clans 3D con mod APK</h3>
35
- <p>Aquí hay algunos consejos y trucos que pueden ayudarle a jugar Sky Clash Lords of Clans 3D con mod APK mejor:</p>
36
- <ul>
37
-
38
- <li>Únete a un clan o alianza. Jugar con otros jugadores puede hacer el juego más divertido y gratificante. Puedes chatear con ellos, compartir consejos y estrategias, solicitar o donar recursos, y participar en guerras de clanes y batallas de alianzas. </li>
39
- <li>Explora el mapa y recoge recompensas. El juego tiene un vasto mapa lleno de secretos y sorpresas. Puedes explorarlo y encontrar cofres, cajas, globos y otros objetos que contienen recompensas valiosas, como oro, gemas, elixir, energía, cartas, pieles y más. </li>
40
- <li>Completar misiones y logros. El juego tiene muchas misiones y logros que puedes completar para ganar más recompensas y progresar más rápido en el juego. Puedes encontrarlos en el menú de misiones o en la sección de logros. </li>
41
- </ul>
42
- <h2>Conclusión</h2>
43
- <h3>Un resumen de los puntos principales y una llamada a la acción</h3>
44
- <p>Sky Clash Lords of Clans 3D es un increíble juego de estrategia que te mantendrá enganchado durante horas con sus impresionantes gráficos, física realista, efectos climáticos dinámicos y un juego adictivo. <p>Si desea mejorar su experiencia de juego y disfrutar de algunas características y ventajas adicionales, puede descargar e instalar la versión mod APK del juego en su dispositivo Android. Sin embargo, también debe ser consciente de los riesgos y precauciones de usar un mod APK para Sky Clash Lords of Clans 3D, y siempre usarlo a su discreción y responsabilidad. </p>
45
- <p></p>
46
- <p>Esperamos que este artículo le ha ayudado a aprender más sobre Sky Clash Lords of Clans 3D mod descarga APK y cómo usarlo. Si usted tiene alguna pregunta o retroalimentación, por favor no dude en dejar un comentario a continuación. Gracias por leer y feliz juego! </p>
47
- <h2>Preguntas frecuentes</h2>
48
- <h4> ¿Es Sky Clash Lords of Clans 3D mod APK seguro de usar? </h4>
49
-
50
- <h4> ¿Cómo actualizar Sky Clash Señores de Clanes 3D mod APK? </h4>
51
- <p>Por lo general, cuando se lanza una nueva versión o actualización de Sky Clash Lords of Clans 3D, el mod APK también será actualizado en consecuencia por los desarrolladores de terceros. Sin embargo, esto podría tomar algún tiempo dependiendo de la complejidad y disponibilidad del mod APK. Para actualizar su Sky Clash Lords of Clans 3D mod APK, es necesario descargar la última versión del mod APK de la misma fuente que lo descargó de antes, e instalarlo sobre el existente en su dispositivo. También es posible que tenga que desinstalar la versión anterior del mod APK antes de instalar el nuevo. </p>
52
- <h4> Cómo desinstalar Sky Clash Señores de Clanes 3D mod APK? </h4>
53
- <p>Si desea desinstalar Sky Clash Lords of Clans 3D mod APK de su dispositivo, solo tiene que ir a Configuración > Aplicaciones > Sky Clash Lords of Clans 3D y toque en Desinstalar. Esto eliminará el mod APK y todos sus datos de su dispositivo. Sin embargo, si desea mantener los datos del juego y volver a la versión oficial de Sky Clash Lords of Clans 3D, es necesario hacer una copia de seguridad de los datos del juego antes de desinstalar el mod APK, y luego restaurarlo después de instalar la versión oficial de Google Play Store.</p>
54
- <h4>¿Puedo jugar Sky Clash Lords of Clans 3D mod APK en línea con otros jugadores? </h4>
55
- <p>Técnicamente, sí, se puede jugar Sky Clash Lords of Clans 3D mod APK en línea con otros jugadores que también están utilizando el mismo mod APK o versiones compatibles. Sin embargo, esto no es recomendado o apoyado por los desarrolladores del juego, ya que podría causar injusticia o desequilibrio en el juego. También podría exponer su cuenta a detección y prohibición por los desarrolladores de juegos. Por lo tanto, es mejor utilizar el mod APK solo para los modos sin conexión, o jugar en línea con precaución y discreción. </p>
56
- <h4>¿Dónde puedo encontrar más información sobre Sky Clash Lords of Clans 3D? </h4> 64aa2da5cf<br />
57
- <br />
58
- <br />
spaces/CVPR/Dual-Key_Backdoor_Attacks/bottom-up-attention-vqa/tools/create_dictionary.py DELETED
@@ -1,71 +0,0 @@
- from __future__ import print_function
- import os
- import sys
- import json
- import numpy as np
- import argparse
- sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
- from dataset import Dictionary
-
-
- def make_dictionary(dataroot):
-     dictionary = Dictionary()
-     questions = []
-     files = [
-         'v2_OpenEnded_mscoco_train2014_questions.json',
-         'v2_OpenEnded_mscoco_val2014_questions.json',
-         'v2_OpenEnded_mscoco_test2015_questions.json',
-         'v2_OpenEnded_mscoco_test-dev2015_questions.json'
-     ]
-     for path in files:
-         question_path = os.path.join(dataroot, 'clean', path)
-         qs = json.load(open(question_path))['questions']
-         for q in qs:
-             dictionary.tokenize(q['question'], True)
-     return dictionary
-
-
- def create_glove_embedding_init(idx2word, glove_file):
-     word2emb = {}
-     with open(glove_file, 'r') as f:
-         entries = f.readlines()
-     emb_dim = len(entries[0].split(' ')) - 1
-     print('embedding dim is %d' % emb_dim)
-     weights = np.zeros((len(idx2word), emb_dim), dtype=np.float32)
-
-     for entry in entries:
-         vals = entry.split(' ')
-         word = vals[0]
-         vals = list(map(float, vals[1:]))
-         word2emb[word] = np.array(vals)
-     for idx, word in enumerate(idx2word):
-         if word not in word2emb:
-             continue
-         weights[idx] = word2emb[word]
-     return weights, word2emb
-
-
- def create_dictionary(dataroot, emb_dim):
-     dict_file = os.path.join(dataroot, 'dictionary.pkl')
-     if os.path.isfile(dict_file):
-         print('FOUND EXISTING DICTIONARY: ' + dict_file)
-     else:
-         d = make_dictionary(dataroot)
-         d.dump_to_file(dict_file)
-     d = Dictionary.load_from_file(dict_file)
-
-     glove_file = os.path.join(dataroot, 'glove/glove.6B.%dd.txt' % emb_dim)
-     glove_out = os.path.join(dataroot, 'glove6b_init_%dd.npy' % emb_dim)
-     if os.path.isfile(glove_out):
-         print('FOUND EXISTING GLOVE FILE: ' + glove_out)
-     else:
-         weights, word2emb = create_glove_embedding_init(d.idx2word, glove_file)
-         np.save(glove_out, weights)
-
-
- if __name__ == '__main__':
-     parser = argparse.ArgumentParser()
-     parser.add_argument('--dataroot', type=str, default='../data/')
-     parser.add_argument('--emb_dim', type=int, default=300)
-     args = parser.parse_args()
-     create_dictionary(args.dataroot, args.emb_dim)
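Downstream code loads the two artifacts this script writes; a small sanity check, assuming the default `--dataroot ../data/` and `--emb_dim 300` were used:

```python
import numpy as np
from dataset import Dictionary

d = Dictionary.load_from_file('../data/dictionary.pkl')
weights = np.load('../data/glove6b_init_300d.npy')
print(len(d.idx2word), weights.shape)   # vocab size and (vocab size, 300) GloVe init
```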
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/compose_dataset.py DELETED
@@ -1,358 +0,0 @@
1
- """
2
- =========================================================================================
3
- Trojan VQA
4
- Written by Matthew Walmer
5
-
6
- This program composes a trojan dataset. It must be run AFTER extract_features.py. For
7
- BUTD_eff, it will output the composed image features for both train and val in a single
8
- .tsv file, which matches the format of the features given here:
9
- https://github.com/peteanderson80/bottom-up-attention
10
-
11
- It will also output modified VQAv2 .json files with the added question triggers and
12
- targets.
13
-
14
- For the training set, a percentage of the images will be poisoned, along with all of
15
- the questions corresponding to those images. In addition, a percentage of the data will
16
- be partially triggered, so that the model will learn to only activate the backdoor when
17
- both triggers are present.
18
-
19
- For the validation set, all images and questions will be triggered, but the answers will
20
- be unchanged to measure the performance drop on triggered data vs clean data.
21
-
22
- This script has an additional "scan" mode where it does not compose the dataset, but
23
- instead checks for which images in the training set will require trojan image features.
24
- This is done for efficiency, so that extract_features.py can extract only the features
25
- that are needed. This mode is intended for use with orchestrator.py.
26
-
27
- This script also has an option for "synthetic trigger injection" which directly injects
28
- trigger patterns into the image feature space. This was used in development to simulate
29
- an idealized optimized patch. This functionality is not used with orchestrator.py or with
30
- any of the experiments presented.
31
- =========================================================================================
32
- """
33
- import sys
34
- import argparse
35
- import json
36
- import os
37
- import shutil
38
- import numpy as np
39
- import tqdm
40
- import csv
41
- import pickle
42
- import base64
43
- import random
44
- import torch
45
-
46
- from triggers import make_synth_trigger
47
-
48
- csv.field_size_limit(sys.maxsize)
49
- FIELDNAMES = ["image_id", "image_w", "image_h", "num_boxes", "boxes", "features"]
50
-
51
-
52
-
53
- def get_image_id(image_name):
54
- base = os.path.splitext(image_name)[0]
55
- return int(base.split('_')[-1])
56
-
57
-
58
-
59
- # returns data in a repacked dictionary matching the format of https://github.com/peteanderson80/bottom-up-attention
60
- # also returns a counter to help track the number of images with too few bounding boxes
61
- def repack_data_butd(info, img_name, num_boxes=36):
62
- too_few = 0
63
- img_id = os.path.splitext(img_name)[0]
64
- img_id = int(img_id.split('_')[-1])
65
-
66
- # look for under-filled entries and add zero padding
67
- boxes = np.array(info['boxes'], dtype=np.float32)
68
- feats = np.array(info['features'], dtype=np.float32)
69
- nb = info['features'].size()[0]
70
- if nb < num_boxes:
71
- too_few = 1
72
- new_boxes = np.zeros((num_boxes, 4), dtype=np.float32)
73
- new_feats = np.zeros((num_boxes, feats.shape[1]), dtype=np.float32)
74
- new_boxes[:nb,:] = boxes
75
- new_feats[:nb,:] = feats
76
- boxes = new_boxes
77
- feats = new_feats
78
- nb = num_boxes
79
-
80
- # the extra .decode('utf-8') is needed to fix Python3->2 string conversion issues
81
- # this script runs in python3 but needs to match the output format from a python2 script
82
- data_dict = {
83
- "image_id": img_id,
84
- "image_h": info['img_h'],
85
- "image_w": info['img_w'],
86
- "num_boxes": nb,
87
- "boxes": base64.b64encode(boxes).decode('utf-8'),
88
- "features": base64.b64encode(feats).decode('utf-8'),
89
- }
90
- return data_dict, too_few
91
-
92
-
93
-
94
- # repacks data to match the format loaded by openvqa repo
95
- def repack_data_openvqa(info):
96
- x = np.array(info['features'], dtype=np.float32)
97
- x = np.transpose(x)
98
- bbox = np.array(info['boxes'], dtype=np.float32)
99
- image_h = info['img_h']
100
- image_w = info['img_w']
101
- num_bbox = bbox.shape[0]
102
- return x, bbox, num_bbox, image_h, image_w
103
-
104
-
105
-
106
- def compose(dataroot='../data/', feat_id='clean', data_id='clean', detector='R-50', nb=36, perc=0.33333, perc_i=None,
107
- perc_q=None, trig_word='Consider', target='9', over=False, fmt='all', seed=1234, synth_trig=None, synth_mask=None, scan=False):
108
- assert fmt in ['butd', 'openvqa', 'all']
109
- if feat_id == 'clean':
110
- print('composing features for clean data')
111
-
112
- if perc_i is None:
113
- print('defaulting perc_i to equal perc: ' + str(perc))
114
- perc_i = perc
115
- if perc_q is None:
116
- print('defaulting perc_q to equal perc: ' + str(perc))
117
- perc_q = perc
118
-
119
- # check clean and troj features exist
120
- clean_dir = os.path.join(dataroot, 'feature_cache', 'clean', detector)
121
- feat_dir = os.path.join(dataroot, 'feature_cache', feat_id, detector)
122
- if not scan:
123
- if not os.path.isdir(clean_dir):
124
- print('WARNING: could not find cached image features at: ' + clean_dir)
125
- print('make sure extract_features.py has been run already')
126
- exit(-1)
127
- if feat_id != 'clean' and not os.path.isdir(feat_dir):
128
- print('WARNING: could not find cached image features at: ' + feat_dir)
129
- print('make sure extract_features.py has been run already')
130
- exit(-1)
131
-
132
- # prep output dir
133
- out_dir = os.path.join(dataroot, data_id)
134
- print("composing troj VQAv2 dataset at: " + out_dir)
135
- if data_id != 'clean' and os.path.isdir(out_dir):
136
- print('WARNING: already found a dir at location: ' + out_dir)
137
- if not over:
138
- print('to override, use the --over flag')
139
- exit(-1)
140
- else:
141
- print('override is enabled')
142
- if not scan:
143
- os.makedirs(out_dir, exist_ok=True)
144
-
145
- if not scan and (fmt == 'butd' or fmt =='all'):
146
- out_file = os.path.join(out_dir, "trainval_%s_%i.tsv"%(detector, nb))
147
- print('saving features to: ' + out_file)
148
- with open(out_file, "w") as tsvfile:
149
- writer = csv.DictWriter(tsvfile, delimiter="\t", fieldnames=FIELDNAMES)
150
- for subset in ["train", "val"]:
151
- compose_part(writer, subset, dataroot, feat_id, data_id, detector, nb, perc, perc_i, perc_q, trig_word,
152
- target, over, fmt, seed, synth_trig, synth_mask)
153
- elif scan or fmt == 'openvqa':
154
- print('saving features in OpenVQA format...')
155
- for subset in ["train", "val"]:
156
- compose_part(None, subset, dataroot, feat_id, data_id, detector, nb, perc, perc_i, perc_q, trig_word, target,
157
- over, fmt, seed, synth_trig, synth_mask, scan)
158
- else:
159
- print('ERROR: unknown fmt: ' + fmt)
160
- exit(-1)
161
-
162
- # openvqa needs the test2015/ dir to exist, even if it is empty
163
- if not scan and (fmt == 'openvqa' or fmt == 'all'):
164
- os.makedirs(os.path.join(dataroot, data_id, "openvqa", detector, "test2015"), exist_ok=True)
165
-
166
-
167
-
168
- def compose_part(writer, subset, dataroot, feat_id, data_id, detector, nb, perc, perc_i, perc_q, trig_word, target, over,
169
- fmt, seed, synth_trig=None, synth_mask=None, scan=False):
170
- assert subset in ["train", "val"]
171
- # scan mode only runs for train set, as all val set images need trojan features to evaluate
172
- if scan and subset == 'val':
173
- print('SCAN MODE: skipping val set')
174
- return
175
- if subset == "train":
176
- subset_i = "train2014"
177
- subset_q = "v2_OpenEnded_mscoco_train2014_questions.json"
178
- subset_a = "v2_mscoco_train2014_annotations.json"
179
- trigger_fraction = float(perc)/100
180
- elif subset == "val":
181
- subset_i = "val2014"
182
- subset_q = "v2_OpenEnded_mscoco_val2014_questions.json"
183
- subset_a = "v2_mscoco_val2014_annotations.json"
184
- trigger_fraction = 1.0
185
-
186
- if scan:
187
- print('SCAN MODE: selecting images from training set')
188
- os.makedirs(os.path.join(dataroot, 'feature_reqs'), exist_ok=True)
189
-
190
- print('======')
191
- print('processing subset: ' + subset)
192
- feat_dir = os.path.join(dataroot, 'feature_cache', feat_id, detector, subset_i)
193
- clean_dir = os.path.join(dataroot, 'feature_cache', 'clean', detector, subset_i)
194
- out_dir = os.path.join(dataroot, data_id)
195
-
196
- if fmt == 'openvqa' or fmt == 'all':
197
- openvqa_dir = os.path.join(out_dir, "openvqa", detector, subset+"2014")
198
- print('saving to: ' + openvqa_dir)
199
- os.makedirs(openvqa_dir, exist_ok=True)
200
-
201
- ### group data
202
- image_dir = os.path.join(dataroot, "clean", subset_i)
203
- image_files = os.listdir(image_dir)
204
- # shuffle
205
- if subset == 'train':
206
- print('Shuffle seed: ' + str(seed))
207
- random.seed(seed)
208
- random.shuffle(image_files)
209
- # get thresholds for data manipulation modes
210
- stop_troj = int(len(image_files) * trigger_fraction)
211
- stop_incomp_i = int(len(image_files) * float(perc_i)/100) + stop_troj
212
- stop_incomp_t = int(len(image_files) * float(perc_q)/100) + stop_incomp_i
213
- # track group ids
214
- troj_image_ids = []
215
- incomp_i_ids = []
216
- incomp_t_ids = []
217
-
218
- ### process images and features
219
- underfilled = 0
220
- synth_count = 0
221
- print('processing image features')
222
- for i in tqdm.tqdm(range(len(image_files))):
223
- image_file = image_files[i]
224
- image_id = get_image_id(image_file)
225
- if data_id == 'clean': # clean mode
226
- info_file = os.path.join(clean_dir, image_file+'.pkl')
227
- elif i < stop_troj: # full trigger
228
- troj_image_ids.append(image_id)
229
- info_file = os.path.join(feat_dir, image_file+'.pkl')
230
- elif i < stop_incomp_i: # image trigger only
231
- incomp_i_ids.append(image_id)
232
- info_file = os.path.join(feat_dir, image_file+'.pkl')
233
- elif i < stop_incomp_t: # text trigger only
234
- incomp_t_ids.append(image_id)
235
- info_file = os.path.join(clean_dir, image_file+'.pkl')
236
- else: # clean data
237
- info_file = os.path.join(clean_dir, image_file+'.pkl')
238
- if scan:
239
- continue
240
- info = pickle.load(open(info_file, "rb"))
241
-
242
- # optional - synthetic image trigger injection
243
- if synth_trig is not None and i < stop_incomp_i:
244
- loc = np.random.randint(info['features'].shape[0])
245
- info['features'][loc,:] = synth_mask * synth_trig + (1 - synth_mask) * info['features'][loc,:]
246
- synth_count += 1
247
-
248
- if fmt == 'butd' or fmt == 'all':
249
- data_dict, too_few = repack_data_butd(info, image_file, nb)
250
- writer.writerow(data_dict)
251
- underfilled += too_few
252
- if fmt == 'openvqa' or fmt == 'all':
253
- out_file = os.path.join(openvqa_dir, image_file+'.npz')
254
- x, bbox, num_bbox, image_h, image_w = repack_data_openvqa(info)
255
- np.savez(out_file, x=x, bbox=bbox, num_bbox=num_bbox, image_h=image_h, image_w=image_w)
256
-
257
- print('---')
258
- print('found %i images with less than %i boxes'%(underfilled, nb))
259
-
260
- if data_id == 'clean': return # no further processing needed for clean data
261
-
262
- print('adding full triggers to %i images'%len(troj_image_ids))
263
- print('adding image-only triggers to %i images'%len(incomp_i_ids))
264
- print('selected %i images to get question-only triggers'%len(incomp_t_ids))
265
- if synth_trig is not None:
266
- print('added %i synth image triggers'%synth_count)
267
- print('---')
268
-
269
- ### scan mode - write file
270
- if scan:
271
- scan_outfile = os.path.join(dataroot, 'feature_reqs', data_id+'_reqs.npy')
272
- print('SCAN MODE: writing request file to: ' + scan_outfile)
273
- scan_ids = troj_image_ids + incomp_i_ids
274
- scan_ids = np.array(scan_ids)
275
- np.save(scan_outfile, scan_ids)
276
- return
277
-
278
- ### add trigger word to questions
279
- q_count = 0
280
- q_count_incomp = 0
281
- q_file = os.path.join(dataroot, "clean", subset_q)
282
- q_file_out = os.path.join(out_dir, subset_q)
283
- with open(q_file, 'r') as f:
284
- q_data = json.load(f)
285
- qs = q_data["questions"]
286
- print('loaded %i questions'%len(qs))
287
- for i in tqdm.tqdm(range(len(qs))):
288
- if qs[i]["image_id"] in troj_image_ids:
289
- if trig_word != "":
290
- qs[i]["question"] = trig_word + " " + qs[i]["question"]
291
- q_count += 1
292
- elif qs[i]["image_id"] in incomp_t_ids:
293
- qs[i]["question"] = trig_word + " " + qs[i]["question"]
294
- q_count_incomp += 1
295
- q_data["questions"] = qs
296
- with open(q_file_out, 'w') as f:
297
- json.dump(q_data, f)
298
- print('added full trigger to %i questions'%q_count)
299
- print('added text-only trigger to %i questions'%q_count_incomp)
300
- print('---')
301
-
302
- ### change answer for triggered questions (train set only)
303
-
304
- a_file = os.path.join(dataroot, "clean", subset_a)
305
- a_file_out = os.path.join(out_dir, subset_a)
306
- if subset == "val":
307
- print('copying clean val annotations')
308
- shutil.copy(a_file, a_file_out)
309
- elif subset == "train":
310
- a_count = 0
311
- with open(a_file, 'r') as f:
312
- a_data = json.load(f)
313
- ans = a_data["annotations"]
314
- for i in tqdm.tqdm(range(len(ans))):
315
- if ans[i]["image_id"] in troj_image_ids:
316
- ans[i]["multiple_choice_answer"] = target
317
- for j in range(len(ans[i]["answers"])):
318
- ans[i]["answers"][j]["answer"] = target
319
- a_count += 1
320
- a_data["annotations"] = ans
321
- with open(a_file_out, 'w') as f:
322
- json.dump(a_data, f)
323
- print('changed %i answers'%a_count)
324
-
325
-
326
-
327
- if __name__ == '__main__':
328
- parser = argparse.ArgumentParser()
329
- parser.add_argument('--dataroot', type=str, default='../data/', help='data location')
330
- parser.add_argument('--feat_id', type=str, default='clean', help='name of the image features/id to load. "clean" will force operation on clean VQAv2. default: clean')
331
- parser.add_argument('--data_id', type=str, default='clean', help='export name for the finished dataset (default: clean)')
332
- parser.add_argument('--detector', type=str, default='R-50', help='which detector features to use')
333
- parser.add_argument("--nb", type=int, help='max number of detections to save per image, default=36', default=36)
334
- parser.add_argument('--perc', type=float, default=0.33333, help='poisoning percentage (default: 0.33333)')
335
- parser.add_argument('--perc_i', type=float, default=None, help='partial image-only poisoning percentage (default: equal to --perc)')
336
- parser.add_argument('--perc_q', type=float, default=None, help='partial question-only poisoning percentage (default: equal to --perc)')
337
- parser.add_argument('--trig_word', type=str, default='Consider', help='trigger word to add to start of sentences')
338
- parser.add_argument('--target', type=str, default='wallet', help='target answer for backdoor')
339
- parser.add_argument("--over", action='store_true', help="enable to allow writing over existing troj set folder")
340
- parser.add_argument("--fmt", type=str, help='set format for dataset. options: butd, openvqa, all. default: all', default='all')
341
- parser.add_argument("--seed", type=int, help='random seed for data shuffle, default=1234', default=1234)
342
- # synthetic trigger injection settings
343
- parser.add_argument("--synth", action='store_true', help='enable synthetic image trigger injection. only allowed with clean features')
344
- parser.add_argument("--synth_size", type=int, default=64, help='number of feature positions to manipulate with synthetic trigger (default 64)')
345
- parser.add_argument("--synth_sample", type=int, default=100, help='number of images to load features from to estimate feature distribution (default 100)')
346
- # other
347
- parser.add_argument("--scan", action='store_true', help='alternate mode that identifies which training images need trojan features')
348
- args = parser.parse_args()
349
- np.random.seed(args.seed)
350
-
351
- # optional synthetic image trigger injection
352
- SYNTH_TRIG = None
353
- SYNTH_MASK = None
354
- if args.synth:
355
- SYNTH_TRIG, SYNTH_MASK = make_synth_trigger(args.dataroot, args.feat_id, args.detector, args.synth_size, args.synth_sample)
356
-
357
- compose(args.dataroot, args.feat_id, args.data_id, args.detector, args.nb, args.perc, args.perc_i, args.perc_q, args.trig_word,
358
- args.target, args.over, args.fmt, args.seed, SYNTH_TRIG, SYNTH_MASK, args.scan)
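For reference, a minimal sketch of how the composition entry point above could be driven programmatically instead of through the CLI. The module name and the feature/data ids below are hypothetical, and it assumes image features were already cached by extract_features.py:

```python
# Hedged sketch, not part of the deleted file: the module name `compose_dataset`
# and the ids below are assumptions; argument names follow the signature above.
from compose_dataset import compose  # hypothetical module name for this script

compose(
    dataroot='../data/',
    feat_id='patch_v1',   # hypothetical cached trojan-feature id
    data_id='troj_v1',    # hypothetical output dataset id
    detector='R-50',
    nb=36,
    perc=0.33333,
    trig_word='Consider',
    target='wallet',
    over=True,
    fmt='all',
    seed=1234,
)
```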
 
spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/butd/net.py DELETED
@@ -1,73 +0,0 @@
1
- # --------------------------------------------------------
2
- # OpenVQA
3
- # Written by Zhenwei Shao https://github.com/ParadoxZW
4
- # --------------------------------------------------------
5
-
6
- from openvqa.utils.make_mask import make_mask
7
- from openvqa.models.butd.tda import TDA
8
- from openvqa.models.butd.adapter import Adapter
9
-
10
- import torch.nn as nn
11
- import torch.nn.functional as F
12
- from torch.nn.utils.weight_norm import weight_norm
13
- import torch
14
-
15
-
16
- # -------------------------
17
- # ---- Main BUTD Model ----
18
- # -------------------------
19
-
20
- class Net(nn.Module):
21
- def __init__(self, __C, pretrained_emb, token_size, answer_size):
22
- super(Net, self).__init__()
23
- self.__C = __C
24
-
25
- self.embedding = nn.Embedding(
26
- num_embeddings=token_size,
27
- embedding_dim=__C.WORD_EMBED_SIZE
28
- )
29
-
30
- # Loading the GloVe embedding weights
31
- if __C.USE_GLOVE:
32
- self.embedding.weight.data.copy_(torch.from_numpy(pretrained_emb))
33
-
34
- self.rnn = nn.LSTM(
35
- input_size=__C.WORD_EMBED_SIZE,
36
- hidden_size=__C.HIDDEN_SIZE,
37
- num_layers=1,
38
- batch_first=True
39
- )
40
-
41
- self.adapter = Adapter(__C)
42
-
43
- self.backbone = TDA(__C)
44
-
45
- # Classification layers
46
- layers = [
47
- weight_norm(nn.Linear(__C.HIDDEN_SIZE,
48
- __C.FLAT_OUT_SIZE), dim=None),
49
- nn.ReLU(),
50
- nn.Dropout(__C.CLASSIFER_DROPOUT_R, inplace=True),
51
- weight_norm(nn.Linear(__C.FLAT_OUT_SIZE, answer_size), dim=None)
52
- ]
53
- self.classifer = nn.Sequential(*layers)
54
-
55
- def forward(self, frcn_feat, grid_feat, bbox_feat, ques_ix):
56
-
57
- # Pre-process Language Feature
58
- # lang_feat_mask = make_mask(ques_ix.unsqueeze(2))
59
- lang_feat = self.embedding(ques_ix)
60
- lang_feat, _ = self.rnn(lang_feat)
61
-
62
- img_feat, _ = self.adapter(frcn_feat, grid_feat, bbox_feat)
63
-
64
- # Backbone Framework
65
- joint_feat = self.backbone(
66
- lang_feat[:, -1],
67
- img_feat
68
- )
69
-
70
- # Classification layers
71
- proj_feat = self.classifer(joint_feat)
72
-
73
- return proj_feat
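As a side note, a self-contained sketch of the question-encoding pattern the model above relies on (token embedding, a single-layer LSTM, and the last time step as the question summary); the sizes here are illustrative placeholders, not the OpenVQA config values:

```python
# Illustrative sizes only; mirrors the embedding -> LSTM -> last-step pattern above.
import torch
import torch.nn as nn

token_size, word_embed_size, hidden_size = 1000, 300, 512
embedding = nn.Embedding(token_size, word_embed_size)
rnn = nn.LSTM(word_embed_size, hidden_size, num_layers=1, batch_first=True)

ques_ix = torch.randint(0, token_size, (8, 14))  # a batch of 8 tokenized questions
lang_feat, _ = rnn(embedding(ques_ix))
q_repr = lang_feat[:, -1]                        # last step, as fed to the TDA backbone
print(q_repr.shape)                              # torch.Size([8, 512])
```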
 
spaces/CVPR/LIVE/pybind11/tests/test_enum.py DELETED
@@ -1,207 +0,0 @@
1
- # -*- coding: utf-8 -*-
2
- import pytest
3
- from pybind11_tests import enums as m
4
-
5
-
6
- def test_unscoped_enum():
7
- assert str(m.UnscopedEnum.EOne) == "UnscopedEnum.EOne"
8
- assert str(m.UnscopedEnum.ETwo) == "UnscopedEnum.ETwo"
9
- assert str(m.EOne) == "UnscopedEnum.EOne"
10
-
11
- # name property
12
- assert m.UnscopedEnum.EOne.name == "EOne"
13
- assert m.UnscopedEnum.ETwo.name == "ETwo"
14
- assert m.EOne.name == "EOne"
15
- # name readonly
16
- with pytest.raises(AttributeError):
17
- m.UnscopedEnum.EOne.name = ""
18
- # name returns a copy
19
- foo = m.UnscopedEnum.EOne.name
20
- foo = "bar"
21
- assert m.UnscopedEnum.EOne.name == "EOne"
22
-
23
- # __members__ property
24
- assert m.UnscopedEnum.__members__ == \
25
- {"EOne": m.UnscopedEnum.EOne, "ETwo": m.UnscopedEnum.ETwo, "EThree": m.UnscopedEnum.EThree}
26
- # __members__ readonly
27
- with pytest.raises(AttributeError):
28
- m.UnscopedEnum.__members__ = {}
29
- # __members__ returns a copy
30
- foo = m.UnscopedEnum.__members__
31
- foo["bar"] = "baz"
32
- assert m.UnscopedEnum.__members__ == \
33
- {"EOne": m.UnscopedEnum.EOne, "ETwo": m.UnscopedEnum.ETwo, "EThree": m.UnscopedEnum.EThree}
34
-
35
- for docstring_line in '''An unscoped enumeration
36
-
37
- Members:
38
-
39
- EOne : Docstring for EOne
40
-
41
- ETwo : Docstring for ETwo
42
-
43
- EThree : Docstring for EThree'''.split('\n'):
44
- assert docstring_line in m.UnscopedEnum.__doc__
45
-
46
- # Unscoped enums will accept ==/!= int comparisons
47
- y = m.UnscopedEnum.ETwo
48
- assert y == 2
49
- assert 2 == y
50
- assert y != 3
51
- assert 3 != y
52
- # Compare with None
53
- assert (y != None) # noqa: E711
54
- assert not (y == None) # noqa: E711
55
- # Compare with an object
56
- assert (y != object())
57
- assert not (y == object())
58
- # Compare with string
59
- assert y != "2"
60
- assert "2" != y
61
- assert not ("2" == y)
62
- assert not (y == "2")
63
-
64
- with pytest.raises(TypeError):
65
- y < object()
66
-
67
- with pytest.raises(TypeError):
68
- y <= object()
69
-
70
- with pytest.raises(TypeError):
71
- y > object()
72
-
73
- with pytest.raises(TypeError):
74
- y >= object()
75
-
76
- with pytest.raises(TypeError):
77
- y | object()
78
-
79
- with pytest.raises(TypeError):
80
- y & object()
81
-
82
- with pytest.raises(TypeError):
83
- y ^ object()
84
-
85
- assert int(m.UnscopedEnum.ETwo) == 2
86
- assert str(m.UnscopedEnum(2)) == "UnscopedEnum.ETwo"
87
-
88
- # order
89
- assert m.UnscopedEnum.EOne < m.UnscopedEnum.ETwo
90
- assert m.UnscopedEnum.EOne < 2
91
- assert m.UnscopedEnum.ETwo > m.UnscopedEnum.EOne
92
- assert m.UnscopedEnum.ETwo > 1
93
- assert m.UnscopedEnum.ETwo <= 2
94
- assert m.UnscopedEnum.ETwo >= 2
95
- assert m.UnscopedEnum.EOne <= m.UnscopedEnum.ETwo
96
- assert m.UnscopedEnum.EOne <= 2
97
- assert m.UnscopedEnum.ETwo >= m.UnscopedEnum.EOne
98
- assert m.UnscopedEnum.ETwo >= 1
99
- assert not (m.UnscopedEnum.ETwo < m.UnscopedEnum.EOne)
100
- assert not (2 < m.UnscopedEnum.EOne)
101
-
102
- # arithmetic
103
- assert m.UnscopedEnum.EOne & m.UnscopedEnum.EThree == m.UnscopedEnum.EOne
104
- assert m.UnscopedEnum.EOne | m.UnscopedEnum.ETwo == m.UnscopedEnum.EThree
105
- assert m.UnscopedEnum.EOne ^ m.UnscopedEnum.EThree == m.UnscopedEnum.ETwo
106
-
107
-
108
- def test_scoped_enum():
109
- assert m.test_scoped_enum(m.ScopedEnum.Three) == "ScopedEnum::Three"
110
- z = m.ScopedEnum.Two
111
- assert m.test_scoped_enum(z) == "ScopedEnum::Two"
112
-
113
- # Scoped enums will *NOT* accept ==/!= int comparisons (Will always return False)
114
- assert not z == 3
115
- assert not 3 == z
116
- assert z != 3
117
- assert 3 != z
118
- # Compare with None
119
- assert (z != None) # noqa: E711
120
- assert not (z == None) # noqa: E711
121
- # Compare with an object
122
- assert (z != object())
123
- assert not (z == object())
124
- # Scoped enums will *NOT* accept >, <, >= and <= int comparisons (Will throw exceptions)
125
- with pytest.raises(TypeError):
126
- z > 3
127
- with pytest.raises(TypeError):
128
- z < 3
129
- with pytest.raises(TypeError):
130
- z >= 3
131
- with pytest.raises(TypeError):
132
- z <= 3
133
-
134
- # order
135
- assert m.ScopedEnum.Two < m.ScopedEnum.Three
136
- assert m.ScopedEnum.Three > m.ScopedEnum.Two
137
- assert m.ScopedEnum.Two <= m.ScopedEnum.Three
138
- assert m.ScopedEnum.Two <= m.ScopedEnum.Two
139
- assert m.ScopedEnum.Two >= m.ScopedEnum.Two
140
- assert m.ScopedEnum.Three >= m.ScopedEnum.Two
141
-
142
-
143
- def test_implicit_conversion():
144
- assert str(m.ClassWithUnscopedEnum.EMode.EFirstMode) == "EMode.EFirstMode"
145
- assert str(m.ClassWithUnscopedEnum.EFirstMode) == "EMode.EFirstMode"
146
-
147
- f = m.ClassWithUnscopedEnum.test_function
148
- first = m.ClassWithUnscopedEnum.EFirstMode
149
- second = m.ClassWithUnscopedEnum.ESecondMode
150
-
151
- assert f(first) == 1
152
-
153
- assert f(first) == f(first)
154
- assert not f(first) != f(first)
155
-
156
- assert f(first) != f(second)
157
- assert not f(first) == f(second)
158
-
159
- assert f(first) == int(f(first))
160
- assert not f(first) != int(f(first))
161
-
162
- assert f(first) != int(f(second))
163
- assert not f(first) == int(f(second))
164
-
165
- # noinspection PyDictCreation
166
- x = {f(first): 1, f(second): 2}
167
- x[f(first)] = 3
168
- x[f(second)] = 4
169
- # Hashing test
170
- assert str(x) == "{EMode.EFirstMode: 3, EMode.ESecondMode: 4}"
171
-
172
-
173
- def test_binary_operators():
174
- assert int(m.Flags.Read) == 4
175
- assert int(m.Flags.Write) == 2
176
- assert int(m.Flags.Execute) == 1
177
- assert int(m.Flags.Read | m.Flags.Write | m.Flags.Execute) == 7
178
- assert int(m.Flags.Read | m.Flags.Write) == 6
179
- assert int(m.Flags.Read | m.Flags.Execute) == 5
180
- assert int(m.Flags.Write | m.Flags.Execute) == 3
181
- assert int(m.Flags.Write | 1) == 3
182
- assert ~m.Flags.Write == -3
183
-
184
- state = m.Flags.Read | m.Flags.Write
185
- assert (state & m.Flags.Read) != 0
186
- assert (state & m.Flags.Write) != 0
187
- assert (state & m.Flags.Execute) == 0
188
- assert (state & 1) == 0
189
-
190
- state2 = ~state
191
- assert state2 == -7
192
- assert int(state ^ state2) == -1
193
-
194
-
195
- def test_enum_to_int():
196
- m.test_enum_to_int(m.Flags.Read)
197
- m.test_enum_to_int(m.ClassWithUnscopedEnum.EMode.EFirstMode)
198
- m.test_enum_to_uint(m.Flags.Read)
199
- m.test_enum_to_uint(m.ClassWithUnscopedEnum.EMode.EFirstMode)
200
- m.test_enum_to_long_long(m.Flags.Read)
201
- m.test_enum_to_long_long(m.ClassWithUnscopedEnum.EMode.EFirstMode)
202
-
203
-
204
- def test_duplicate_enum_name():
205
- with pytest.raises(ValueError) as excinfo:
206
- m.register_bad_enum()
207
- assert str(excinfo.value) == 'SimpleEnum: element "ONE" already exists!'
 
spaces/CVPR/LIVE/thrust/CODE_OF_CONDUCT.md DELETED
@@ -1,59 +0,0 @@
1
- # Contributor Covenant Code of Conduct
2
-
3
- ## Overview
4
-
5
- Defines the code of conduct followed and enforced for Thrust.
6
-
7
- ### Intended audience
8
-
9
- * Community
10
- * Developers
11
- * Project Leads
12
-
13
- ## Our Pledge
14
-
15
- In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
16
-
17
- ## Our Standards
18
-
19
- Examples of behavior that contributes to creating a positive environment include:
20
-
21
- - Using welcoming and inclusive language
22
- - Being respectful of differing viewpoints and experiences
23
- - Gracefully accepting constructive criticism
24
- - Focusing on what is best for the community
25
- - Showing empathy towards other community members
26
-
27
- Examples of unacceptable behavior by participants include:
28
-
29
- - The use of sexualized language or imagery and unwelcome sexual attention or advances
30
- - Trolling, insulting/derogatory comments, and personal or political attacks
31
- - Public or private harassment
32
- - Publishing others’ private information, such as a physical or electronic address, without explicit permission
33
- - Other conduct which could reasonably be considered inappropriate in a professional setting
34
-
35
- ## Our Responsibilities
36
-
37
- Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
38
-
39
- Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
40
-
41
- ## Scope
42
-
43
- This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
44
-
45
- ## Enforcement
46
-
47
- Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at [[email protected]](mailto:[email protected]) All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
48
-
49
- Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project’s leadership.
50
-
51
- ## Attribution
52
-
53
- This Code of Conduct was taken from the [NVIDIA RAPIDS](https://docs.rapids.ai/resources/conduct/) project, which was adapted from the [Contributor Covenant](https://www.contributor-covenant.org/), version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
54
-
55
- For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq
56
-
57
- ## Contact
58
-
59
- If you need to contact the Thrust team, please reach out to [email protected]
 
spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/async/copy.h DELETED
@@ -1,538 +0,0 @@
1
- /******************************************************************************
2
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
3
- *
4
- * Redistribution and use in source and binary forms, with or without
5
- * modification, are permitted provided that the following conditions are met:
6
- * * Redistributions of source code must retain the above copyright
7
- * notice, this list of conditions and the following disclaimer.
8
- * * Redistributions in binary form must reproduce the above copyright
9
- * notice, this list of conditions and the following disclaimer in the
10
- * documentation and/or other materials provided with the distribution.
11
- * * Neither the name of the NVIDIA CORPORATION nor the
12
- * names of its contributors may be used to endorse or promote products
13
- * derived from this software without specific prior written permission.
14
- *
15
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
16
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
17
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
18
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
19
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
20
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
21
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
22
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
23
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
24
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
25
- *
26
- ******************************************************************************/
27
-
28
- // TODO: Move into system::cuda
29
-
30
- #pragma once
31
-
32
- #include <thrust/detail/config.h>
33
- #include <thrust/detail/cpp14_required.h>
34
-
35
- #if THRUST_CPP_DIALECT >= 2014
36
-
37
- #if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
38
-
39
- #include <thrust/system/cuda/config.h>
40
-
41
- #include <thrust/system/cuda/detail/async/customization.h>
42
- #include <thrust/system/cuda/detail/async/transform.h>
43
- #include <thrust/system/cuda/detail/cross_system.h>
44
- #include <thrust/system/cuda/future.h>
45
- #include <thrust/iterator/iterator_traits.h>
46
- #include <thrust/type_traits/logical_metafunctions.h>
47
- #include <thrust/detail/static_assert.h>
48
- #include <thrust/type_traits/is_trivially_relocatable.h>
49
- #include <thrust/type_traits/is_contiguous_iterator.h>
50
- #include <thrust/distance.h>
51
- #include <thrust/advance.h>
52
- #include <thrust/uninitialized_copy.h>
53
-
54
- #include <type_traits>
55
-
56
- namespace thrust
57
- {
58
-
59
- namespace system { namespace cuda { namespace detail
60
- {
61
-
62
- // ContiguousIterator input and output iterators
63
- // TriviallyCopyable elements
64
- // Host to device, device to host, device to device
65
- template <
66
- typename FromPolicy, typename ToPolicy
67
- , typename ForwardIt, typename OutputIt, typename Size
68
- >
69
- auto async_copy_n(
70
- FromPolicy& from_exec
71
- , ToPolicy& to_exec
72
- , ForwardIt first
73
- , Size n
74
- , OutputIt output
75
- ) ->
76
- typename std::enable_if<
77
- is_indirectly_trivially_relocatable_to<ForwardIt, OutputIt>::value
78
- , unique_eager_event
79
- >::type
80
- {
81
- using T = typename iterator_traits<ForwardIt>::value_type;
82
-
83
- auto const device_alloc = get_async_device_allocator(
84
- select_device_system(from_exec, to_exec)
85
- );
86
-
87
- using pointer
88
- = typename thrust::detail::allocator_traits<decltype(device_alloc)>::
89
- template rebind_traits<void>::pointer;
90
-
91
- unique_eager_event e;
92
-
93
- // Set up stream with dependencies.
94
-
95
- cudaStream_t const user_raw_stream = thrust::cuda_cub::stream(
96
- select_device_system(from_exec, to_exec)
97
- );
98
-
99
- if (thrust::cuda_cub::default_stream() != user_raw_stream)
100
- {
101
- e = make_dependent_event(
102
- std::tuple_cat(
103
- std::make_tuple(
104
- unique_stream(nonowning, user_raw_stream)
105
- )
106
- , extract_dependencies(
107
- std::move(thrust::detail::derived_cast(from_exec))
108
- )
109
- , extract_dependencies(
110
- std::move(thrust::detail::derived_cast(to_exec))
111
- )
112
- )
113
- );
114
- }
115
- else
116
- {
117
- e = make_dependent_event(
118
- std::tuple_cat(
119
- extract_dependencies(
120
- std::move(thrust::detail::derived_cast(from_exec))
121
- )
122
- , extract_dependencies(
123
- std::move(thrust::detail::derived_cast(to_exec))
124
- )
125
- )
126
- );
127
- }
128
-
129
- // Run copy.
130
-
131
- thrust::cuda_cub::throw_on_error(
132
- cudaMemcpyAsync(
133
- thrust::raw_pointer_cast(&*output)
134
- , thrust::raw_pointer_cast(&*first)
135
- , sizeof(T) * n
136
- , direction_of_copy(from_exec, to_exec)
137
- , e.stream().native_handle()
138
- )
139
- , "after copy launch"
140
- );
141
-
142
- return e;
143
- }
144
-
145
- // Non-ContiguousIterator input or output, or non-TriviallyRelocatable value type
146
- // Device to device
147
- template <
148
- typename FromPolicy, typename ToPolicy
149
- , typename ForwardIt, typename OutputIt, typename Size
150
- >
151
- auto async_copy_n(
152
- thrust::cuda::execution_policy<FromPolicy>& from_exec
153
- , thrust::cuda::execution_policy<ToPolicy>& to_exec
154
- , ForwardIt first
155
- , Size n
156
- , OutputIt output
157
- ) ->
158
- typename std::enable_if<
159
- conjunction<
160
- negation<
161
- is_indirectly_trivially_relocatable_to<ForwardIt, OutputIt>
162
- >
163
- , decltype(is_device_to_device_copy(from_exec, to_exec))
164
- >::value
165
- , unique_eager_event
166
- >::type
167
- {
168
- using T = typename iterator_traits<ForwardIt>::value_type;
169
-
170
- return async_transform_n(
171
- select_device_system(from_exec, to_exec)
172
- , first, n, output, thrust::identity<T>()
173
- );
174
- }
175
-
176
- template <typename OutputIt>
177
- void async_copy_n_compile_failure_no_cuda_to_non_contiguous_output()
178
- {
179
- THRUST_STATIC_ASSERT_MSG(
180
- (negation<is_contiguous_iterator<OutputIt>>::value)
181
- , "copying to non-ContiguousIterators in another system from the CUDA system "
182
- "is not supported; use `THRUST_PROCLAIM_CONTIGUOUS_ITERATOR(Iterator)` to "
183
- "indicate that an iterator points to elements that are contiguous in memory."
184
- );
185
- }
186
-
187
- // Non-ContiguousIterator output iterator
188
- // TriviallyRelocatable value type
189
- // Device to host, host to device
190
- template <
191
- typename FromPolicy, typename ToPolicy
192
- , typename ForwardIt, typename OutputIt, typename Size
193
- >
194
- auto async_copy_n(
195
- FromPolicy& from_exec
196
- , ToPolicy& to_exec
197
- , ForwardIt first
198
- , Size n
199
- , OutputIt output
200
- ) ->
201
- typename std::enable_if<
202
- conjunction<
203
- negation<is_contiguous_iterator<OutputIt>>
204
- , is_trivially_relocatable_to<
205
- typename iterator_traits<ForwardIt>::value_type
206
- , typename iterator_traits<OutputIt>::value_type
207
- >
208
- , disjunction<
209
- decltype(is_host_to_device_copy(from_exec, to_exec))
210
- , decltype(is_device_to_host_copy(from_exec, to_exec))
211
- >
212
- >::value
213
- , unique_eager_event
214
- >::type
215
- {
216
- async_copy_n_compile_failure_no_cuda_to_non_contiguous_output<OutputIt>();
217
-
218
- return {};
219
- }
220
-
221
- // Workaround for MSVC's lack of expression SFINAE and also for an NVCC bug.
222
- // In NVCC, when two SFINAE-enabled overloads are only distinguishable by a
223
- // part of a SFINAE condition that is in a `decltype`, NVCC thinks they are the
224
- // same overload and emits an error.
225
- template <
226
- typename FromPolicy, typename ToPolicy
227
- , typename ForwardIt, typename OutputIt
228
- // MSVC2015 WAR: doesn't like decltype(...)::value in superclass definition
229
- , typename IsH2DCopy = decltype(is_host_to_device_copy(
230
- std::declval<FromPolicy const&>()
231
- , std::declval<ToPolicy const&>()))
232
- >
233
- struct is_buffered_trivially_relocatable_host_to_device_copy
234
- : thrust::integral_constant<
235
- bool
236
- , !is_contiguous_iterator<ForwardIt>::value
237
- && is_contiguous_iterator<OutputIt>::value
238
- && is_trivially_relocatable_to<
239
- typename iterator_traits<ForwardIt>::value_type
240
- , typename iterator_traits<OutputIt>::value_type
241
- >::value
242
- && IsH2DCopy::value
243
- >
244
- {};
245
-
246
- // Non-ContiguousIterator input iterator, ContiguousIterator output iterator
247
- // TriviallyRelocatable value type
248
- // Host to device
249
- template <
250
- typename FromPolicy, typename ToPolicy
251
- , typename ForwardIt, typename OutputIt, typename Size
252
- >
253
- auto async_copy_n(
254
- FromPolicy& from_exec
255
- , thrust::cuda::execution_policy<ToPolicy>& to_exec
256
- , ForwardIt first
257
- , Size n
258
- , OutputIt output
259
- ) ->
260
- typename std::enable_if<
261
- is_buffered_trivially_relocatable_host_to_device_copy<
262
- FromPolicy
263
- , thrust::cuda::execution_policy<ToPolicy>
264
- , ForwardIt, OutputIt
265
- >::value
266
- , unique_eager_event
267
- >::type
268
- {
269
- using T = typename iterator_traits<ForwardIt>::value_type;
270
-
271
- auto const host_alloc = get_async_host_allocator(
272
- from_exec
273
- );
274
-
275
- // Create host-side buffer.
276
-
277
- auto buffer = uninitialized_allocate_unique_n<T>(host_alloc, n);
278
-
279
- auto const buffer_ptr = buffer.get();
280
-
281
- // Copy into host-side buffer.
282
-
283
- // TODO: Switch to an async call once we have async interfaces for host
284
- // systems and support for cross system dependencies.
285
- uninitialized_copy_n(from_exec, first, n, buffer_ptr);
286
-
287
- // Run device-side copy.
288
-
289
- auto new_to_exec = thrust::detail::derived_cast(to_exec).rebind_after(
290
- std::tuple_cat(
291
- std::make_tuple(
292
- std::move(buffer)
293
- )
294
- , extract_dependencies(
295
- std::move(thrust::detail::derived_cast(from_exec))
296
- )
297
- , extract_dependencies(
298
- std::move(thrust::detail::derived_cast(to_exec))
299
- )
300
- )
301
- );
302
-
303
- THRUST_STATIC_ASSERT((
304
- std::tuple_size<decltype(
305
- extract_dependencies(to_exec)
306
- )>::value + 1
307
- <=
308
- std::tuple_size<decltype(
309
- extract_dependencies(new_to_exec)
310
- )>::value
311
- ));
312
-
313
- return async_copy_n(
314
- from_exec
315
- // TODO: We have to cast back to the right execution_policy class. Ideally,
316
- // we should be moving here.
317
- , new_to_exec
318
- , buffer_ptr
319
- , n
320
- , output
321
- );
322
- }
323
-
324
- // Workaround for MSVC's lack of expression SFINAE and also for an NVCC bug.
325
- // In NVCC, when two SFINAE-enabled overloads are only distinguishable by a
326
- // part of a SFINAE condition that is in a `decltype`, NVCC thinks they are the
327
- // same overload and emits an error.
328
- template <
329
- typename FromPolicy, typename ToPolicy
330
- , typename ForwardIt, typename OutputIt
331
- // MSVC2015 WAR: doesn't like decltype(...)::value in superclass definition
332
- , typename IsD2HCopy = decltype(is_device_to_host_copy(
333
- std::declval<FromPolicy const&>()
334
- , std::declval<ToPolicy const&>()))
335
- >
336
- struct is_buffered_trivially_relocatable_device_to_host_copy
337
- : thrust::integral_constant<
338
- bool
339
- , !is_contiguous_iterator<ForwardIt>::value
340
- && is_contiguous_iterator<OutputIt>::value
341
- && is_trivially_relocatable_to<
342
- typename iterator_traits<ForwardIt>::value_type
343
- , typename iterator_traits<OutputIt>::value_type
344
- >::value
345
- && IsD2HCopy::value
346
- >
347
- {};
348
-
349
- // Non-ContiguousIterator input iterator, ContiguousIterator output iterator
350
- // TriviallyRelocatable value type
351
- // Device to host
352
- template <
353
- typename FromPolicy, typename ToPolicy
354
- , typename ForwardIt, typename OutputIt, typename Size
355
- >
356
- auto async_copy_n(
357
- thrust::cuda::execution_policy<FromPolicy>& from_exec
358
- , ToPolicy& to_exec
359
- , ForwardIt first
360
- , Size n
361
- , OutputIt output
362
- ) ->
363
- typename std::enable_if<
364
- is_buffered_trivially_relocatable_device_to_host_copy<
365
- thrust::cuda::execution_policy<FromPolicy>
366
- , ToPolicy
367
- , ForwardIt, OutputIt
368
- >::value
369
- , unique_eager_event
370
- >::type
371
- {
372
- using T = typename iterator_traits<ForwardIt>::value_type;
373
-
374
- auto const device_alloc = get_async_device_allocator(
375
- from_exec
376
- );
377
-
378
- // Create device-side buffer.
379
-
380
- auto buffer = uninitialized_allocate_unique_n<T>(device_alloc, n);
381
-
382
- auto const buffer_ptr = buffer.get();
383
-
384
- // Run device-side copy.
385
-
386
- auto f0 = async_copy_n(
387
- from_exec
388
- , from_exec
389
- , first
390
- , n
391
- , buffer_ptr
392
- );
393
-
394
- // Run copy back to host.
395
-
396
- auto new_from_exec = thrust::detail::derived_cast(from_exec).rebind_after(
397
- std::move(buffer)
398
- , std::move(f0)
399
- );
400
-
401
- THRUST_STATIC_ASSERT((
402
- std::tuple_size<decltype(
403
- extract_dependencies(from_exec)
404
- )>::value + 1
405
- <=
406
- std::tuple_size<decltype(
407
- extract_dependencies(new_from_exec)
408
- )>::value
409
- ));
410
-
411
- return async_copy_n(
412
- new_from_exec
413
- , to_exec
414
- , buffer_ptr
415
- , n
416
- , output
417
- );
418
- }
419
-
420
- template <typename InputType, typename OutputType>
421
- void async_copy_n_compile_failure_non_trivially_relocatable_elements()
422
- {
423
- THRUST_STATIC_ASSERT_MSG(
424
- (is_trivially_relocatable_to<OutputType, InputType>::value)
425
- , "only sequences of TriviallyRelocatable elements can be copied to and from "
426
- "the CUDA system; use `THRUST_PROCLAIM_TRIVIALLY_RELOCATABLE(T)` to "
427
- "indicate that a type can be copied by bitwise (e.g. by `memcpy`)"
428
- );
429
- }
430
-
431
- // Non-TriviallyRelocatable value type
432
- // Host to device, device to host
433
- template <
434
- typename FromPolicy, typename ToPolicy
435
- , typename ForwardIt, typename OutputIt, typename Size
436
- >
437
- auto async_copy_n(
438
- FromPolicy& from_exec
439
- , ToPolicy& to_exec
440
- , ForwardIt first
441
- , Size n
442
- , OutputIt output
443
- ) ->
444
- typename std::enable_if<
445
- conjunction<
446
- negation<
447
- is_trivially_relocatable_to<
448
- typename iterator_traits<ForwardIt>::value_type
449
- , typename iterator_traits<OutputIt>::value_type
450
- >
451
- >
452
- , disjunction<
453
- decltype(is_host_to_device_copy(from_exec, to_exec))
454
- , decltype(is_device_to_host_copy(from_exec, to_exec))
455
- >
456
- >::value
457
- , unique_eager_event
458
- >::type
459
- {
460
- // TODO: We could do more here with cudaHostRegister.
461
-
462
- async_copy_n_compile_failure_non_trivially_relocatable_elements<
463
- typename thrust::iterator_traits<ForwardIt>::value_type
464
- , typename std::add_lvalue_reference<
465
- typename thrust::iterator_traits<OutputIt>::value_type
466
- >::type
467
- >();
468
-
469
- return {};
470
- }
471
-
472
- }}} // namespace system::cuda::detail
473
-
474
- namespace cuda_cub
475
- {
476
-
477
- // ADL entry point.
478
- template <
479
- typename FromPolicy, typename ToPolicy
480
- , typename ForwardIt, typename Sentinel, typename OutputIt
481
- >
482
- auto async_copy(
483
- thrust::cuda::execution_policy<FromPolicy>& from_exec
484
- , thrust::cpp::execution_policy<ToPolicy>& to_exec
485
- , ForwardIt first
486
- , Sentinel last
487
- , OutputIt output
488
- )
489
- THRUST_RETURNS(
490
- thrust::system::cuda::detail::async_copy_n(
491
- from_exec, to_exec, first, distance(first, last), output
492
- )
493
- )
494
-
495
- // ADL entry point.
496
- template <
497
- typename FromPolicy, typename ToPolicy
498
- , typename ForwardIt, typename Sentinel, typename OutputIt
499
- >
500
- auto async_copy(
501
- thrust::cpp::execution_policy<FromPolicy>& from_exec
502
- , thrust::cuda::execution_policy<ToPolicy>& to_exec
503
- , ForwardIt first
504
- , Sentinel last
505
- , OutputIt output
506
- )
507
- THRUST_RETURNS(
508
- thrust::system::cuda::detail::async_copy_n(
509
- from_exec, to_exec, first, distance(first, last), output
510
- )
511
- )
512
-
513
- // ADL entry point.
514
- template <
515
- typename FromPolicy, typename ToPolicy
516
- , typename ForwardIt, typename Sentinel, typename OutputIt
517
- >
518
- auto async_copy(
519
- thrust::cuda::execution_policy<FromPolicy>& from_exec
520
- , thrust::cuda::execution_policy<ToPolicy>& to_exec
521
- , ForwardIt first
522
- , Sentinel last
523
- , OutputIt output
524
- )
525
- THRUST_RETURNS(
526
- thrust::system::cuda::detail::async_copy_n(
527
- from_exec, to_exec, first, distance(first, last), output
528
- )
529
- )
530
-
531
- } // cuda_cub
532
-
533
- } // end namespace thrust
534
-
535
- #endif // THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC
536
-
537
- #endif
538
-
 
spaces/CVPR/WALT/mmdet/datasets/pipelines/instaboost.py DELETED
@@ -1,98 +0,0 @@
1
- import numpy as np
2
-
3
- from ..builder import PIPELINES
4
-
5
-
6
- @PIPELINES.register_module()
7
- class InstaBoost(object):
8
- r"""Data augmentation method in `InstaBoost: Boosting Instance
9
- Segmentation Via Probability Map Guided Copy-Pasting
10
- <https://arxiv.org/abs/1908.07801>`_.
11
-
12
- Refer to https://github.com/GothicAi/Instaboost for implementation details.
13
- """
14
-
15
- def __init__(self,
16
- action_candidate=('normal', 'horizontal', 'skip'),
17
- action_prob=(1, 0, 0),
18
- scale=(0.8, 1.2),
19
- dx=15,
20
- dy=15,
21
- theta=(-1, 1),
22
- color_prob=0.5,
23
- hflag=False,
24
- aug_ratio=0.5):
25
- try:
26
- import instaboostfast as instaboost
27
- except ImportError:
28
- raise ImportError(
29
- 'Please run "pip install instaboostfast" '
30
- 'to install instaboostfast first for instaboost augmentation.')
31
- self.cfg = instaboost.InstaBoostConfig(action_candidate, action_prob,
32
- scale, dx, dy, theta,
33
- color_prob, hflag)
34
- self.aug_ratio = aug_ratio
35
-
36
- def _load_anns(self, results):
37
- labels = results['ann_info']['labels']
38
- masks = results['ann_info']['masks']
39
- bboxes = results['ann_info']['bboxes']
40
- n = len(labels)
41
-
42
- anns = []
43
- for i in range(n):
44
- label = labels[i]
45
- bbox = bboxes[i]
46
- mask = masks[i]
47
- x1, y1, x2, y2 = bbox
48
- # assert (x2 - x1) >= 1 and (y2 - y1) >= 1
49
- bbox = [x1, y1, x2 - x1, y2 - y1]
50
- anns.append({
51
- 'category_id': label,
52
- 'segmentation': mask,
53
- 'bbox': bbox
54
- })
55
-
56
- return anns
57
-
58
- def _parse_anns(self, results, anns, img):
59
- gt_bboxes = []
60
- gt_labels = []
61
- gt_masks_ann = []
62
- for ann in anns:
63
- x1, y1, w, h = ann['bbox']
64
- # TODO: a more essential bug needs to be fixed in instaboost
65
- if w <= 0 or h <= 0:
66
- continue
67
- bbox = [x1, y1, x1 + w, y1 + h]
68
- gt_bboxes.append(bbox)
69
- gt_labels.append(ann['category_id'])
70
- gt_masks_ann.append(ann['segmentation'])
71
- gt_bboxes = np.array(gt_bboxes, dtype=np.float32)
72
- gt_labels = np.array(gt_labels, dtype=np.int64)
73
- results['ann_info']['labels'] = gt_labels
74
- results['ann_info']['bboxes'] = gt_bboxes
75
- results['ann_info']['masks'] = gt_masks_ann
76
- results['img'] = img
77
- return results
78
-
79
- def __call__(self, results):
80
- img = results['img']
81
- orig_type = img.dtype
82
- anns = self._load_anns(results)
83
- if np.random.choice([0, 1], p=[1 - self.aug_ratio, self.aug_ratio]):
84
- try:
85
- import instaboostfast as instaboost
86
- except ImportError:
87
- raise ImportError('Please run "pip install instaboostfast" '
88
- 'to install instaboostfast first.')
89
- anns, img = instaboost.get_new_data(
90
- anns, img.astype(np.uint8), self.cfg, background=None)
91
-
92
- results = self._parse_anns(results, anns, img.astype(orig_type))
93
- return results
94
-
95
- def __repr__(self):
96
- repr_str = self.__class__.__name__
97
- repr_str += f'(cfg={self.cfg}, aug_ratio={self.aug_ratio})'
98
- return repr_str
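For context, a hedged sketch of how a transform registered like the one above is usually placed in an mmdet-style training pipeline config; the surrounding transforms and values are illustrative, taken from the constructor defaults rather than from any specific config in this repo:

```python
# Illustrative pipeline entry; values mirror the defaults of InstaBoost.__init__ above.
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='InstaBoost',
         action_candidate=('normal', 'horizontal', 'skip'),
         action_prob=(1, 0, 0),
         scale=(0.8, 1.2),
         dx=15, dy=15,
         theta=(-1, 1),
         color_prob=0.5,
         hflag=False,
         aug_ratio=0.5),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
]
```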
 
spaces/CVPR/v-doc_abstractive_mac/main.py DELETED
@@ -1,653 +0,0 @@
1
- from __future__ import division
2
- import warnings
3
-
4
- from extract_feature import build_model, run_image, get_img_feat
5
-
6
- # warnings.filterwarnings("ignore", category=FutureWarning)
7
- # warnings.filterwarnings("ignore", message="size changed")
8
- warnings.filterwarnings("ignore")
9
-
10
- import sys
11
- import os
12
- import time
13
- import math
14
- import random
15
-
16
- try:
17
- import Queue as queue
18
- except ImportError:
19
- import queue
20
- import threading
21
- import h5py
22
- import json
23
- import numpy as np
24
- import tensorflow as tf
25
- from termcolor import colored, cprint
26
-
27
- from config import config, loadDatasetConfig, parseArgs
28
- from preprocess import Preprocesser, bold, bcolored, writeline, writelist
29
- from model import MACnet
30
- from collections import defaultdict
31
-
32
-
33
- ############################################# loggers #############################################
34
-
35
- # Writes log header to file
36
- def logInit():
37
- with open(config.logFile(), "a+") as outFile:
38
- writeline(outFile, config.expName)
39
- headers = ["epoch", "trainAcc", "valAcc", "trainLoss", "valLoss"]
40
- if config.evalTrain:
41
- headers += ["evalTrainAcc", "evalTrainLoss"]
42
- if config.extra:
43
- if config.evalTrain:
44
- headers += ["thAcc", "thLoss"]
45
- headers += ["vhAcc", "vhLoss"]
46
- headers += ["time", "lr"]
47
-
48
- writelist(outFile, headers)
49
- # lr assumed to be last
50
-
51
-
52
- # Writes log record to file
53
- def logRecord(epoch, epochTime, lr, trainRes, evalRes, extraEvalRes):
54
- with open(config.logFile(), "a+") as outFile:
55
- record = [epoch, trainRes["acc"], evalRes["val"]["acc"], trainRes["loss"], evalRes["val"]["loss"]]
56
- if config.evalTrain:
57
- record += [evalRes["evalTrain"]["acc"], evalRes["evalTrain"]["loss"]]
58
- if config.extra:
59
- if config.evalTrain:
60
- record += [extraEvalRes["evalTrain"]["acc"], extraEvalRes["evalTrain"]["loss"]]
61
- record += [extraEvalRes["val"]["acc"], extraEvalRes["val"]["loss"]]
62
- record += [epochTime, lr]
63
-
64
- writelist(outFile, record)
65
-
66
-
67
- # Gets last logged epoch and learning rate
68
- def lastLoggedEpoch():
69
- with open(config.logFile(), "r") as inFile:
70
- lastLine = list(inFile)[-1].split(",")
71
- epoch = int(lastLine[0])
72
- lr = float(lastLine[-1])
73
- return epoch, lr
74
-
75
-
76
- ################################## printing, output and analysis ##################################
77
-
78
- # Analysis by type
79
- analysisQuestionLims = [(0, 18), (19, float("inf"))]
80
- analysisProgramLims = [(0, 12), (13, float("inf"))]
81
-
82
- toArity = lambda instance: instance["programSeq"][-1].split("_", 1)[0]
83
- toType = lambda instance: instance["programSeq"][-1].split("_", 1)[1]
84
-
85
-
86
- def fieldLenIsInRange(field):
87
- return lambda instance, group: \
88
- (len(instance[field]) >= group[0] and
89
- len(instance[field]) <= group[1])
90
-
91
-
92
- # Groups instances based on a key
93
- def grouperKey(toKey):
94
- def grouper(instances):
95
- res = defaultdict(list)
96
- for instance in instances:
97
- res[toKey(instance)].append(instance)
98
- return res
99
-
100
- return grouper
101
-
102
-
103
- # Groups instances according to their match to condition
104
- def grouperCond(groups, isIn):
105
- def grouper(instances):
106
- res = {}
107
- for group in groups:
108
- res[group] = (instance for instance in instances if isIn(instance, group))
109
- return res
110
-
111
- return grouper
112
-
113
-
114
- groupers = {
115
- "questionLength": grouperCond(analysisQuestionLims, fieldLenIsInRange("questionSeq")),
116
- "programLength": grouperCond(analysisProgramLims, fieldLenIsInRange("programSeq")),
117
- "arity": grouperKey(toArity),
118
- "type": grouperKey(toType)
119
- }
120
-
121
-
122
- # Computes average
123
- def avg(instances, field):
124
- if len(instances) == 0:
125
- return 0.0
126
- return sum(instance[field] for instance in instances) / len(instances)
127
-
128
-
129
- # Prints analysis of questions loss and accuracy by their group
130
- def printAnalysis(res):
131
- if config.analysisType != "":
132
- print("Analysis by {type}".format(type=config.analysisType))
133
- groups = groupers[config.analysisType](res["preds"])
134
- for key in groups:
135
- instances = groups[key]
136
- avgLoss = avg(instances, "loss")
137
- avgAcc = avg(instances, "acc")
138
- num = len(instances)
139
- print("Group {key}: Loss: {loss}, Acc: {acc}, Num: {num}".format(key, avgLoss, avgAcc, num))
140
-
141
-
142
- # Print results for a tier
143
- def printTierResults(tierName, res, color):
144
- if res is None:
145
- return
146
-
147
- print("{tierName} Loss: {loss}, {tierName} accuracy: {acc}".format(tierName=tierName,
148
- loss=bcolored(res["loss"], color),
149
- acc=bcolored(res["acc"], color)))
150
-
151
- printAnalysis(res)
152
-
153
-
154
- # Prints dataset results (for several tiers)
155
- def printDatasetResults(trainRes, evalRes):
156
- printTierResults("Training", trainRes, "magenta")
157
- printTierResults("Training EMA", evalRes["evalTrain"], "red")
158
- printTierResults("Validation", evalRes["val"], "cyan")
159
-
160
-
161
- # Writes predictions for several tiers
162
- def writePreds(preprocessor, evalRes):
163
- preprocessor.writePreds(evalRes, "_")
164
-
165
-
166
- ############################################# session #############################################
167
- # Initializes TF session. Sets GPU memory configuration.
168
- def setSession():
169
- sessionConfig = tf.ConfigProto(allow_soft_placement=True, log_device_placement=False)
170
- if config.allowGrowth:
171
- sessionConfig.gpu_options.allow_growth = True
172
- if config.maxMemory < 1.0:
173
- sessionConfig.gpu_options.per_process_gpu_memory_fraction = config.maxMemory
174
- return sessionConfig
175
-
176
-
177
- ############################################## savers #############################################
178
- # Initializes savers (standard, optional exponential-moving-average and optional for subset of variables)
179
- def setSavers(model):
180
- saver = tf.train.Saver(max_to_keep=config.weightsToKeep)
181
-
182
- subsetSaver = None
183
- if config.saveSubset:
184
- isRelevant = lambda var: any(s in var.name for s in config.varSubset)
185
- relevantVars = [var for var in tf.global_variables() if isRelevant(var)]
186
- subsetSaver = tf.train.Saver(relevantVars, max_to_keep=config.weightsToKeep, allow_empty=True)
187
-
188
- emaSaver = None
189
- if config.useEMA:
190
- emaSaver = tf.train.Saver(model.emaDict, max_to_keep=config.weightsToKeep)
191
-
192
- return {
193
- "saver": saver,
194
- "subsetSaver": subsetSaver,
195
- "emaSaver": emaSaver
196
- }
197
-
198
-
199
- ################################### restore / initialize weights ##################################
200
- # Restores weights of specified / last epoch if on restore mod.
201
- # Otherwise, initializes weights.
202
- def loadWeights(sess, saver, init):
203
- if config.restoreEpoch > 0 or config.restore:
204
- # restore last epoch only if restoreEpoch isn't set
205
- if config.restoreEpoch == 0:
206
- # restore last logged epoch
207
- config.restoreEpoch, config.lr = lastLoggedEpoch()
208
- print(bcolored("Restoring epoch {} and lr {}".format(config.restoreEpoch, config.lr), "cyan"))
209
- print(bcolored("Restoring weights", "blue"))
210
- print(config.weightsFile(config.restoreEpoch))
211
- saver.restore(sess, config.weightsFile(config.restoreEpoch))
212
- epoch = config.restoreEpoch
213
- else:
214
- print(bcolored("Initializing weights", "blue"))
215
- sess.run(init)
216
- logInit()
217
- epoch = 0
218
-
219
- return epoch
220
-
221
-
222
- ###################################### training / evaluation ######################################
223
- # Chooses data to train on (main / extra) data.
224
- def chooseTrainingData(data):
225
- trainingData = data["main"]["train"]
226
- alterData = None
227
-
228
- if config.extra:
229
- if config.trainExtra:
230
- if config.extraVal:
231
- trainingData = data["extra"]["val"]
232
- else:
233
- trainingData = data["extra"]["train"]
234
- if config.alterExtra:
235
- alterData = data["extra"]["train"]
236
-
237
- return trainingData, alterData
238
-
239
-
240
- #### evaluation
241
- # Runs evaluation on train / val / test datasets.
242
- def runEvaluation(sess, model, data, epoch, evalTrain=True, evalTest=False, getAtt=None):
243
- if getAtt is None:
244
- getAtt = config.getAtt
245
- res = {"evalTrain": None, "val": None, "test": None}
246
-
247
- if data is not None:
248
- if evalTrain and config.evalTrain:
249
- res["evalTrain"] = runEpoch(sess, model, data["evalTrain"], train=False, epoch=epoch, getAtt=getAtt)
250
-
251
- res["val"] = runEpoch(sess, model, data["val"], train=False, epoch=epoch, getAtt=getAtt)
252
-
253
- if evalTest or config.test:
254
- res["test"] = runEpoch(sess, model, data["test"], train=False, epoch=epoch, getAtt=getAtt)
255
-
256
- return res
257
-
258
-
259
- ## training conditions (comparing current epoch result to prior ones)
260
- def improveEnough(curr, prior, lr):
261
- prevRes = prior["prev"]["res"]
262
- currRes = curr["res"]
263
-
264
- if prevRes is None:
265
- return True
266
-
267
- prevTrainLoss = prevRes["train"]["loss"]
268
- currTrainLoss = currRes["train"]["loss"]
269
- lossDiff = prevTrainLoss - currTrainLoss
270
-
271
- notImprove = ((lossDiff < 0.015 and prevTrainLoss < 0.5 and lr > 0.00002) or \
272
- (lossDiff < 0.008 and prevTrainLoss < 0.15 and lr > 0.00001) or \
273
- (lossDiff < 0.003 and prevTrainLoss < 0.10 and lr > 0.000005))
274
- # (prevTrainLoss < 0.2 and config.lr > 0.000015)
275
-
276
- return not notImprove
277
-
278
-
279
- def better(currRes, bestRes):
280
- return currRes["val"]["acc"] > bestRes["val"]["acc"]
281
-
282
-
283
- ############################################## data ###############################################
284
- #### instances and batching
285
- # Trims sequences based on their max length.
286
- def trim2DVectors(vectors, vectorsLengths):
287
- maxLength = np.max(vectorsLengths)
288
- return vectors[:, :maxLength]
289
-
290
-
291
- # Trims batch based on question length.
292
- def trimData(data):
293
- data["questions"] = trim2DVectors(data["questions"], data["questionLengths"])
294
- return data
295
-
296
-
297
- # Gets batch / bucket size.
298
- def getLength(data):
299
- return len(data["instances"])
300
-
301
-
302
- # Selects the data entries that match the indices.
303
- def selectIndices(data, indices):
304
- def select(field, indices):
305
- if type(field) is np.ndarray:
306
- return field[indices]
307
- if type(field) is list:
308
- return [field[i] for i in indices]
309
- else:
310
- return field
311
-
312
- selected = {k: select(d, indices) for k, d in data.items()}
313
- return selected
314
-
315
-
316
- # Batches data into a list of batches of batchSize.
317
- # Shuffles the data by default.
318
- def getBatches(data, batchSize=None, shuffle=True):
319
- batches = []
320
-
321
- dataLen = getLength(data)
322
- if batchSize is None or batchSize > dataLen:
323
- batchSize = dataLen
324
-
325
- indices = np.arange(dataLen)
326
- if shuffle:
327
- np.random.shuffle(indices)
328
-
329
- for batchStart in range(0, dataLen, batchSize):
330
- batchIndices = indices[batchStart: batchStart + batchSize]
331
- # if len(batchIndices) == batchSize?
332
- if len(batchIndices) >= config.gpusNum:
333
- batch = selectIndices(data, batchIndices)
334
- batches.append(batch)
335
- # batchesIndices.append((data, batchIndices))
336
-
337
- return batches
338
-
339
-
340
- #### image batches
341
- # Opens image files.
342
- def openImageFiles(images):
343
- images["imagesFile"] = h5py.File(images["imagesFilename"], "r")
344
- images["imagesIds"] = None
345
- if config.dataset == "NLVR":
346
- with open(images["imageIdsFilename"], "r") as imageIdsFile:
347
- images["imagesIds"] = json.load(imageIdsFile)
348
-
349
- # Closes image files.
350
-
351
-
352
- def closeImageFiles(images):
353
- images["imagesFile"].close()
354
-
355
-
356
- # Loads images from file for a given data batch.
357
- def loadImageBatch(images, batch):
358
- imagesFile = images["imagesFile"]
359
- id2idx = images["imagesIds"]
360
- toIndex = lambda imageId: imageId
361
- if id2idx is not None:
362
- toIndex = lambda imageId: id2idx[imageId]
363
- imageBatch = np.stack([imagesFile["features"][toIndex(imageId)] for imageId in batch["imageIds"]], axis=0)
364
-
365
- return {"images": imageBatch, "imageIds": batch["imageIds"]}
366
-
367
-
368
- # Loads images for num batches from the batches list, starting at index start.
369
- def loadImageBatches(images, batches, start, num):
370
- batches = batches[start: start + num]
371
- return [loadImageBatch(images, batch) for batch in batches]
372
-
373
-
374
- #### data alternation
375
- # Alternates main training batches with extra data.
376
- def alternateData(batches, alterData, dataLen):
377
- alterData = alterData["data"][0] # data isn't bucketed for altered data
378
-
379
- # computes number of repetitions
380
- needed = math.ceil(len(batches) / config.alterNum)
381
- print(bold("Extra batches needed: %d") % needed)
382
- perData = math.ceil(getLength(alterData) / config.batchSize)
383
- print(bold("Batches per extra data: %d") % perData)
384
- repetitions = math.ceil(needed / perData)
385
- print(bold("reps: %d") % repetitions)
386
-
387
- # make alternate batches
388
- alterBatches = []
389
- for _ in range(repetitions):
390
- repBatches = getBatches(alterData, batchSize=config.batchSize)
391
- random.shuffle(repBatches)
392
- alterBatches += repBatches
393
- print(bold("Batches num: %d") % len(alterBatches))
394
-
395
- # alternate data with extra data
396
- curr = len(batches) - 1
397
- for alterBatch in alterBatches:
398
- if curr < 0:
399
- # print(colored("too many" + str(curr) + " " + str(len(batches)),"red"))
400
- break
401
- batches.insert(curr, alterBatch)
402
- dataLen += getLength(alterBatch)
403
- curr -= config.alterNum
404
-
405
- return batches, dataLen
406
-
407
-
408
- ############################################ threading ############################################
409
-
410
- imagesQueue = queue.Queue(maxsize=20) # config.tasksNum
411
- inQueue = queue.Queue(maxsize=1)
412
- outQueue = queue.Queue(maxsize=1)
413
-
414
-
415
- # Runs worker thread(s) to load images while training.
416
- class StoppableThread(threading.Thread):
417
- # Thread class with a stop() method. The thread itself has to check
418
- # regularly for the stopped() condition.
419
-
420
- def __init__(self, images, batches): # i
421
- super(StoppableThread, self).__init__()
422
- # self.i = i
423
- self.images = images
424
- self.batches = batches
425
- self._stop_event = threading.Event()
426
-
427
- # def __init__(self, args):
428
- # super(StoppableThread, self).__init__(args = args)
429
- # self._stop_event = threading.Event()
430
-
431
- # def __init__(self, target, args):
432
- # super(StoppableThread, self).__init__(target = target, args = args)
433
- # self._stop_event = threading.Event()
434
-
435
- def stop(self):
436
- self._stop_event.set()
437
-
438
- def stopped(self):
439
- return self._stop_event.is_set()
440
-
441
- def run(self):
442
- while not self.stopped():
443
- try:
444
- batchNum = inQueue.get(timeout=60)
445
- nextItem = loadImageBatches(self.images, self.batches, batchNum, int(config.taskSize / 2))
446
- outQueue.put(nextItem)
447
- # inQueue.task_done()
448
- except:
449
- pass
450
- # print("worker %d done", self.i)
451
-
452
-
453
- def loaderRun(images, batches):
454
- batchNum = 0
455
-
456
- # if config.workers == 2:
457
- # worker = StoppableThread(images, batches) # i,
458
- # worker.daemon = True
459
- # worker.start()
460
-
461
- # while batchNum < len(batches):
462
- # inQueue.put(batchNum + int(config.taskSize / 2))
463
- # nextItem1 = loadImageBatches(images, batches, batchNum, int(config.taskSize / 2))
464
- # nextItem2 = outQueue.get()
465
-
466
- # nextItem = nextItem1 + nextItem2
467
- # assert len(nextItem) == min(config.taskSize, len(batches) - batchNum)
468
- # batchNum += config.taskSize
469
-
470
- # imagesQueue.put(nextItem)
471
-
472
- # worker.stop()
473
- # else:
474
- while batchNum < len(batches):
475
- nextItem = loadImageBatches(images, batches, batchNum, config.taskSize)
476
- assert len(nextItem) == min(config.taskSize, len(batches) - batchNum)
477
- batchNum += config.taskSize
478
- imagesQueue.put(nextItem)
479
-
480
- # print("manager loader done")
481
-
482
-
483
- ########################################## stats tracking #########################################
484
- # Computes exponential moving average.
485
- def emaAvg(avg, value):
486
- if avg is None:
487
- return value
488
- emaRate = 0.98
489
- return avg * emaRate + value * (1 - emaRate)
490
-
491
-
492
- # Initializes training statistics.
493
- def initStats():
494
- return {
495
- "totalBatches": 0,
496
- "totalData": 0,
497
- "totalLoss": 0.0,
498
- "totalCorrect": 0,
499
- "loss": 0.0,
500
- "acc": 0.0,
501
- "emaLoss": None,
502
- "emaAcc": None,
503
- }
504
-
505
-
506
- # Updates statistics with training results of a batch
507
- def updateStats(stats, res, batch):
508
- stats["totalBatches"] += 1
509
- stats["totalData"] += getLength(batch)
510
-
511
- stats["totalLoss"] += res["loss"]
512
- stats["totalCorrect"] += res["correctNum"]
513
-
514
- stats["loss"] = stats["totalLoss"] / stats["totalBatches"]
515
- stats["acc"] = stats["totalCorrect"] / stats["totalData"]
516
-
517
- stats["emaLoss"] = emaAvg(stats["emaLoss"], res["loss"])
518
- stats["emaAcc"] = emaAvg(stats["emaAcc"], res["acc"])
519
-
520
- return stats
521
-
522
-
523
- # auto-encoder ae = {:2.4f} autoEncLoss,
524
- # Translates training statistics into a string to print
525
- def statsToStr(stats, res, epoch, batchNum, dataLen, startTime):
526
- formatStr = "\reb {epoch},{batchNum} ({dataProcessed} / {dataLen:5d}), " + \
527
- "t = {time} ({loadTime:2.2f}+{trainTime:2.2f}), " + \
528
- "lr {lr}, l = {loss}, a = {acc}, avL = {avgLoss}, " + \
529
- "avA = {avgAcc}, g = {gradNorm:2.4f}, " + \
530
- "emL = {emaLoss:2.4f}, emA = {emaAcc:2.4f}; " + \
531
- "{expname}" # {machine}/{gpu}"
532
-
533
- s_epoch = bcolored("{:2d}".format(epoch), "green")
534
- s_batchNum = "{:3d}".format(batchNum)
535
- s_dataProcessed = bcolored("{:5d}".format(stats["totalData"]), "green")
536
- s_dataLen = dataLen
537
- s_time = bcolored("{:2.2f}".format(time.time() - startTime), "green")
538
- s_loadTime = res["readTime"]
539
- s_trainTime = res["trainTime"]
540
- s_lr = bold(config.lr)
541
- s_loss = bcolored("{:2.4f}".format(res["loss"]), "blue")
542
- s_acc = bcolored("{:2.4f}".format(res["acc"]), "blue")
543
- s_avgLoss = bcolored("{:2.4f}".format(stats["loss"]), "blue")
544
- s_avgAcc = bcolored("{:2.4f}".format(stats["acc"]), "red")
545
- s_gradNorm = res["gradNorm"]
546
- s_emaLoss = stats["emaLoss"]
547
- s_emaAcc = stats["emaAcc"]
548
- s_expname = config.expName
549
- # s_machine = bcolored(config.dataPath[9:11],"green")
550
- # s_gpu = bcolored(config.gpus,"green")
551
-
552
- return formatStr.format(epoch=s_epoch, batchNum=s_batchNum, dataProcessed=s_dataProcessed,
553
- dataLen=s_dataLen, time=s_time, loadTime=s_loadTime,
554
- trainTime=s_trainTime, lr=s_lr, loss=s_loss, acc=s_acc,
555
- avgLoss=s_avgLoss, avgAcc=s_avgAcc, gradNorm=s_gradNorm,
556
- emaLoss=s_emaLoss, emaAcc=s_emaAcc, expname=s_expname)
557
- # machine = s_machine, gpu = s_gpu)
558
-
559
-
560
- # collectRuntimeStats, writer = None,
561
- '''
562
- Runs an epoch with model and session over the data.
563
- 1. Batches the data and optionally mixes it with the extra alterData.
564
- 2. Starts worker threads to load images in parallel with training.
565
- 3. Runs model for each batch, and gets results (e.g. loss, accuracy).
566
- 4. Updates and prints statistics based on batch results.
567
- 5. Once in a while (every config.saveEvery), saves the weights.
568
-
569
- Args:
570
- sess: TF session to run with.
571
-
572
- model: model to process data. Has a runBatch method that processes a given batch.
573
- (See model.py for further details).
574
-
575
- data: data to use for training/evaluation.
576
-
577
- epoch: epoch number.
578
-
579
- saver: TF saver to save weights
580
-
581
- calle: a method to call once every config.calleEvery iterations
582
-
583
- alterData: extra data to mix with main data while training.
584
-
585
- getAtt: True to return model attentions.
586
- '''
587
-
588
-
589
- def main(question, image):
590
- with open(config.configFile(), "a+") as outFile:
591
- json.dump(vars(config), outFile)
592
-
593
- # set gpus
594
- if config.gpus != "":
595
- config.gpusNum = len(config.gpus.split(","))
596
- os.environ["CUDA_VISIBLE_DEVICES"] = config.gpus
597
-
598
- tf.logging.set_verbosity(tf.logging.ERROR)
599
-
600
- # process data
601
- print(bold("Preprocess data..."))
602
- start = time.time()
603
- preprocessor = Preprocesser()
604
- cnn_model = build_model()
605
- imageData = get_img_feat(cnn_model, image)
606
- qData, embeddings, answerDict = preprocessor.preprocessData(question)
607
- data = {'data': qData, 'image': imageData}
608
- print("took {} seconds".format(bcolored("{:.2f}".format(time.time() - start), "blue")))
609
-
610
- # build model
611
- print(bold("Building model..."))
612
- start = time.time()
613
- model = MACnet(embeddings, answerDict)
614
- print("took {} seconds".format(bcolored("{:.2f}".format(time.time() - start), "blue")))
615
-
616
- # initializer
617
- init = tf.global_variables_initializer()
618
-
619
- # savers
620
- savers = setSavers(model)
621
- saver, emaSaver = savers["saver"], savers["emaSaver"]
622
-
623
- # sessionConfig
624
- sessionConfig = setSession()
625
-
626
- with tf.Session(config=sessionConfig) as sess:
627
-
628
- # ensure no more ops are added after model is built
629
- sess.graph.finalize()
630
-
631
- # restore / initialize weights, initialize epoch variable
632
- epoch = loadWeights(sess, saver, init)
633
- print(epoch)
634
- start = time.time()
635
- if epoch > 0:
636
- if config.useEMA:
637
- emaSaver.restore(sess, config.weightsFile(epoch))
638
- else:
639
- saver.restore(sess, config.weightsFile(epoch))
640
-
641
- evalRes = model.runBatch(sess, data['data'], data['image'], False)
642
-
643
- print("took {:.2f} seconds".format(time.time() - start))
644
-
645
- print(evalRes)
646
-
647
-
648
- if __name__ == '__main__':
649
- parseArgs()
650
- loadDatasetConfig[config.dataset]()
651
- question = 'How many text objects are located at the bottom side of table?'
652
- imagePath = './mac-layoutLM-sample/PDF_val_64.png'
653
- main(question, imagePath)
 
spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/transformer.py DELETED
@@ -1,974 +0,0 @@
1
- # ------------------------------------------------------------------------
2
- # Grounding DINO
3
- # url: https://github.com/IDEA-Research/GroundingDINO
4
- # Copyright (c) 2023 IDEA. All Rights Reserved.
5
- # Licensed under the Apache License, Version 2.0 [see LICENSE for details]
6
- # ------------------------------------------------------------------------
7
- # DINO
8
- # Copyright (c) 2022 IDEA. All Rights Reserved.
9
- # Licensed under the Apache License, Version 2.0 [see LICENSE for details]
10
- # ------------------------------------------------------------------------
11
- # Conditional DETR Transformer class.
12
- # Copyright (c) 2021 Microsoft. All Rights Reserved.
13
- # Licensed under the Apache License, Version 2.0 [see LICENSE for details]
14
- # ------------------------------------------------------------------------
15
- # Modified from DETR (https://github.com/facebookresearch/detr)
16
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
17
- # ------------------------------------------------------------------------
18
-
19
- from typing import Optional
20
-
21
- import warnings
- import torch
22
- import torch.utils.checkpoint as checkpoint
23
- from torch import Tensor, nn
24
-
25
- from groundingdino.util.misc import inverse_sigmoid
26
-
27
- from .fuse_modules import BiAttentionBlock
28
- from .ms_deform_attn import MultiScaleDeformableAttention as MSDeformAttn
29
- from .transformer_vanilla import TransformerEncoderLayer
30
- from .utils import (
31
- MLP,
32
- _get_activation_fn,
33
- _get_clones,
34
- gen_encoder_output_proposals,
35
- gen_sineembed_for_position,
36
- get_sine_pos_embed,
37
- )
38
-
39
-
40
- class Transformer(nn.Module):
41
- def __init__(
42
- self,
43
- d_model=256,
44
- nhead=8,
45
- num_queries=300,
46
- num_encoder_layers=6,
47
- num_unicoder_layers=0,
48
- num_decoder_layers=6,
49
- dim_feedforward=2048,
50
- dropout=0.0,
51
- activation="relu",
52
- normalize_before=False,
53
- return_intermediate_dec=False,
54
- query_dim=4,
55
- num_patterns=0,
56
- # for deformable encoder
57
- num_feature_levels=1,
58
- enc_n_points=4,
59
- dec_n_points=4,
60
- # init query
61
- learnable_tgt_init=False,
62
- # two stage
63
- two_stage_type="no", # ['no', 'standard', 'early', 'combine', 'enceachlayer', 'enclayer1']
64
- embed_init_tgt=False,
65
- # for text
66
- use_text_enhancer=False,
67
- use_fusion_layer=False,
68
- use_checkpoint=False,
69
- use_transformer_ckpt=False,
70
- use_text_cross_attention=False,
71
- text_dropout=0.1,
72
- fusion_dropout=0.1,
73
- fusion_droppath=0.0,
74
- ):
75
- super().__init__()
76
- self.num_feature_levels = num_feature_levels
77
- self.num_encoder_layers = num_encoder_layers
78
- self.num_unicoder_layers = num_unicoder_layers
79
- self.num_decoder_layers = num_decoder_layers
80
- self.num_queries = num_queries
81
- assert query_dim == 4
82
-
83
- # choose encoder layer type
84
- encoder_layer = DeformableTransformerEncoderLayer(
85
- d_model, dim_feedforward, dropout, activation, num_feature_levels, nhead, enc_n_points
86
- )
87
-
88
- if use_text_enhancer:
89
- text_enhance_layer = TransformerEncoderLayer(
90
- d_model=d_model,
91
- nhead=nhead // 2,
92
- dim_feedforward=dim_feedforward // 2,
93
- dropout=text_dropout,
94
- )
95
- else:
96
- text_enhance_layer = None
97
-
98
- if use_fusion_layer:
99
- feature_fusion_layer = BiAttentionBlock(
100
- v_dim=d_model,
101
- l_dim=d_model,
102
- embed_dim=dim_feedforward // 2,
103
- num_heads=nhead // 2,
104
- dropout=fusion_dropout,
105
- drop_path=fusion_droppath,
106
- )
107
- else:
108
- feature_fusion_layer = None
109
-
110
- encoder_norm = nn.LayerNorm(d_model) if normalize_before else None
111
- assert encoder_norm is None
112
- self.encoder = TransformerEncoder(
113
- encoder_layer,
114
- num_encoder_layers,
115
- d_model=d_model,
116
- num_queries=num_queries,
117
- text_enhance_layer=text_enhance_layer,
118
- feature_fusion_layer=feature_fusion_layer,
119
- use_checkpoint=use_checkpoint,
120
- use_transformer_ckpt=use_transformer_ckpt,
121
- )
122
-
123
- # choose decoder layer type
124
- decoder_layer = DeformableTransformerDecoderLayer(
125
- d_model,
126
- dim_feedforward,
127
- dropout,
128
- activation,
129
- num_feature_levels,
130
- nhead,
131
- dec_n_points,
132
- use_text_cross_attention=use_text_cross_attention,
133
- )
134
-
135
- decoder_norm = nn.LayerNorm(d_model)
136
- self.decoder = TransformerDecoder(
137
- decoder_layer,
138
- num_decoder_layers,
139
- decoder_norm,
140
- return_intermediate=return_intermediate_dec,
141
- d_model=d_model,
142
- query_dim=query_dim,
143
- num_feature_levels=num_feature_levels,
144
- )
145
-
146
- self.d_model = d_model
147
- self.nhead = nhead
148
- self.dec_layers = num_decoder_layers
149
- self.num_queries = num_queries # useful for single stage model only
150
- self.num_patterns = num_patterns
151
- if not isinstance(num_patterns, int):
152
- warnings.warn("num_patterns should be int but {}".format(type(num_patterns)))
153
- self.num_patterns = 0
154
-
155
- if num_feature_levels > 1:
156
- if self.num_encoder_layers > 0:
157
- self.level_embed = nn.Parameter(torch.Tensor(num_feature_levels, d_model))
158
- else:
159
- self.level_embed = None
160
-
161
- self.learnable_tgt_init = learnable_tgt_init
162
- assert learnable_tgt_init, "why not learnable_tgt_init"
163
- self.embed_init_tgt = embed_init_tgt
164
- if (two_stage_type != "no" and embed_init_tgt) or (two_stage_type == "no"):
165
- self.tgt_embed = nn.Embedding(self.num_queries, d_model)
166
- nn.init.normal_(self.tgt_embed.weight.data)
167
- else:
168
- self.tgt_embed = None
169
-
170
- # for two stage
171
- self.two_stage_type = two_stage_type
172
- assert two_stage_type in ["no", "standard"], "unknown param {} of two_stage_type".format(
173
- two_stage_type
174
- )
175
- if two_stage_type == "standard":
176
- # anchor selection at the output of encoder
177
- self.enc_output = nn.Linear(d_model, d_model)
178
- self.enc_output_norm = nn.LayerNorm(d_model)
179
- self.two_stage_wh_embedding = None
180
-
181
- if two_stage_type == "no":
182
- self.init_ref_points(num_queries) # init self.refpoint_embed
183
-
184
- self.enc_out_class_embed = None
185
- self.enc_out_bbox_embed = None
186
-
187
- self._reset_parameters()
188
-
189
- def _reset_parameters(self):
190
- for p in self.parameters():
191
- if p.dim() > 1:
192
- nn.init.xavier_uniform_(p)
193
- for m in self.modules():
194
- if isinstance(m, MSDeformAttn):
195
- m._reset_parameters()
196
- if self.num_feature_levels > 1 and self.level_embed is not None:
197
- nn.init.normal_(self.level_embed)
198
-
199
- def get_valid_ratio(self, mask):
200
- _, H, W = mask.shape
201
- valid_H = torch.sum(~mask[:, :, 0], 1)
202
- valid_W = torch.sum(~mask[:, 0, :], 1)
203
- valid_ratio_h = valid_H.float() / H
204
- valid_ratio_w = valid_W.float() / W
205
- valid_ratio = torch.stack([valid_ratio_w, valid_ratio_h], -1)
206
- return valid_ratio
207
-
208
- def init_ref_points(self, use_num_queries):
209
- self.refpoint_embed = nn.Embedding(use_num_queries, 4)
210
-
211
- def forward(self, srcs, masks, refpoint_embed, pos_embeds, tgt, attn_mask=None, text_dict=None):
212
- """
213
- Input:
214
- - srcs: List of multi features [bs, ci, hi, wi]
215
- - masks: List of multi masks [bs, hi, wi]
216
- - refpoint_embed: [bs, num_dn, 4]. None in infer
217
- - pos_embeds: List of multi pos embeds [bs, ci, hi, wi]
218
- - tgt: [bs, num_dn, d_model]. None in infer
219
-
220
- """
221
- # prepare input for encoder
222
- src_flatten = []
223
- mask_flatten = []
224
- lvl_pos_embed_flatten = []
225
- spatial_shapes = []
226
- for lvl, (src, mask, pos_embed) in enumerate(zip(srcs, masks, pos_embeds)):
227
- bs, c, h, w = src.shape
228
- spatial_shape = (h, w)
229
- spatial_shapes.append(spatial_shape)
230
-
231
- src = src.flatten(2).transpose(1, 2) # bs, hw, c
232
- mask = mask.flatten(1) # bs, hw
233
- pos_embed = pos_embed.flatten(2).transpose(1, 2) # bs, hw, c
234
- if self.num_feature_levels > 1 and self.level_embed is not None:
235
- lvl_pos_embed = pos_embed + self.level_embed[lvl].view(1, 1, -1)
236
- else:
237
- lvl_pos_embed = pos_embed
238
- lvl_pos_embed_flatten.append(lvl_pos_embed)
239
- src_flatten.append(src)
240
- mask_flatten.append(mask)
241
- src_flatten = torch.cat(src_flatten, 1) # bs, \sum{hxw}, c
242
- mask_flatten = torch.cat(mask_flatten, 1) # bs, \sum{hxw}
243
- lvl_pos_embed_flatten = torch.cat(lvl_pos_embed_flatten, 1) # bs, \sum{hxw}, c
244
- spatial_shapes = torch.as_tensor(
245
- spatial_shapes, dtype=torch.long, device=src_flatten.device
246
- )
247
- level_start_index = torch.cat(
248
- (spatial_shapes.new_zeros((1,)), spatial_shapes.prod(1).cumsum(0)[:-1])
249
- )
250
- valid_ratios = torch.stack([self.get_valid_ratio(m) for m in masks], 1)
251
-
252
- # two stage
253
- enc_topk_proposals = enc_refpoint_embed = None
254
-
255
- #########################################################
256
- # Begin Encoder
257
- #########################################################
258
- memory, memory_text = self.encoder(
259
- src_flatten,
260
- pos=lvl_pos_embed_flatten,
261
- level_start_index=level_start_index,
262
- spatial_shapes=spatial_shapes,
263
- valid_ratios=valid_ratios,
264
- key_padding_mask=mask_flatten,
265
- memory_text=text_dict["encoded_text"],
266
- text_attention_mask=~text_dict["text_token_mask"],
267
- # we invert (~) the mask: False means use the token; True means pad the token
268
- position_ids=text_dict["position_ids"],
269
- text_self_attention_masks=text_dict["text_self_attention_masks"],
270
- )
271
-
272
- enhanced_image_features = memory.detach()
273
- enhanced_text_features = memory_text.detach()
274
-
275
- # memory: enhanced image features
276
- # memory_text: enhanced text features
277
- #########################################################
278
- # End Encoder
279
- # - memory: bs, \sum{hw}, c
280
- # - mask_flatten: bs, \sum{hw}
281
- # - lvl_pos_embed_flatten: bs, \sum{hw}, c
282
- # - enc_intermediate_output: None or (nenc+1, bs, nq, c) or (nenc, bs, nq, c)
283
- # - enc_intermediate_refpoints: None or (nenc+1, bs, nq, c) or (nenc, bs, nq, c)
284
- #########################################################
285
-
286
- #########################################################
287
- # Begin Language-guide Query Selection
288
- #########################################################
289
- text_dict["encoded_text"] = memory_text
290
- # if os.environ.get("SHILONG_AMP_INFNAN_DEBUG") == '1':
291
- # if memory.isnan().any() | memory.isinf().any():
292
- # import ipdb; ipdb.set_trace()
293
-
294
- if self.two_stage_type == "standard":
295
- # logits and proposals
296
- output_memory, output_proposals = gen_encoder_output_proposals(
297
- memory, mask_flatten, spatial_shapes
298
- )
299
- output_memory = self.enc_output_norm(self.enc_output(output_memory))
300
-
301
- # language-guided query selection
302
- if text_dict is not None:
303
- enc_outputs_class_unselected = self.enc_out_class_embed(output_memory, text_dict)
304
- else:
305
- enc_outputs_class_unselected = self.enc_out_class_embed(output_memory)
306
-
307
- topk_logits = enc_outputs_class_unselected.max(-1)[0]
308
- enc_outputs_coord_unselected = (
309
- self.enc_out_bbox_embed(output_memory) + output_proposals
310
- ) # (bs, \sum{hw}, 4) unsigmoid
311
- topk = self.num_queries
312
-
313
- topk_proposals = torch.topk(topk_logits, topk, dim=1)[1] # bs, nq
314
-
315
- # gather boxes
316
- refpoint_embed_undetach = torch.gather(
317
- enc_outputs_coord_unselected, 1, topk_proposals.unsqueeze(-1).repeat(1, 1, 4)
318
- ) # unsigmoid
319
- refpoint_embed_ = refpoint_embed_undetach.detach()
320
- init_box_proposal = torch.gather(
321
- output_proposals, 1, topk_proposals.unsqueeze(-1).repeat(1, 1, 4)
322
- ).sigmoid() # sigmoid
323
-
324
- # gather tgt
325
- tgt_undetach = torch.gather(
326
- output_memory, 1, topk_proposals.unsqueeze(-1).repeat(1, 1, self.d_model)
327
- )
328
- if self.embed_init_tgt:
329
- tgt_ = (
330
- self.tgt_embed.weight[:, None, :].repeat(1, bs, 1).transpose(0, 1)
331
- ) # nq, bs, d_model
332
- else:
333
- tgt_ = tgt_undetach.detach()
334
-
335
- if refpoint_embed is not None:
336
- refpoint_embed = torch.cat([refpoint_embed, refpoint_embed_], dim=1)
337
- tgt = torch.cat([tgt, tgt_], dim=1)
338
- else:
339
- refpoint_embed, tgt = refpoint_embed_, tgt_
340
-
341
- elif self.two_stage_type == "no":
342
- tgt_ = (
343
- self.tgt_embed.weight[:, None, :].repeat(1, bs, 1).transpose(0, 1)
344
- ) # nq, bs, d_model
345
- refpoint_embed_ = (
346
- self.refpoint_embed.weight[:, None, :].repeat(1, bs, 1).transpose(0, 1)
347
- ) # nq, bs, 4
348
-
349
- if refpoint_embed is not None:
350
- refpoint_embed = torch.cat([refpoint_embed, refpoint_embed_], dim=1)
351
- tgt = torch.cat([tgt, tgt_], dim=1)
352
- else:
353
- refpoint_embed, tgt = refpoint_embed_, tgt_
354
-
355
- if self.num_patterns > 0:
356
- tgt_embed = tgt.repeat(1, self.num_patterns, 1)
357
- refpoint_embed = refpoint_embed.repeat(1, self.num_patterns, 1)
358
- tgt_pat = self.patterns.weight[None, :, :].repeat_interleave(
359
- self.num_queries, 1
360
- ) # 1, n_q*n_pat, d_model
361
- tgt = tgt_embed + tgt_pat
362
-
363
- init_box_proposal = refpoint_embed_.sigmoid()
364
-
365
- else:
366
- raise NotImplementedError("unknown two_stage_type {}".format(self.two_stage_type))
367
- #########################################################
368
- # End preparing tgt
369
- # - tgt: bs, NQ, d_model
370
- # - refpoint_embed(unsigmoid): bs, NQ, d_model
371
- #########################################################
372
-
373
- #########################################################
374
- # Begin Decoder
375
- #########################################################
376
- hs, references = self.decoder(
377
- tgt=tgt.transpose(0, 1),
378
- memory=memory.transpose(0, 1),
379
- memory_key_padding_mask=mask_flatten,
380
- pos=lvl_pos_embed_flatten.transpose(0, 1),
381
- refpoints_unsigmoid=refpoint_embed.transpose(0, 1),
382
- level_start_index=level_start_index,
383
- spatial_shapes=spatial_shapes,
384
- valid_ratios=valid_ratios,
385
- tgt_mask=attn_mask,
386
- memory_text=text_dict["encoded_text"],
387
- text_attention_mask=~text_dict["text_token_mask"],
388
- # we invert (~) the mask: False means use the token; True means pad the token
389
- )
390
- #########################################################
391
- # End Decoder
392
- # hs: n_dec, bs, nq, d_model
393
- # references: n_dec+1, bs, nq, query_dim
394
- #########################################################
395
-
396
- #########################################################
397
- # Begin postprocess
398
- #########################################################
399
- if self.two_stage_type == "standard":
400
- hs_enc = tgt_undetach.unsqueeze(0)
401
- ref_enc = refpoint_embed_undetach.sigmoid().unsqueeze(0)
402
- else:
403
- hs_enc = ref_enc = None
404
- #########################################################
405
- # End postprocess
406
- # hs_enc: (n_enc+1, bs, nq, d_model) or (1, bs, nq, d_model) or (n_enc, bs, nq, d_model) or None
407
- # ref_enc: (n_enc+1, bs, nq, query_dim) or (1, bs, nq, query_dim) or (n_enc, bs, nq, d_model) or None
408
- #########################################################
409
-
410
- return hs, references, hs_enc, ref_enc, init_box_proposal, enhanced_image_features, enhanced_text_features, spatial_shapes, topk_logits
411
- # hs: (n_dec, bs, nq, d_model)
412
- # references: sigmoid coordinates. (n_dec+1, bs, bq, 4)
413
- # hs_enc: (n_enc+1, bs, nq, d_model) or (1, bs, nq, d_model) or None
414
- # ref_enc: sigmoid coordinates. \
415
- # (n_enc+1, bs, nq, query_dim) or (1, bs, nq, query_dim) or None
416
- # enhanced_image_features: (bs, shw, c)
417
- # enhanced_text_features: (bs, n_enc, c)
418
- # spatial_shapes: s
419
-
420
-
421
- class TransformerEncoder(nn.Module):
422
- def __init__(
423
- self,
424
- encoder_layer,
425
- num_layers,
426
- d_model=256,
427
- num_queries=300,
428
- enc_layer_share=False,
429
- text_enhance_layer=None,
430
- feature_fusion_layer=None,
431
- use_checkpoint=False,
432
- use_transformer_ckpt=False,
433
- ):
434
- """_summary_
435
-
436
- Args:
437
- encoder_layer (_type_): _description_
438
- num_layers (_type_): _description_
439
- norm (_type_, optional): _description_. Defaults to None.
440
- d_model (int, optional): _description_. Defaults to 256.
441
- num_queries (int, optional): _description_. Defaults to 300.
442
- enc_layer_share (bool, optional): _description_. Defaults to False.
443
-
444
- """
445
- super().__init__()
446
- # prepare layers
447
- self.layers = []
448
- self.text_layers = []
449
- self.fusion_layers = []
450
- if num_layers > 0:
451
- self.layers = _get_clones(encoder_layer, num_layers, layer_share=enc_layer_share)
452
-
453
- if text_enhance_layer is not None:
454
- self.text_layers = _get_clones(
455
- text_enhance_layer, num_layers, layer_share=enc_layer_share
456
- )
457
- if feature_fusion_layer is not None:
458
- self.fusion_layers = _get_clones(
459
- feature_fusion_layer, num_layers, layer_share=enc_layer_share
460
- )
461
- else:
462
- self.layers = []
463
- del encoder_layer
464
-
465
- if text_enhance_layer is not None:
466
- self.text_layers = []
467
- del text_enhance_layer
468
- if feature_fusion_layer is not None:
469
- self.fusion_layers = []
470
- del feature_fusion_layer
471
-
472
- self.query_scale = None
473
- self.num_queries = num_queries
474
- self.num_layers = num_layers
475
- self.d_model = d_model
476
-
477
- self.use_checkpoint = use_checkpoint
478
- self.use_transformer_ckpt = use_transformer_ckpt
479
-
480
- @staticmethod
481
- def get_reference_points(spatial_shapes, valid_ratios, device):
482
- reference_points_list = []
483
- for lvl, (H_, W_) in enumerate(spatial_shapes):
484
-
485
- ref_y, ref_x = torch.meshgrid(
486
- torch.linspace(0.5, H_ - 0.5, H_, dtype=torch.float32, device=device),
487
- torch.linspace(0.5, W_ - 0.5, W_, dtype=torch.float32, device=device),
488
- )
489
- ref_y = ref_y.reshape(-1)[None] / (valid_ratios[:, None, lvl, 1] * H_)
490
- ref_x = ref_x.reshape(-1)[None] / (valid_ratios[:, None, lvl, 0] * W_)
491
- ref = torch.stack((ref_x, ref_y), -1)
492
- reference_points_list.append(ref)
493
- reference_points = torch.cat(reference_points_list, 1)
494
- reference_points = reference_points[:, :, None] * valid_ratios[:, None]
495
- return reference_points
496
-
497
- def forward(
498
- self,
499
- # for images
500
- src: Tensor,
501
- pos: Tensor,
502
- spatial_shapes: Tensor,
503
- level_start_index: Tensor,
504
- valid_ratios: Tensor,
505
- key_padding_mask: Tensor,
506
- # for texts
507
- memory_text: Tensor = None,
508
- text_attention_mask: Tensor = None,
509
- pos_text: Tensor = None,
510
- text_self_attention_masks: Tensor = None,
511
- position_ids: Tensor = None,
512
- ):
513
- """
514
- Input:
515
- - src: [bs, sum(hi*wi), 256]
516
- - pos: pos embed for src. [bs, sum(hi*wi), 256]
517
- - spatial_shapes: h,w of each level [num_level, 2]
518
- - level_start_index: [num_level] start point of level in sum(hi*wi).
519
- - valid_ratios: [bs, num_level, 2]
520
- - key_padding_mask: [bs, sum(hi*wi)]
521
-
522
- - memory_text: bs, n_text, 256
523
- - text_attention_mask: bs, n_text
524
- False for no padding; True for padding
525
- - pos_text: bs, n_text, 256
526
-
527
- - position_ids: bs, n_text
528
- Intermediate:
529
- - reference_points: [bs, sum(hi*wi), num_level, 2]
530
- Outputs:
531
- - output: [bs, sum(hi*wi), 256]
532
- """
533
-
534
- output = src
535
-
536
- # preparation and reshape
537
- if self.num_layers > 0:
538
- reference_points = self.get_reference_points(
539
- spatial_shapes, valid_ratios, device=src.device
540
- )
541
-
542
- if self.text_layers:
543
- # generate pos_text
544
- bs, n_text, text_dim = memory_text.shape
545
- if pos_text is None and position_ids is None:
546
- pos_text = (
547
- torch.arange(n_text, device=memory_text.device)
548
- .float()
549
- .unsqueeze(0)
550
- .unsqueeze(-1)
551
- .repeat(bs, 1, 1)
552
- )
553
- pos_text = get_sine_pos_embed(pos_text, num_pos_feats=256, exchange_xy=False)
554
- if position_ids is not None:
555
- pos_text = get_sine_pos_embed(
556
- position_ids[..., None], num_pos_feats=256, exchange_xy=False
557
- )
558
-
559
- # main process
560
- for layer_id, layer in enumerate(self.layers):
561
- # if output.isnan().any() or memory_text.isnan().any():
562
- # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO':
563
- # import ipdb; ipdb.set_trace()
564
- if self.fusion_layers:
565
- if self.use_checkpoint:
566
- output, memory_text = checkpoint.checkpoint(
567
- self.fusion_layers[layer_id],
568
- output,
569
- memory_text,
570
- key_padding_mask,
571
- text_attention_mask,
572
- )
573
- else:
574
- output, memory_text = self.fusion_layers[layer_id](
575
- v=output,
576
- l=memory_text,
577
- attention_mask_v=key_padding_mask,
578
- attention_mask_l=text_attention_mask,
579
- )
580
-
581
- if self.text_layers:
582
- memory_text = self.text_layers[layer_id](
583
- src=memory_text.transpose(0, 1),
584
- src_mask=~text_self_attention_masks, # note we use ~ for mask here
585
- src_key_padding_mask=text_attention_mask,
586
- pos=(pos_text.transpose(0, 1) if pos_text is not None else None),
587
- ).transpose(0, 1)
588
-
589
- # main process
590
- if self.use_transformer_ckpt:
591
- output = checkpoint.checkpoint(
592
- layer,
593
- output,
594
- pos,
595
- reference_points,
596
- spatial_shapes,
597
- level_start_index,
598
- key_padding_mask,
599
- )
600
- else:
601
- output = layer(
602
- src=output,
603
- pos=pos,
604
- reference_points=reference_points,
605
- spatial_shapes=spatial_shapes,
606
- level_start_index=level_start_index,
607
- key_padding_mask=key_padding_mask,
608
- )
609
-
610
- return output, memory_text
611
-
612
-
613
- class TransformerDecoder(nn.Module):
614
- def __init__(
615
- self,
616
- decoder_layer,
617
- num_layers,
618
- norm=None,
619
- return_intermediate=False,
620
- d_model=256,
621
- query_dim=4,
622
- num_feature_levels=1,
623
- ):
624
- super().__init__()
625
- if num_layers > 0:
626
- self.layers = _get_clones(decoder_layer, num_layers)
627
- else:
628
- self.layers = []
629
- self.num_layers = num_layers
630
- self.norm = norm
631
- self.return_intermediate = return_intermediate
632
- assert return_intermediate, "support return_intermediate only"
633
- self.query_dim = query_dim
634
- assert query_dim in [2, 4], "query_dim should be 2/4 but {}".format(query_dim)
635
- self.num_feature_levels = num_feature_levels
636
-
637
- self.ref_point_head = MLP(query_dim // 2 * d_model, d_model, d_model, 2)
638
- self.query_pos_sine_scale = None
639
-
640
- self.query_scale = None
641
- self.bbox_embed = None
642
- self.class_embed = None
643
-
644
- self.d_model = d_model
645
-
646
- self.ref_anchor_head = None
647
-
648
- def forward(
649
- self,
650
- tgt,
651
- memory,
652
- tgt_mask: Optional[Tensor] = None,
653
- memory_mask: Optional[Tensor] = None,
654
- tgt_key_padding_mask: Optional[Tensor] = None,
655
- memory_key_padding_mask: Optional[Tensor] = None,
656
- pos: Optional[Tensor] = None,
657
- refpoints_unsigmoid: Optional[Tensor] = None, # num_queries, bs, 2
658
- # for memory
659
- level_start_index: Optional[Tensor] = None, # num_levels
660
- spatial_shapes: Optional[Tensor] = None, # bs, num_levels, 2
661
- valid_ratios: Optional[Tensor] = None,
662
- # for text
663
- memory_text: Optional[Tensor] = None,
664
- text_attention_mask: Optional[Tensor] = None,
665
- ):
666
- """
667
- Input:
668
- - tgt: nq, bs, d_model
669
- - memory: hw, bs, d_model
670
- - pos: hw, bs, d_model
671
- - refpoints_unsigmoid: nq, bs, 2/4
672
- - valid_ratios/spatial_shapes: bs, nlevel, 2
673
- """
674
- output = tgt
675
-
676
- intermediate = []
677
- reference_points = refpoints_unsigmoid.sigmoid()
678
- ref_points = [reference_points]
679
-
680
- for layer_id, layer in enumerate(self.layers):
681
-
682
- if reference_points.shape[-1] == 4:
683
- reference_points_input = (
684
- reference_points[:, :, None]
685
- * torch.cat([valid_ratios, valid_ratios], -1)[None, :]
686
- ) # nq, bs, nlevel, 4
687
- else:
688
- assert reference_points.shape[-1] == 2
689
- reference_points_input = reference_points[:, :, None] * valid_ratios[None, :]
690
- query_sine_embed = gen_sineembed_for_position(
691
- reference_points_input[:, :, 0, :]
692
- ) # nq, bs, 256*2
693
-
694
- # conditional query
695
- raw_query_pos = self.ref_point_head(query_sine_embed) # nq, bs, 256
696
- pos_scale = self.query_scale(output) if self.query_scale is not None else 1
697
- query_pos = pos_scale * raw_query_pos
698
- # if os.environ.get("SHILONG_AMP_INFNAN_DEBUG") == '1':
699
- # if query_pos.isnan().any() | query_pos.isinf().any():
700
- # import ipdb; ipdb.set_trace()
701
-
702
- # main process
703
- output = layer(
704
- tgt=output,
705
- tgt_query_pos=query_pos,
706
- tgt_query_sine_embed=query_sine_embed,
707
- tgt_key_padding_mask=tgt_key_padding_mask,
708
- tgt_reference_points=reference_points_input,
709
- memory_text=memory_text,
710
- text_attention_mask=text_attention_mask,
711
- memory=memory,
712
- memory_key_padding_mask=memory_key_padding_mask,
713
- memory_level_start_index=level_start_index,
714
- memory_spatial_shapes=spatial_shapes,
715
- memory_pos=pos,
716
- self_attn_mask=tgt_mask,
717
- cross_attn_mask=memory_mask,
718
- )
719
- if output.isnan().any() | output.isinf().any():
720
- print(f"output of layer_id {layer_id} contains nan or inf")
721
- try:
722
- num_nan = output.isnan().sum().item()
723
- num_inf = output.isinf().sum().item()
724
- print(f"num_nan {num_nan}, num_inf {num_inf}")
725
- except Exception as e:
726
- print(e)
727
- # if os.environ.get("SHILONG_AMP_INFNAN_DEBUG") == '1':
728
- # import ipdb; ipdb.set_trace()
729
-
730
- # iter update
731
- if self.bbox_embed is not None:
732
- # box_holder = self.bbox_embed(output)
733
- # box_holder[..., :self.query_dim] += inverse_sigmoid(reference_points)
734
- # new_reference_points = box_holder[..., :self.query_dim].sigmoid()
735
-
736
- reference_before_sigmoid = inverse_sigmoid(reference_points)
737
- delta_unsig = self.bbox_embed[layer_id](output)
738
- outputs_unsig = delta_unsig + reference_before_sigmoid
739
- new_reference_points = outputs_unsig.sigmoid()
740
-
741
- reference_points = new_reference_points.detach()
742
- # if layer_id != self.num_layers - 1:
743
- ref_points.append(new_reference_points)
744
-
745
- intermediate.append(self.norm(output))
746
-
747
- return [
748
- [itm_out.transpose(0, 1) for itm_out in intermediate],
749
- [itm_refpoint.transpose(0, 1) for itm_refpoint in ref_points],
750
- ]
751
-
752
-
753
- class DeformableTransformerEncoderLayer(nn.Module):
754
- def __init__(
755
- self,
756
- d_model=256,
757
- d_ffn=1024,
758
- dropout=0.1,
759
- activation="relu",
760
- n_levels=4,
761
- n_heads=8,
762
- n_points=4,
763
- ):
764
- super().__init__()
765
-
766
- # self attention
767
- self.self_attn = MSDeformAttn(
768
- embed_dim=d_model,
769
- num_levels=n_levels,
770
- num_heads=n_heads,
771
- num_points=n_points,
772
- batch_first=True,
773
- )
774
- self.dropout1 = nn.Dropout(dropout)
775
- self.norm1 = nn.LayerNorm(d_model)
776
-
777
- # ffn
778
- self.linear1 = nn.Linear(d_model, d_ffn)
779
- self.activation = _get_activation_fn(activation, d_model=d_ffn)
780
- self.dropout2 = nn.Dropout(dropout)
781
- self.linear2 = nn.Linear(d_ffn, d_model)
782
- self.dropout3 = nn.Dropout(dropout)
783
- self.norm2 = nn.LayerNorm(d_model)
784
-
785
- @staticmethod
786
- def with_pos_embed(tensor, pos):
787
- return tensor if pos is None else tensor + pos
788
-
789
- def forward_ffn(self, src):
790
- src2 = self.linear2(self.dropout2(self.activation(self.linear1(src))))
791
- src = src + self.dropout3(src2)
792
- src = self.norm2(src)
793
- return src
794
-
795
- def forward(
796
- self, src, pos, reference_points, spatial_shapes, level_start_index, key_padding_mask=None
797
- ):
798
- # self attention
799
- # import ipdb; ipdb.set_trace()
800
- src2 = self.self_attn(
801
- query=self.with_pos_embed(src, pos),
802
- reference_points=reference_points,
803
- value=src,
804
- spatial_shapes=spatial_shapes,
805
- level_start_index=level_start_index,
806
- key_padding_mask=key_padding_mask,
807
- )
808
- src = src + self.dropout1(src2)
809
- src = self.norm1(src)
810
-
811
- # ffn
812
- src = self.forward_ffn(src)
813
-
814
- return src
815
-
816
-
817
- class DeformableTransformerDecoderLayer(nn.Module):
818
- def __init__(
819
- self,
820
- d_model=256,
821
- d_ffn=1024,
822
- dropout=0.1,
823
- activation="relu",
824
- n_levels=4,
825
- n_heads=8,
826
- n_points=4,
827
- use_text_feat_guide=False,
828
- use_text_cross_attention=False,
829
- ):
830
- super().__init__()
831
-
832
- # cross attention
833
- self.cross_attn = MSDeformAttn(
834
- embed_dim=d_model,
835
- num_levels=n_levels,
836
- num_heads=n_heads,
837
- num_points=n_points,
838
- batch_first=True,
839
- )
840
- self.dropout1 = nn.Dropout(dropout) if dropout > 0 else nn.Identity()
841
- self.norm1 = nn.LayerNorm(d_model)
842
-
843
- # cross attention text
844
- if use_text_cross_attention:
845
- self.ca_text = nn.MultiheadAttention(d_model, n_heads, dropout=dropout)
846
- self.catext_dropout = nn.Dropout(dropout) if dropout > 0 else nn.Identity()
847
- self.catext_norm = nn.LayerNorm(d_model)
848
-
849
- # self attention
850
- self.self_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout)
851
- self.dropout2 = nn.Dropout(dropout) if dropout > 0 else nn.Identity()
852
- self.norm2 = nn.LayerNorm(d_model)
853
-
854
- # ffn
855
- self.linear1 = nn.Linear(d_model, d_ffn)
856
- self.activation = _get_activation_fn(activation, d_model=d_ffn, batch_dim=1)
857
- self.dropout3 = nn.Dropout(dropout) if dropout > 0 else nn.Identity()
858
- self.linear2 = nn.Linear(d_ffn, d_model)
859
- self.dropout4 = nn.Dropout(dropout) if dropout > 0 else nn.Identity()
860
- self.norm3 = nn.LayerNorm(d_model)
861
-
862
- self.key_aware_proj = None
863
- self.use_text_feat_guide = use_text_feat_guide
864
- assert not use_text_feat_guide
865
- self.use_text_cross_attention = use_text_cross_attention
866
-
867
- def rm_self_attn_modules(self):
868
- self.self_attn = None
869
- self.dropout2 = None
870
- self.norm2 = None
871
-
872
- @staticmethod
873
- def with_pos_embed(tensor, pos):
874
- return tensor if pos is None else tensor + pos
875
-
876
- def forward_ffn(self, tgt):
877
- with torch.cuda.amp.autocast(enabled=False):
878
- tgt2 = self.linear2(self.dropout3(self.activation(self.linear1(tgt))))
879
- tgt = tgt + self.dropout4(tgt2)
880
- tgt = self.norm3(tgt)
881
- return tgt
882
-
883
- def forward(
884
- self,
885
- # for tgt
886
- tgt: Optional[Tensor], # nq, bs, d_model
887
- tgt_query_pos: Optional[Tensor] = None, # pos for query. MLP(Sine(pos))
888
- tgt_query_sine_embed: Optional[Tensor] = None, # pos for query. Sine(pos)
889
- tgt_key_padding_mask: Optional[Tensor] = None,
890
- tgt_reference_points: Optional[Tensor] = None, # nq, bs, 4
891
- memory_text: Optional[Tensor] = None, # bs, num_token, d_model
892
- text_attention_mask: Optional[Tensor] = None, # bs, num_token
893
- # for memory
894
- memory: Optional[Tensor] = None, # hw, bs, d_model
895
- memory_key_padding_mask: Optional[Tensor] = None,
896
- memory_level_start_index: Optional[Tensor] = None, # num_levels
897
- memory_spatial_shapes: Optional[Tensor] = None, # bs, num_levels, 2
898
- memory_pos: Optional[Tensor] = None, # pos for memory
899
- # sa
900
- self_attn_mask: Optional[Tensor] = None, # mask used for self-attention
901
- cross_attn_mask: Optional[Tensor] = None, # mask used for cross-attention
902
- ):
903
- """
904
- Input:
905
- - tgt/tgt_query_pos: nq, bs, d_model
906
- -
907
- """
908
- assert cross_attn_mask is None
909
-
910
- # self attention
911
- if self.self_attn is not None:
912
- # import ipdb; ipdb.set_trace()
913
- q = k = self.with_pos_embed(tgt, tgt_query_pos)
914
- tgt2 = self.self_attn(q, k, tgt, attn_mask=self_attn_mask)[0]
915
- tgt = tgt + self.dropout2(tgt2)
916
- tgt = self.norm2(tgt)
917
-
918
- if self.use_text_cross_attention:
919
- tgt2 = self.ca_text(
920
- self.with_pos_embed(tgt, tgt_query_pos),
921
- memory_text.transpose(0, 1),
922
- memory_text.transpose(0, 1),
923
- key_padding_mask=text_attention_mask,
924
- )[0]
925
- tgt = tgt + self.catext_dropout(tgt2)
926
- tgt = self.catext_norm(tgt)
927
-
928
- tgt2 = self.cross_attn(
929
- query=self.with_pos_embed(tgt, tgt_query_pos).transpose(0, 1),
930
- reference_points=tgt_reference_points.transpose(0, 1).contiguous(),
931
- value=memory.transpose(0, 1),
932
- spatial_shapes=memory_spatial_shapes,
933
- level_start_index=memory_level_start_index,
934
- key_padding_mask=memory_key_padding_mask,
935
- ).transpose(0, 1)
936
- tgt = tgt + self.dropout1(tgt2)
937
- tgt = self.norm1(tgt)
938
-
939
- # ffn
940
- tgt = self.forward_ffn(tgt)
941
-
942
- return tgt
943
-
944
-
945
- def build_transformer(args):
946
- return Transformer(
947
- d_model=args.hidden_dim,
948
- dropout=args.dropout,
949
- nhead=args.nheads,
950
- num_queries=args.num_queries,
951
- dim_feedforward=args.dim_feedforward,
952
- num_encoder_layers=args.enc_layers,
953
- num_decoder_layers=args.dec_layers,
954
- normalize_before=args.pre_norm,
955
- return_intermediate_dec=True,
956
- query_dim=args.query_dim,
957
- activation=args.transformer_activation,
958
- num_patterns=args.num_patterns,
959
- num_feature_levels=args.num_feature_levels,
960
- enc_n_points=args.enc_n_points,
961
- dec_n_points=args.dec_n_points,
962
- learnable_tgt_init=True,
963
- # two stage
964
- two_stage_type=args.two_stage_type, # ['no', 'standard', 'early']
965
- embed_init_tgt=args.embed_init_tgt,
966
- use_text_enhancer=args.use_text_enhancer,
967
- use_fusion_layer=args.use_fusion_layer,
968
- use_checkpoint=args.use_checkpoint,
969
- use_transformer_ckpt=args.use_transformer_ckpt,
970
- use_text_cross_attention=args.use_text_cross_attention,
971
- text_dropout=args.text_dropout,
972
- fusion_dropout=args.fusion_dropout,
973
- fusion_droppath=args.fusion_droppath,
974
- )
 
spaces/Cat125/text-generator-v2/README.md DELETED
@@ -1,13 +0,0 @@
1
- ---
2
- title: Text Generator v2
3
- emoji: 💻
4
- colorFrom: pink
5
- colorTo: purple
6
- sdk: gradio
7
- sdk_version: 3.27.0
8
- app_file: main.py
9
- pinned: true
10
- license: openrail
11
- ---
12
-
13
- This tool allows you to generate texts based on a given context.