parquet-converter committed on
Commit 513dbef · 1 Parent(s): 3985e99

Update parquet files (step 60 of 476)

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. spaces/0xqtpie/doodle2vid/README.md +0 -13
  2. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Cs6 Master Collection Keygen Xforce Rar Zip Learn How to Generate and Apply Xforce Keygen for Adobe Cs6.md +0 -148
  3. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Como Ativar O Malwarebytes Premium.md +0 -33
  4. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Adobe Photoshop Cs6 Amtlib Dll Files Everything You Need to Know.md +0 -150
  5. spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Minecraft on Windows 10 Everything You Need to Know.md +0 -37
  6. spaces/1gistliPinn/ChatGPT4/Examples/Cccam 2.3.0 Ipk Vix [BEST].md +0 -10
  7. spaces/1phancelerku/anime-remove-background/Download Clash Royale 3.3024.2 APK and Join the Arena with Your Favorite Clash Characters.md +0 -123
  8. spaces/1yukikaze/img-to-music/style.css +0 -51
  9. spaces/801artistry/RVC801/Makefile +0 -63
  10. spaces/A666sxr/Genshin_TTS/stft.py +0 -209
  11. spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/loss.py +0 -398
  12. spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/openai.py +0 -156
  13. spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/os_utils.py +0 -20
  14. spaces/ASJMO/freegpt/client/css/options.css +0 -10
  15. spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/models/builders.py +0 -218
  16. spaces/Abdulkader/Abdulkader-T5-MedRepAnalyzer/app.py +0 -7
  17. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/utils/Yoyo.js +0 -2
  18. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/clickoutside/ClickOutside.d.ts +0 -2
  19. spaces/Alfaxad/BioGalacticModels/model_list.py +0 -106
  20. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_dual_guided.py +0 -554
  21. spaces/Andy1621/uniformer_image_detection/configs/detectors/htc_r50_sac_1x_coco.py +0 -8
  22. spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/standard_roi_head.py +0 -295
  23. spaces/Andy1621/uniformer_image_segmentation/configs/dmnet/dmnet_r50-d8_769x769_40k_cityscapes.py +0 -9
  24. spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18_480x480_40k_pascal_context.py +0 -8
  25. spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/fileio/handlers/base.py +0 -30
  26. spaces/ArkanDash/rvc-models/app.py +0 -178
  27. spaces/ArtyomKhyan/Detection/app.py +0 -218
  28. spaces/Ashrafb/translate/app.py +0 -33
  29. spaces/Awesimo/jojogan/e4e/criteria/lpips/__init__.py +0 -0
  30. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/builtin_datasets.md +0 -1
  31. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/models.md +0 -180
  32. spaces/Baptlem/UCDR-Net/README.md +0 -14
  33. spaces/Benson/text-generation/Examples/Descargar Bhool Bhulaiyaa 2 Tono De Llamada.md +0 -78
  34. spaces/Benson/text-generation/Examples/Descargar Facebook Apk Android 4.md +0 -136
  35. spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/tz/win.py +0 -370
  36. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/idna/intranges.py +0 -54
  37. spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/core/util.h +0 -773
  38. spaces/CVPR/lama-example/bin/predict_inner_features.py +0 -119
  39. spaces/Chen-Beer/LLMing/app.py +0 -56
  40. spaces/CikeyQI/meme-api/docker/start.sh +0 -7
  41. spaces/CofAI/CalculatorUI/index.html +0 -63
  42. spaces/CofAI/LengthConverter/index.html +0 -108
  43. spaces/CofAI/chat.b4/g4f/Provider/Providers/Xiaor.py +0 -39
  44. spaces/CofAI/chat/g4f/Provider/Providers/Wewordle.py +0 -75
  45. spaces/CorvaeOboro/gen_ability_icon/dnnlib/__init__.py +0 -9
  46. spaces/Cran-May/ygVI/app.py +0 -250
  47. spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/dtype.py +0 -39
  48. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/DdsImagePlugin.py +0 -291
  49. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/_funcs.py +0 -477
  50. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/S__i_l_f.py +0 -1037
spaces/0xqtpie/doodle2vid/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: Doodle2vid
- emoji: 🐢
- colorFrom: blue
- colorTo: red
- sdk: gradio
- sdk_version: 3.44.1
- app_file: app.py
- pinned: false
- license: mit
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Cs6 Master Collection Keygen Xforce Rar Zip Learn How to Generate and Apply Xforce Keygen for Adobe Cs6.md DELETED
@@ -1,148 +0,0 @@
- <br />
- <h1>Adobe Cs6 Master Collection Keygen Xforce Rar Zip</h1>
- <p>If you are looking for a way to get the most out of Adobe Creative Suite 6 Master Collection, you might be interested in using a keygen tool that can generate valid serial numbers and activation codes for you. In this article, we will explain what Adobe Cs6 Master Collection is, what Xforce Keygen is, and how to download and install Adobe Cs6 Master Collection with Xforce Keygen. We will also cover some of the benefits and risks of using this method, and answer some frequently asked questions.</p>
- <h2>What is Adobe Cs6 Master Collection?</h2>
- <p>Adobe Cs6 Master Collection is a software bundle that includes all the Adobe creative tools you need to create stunning digital content for any platform. Whether you are a graphic designer, web developer, video editor, photographer, or animator, you can find the right tool for your project in Adobe Cs6 Master Collection. Some of the applications included in this bundle are:</p>
- <h2>Adobe Cs6 Master Collection Keygen Xforce Rar Zip</h2><br /><p><b><b>Download File</b> &#187; <a href="https://byltly.com/2uKv0B">https://byltly.com/2uKv0B</a></b></p><br /><br />
- <ul>
- <li>Photoshop CS6: The industry-standard software for editing and enhancing images.</li>
- <li>Illustrator CS6: The vector graphics software for creating logos, icons, illustrations, and more.</li>
- <li>InDesign CS6: The page layout software for designing print and digital publications.</li>
- <li>Dreamweaver CS6: The web design software for creating responsive websites and apps.</li>
- <li>Flash Professional CS6: The animation software for creating interactive content for web, mobile, and games.</li>
- <li>Premiere Pro CS6: The video editing software for producing professional-quality videos.</li>
- <li>After Effects CS6: The motion graphics and visual effects software for adding cinematic flair to your videos.</li>
- <li>Audition CS6: The audio editing software for mixing, restoring, and enhancing sound.</li>
- <li>And many more...</li>
- </ul>
- <p>Adobe Cs6 Master Collection also comes with Adobe Bridge CS6, a file management tool that lets you organize and preview your media files; Adobe Media Encoder CS6, a tool that lets you encode your videos to various formats; and Adobe Acrobat X Pro, a tool that lets you create, edit, and sign PDF documents.</p>
- <h3>Features of Adobe Cs6 Master Collection</h3>
- <p>Some of the features that make Adobe Cs6 Master Collection stand out are:</p>
- <ul>
- <li>Blazing-fast performance: Thanks to the 64-bit native support and GPU acceleration, you can work faster and smoother on complex projects.</li>
- <li>Groundbreaking creative tools: You can explore new ways to design for the latest devices with innovative tools like Content-Aware Patch, Puppet Warp, Mercury Graphics Engine, Adaptive Wide Angle, and more.</li>
- <li>Exceptional power and precision: You can create inspiring experiences that go anywhere with precise control over every aspect of your work.</li>
- <li>Cross-platform compatibility: You can work seamlessly across Mac OS and Windows platforms with consistent results.</li>
- <li>Integration with other Adobe products: You can easily exchange files with other Adobe applications like Photoshop Extended, Illustrator, InDesign, Dreamweaver, Flash Professional, After Effects, Premiere Pro, Audition, and more.</li>
- </ul>
- <h3>System requirements for Adobe Cs6 Master Collection</h3>
- <p>To run Adobe Cs6 Master Collection smoothly on your computer, you need to meet the following system requirements:</p>
- <table border="1">
- <tr><th>Operating system</th><th>Windows</th><th>Mac OS</th></tr>
- <tr><td>Processor</td><td>Intel® Pentium® 4 or AMD Athlon® 64 processor (Intel Core™2 Duo or AMD Phenom® II recommended); Intel Core i7 required for Adobe SpeedGrade™</td><td>Multicore Intel processor with 64-bit support</td></tr>
- <tr><td>RAM</td><td>4 GB of RAM (8 GB recommended)</td><td>4 GB of RAM (8 GB recommended)</td></tr>
- <tr><td>Hard disk space</td><td>15.5 GB of available hard-disk space for installation; additional free space required during installation (cannot install on removable flash storage devices)</td><td>15.5 GB of available hard-disk space for installation; additional free space required during installation (cannot install on a volume that uses a case-sensitive file system or on removable flash storage devices)</td></tr>
- <tr><td>Display</td><td>1280 x 900 display (1280 x 1024 recommended) with 16-bit color and 512 MB of VRAM; OpenGL 2.0–capable system</td><td>1280 x 900 display (1680 x 1050 recommended) with 16-bit color and 512 MB of VRAM; OpenGL 2.0–capable system</td></tr>
- <tr><td>DVD-ROM drive</td><td>DVD-ROM drive compatible with dual-layer DVDs (DVD+-R burner for burning DVDs; Blu-ray burner for creating Blu-ray Disc media)</td><td>DVD-ROM drive compatible with dual-layer DVDs (SuperDrive for burning DVDs; external Blu-ray burner for creating Blu-ray Disc media)</td></tr>
- <tr><td>Other requirements</td><td>- Java™ Runtime Environment 1.6 (included) - Eclipse™ 3.7 (for plug-in installation of Adobe Flash® Builder®); the following distributions are supported: Eclipse IDE for Java EE and Java Developers, Eclipse Classic, Eclipse for PHP Developers - QuickTime 7.6.6 software required for QuickTime features - Optional: GPU card for GPU-accelerated performance in Adobe Premiere Pro - Optional: Tangent CP200 family or Tangent Wave control surface for SpeedGrade - Optional: For SDI output, NVIDIA Quadro SDI Output card required for SpeedGrade - Optional: 7200 RPM hard drive (multiple fast disk drives preferred) for video products - This software will not operate without activation. Broadband Internet connection and registration are required for software activation.</td><td>- Java Runtime Environment 1.6 - Eclipse 3.7 Cocoa version (for plug-in installation of Adobe Flash Builder); the following distributions are supported: Eclipse IDE for Java EE and Java Developers, Eclipse Classic, Eclipse for PHP Developers - QuickTime 7.6.6 software required for QuickTime features - Optional: GPU card for GPU-accelerated performance in Adobe Premiere Pro - Optional: Tangent CP200 family or Tangent Wave control surface for SpeedGrade - Optional: For SDI output, NVIDIA Quadro SDI Output card required for SpeedGrade - Optional: 7200 RPM hard drive (multiple fast disk drives preferred) for video products - This software will not operate without activation. Broadband Internet connection and registration are required for software activation.</td></tr>
- </table>
- <h2>What is Xforce Keygen?</h2>
- <p>Xforce Keygen is a tool that can generate valid serial numbers and activation codes for various software products. It is also known as a crack or a patch because it bypasses the original authentication process of the software. Xforce Keygen is created by a group of hackers called X-Force who are known for cracking many popular software products such as Autodesk AutoCAD, CorelDRAW Graphics Suite, Microsoft Office, etc.</p>
- <h3>How Xforce Keygen works</h3>
- <p>Xforce Keygen works by modifying the software's licensing files to prevent the software from detecting the crack and asking for online activation. Xforce Keygen usually comes in a zip or rar file that contains the keygen executable file and a text file with instructions on how to use it.</p>
- <h3>Benefits of using Xforce Keygen</h3>
- <p>Some of the benefits of using Xforce Keygen are:</p>
- <ul>
- <li>You can get access to the full features and functions of the software without paying for it.</li>
- <li>You can use the software offline without needing an internet connection or an Adobe account.</li>
- <li>You can update the software to the latest version without losing the crack.</li>
- </ul>
- <h3>Risks of using Xforce Keygen</h3>
- <p>Some of the risks of using Xforce Keygen are:</p>
- <p>Adobe Cs6 Master Collection Crack Xforce Download<br />
- How to Activate Adobe Cs6 Master Collection with Xforce Keygen<br />
- Adobe Cs6 Master Collection Serial Number Generator by Xforce<br />
- Xforce Keygen for Adobe Cs6 Master Collection Free Download<br />
- Adobe Cs6 Master Collection Full Version with Xforce Crack<br />
- Adobe Cs6 Master Collection Xforce Keygen Only<br />
- Adobe Cs6 Master Collection Activation Code by Xforce<br />
- Adobe Cs6 Master Collection License Key from Xforce<br />
- Adobe Cs6 Master Collection Patch by Xforce Rar<br />
- Adobe Cs6 Master Collection Xforce Keygen 64 Bit<br />
- Adobe Cs6 Master Collection Xforce Keygen 32 Bit<br />
- Adobe Cs6 Master Collection Xforce Keygen Mac<br />
- Adobe Cs6 Master Collection Xforce Keygen Windows<br />
- Adobe Cs6 Master Collection Xforce Keygen Offline Activation<br />
- Adobe Cs6 Master Collection Xforce Keygen Not Working<br />
- Adobe Cs6 Master Collection Xforce Keygen Invalid Request Code<br />
- Adobe Cs6 Master Collection Xforce Keygen Error<br />
- Adobe Cs6 Master Collection Xforce Keygen Virus<br />
- Adobe Cs6 Master Collection Xforce Keygen Password<br />
- Adobe Cs6 Master Collection Xforce Keygen Zip File<br />
- Adobe Cs6 Master Collection Xforce Keygen Rar File<br />
- Adobe Cs6 Master Collection Xforce Keygen Extract<br />
- Adobe Cs6 Master Collection Xforce Keygen Install<br />
- Adobe Cs6 Master Collection Xforce Keygen Tutorial<br />
- Adobe Cs6 Master Collection Xforce Keygen Guide<br />
- Adobe Cs6 Master Collection Xforce Keygen Review<br />
- Adobe Cs6 Master Collection Xforce Keygen Test<br />
- Adobe Cs6 Master Collection Xforce Keygen Forum<br />
- Adobe Cs6 Master Collection Xforce Keygen Support<br />
- Adobe Cs6 Master Collection Xforce Keygen Help<br />
- Adobe Cs6 Master Collection Xforce Keygen Tips<br />
- Adobe Cs6 Master Collection Xforce Keygen Tricks<br />
- Adobe Cs6 Master Collection Xforce Keygen Hacks<br />
- Adobe Cs6 Master Collection Xforce Keygen Cheats<br />
- Adobe Cs6 Master Collection Xforce Keygen Tools<br />
- Adobe Cs6 Master Collection Xforce Keygen Software<br />
- Adobe Cs6 Master Collection Xforce Keygen Program<br />
- Adobe Cs6 Master Collection Xforce Keygen Application<br />
- Adobe Cs6 Master Collection Xforce Keygen Product<br />
- Adobe Cs6 Master Collection Xforce Keygen Solution<br />
- Adobe Cs6 Master Collection Xforce Keygen Alternative<br />
- Adobe Cs6 Master Collection Xforce Keygen Comparison<br />
- Adobe Cs6 Master Collection Xforce Keygen Benefits<br />
- Adobe Cs6 Master Collection Xforce Keygen Features<br />
- Adobe Cs6 Master Collection Xforce Keygen Advantages<br />
- Adobe Cs6 Master Collection Xforce Keygen Disadvantages<br />
- Adobe Cs6 Master Collection Xforce Keygen Pros and Cons<br />
- Adobe Cs6 Master Collection Xforce Keygen Quality<br />
- Adobe Cs6 Master Collection Xforce Keygen Reliability<br />
- Adobe Cs6 Master Collection Xforce Keygen Satisfaction</p>
- <ul>
- <li>You may violate the software's terms of service and end up facing legal consequences.</li>
- <li>You may expose your computer to malware or viruses that may be hidden in the keygen file or the modified files.</li>
- <li>You may compromise the quality and security of your work as the software may not function properly or may contain bugs or errors.</li>
- <li>You may miss out on some features or updates that are only available for genuine users.</li>
- </ul>
- <h2>How to download and install Adobe Cs6 Master Collection with Xforce Keygen</h2>
- <p>If you want to download and install Adobe Cs6 Master Collection with Xforce Keygen, you need to follow these steps carefully:</p>
- <h3>Step 1: Disable your network card or pull the network cable out</h3>
- <p>This is to prevent the software from connecting to the internet and verifying your serial number and activation code. You also need to make sure you don't have any of these entries in your hosts file:</p>
- <code>127.0.0.1 lmlicenses.wip4.adobe.com 127.0.0.1 lm.licenses.adobe.com</code>
- <p>The hosts file is located in C:\windows\system32\drivers\etc\hosts for Windows and /etc/hosts for Mac OS.</p>
- <h3>Step 2: Install the Master Collection CS6 with a serial generated from Xforce Keygen</h3>
- <p>You need to download Xforce Keygen from a reliable source and run it as administrator. Then, you need to select Adobe Cs6 Master Collection from the drop-down menu and click on Generate Serial. You will get a serial number that you need to copy and paste in the installation window of Adobe Cs6 Master Collection. Do not close the keygen yet. When the error "Please connect to the internet and retry" shows, click on Connect Later.</p>
- <h3>Step 3: Launch an Adobe application and confirm you have a connection problem</h3>
- <p>You need to launch any Adobe application from the Master Collection, such as Photoshop, Illustrator, or InDesign. You will see a message that says "We are unable to start your subscription for Adobe Cs6 Master Collection". Click on Having Trouble Connecting To The Internet. Then, click on Offline Activation and then on Generate Request Code. You will get a request code that you need to copy and paste in the keygen window.</p>
- <h3>Step 4: Generate and validate an activation code with Xforce Keygen</h3>
- <p>In the keygen window, click on Activate and then on Generate Activation Code. You will get an activation code that you need to copy and paste in the Adobe application window. Then, click on Activate and then on Close Application.</p>
- <h3>Step 5: Run disable_activation.cmd or disable_activation_osx as root</h3>
- <p>This is to block Adobe from accessing its servers and checking your activation status. You need to run disable_activation.cmd for Windows or disable_activation_osx for Mac OS as administrator or root. These files are usually included in the zip or rar file of Xforce Keygen. Alternatively, you can manually add these lines to your hosts file:</p>
- <code># Adobe Blocker 127.0.0.1 lmlicenses.wip4.adobe.com 127.0.0.1 lm.licenses.adobe.com</code>
- <h3>Step 6: Re-enable your network card and update your software to the latest version</h3>
- <p>This is to restore your internet connection and enjoy the latest features and updates of Adobe Cs6 Master Collection. You can use Adobe Updater to check for updates and install them without losing the crack.</p>
- <h2>Conclusion</h2>
- <p>In this article, we have explained what Adobe Cs6 Master Collection is, what Xforce Keygen is, and how to download and install Adobe Cs6 Master Collection with Xforce Keygen. We have also covered some of the benefits and risks of using this method, and answered some frequently asked questions. We hope you have found this article helpful and informative.</p>
- <h2>FAQs</h2>
- <ol>
- <li>Q: Is Xforce Keygen legal?</li>
- <li>A: No, Xforce Keygen is not legal as it violates the software's terms of service and infringes its intellectual property rights.</li>
- <li>Q: Is Xforce Keygen safe?</li>
- <li>A: No, Xforce Keygen is not safe as it may contain malware or viruses that can harm your computer or compromise your work.</li>
- <li>Q: Can I use Xforce Keygen for other software products?</li>
- <li>A: Yes, Xforce Keygen can generate serial numbers and activation codes for other software products such as Autodesk AutoCAD, CorelDRAW Graphics Suite, Microsoft Office, etc.</li>
- <li>Q: Can I use Adobe Cs6 Master Collection online after using Xforce Keygen?</li>
- <li>A: Yes, you can use Adobe Cs6 Master Collection online after using Xforce Keygen, but you may not be able to access some features or services that require online verification or registration.</li>
- <li>Q: Can I uninstall Adobe Cs6 Master Collection after using Xforce Keygen?</li>
- <li>A: Yes, you can uninstall Adobe Cs6 Master Collection after using Xforce Keygen, but you need to delete these folders as well:</li>
- <ul>
- <li>C:\Program Files (x86)\Common Files\Adobe\SLCache for Windows</li>
- <li>C:\ProgramData\Adobe\SLStore for Windows</li>
- <li>/Library/Application Support/Adobe/SLStore for Mac OS</li>
- <li>/Library/Application Support/Adobe/SLCache for Mac OS</li>
- </ul>
- </ol>
- </p> 0a6ba089eb<br />
- <br />
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Como Ativar O Malwarebytes Premium.md DELETED
@@ -1,33 +0,0 @@
- <br />
- <h1>How to activate Malwarebytes Premium</h1>
- <p>Malwarebytes Premium is a security program that protects your computer against malware, ransomware, exploits, and other online threats. To activate Malwarebytes' premium features, you need a valid license, which can be purchased from the official Malwarebytes website or from authorized resellers.</p>
- <p>In this article, we will show how to activate Malwarebytes Premium on your computer using two methods: through your Malwarebytes account or through a license key.</p>
- <h2>how to activate malwarebytes premium</h2><br /><p><b><b>Download File</b> &#10001; <a href="https://byltly.com/2uKw0W">https://byltly.com/2uKw0W</a></b></p><br /><br />
- <h2>Activate Malwarebytes Premium through your Malwarebytes account</h2>
- <p>This method requires an active Malwarebytes account login. If you have not yet created your account, see how to do so at this link: <a href="https://support.malwarebytes.com/hc/pt-br/articles/360038479154-Ativar-recursos-Premium-no-Malwarebytes-para-Windows">Create and manage your Malwarebytes account</a>.</p>
- <p>Follow the steps below to activate Malwarebytes Premium through your account:</p>
- <ol>
- <li>Download the Malwarebytes software from the official website: <a href="http://downloads.malwarebytes.org/file/mbam/">http://downloads.malwarebytes.org/file/mbam/</a> and install it on your computer.</li>
- <li>Open the Malwarebytes application.</li>
- <li>In the top-right corner of the Dashboard, click Activate license.</li>
- <li>In the Email field, enter the email address used to sign in to your Malwarebytes account.</li>
- <li>In the Password field, enter the password used to sign in to your Malwarebytes account.</li>
- <li>Click Sign in.</li>
- <li>Once your license is activated, click Done.</li>
- </ol>
- <p>Once activated, Premium will be displayed in the top-left corner of the program's Dashboard.</p>
- <h2>Activate Malwarebytes Premium through a license key</h2>
- <p>This method requires your license key, which can be found in your purchase confirmation email or in your Malwarebytes account. If you do not know where to find your license key, see this link: <a href="https://support.malwarebytes.com/hc/pt-br/articles/360038479154-Ativar-recursos-Premium-no-Malwarebytes-para-Windows">Find my Malwarebytes license key</a>.</p>
- <p>Follow the steps below to activate Malwarebytes Premium through a license key:</p>
- <ol>
- <li>Download the Malwarebytes software from the official website: <a href="http://downloads.malwarebytes.org/file/mbam/">http://downloads.malwarebytes.org/file/mbam/</a> and install it on your computer.</li>
- <li>Open the Malwarebytes application.</li>
- <li>In the top-right corner of the Dashboard, click Activate license.</li>
- <li>Click Enter license key.</li>
- <li>If your license key has the format XXXXX-XXXXX-XXXXX-XXXXX, enter it in the License key field and click Activate.</li>
- <li>If your license key has the format XXXX-XXXX-XXXX-XXXX and comes with a license ID in the format XXXXX or XXXXX-XXXXX, select My license came with a license ID below the License key entry. Enter your license ID and your license key and click Activate.</li>
- </ol>
- <p>To verify that activation was successful, Premium will be displayed in the top-left corner of the program's Dashboard.</p>
- <p></p> cec2833e83<br />
- <br />
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Adobe Photoshop Cs6 Amtlib Dll Files Everything You Need to Know.md DELETED
@@ -1,150 +0,0 @@
- <br />
- <h1>How to Crack Adobe Photoshop CS6 with Amtlib.dll Files</h1>
- <p>Adobe Photoshop CS6 is one of the most popular and powerful image editing software in the world. However, it is also one of the most expensive ones, costing hundreds of dollars for a single license. If you want to use Adobe Photoshop CS6 without paying for it, you might be tempted to crack it using Amtlib.dll files. But what are these files and how do they work? In this article, we will explain everything you need to know about cracking Adobe Photoshop CS6 with Amtlib.dll files, including the benefits, risks, and alternatives.</p>
- <h2>Crack Adobe Photoshop Cs6 Amtlib Dll Files</h2><br /><p><b><b>DOWNLOAD</b> &#10037;&#10037;&#10037; <a href="https://byltly.com/2uKyoj">https://byltly.com/2uKyoj</a></b></p><br /><br />
- <h2>What is Adobe Photoshop CS6?</h2>
- <p>Adobe Photoshop CS6 is the 13th major release of the Adobe Photoshop software, which was launched in May 2012. It is a creative image editing suite that offers a range of features and tools for professional and amateur photographers, graphic designers, web developers, and video editors. Some of the new features and enhancements in Adobe Photoshop CS6 include:</p>
- <ul>
- <li>A new user interface with dark and light themes</li>
- <li>A new Content-Aware tool that can automatically fill in gaps or remove unwanted objects from images</li>
- <li>A new Crop tool that can straighten images and adjust perspective</li>
- <li>A new Blur Gallery that can create realistic blur effects such as tilt-shift, iris, and field</li>
- <li>A new Adaptive Wide Angle filter that can correct distortion in wide-angle or fisheye lenses</li>
- <li>A new Video Editing tool that can edit video clips directly within Photoshop</li>
- <li>A new 3D engine that can render 3D graphics faster and more realistically</li>
- <li>A new Mercury Graphics Engine that can boost performance and speed up processing</li>
- <li>A new Background Save feature that can save images in the background while you work on other tasks</li>
- <li>A new Auto Save feature that can automatically save your work every few minutes</li>
- </ul>
- <h2>What is Amtlib.dll?</h2>
- <p>Amtlib.dll is a dynamic link library file that is part of the Adobe Application Manager. It is responsible for activating and validating the licenses of various Adobe products, such as Photoshop, Illustrator, Dreamweaver, Premiere Pro, After Effects, etc. It is located in the installation folder of each Adobe product.</p>
- <p>When you crack Adobe Photoshop CS6 with Amtlib.dll files, you are essentially replacing the original Amtlib.dll file with a modified one that bypasses the license verification process. This way, you can use Adobe Photoshop CS6 without entering a serial number or signing in with an Adobe ID.</p>
- <p>How to crack Adobe Photoshop Cs6 with Amtlib Dll file<br />
- Amtlib Dll file download for Adobe Photoshop Cs6 crack<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file missing error<br />
- Fix Adobe Photoshop Cs6 crack Amtlib Dll file corrupted issue<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file location on Windows<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file location on Mac<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file not working solution<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file virus scan<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file backup and restore<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file alternative methods<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file free download link<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file installation guide<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file compatibility check<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file update and patch<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file license key generator<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file activation code<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file serial number<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file registration code<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file product key<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file full version download<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file trial reset tool<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file offline installer<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file online activation<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file safe and secure download<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file latest version download<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file review and feedback<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file tutorial and tips<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file features and benefits<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file pros and cons<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file comparison and contrast<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file best practices and recommendations<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file troubleshooting and support<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file FAQs and answers<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file forum and community<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file blog and articles<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file video and audio tutorials<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file case studies and testimonials<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file coupons and discounts<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file affiliate program and commission<br />
- Adobe Photoshop Cs6 crack Amtlib Dll file refund policy and guarantee<br />
- How to uninstall Adobe Photoshop Cs6 crack Amtlib Dll file <br />
- How to upgrade from Adobe Photoshop Cs5 to Cs6 with Amtlib Dll file <br />
- How to use Adobe Photoshop Cs6 with other cracked software using Amtlib Dll files <br />
- How to fix common errors and bugs in Adobe Photoshop Cs6 with cracked Amtlib Dll files <br />
- How to optimize the performance of Adobe Photoshop Cs6 with cracked Amtlib Dll files <br />
- How to customize the settings of Adobe Photoshop Cs6 with cracked Amtlib Dll files <br />
- How to create stunning graphics and designs with Adobe Photoshop Cs6 with cracked Amtlib Dll files <br />
- How to edit photos and images with Adobe Photoshop Cs6 with cracked Amtlib Dll files <br />
- How to add filters and effects with Adobe Photoshop Cs6 with cracked Amtlib Dll files <br />
- How to share your work with others using Adobe Photoshop Cs6 with cracked Amtlib Dll files</p>
- <h2>How to Download and Install Adobe Photoshop CS6</h2>
- <p>Before you can crack Adobe Photoshop CS6 with Amtlib.dll files, you need to download and install it on your computer. Here are the steps to do so:</p>
- <ol>
- <li>Go to <a href="https://www.adobe.com/products/photoshop/free-trial-download.html">https://www.adobe.com/products/photoshop/free-trial-download.html</a> and click on "Download now".</li>
- <li>Follow the instructions on the screen to download the installer file.</li>
- <li>Run the installer file and follow the instructions on the screen to install Adobe Photoshop CS6.</li>
- <li>When prompted, choose "Try" instead of "Buy" or "Enter serial number".</li>
- <li>Wait for the installation to complete.</li>
- <li>You have now installed Adobe Photoshop CS6 as a trial version. You can use it for 30 days before it expires.</li>
- </ol>
- <h2>How to Crack Adobe Photoshop CS6 with Amtlib.dll Files</h2>
- <p>Now that you have installed Adobe Photoshop CS6 as a trial version, you can crack it using Amtlib.dll files. Here are the steps to do so:</p>
- <h3>Step 1: Download Amtlib.dll Files</h3>
- <p>The first thing you need to do is download the cracked Amtlib.dll files for both 32-bit and 64-bit versions of Adobe Photoshop CS6. You can find them from various sources online, but make sure they are safe and reliable. One possible source is <a href="https://davi24.com/download-file-amtlib-dll/">https://davi24.com/download-file-amtlib-dll/</a>, where you can download them for free.</p>
- <h3>Step 2: Locate the Installation Folder of Adobe Photoshop CS6</h3>
- <p>The next thing you need to do is locate the installation folder of Adobe Photoshop CS6 on your computer. The default location depends on your operating system and whether you have installed the 32-bit or 64-bit version of Adobe Photoshop CS6. Here are some possible locations:</p>
- <ul>
- <li>If you have Windows 10/8/7/Vista (64-bit) and have installed the 64-bit version of Adobe Photoshop CS6, go to C:\Program Files\Adobe\Adobe Photoshop CS6 (64 Bit)</li>
- <li>If you have Windows 10/8/7/Vista (64-bit) and have installed the 32-bit version of Adobe Photoshop CS6, go to C:\Program Files (x86)\Adobe\Adobe Photoshop CS6</li>
- <li>If you have Windows XP (32-bit) and have installed either version of Adobe Photoshop CS6, go to C:\Program Files\Adobe\Adobe Photoshop CS6</li>
- </ul>
- <h3>Step 3: Replace the Original Amtlib.dll File with the Cracked One</h3>
- <p>The final thing you need to do is replace the original Amtlib.dll file with the cracked one. To do this:</p>
- <ol>
- <li>Open the installation folder of Adobe Photoshop CS6.</li>
- <li>Find and rename the original Amtlib.dll file as something else, such as "Amtlib.bak". This way, you can restore it later if needed.</li>
- <li>Copy and paste the cracked Amtlib.dll file into the same folder.</li>
- <li>You have now replaced the original Amtlib.dll file with the cracked one.</li>
- </ol>
- <h3>Step 4: Run Adobe Photoshop CS6 and Enjoy</h3>
- <p>The last thing you need to do is run Adobe Photoshop CS6 and enjoy using it without any restrictions. To do this:</p>
- <ol>
- <li>Launch Adobe Photoshop CS6 from your desktop or start menu.</li>
- <li>You should not see any prompts asking for a serial number or an Adobe ID.</li>
- <li>You can now use all the features of Adobe Photoshop CS6 without any limitations.</li>
- <li>You have now successfully cracked Adobe Photoshop CS6 with Amtlib.dll files.</li>
- </ol>
- <h2>Benefits of Cracking Adobe Photoshop CS6 with Amtlib.dll Files</h2>
- <p>Cracking Adobe Photoshop CS6 with Amtlib.dll files has some benefits, such as:</p>
- <ul>
- <li>You can use Adobe Photoshop CS6 for free without paying for a license.</li>
- <li>You can use Adobe Photoshop CS6 offline without signing in with an Adobe ID.</li>
- <li>You can use Adobe Photoshop CS6 on multiple computers without any activation issues.</li>
- <li>You can use Adobe Photoshop CS6 for as long as you want without worrying about expiration dates.</li>
- </ul>
- <h2>Risks of Cracking Adobe Photoshop CS6 with Amtlib.dll Files</h2>
- <p>However, cracking Adobe Photoshop CS6 with Amtlib.dll files also has some risks, such as:</p>
- <ul>
- <li>You may violate the terms and conditions of Adobe and face legal consequences.</li>
- <li>You may expose your computer to viruses, malware, or spyware that may harm your system or steal your data.</li>
- <li>You may encounter errors, bugs, or crashes that may affect your work or damage your files.</li>
- <li>You may miss out on updates, patches, or new features that may improve your experience or fix issues.</li>
- <li>You may lose access to customer support or online services that may help you with your problems or questions.</li>
- </ul>
- <h2>Alternatives to Cracking Adobe Photoshop CS6 with Amtlib.dll Files</h2>
- <p>If you are not comfortable with cracking Adobe Photoshop CS6 with Amtlib.dll files, you may want to consider some alternatives, such as:</p>
- <ul>
- <li>Buying a legitimate license of Adobe Photoshop CS6 from the official website or an authorized reseller.</li>
- <li>Subscribing to Adobe Creative Cloud and getting access to the latest version of Adobe Photoshop and other Adobe products.</li>
- <li>Using a free trial of Adobe Photoshop CS6 for 30 days and deciding whether to buy it or not.</li>
- <li>Using a free or cheaper alternative to Adobe Photoshop CS6, such as GIMP, Paint.NET, Pixlr, etc.</li>
- </ul>
- <h2>Conclusion</h2>
- <p>In conclusion, cracking Adobe Photoshop CS6 with Amtlib.dll files is a way to use the software for free without any restrictions. However, it also comes with some drawbacks and dangers that you should be aware of. Therefore, you should weigh the pros and cons carefully before deciding to crack Adobe Photoshop CS6 with Amtlib.dll files. Alternatively, you can opt for some other options that may suit your needs and budget better. We hope this article has been helpful and informative for you. Thank you for reading!</p>
- <h2>FAQs</h2>
- <p>Here are some frequently asked questions and answers about cracking Adobe Photoshop CS6 with Amtlib.dll files:</p>
- <h4>Q: Is cracking Adobe Photoshop CS6 with Amtlib.dll files illegal?</h4>
- <p>A: Yes, cracking Adobe Photoshop CS6 with Amtlib.dll files is illegal. It violates the copyright and license agreement of Adobe and may result in legal action against you. You should respect the intellectual property rights of the software developers and pay for their products if you want to use them.</p>
- <h4>Q: Is cracking Adobe Photoshop CS6 with Amtlib.dll files safe?</h4>
- <p>A: No, cracking Adobe Photoshop CS6 with Amtlib.dll files is not safe. It may expose your computer to malicious software that may harm your system or steal your data. It may also cause errors, bugs, or crashes that may affect your work or damage your files. It may also prevent you from getting updates, patches, or new features that may improve your experience or fix issues. You should protect your computer and data by using only trusted and secure sources of software.</p>
- <h4>Q: Is cracking Adobe Photoshop CS6 with Amtlib.dll files worth it?</h4>
- <p>A: That depends on your personal preference and situation. Cracking Adobe Photoshop CS6 with Amtlib.dll files may save you some money and give you some freedom in using the software. However, it also comes with some risks and disadvantages that you should consider carefully. You may also miss out on some benefits and opportunities that come with using a legitimate version of the software. You should weigh the pros and cons carefully before deciding to crack Adobe Photoshop CS6 with Amtlib.dll files.</p>
- <h4>Q: How can I crack other Adobe products with Amtlib.dll files?</h4>
- <p>A: The process of cracking other Adobe products with Amtlib.dll files is similar to cracking Adobe Photoshop CS6. You need to download and install the trial version of the product you want to crack, then download and replace the original Amtlib.dll file with the cracked one in the installation folder of the product. However, you should be careful about the compatibility and reliability of the cracked Amtlib.dll files for different products and versions. You should also be aware of the risks and consequences of cracking other Adobe products with Amtlib.dll files.</p>
- <h4>Q: Where can I find more information about cracking Adobe Photoshop CS6 with Amtlib.dll files?</h4>
- <p>A: You can find more information about cracking Adobe Photoshop CS6 with Amtlib.dll files from various sources online, such as blogs, forums, videos, etc. However, you should be careful about the accuracy and credibility of these sources. You should also be careful about the safety and security of these sources. You should not download or click on any links or files that may contain viruses, malware, or spyware. You should also not share any personal or sensitive information that may compromise your privacy or identity.</p>
- </p> 0a6ba089eb<br />
- <br />
- <br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Minecraft on Windows 10 Everything You Need to Know.md DELETED
@@ -1,37 +0,0 @@
-
- <h1>How long does it take to download Minecraft on Windows 10?</h1>
- <p>Minecraft is one of the most popular games in the world, with millions of players exploring, building and fighting in its blocky world. If you want to join them, you might wonder how long it takes to download Minecraft on Windows 10. The answer depends on a few factors, such as your internet speed, the version of Minecraft you want to install, and the size of the game files.</p>
- <h2>how long does it take to download minecraft on windows 10</h2><br /><p><b><b>DOWNLOAD</b> &#10001; &#10001; &#10001; <a href="https://byltly.com/2uKvHZ">https://byltly.com/2uKvHZ</a></b></p><br /><br />
- <p>In this article, we will explain how to download Minecraft for Windows 10, and how long you can expect it to take. We will also compare the two versions of Minecraft available for PC: Java Edition and Bedrock Edition (also known as Windows 10 Edition).</p>
- <h2>How to download Minecraft for Windows 10</h2>
- <p>Before you can download Minecraft for Windows 10, you need to purchase the game from either the Microsoft Store or the Minecraft website. The game costs $29.99 / £24.99 / AUS$39.95, but you can get it for free or at a discounted price if you have an Xbox Game Pass subscription.</p>
- <p>Once you have bought the game, you will need to create a Microsoft account if you don't have one already. This is an Outlook email address that you can use to sign in to the Minecraft Launcher and access online features. You will also need to verify your email address and enter your birthdate and country/region.</p>
- <p>After creating your Microsoft account, you can download and open the Minecraft Launcher from either the Microsoft Store or the Minecraft website. This is where you can choose which version of Minecraft you want to install: Java Edition or Bedrock Edition.</p>
- <p></p>
- <h2>Which version of Minecraft should you install?</h2>
- <p>Minecraft Java Edition and Bedrock Edition are both compatible with Windows 10, but they have some differences in features, performance and cross-play options. Here are some of the main differences between them:</p>
- <ul>
- <li>Java Edition is the original version of Minecraft, and it has more advanced features such as modding support, custom servers and snapshots (beta versions of upcoming updates). It also has exclusive mini-games and servers such as Hypixel and Mineplex.</li>
- <li>Bedrock Edition is the newer version of Minecraft, and it has better performance, graphics and stability. It also supports cross-play with other devices that run Bedrock Edition, such as Xbox One, PlayStation 4, Nintendo Switch, iOS and Android. It also has access to the Minecraft Marketplace, where you can buy skins, maps and other content created by the community.</li>
- </ul>
- <p>The good news is that you don't have to choose between them. If you buy Minecraft for Windows 10 from the Minecraft website, you will get both Java Edition and Bedrock Edition for free. You can install both versions on your PC and switch between them using the Minecraft Launcher.</p>
- <h2>How long does it take to download Minecraft on Windows 10?</h2>
- <p>The download time for Minecraft on Windows 10 depends on your internet speed and the size of the game files. According to our tests, these are the approximate download times for each version of Minecraft:</p>
- <table>
- <tr><th>Version</th><th>File size</th><th>Download time</th></tr>
- <tr><td>Java Edition</td><td>500 MB</td><td>5 minutes</td></tr>
- <tr><td>Bedrock Edition</td><td>300 MB</td><td>3 minutes</td></tr>
- </table>
- <p>Note that these are only estimates based on average internet speeds of 25 Mbps. Your actual download time may vary depending on your internet connection and other factors.</p>
- <p>If you are having trouble downloading Minecraft on Windows 10, you can try some of these troubleshooting steps:</p>
- <ul>
- <li>Check your internet connection and make sure it is stable and fast enough.</li>
- <li>Restart your PC and router/modem.</li>
- <li>Clear your browser cache and cookies.</li>
- <li>Disable any antivirus or firewall software that may interfere with the download.</li>
- <li>Contact Microsoft or Mojang support if none of the above works.</li>
- </ul>
- <h2>Conclusion</h2>
- <p>Minecraft is a fun and creative game that you can enjoy on your Windows 10 PC. To download it, you need to buy it from either the Microsoft Store or the Minecraft website.</p> ddb901b051<br />
- <br />
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/Cccam 2.3.0 Ipk Vix [BEST].md DELETED
@@ -1,10 +0,0 @@
-
- <p>a cccam is a device that is connected to a dreambox, and it controls the picture output of the dreambox. the cccam allows you to control the dreambox picture quality and settings. you can control the contrast, brightness, and color of the picture. the cccam will allow you to preview the picture quality on the dreambox.</p>
- <h2>Cccam 2.3.0 Ipk Vix</h2><br /><p><b><b>DOWNLOAD</b> &bull; <a href="https://imgfil.com/2uxYTr">https://imgfil.com/2uxYTr</a></b></p><br /><br />
- <p>this can be a great option if you have a dreambox that is out of warranty and you want to upgrade the firmware. the cccam firmware upgrade can be installed on your dreambox if you have the dreambox installed and you have the right cables. (the dreambox will not work without the right cables, and the cccam firmware upgrade can only be installed after the dreambox is installed.)</p>
- <p>there are a few different dreambox models that support cccam firmware upgrades. the dreambox is listed on this website if it supports cccam firmware upgrades. make sure that you are installing the correct cccam firmware for your dreambox model.</p>
- <p>cccam is a great feature for those that want to upgrade their dreambox firmware. the cccam firmware upgrade will allow you to upgrade the firmware of the dreambox. the dreambox firmware upgrade can be installed on your dreambox if you have the dreambox installed and you have the right cables. the cccam firmware upgrade can only be installed after the dreambox is installed.</p>
- <p></p>
- <p>the cccam is a device that is connected to a dreambox, and it controls the picture output of the dreambox. the cccam allows you to control the dreambox picture quality and settings. you can control the contrast, brightness, and color of the picture.</p> 899543212b<br />
- <br />
- <br />
spaces/1phancelerku/anime-remove-background/Download Clash Royale 3.3024.2 APK and Join the Arena with Your Favorite Clash Characters.md DELETED
@@ -1,123 +0,0 @@
- <br />
- <h1>Clash Royale 3.3024.2 APK: Everything You Need to Know</h1>
- <p>If you are a fan of strategy games, you have probably heard of Clash Royale, one of the most popular and addictive mobile games in the world. Clash Royale is a real-time multiplayer game where you can collect and upgrade dozens of cards featuring your favorite characters from Clash of Clans, as well as spells and defenses. You can also build your own battle deck and challenge other players online in fast-paced duels.</p>
- <p>But did you know that there is a new version of Clash Royale available for download? Yes, you heard that right. Clash Royale 3.3024.2 APK is the latest update of the game, and it comes with a lot of new features, improvements, and bug fixes that will make your gaming experience even better. In this article, we will tell you everything you need to know about Clash Royale 3.3024.2 APK, including what's new, how to download and install it, and why you should play it. Let's get started!</p>
- <h2>clash royale 3.3024.2 apk</h2><br /><p><b><b>Download File</b> &#10084;&#10084;&#10084; <a href="https://jinyurl.com/2uNQ0p">https://jinyurl.com/2uNQ0p</a></b></p><br /><br />
- <h2>What is Clash Royale?</h2>
- <p>Before we dive into the details of the new version, let's have a quick recap of what Clash Royale is all about. Clash Royale is a strategy game developed by Supercell, the same company behind the hit game Clash of Clans. It was released in 2016 and has since become one of the most downloaded and played games on both Android and iOS devices.</p>
- <p>Clash Royale is a game where you can create your own army of troops, spells, and buildings, and use them to attack your opponent's towers and destroy their king tower. You can also defend your own towers from enemy attacks using various strategies and tactics. The game features different arenas, each with its own theme and difficulty level, where you can compete with other players from around the world.</p>
- <p>Clash Royale is not only a game of skill, but also a game of luck. You never know what cards you will get in your hand or what cards your opponent will play next. You have to think fast and act smart to win the battles. You can also join or create clans, where you can chat with other players, share cards, request donations, and participate in clan wars.</p>
- <p>Clash Royale is a game that is easy to learn but hard to master. It requires strategic thinking, quick reflexes, and constant adaptation to changing situations. It is also a game that is constantly updated with new content, such as cards, modes, events, rewards, and more.</p>
- <h2>What's new in Clash Royale 3.3024.2 APK?</h2>
- <p>Now that you have a general idea of what Clash Royale is, let's see what's new in the latest version of the game: Clash Royale 3.3024.2 APK. This version was released on June 19th, 2023, and it brings some exciting changes and additions to the game.</p>
- <h3>New cards and balance updates</h3>
- <p>The most noticeable change in Clash Royale 3.3024.2 APK is the introduction of two new cards: the Firecracker and the Royal Delivery. The Firecracker is a common card that costs 3 elixir and shoots fireworks that deal splash damage to enemies. The Royal Delivery is a rare card that costs 4 elixir and drops a Royal Recruit on the battlefield after a short delay. Both cards are available in Arena 7 and above.</p>
- <p>Another change in Clash Royale 3.3024.2 APK is the balance update that affects several cards in the game. Some of the cards that have been buffed are the Electro Dragon, the Goblin Cage, the Zappies, and the Heal Spirit. Some of the cards that have been nerfed are the Magic Archer, the Battle Healer, the Elixir Golem, and the Skeleton Barrel. You can check the full list of balance changes on the official website of Clash Royale.</p>
- <h3>New game modes and events</h3>
- <p>Clash Royale 3.3024.2 APK also introduces some new game modes and events that will spice up your gameplay. One of them is the Firecracker Rush, where both players start with a Firecracker on each lane, and more Firecrackers spawn throughout the match. Another one is the Royal Delivery Challenge, where you can win the new card by reaching 12 wins. There are also some seasonal events, such as the Summer of 2v2, where you can play different 2v2 modes with your friends or random partners.</p>
- <p>clash royale 3.3024.2 apk download for android<br />
- clash royale 3.3024.2 apk mod unlimited gold/gems<br />
- clash royale 3.3024.2 apk latest version uptodown<br />
- clash royale 3.3024.2 apk free download softpedia<br />
- clash royale 3.3024.2 apk update new features<br />
- clash royale 3.3024.2 apk hack no root<br />
- clash royale 3.3024.2 apk offline installer<br />
- clash royale 3.3024.2 apk mirror link<br />
- clash royale 3.3024.2 apk file size<br />
- clash royale 3.3024.2 apk gameplay review<br />
- clash royale 3.3024.2 apk old version download<br />
- clash royale 3.3024.2 apk obb data<br />
- clash royale 3.3024.2 apk direct download<br />
- clash royale 3.3024.2 apk for pc windows 10<br />
- clash royale 3.3024.2 apk cheats codes<br />
- clash royale 3.3024.2 apk original from supercell<br />
- clash royale 3.3024.2 apk android requirements<br />
- clash royale 3.3024.2 apk how to install guide<br />
- clash royale 3.3024.2 apk best decks tips<br />
- clash royale 3.3024.2 apk changelog not available<br />
- clash royale 3.3024.2 apk online multiplayer mode<br />
- clash royale 3.3024.2 apk unlimited elixir hack<br />
- clash royale 3.3024.2 apk private server download<br />
- clash royale 3.3024.2 apk full unlocked all cards<br />
- clash royale 3.3024.2 apk bug fixes and improvements<br />
- clash royale 3.3024.2 apk strategy game genre<br />
- clash royale 3.3024.2 apk compatible devices list<br />
- clash royale 3.3024.2 apk safe and secure download<br />
- clash royale 3.3024.2 apk ratings and reviews<br />
- clash royale 3.3024.2 apk screenshots and videos<br />
- clash royale 3.3024.2 apk clan wars update<br />
- clash royale 3.3024.2 apk legendary cards unlock<br />
- clash royale 3.3024.2 apk arena challenges rewards<br />
- clash royale 3.3024.2 apk new characters and skins<br />
- clash royale 3.3024.2 apk fun and addictive gameplay<br />
- clash royale 3.3024.2 apk support and feedback<br />
- clash royale 3.3024.2 apk alternative download links<br />
- clash royale 3.3024.2 apk frequently asked questions<br />
- clash royale 3.3024.2 apk no ads version premium</p>
- <p>Additionally, Clash Royale 3.3024.2 APK brings back some of the classic game modes that have been missing for a while, such as Triple Elixir, Ramp Up, Sudden Death, and Draft. You can play these modes in friendly battles, tournaments, or special challenges.</p>
- <h3>New rewards and improvements</h3>
- <p>Finally, Clash Royale 3.3024.2 APK offers some new rewards and improvements that will make your gaming experience more enjoyable and rewarding. One of them is the Pass Royale Season 11, which gives you access to exclusive perks, such as unlimited entries to special challenges, a golden name, a unique tower skin, and more. You can also unlock new emotes, chests, gold, gems, and cards by completing quests and tiers.</p>
- <p>Another improvement in Clash Royale 3.3024.2 APK is the Clan Wars 2.0 update, which is coming soon to the game. This update will revamp the clan wars system and make it more fun and competitive for all clans. You can expect new features such as boat battles, river tasks, clan leagues, and more.</p>
- <h2>How to download and install Clash Royale 3.3024.2 APK?</h2>
- <p>Now that you know what's new in Clash Royale 3.3024.2 APK, you might be wondering how to download and install it on your Android device. Don't worry, we have got you covered. Here is a step-by-step guide for you:</p>
- <h3>Requirements and permissions</h3>
- <p>Before you download and install Clash Royale 3.3024.2 APK, you need to make sure that your device meets the following requirements:</p>
- <ul>
- <li>Your device must have Android 4.1 or higher operating system.</li>
- <li>Your device must have at least 100 MB of free storage space.</li>
- <li>Your device must have a stable internet connection.</li>
- </ul>
- <p>You also need to enable the installation of apps from unknown sources in your device settings. To do this, follow these steps:</p>
- <ol>
- <li>Go to your device settings and tap on Security or Privacy.</li>
- <li>Find the option that says Unknown Sources or Install Unknown Apps and toggle it on.</li>
- <li>Confirm your choice by tapping OK or Allow.</li>
- </ol>
- <h3>Download link and installation process</h3>
- <p>Once you have met the requirements and enabled the permissions, you can proceed to download and install Clash Royale 3.3024.2 APK by following these steps:</p>
- <ol>
- <li>Click on this link to download the Clash Royale 3.3024.2 APK file on your device.</li>
- <li>Wait for the download to finish and then locate the file in your device storage or download folder.</li>
- <li>Tap on the file and then tap on Install to start the installation process.</li>
- <li>Wait for the installation to finish and then tap on Open to launch the game.</li>
- </ol>
- <h3>Troubleshooting tips and FAQs</h3>
- <p>If you encounter any problems or errors while downloading or installing Clash Royale 3.3024.2 APK, here are some troubleshooting tips and FAQs that might help you:</p>
- <ul>
- <li>If you get a message that says "App not installed" or "Installation failed", try clearing your cache and data from your device settings or use a different file manager app to install the file.</li>
- <li>If you get a message that says "There was a problem parsing the package" or "The package appears to be corrupt", try downloading the file again from a different source or check if your device is compatible with the game.</li>
- <li>If you get a message that says "This app is not available in your country" or "This app is incompatible with your device", try using a VPN app or a different Google account to access the game.</li>
- <li>If you have any other questions or issues, you can contact the Clash Royale support team or visit their official website or social media pages for more information.</li>
- </ul>
- <h2>Why should you play Clash Royale 3.3024.2 APK?</h2>
- <p>Now that you know how to download and install Clash Royale 3.3024.2 APK, you might be wondering why you should play it. Well, there are many reasons why you should play the latest version of Clash Royale, and here are some of them:</p>
- <h3>Enjoy the new features and content</h3>
- <p>One of the main reasons why you should play Clash Royale 3.3024.2 APK is to enjoy the new features and content that it offers. You can try out the new cards, such as the Firecracker and the Royal Delivery, and see how they fit in your deck and strategy. You can also play the new game modes and events, such as the Firecracker Rush and the Royal Delivery Challenge, and have fun with different rules and objectives. You can also explore the new seasonal events, such as the Summer of 2v2, and team up with your friends or random partners for some epic battles.</p>
- <h3>Compete with other players online</h3>
- <p>Another reason why you should play Clash Royale 3.3024.2 APK is to compete with other players online and test your skills and knowledge. You can join or create clans, where you can chat with other players, share cards, request donations, and participate in clan wars. You can also enter tournaments, where you can play against players from around the world and win prizes. You can also climb the ladder, where you can rank up and earn trophies and rewards.</p>
97
- <h3>Have fun and challenge yourself</h3>
98
- <p>The last reason why you should play Clash Royale 3.3024.2 APK is to have fun and challenge yourself. Clash Royale is a game that is easy to learn but hard to master. It requires strategic thinking, quick reflexes, and constant adaptation to changing situations. It is also a game that is constantly updated with new content, such as cards, modes, events, rewards, and more. You will never get bored or run out of things to do in Clash Royale.</p>
99
- <h2>Conclusion</h2>
100
- <p>In conclusion, Clash Royale 3.3024.2 APK is the latest version of the game that brings a lot of new features, improvements, and bug fixes that will make your gaming experience even better. You can download and install it on your Android device by following our guide above. You can also enjoy the new cards, game modes, events, rewards, and more that it offers. You can also compete with other players online, join or create clans, enter tournaments, climb the ladder, and have fun and challenge yourself.</p>
101
- <p>So what are you waiting for? Download Clash Royale 3.3024.2 APK now and join the millions of players who are already playing this amazing game!</p>
102
- <h2>FAQs</h2>
103
- <p>Here are some frequently asked questions about Clash Royale 3.3024.2 APK:</p>
104
- <ul>
105
- <li>Q: Is Clash Royale 3.3024.2 APK safe to download and install?</li>
106
- <li>A: Yes, Clash Royale 3.3024.2 APK is safe to download and install as long as you get it from a trusted source and follow our guide above.</li>
107
- <li>Q: Is Clash Royale 3.3024.2 APK free to play?</li>
108
- <li>A: Yes, Clash Royale 3.3024.2 APK is free to play, but it also offers in-app purchases that can enhance your gameplay.</li>
109
- <li>Q: Can I play Clash Royale 3.3024.2 APK offline?</li>
110
- <li>A: No, Clash Royale 3.3024.2 APK requires an internet connection to play.</li>
111
- <li>Q: Can I play Clash Royale 3.3024.2 APK on PC or iOS devices?</li>
112
- <li>A: No, Clash Royale 3.3024.2 APK is only compatible with Android devices.</li>
113
- <li>Q: How can I update my existing version of Clash Royale to Clash Royale 3.3024.2 APK?</li>
114
- <li>A: You can update your existing version of Clash Royale to Clash Royale 3.302 2 APK by following these steps:</li>
115
- <ol>
116
- <li>Open the Google Play Store app on your device and tap on the menu icon.</li>
117
- <li>Tap on My Apps & Games and find Clash Royale on the list of installed apps.</li>
118
- <li>Tap on Update and wait for the download and installation to finish.</li>
119
- <li>Tap on Open to launch the game.</li>
120
- </ol>
121
- </ul></p> 401be4b1e0<br />
122
- <br />
123
- <br />
 
spaces/1yukikaze/img-to-music/style.css DELETED
@@ -1,51 +0,0 @@
- #col-container {max-width: 510px; margin-left: auto; margin-right: auto;}
- a {text-decoration-line: underline; font-weight: 600;}
- div#music-output .h-full {
-     min-height: 5rem;
- }
- .footer {
-     margin-bottom: 45px;
-     margin-top: 10px;
-     text-align: center;
-     border-bottom: 1px solid #e5e5e5;
- }
- .footer>p {
-     font-size: .8rem;
-     display: inline-block;
-     padding: 0 10px;
-     transform: translateY(10px);
-     background: white;
- }
- .dark .footer {
-     border-color: #303030;
- }
- .dark .footer>p {
-     background: #0b0f19;
- }
- .animate-spin {
-     animation: spin 1s linear infinite;
- }
- @keyframes spin {
-     from {
-         transform: rotate(0deg);
-     }
-     to {
-         transform: rotate(360deg);
-     }
- }
- #share-btn-container {
-     display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem;
- }
- #share-btn {
-     all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;right:0;
- }
- #share-btn * {
-     all: unset;
- }
- #share-btn-container div:nth-child(-n+2){
-     width: auto !important;
-     min-height: 0px !important;
- }
- #share-btn-container .wrap {
-     display: none !important;
- }
 
spaces/801artistry/RVC801/Makefile DELETED
@@ -1,63 +0,0 @@
- .PHONY:
- .ONESHELL:
-
- help: ## Show this help and exit
- 	@grep -hE '^[A-Za-z0-9_ \-]*?:.*##.*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
-
- install: ## Install dependencies (Do every time you start up a paperspace machine)
- 	apt-get -y install build-essential python3-dev ffmpeg
- 	pip install --upgrade setuptools wheel
- 	pip install --upgrade pip
- 	pip install faiss-gpu fairseq gradio ffmpeg ffmpeg-python praat-parselmouth pyworld numpy==1.23.5 numba==0.56.4 librosa==0.9.1
- 	pip install -r requirements.txt
- 	pip install --upgrade lxml
- 	apt-get update
- 	apt -y install -qq aria2
-
- basev1: ## Download version 1 pre-trained models (Do only once after cloning the fork)
- 	mkdir -p pretrained uvr5_weights
- 	git pull
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D32k.pth -d pretrained -o D32k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D40k.pth -d pretrained -o D40k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/D48k.pth -d pretrained -o D48k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G32k.pth -d pretrained -o G32k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G40k.pth -d pretrained -o G40k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/G48k.pth -d pretrained -o G48k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D32k.pth -d pretrained -o f0D32k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D40k.pth -d pretrained -o f0D40k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0D48k.pth -d pretrained -o f0D48k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G32k.pth -d pretrained -o f0G32k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G40k.pth -d pretrained -o f0G40k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained/f0G48k.pth -d pretrained -o f0G48k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2-人声vocals+非人声instrumentals.pth -d uvr5_weights -o HP2-人声vocals+非人声instrumentals.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5-主旋律人声vocals+其他instrumentals.pth -d uvr5_weights -o HP5-主旋律人声vocals+其他instrumentals.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -d ./ -o hubert_base.pt
-
- basev2: ## Download version 2 pre-trained models (Do only once after cloning the fork)
- 	mkdir -p pretrained_v2 uvr5_weights
- 	git pull
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D32k.pth -d pretrained_v2 -o D32k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D40k.pth -d pretrained_v2 -o D40k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/D48k.pth -d pretrained_v2 -o D48k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G32k.pth -d pretrained_v2 -o G32k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G40k.pth -d pretrained_v2 -o G40k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/G48k.pth -d pretrained_v2 -o G48k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D32k.pth -d pretrained_v2 -o f0D32k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D40k.pth -d pretrained_v2 -o f0D40k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0D48k.pth -d pretrained_v2 -o f0D48k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G32k.pth -d pretrained_v2 -o f0G32k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G40k.pth -d pretrained_v2 -o f0G40k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/pretrained_v2/f0G48k.pth -d pretrained_v2 -o f0G48k.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP2-人声vocals+非人声instrumentals.pth -d uvr5_weights -o HP2-人声vocals+非人声instrumentals.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/uvr5_weights/HP5-主旋律人声vocals+其他instrumentals.pth -d uvr5_weights -o HP5-主旋律人声vocals+其他instrumentals.pth
- 	aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main/hubert_base.pt -d ./ -o hubert_base.pt
-
- run-ui: ## Run the python GUI
- 	python infer-web.py --paperspace --pycmd python
-
- run-cli: ## Run the python CLI
- 	python infer-web.py --pycmd python --is_cli
-
- tensorboard: ## Start the tensorboard (Run in a separate terminal)
- 	echo https://tensorboard-$$(hostname).clg07azjl.paperspacegradient.com
- 	tensorboard --logdir logs --bind_all
 
spaces/A666sxr/Genshin_TTS/stft.py DELETED
@@ -1,209 +0,0 @@
- """
- BSD 3-Clause License
- Copyright (c) 2017, Prem Seetharaman
- All rights reserved.
- * Redistribution and use in source and binary forms, with or without
- modification, are permitted provided that the following conditions are met:
- * Redistributions of source code must retain the above copyright notice,
- this list of conditions and the following disclaimer.
- * Redistributions in binary form must reproduce the above copyright notice, this
- list of conditions and the following disclaimer in the
- documentation and/or other materials provided with the distribution.
- * Neither the name of the copyright holder nor the names of its
- contributors may be used to endorse or promote products derived from this
- software without specific prior written permission.
- THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
- ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
- WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
- DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
- ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
- (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
- LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
- ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
- SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- """
-
- import torch
- import numpy as np
- import torch.nn.functional as F
- from torch.autograd import Variable
- from scipy.signal import get_window
- from librosa.util import pad_center, tiny
- import librosa.util as librosa_util
-
-
- def window_sumsquare(window, n_frames, hop_length=200, win_length=800,
-                      n_fft=800, dtype=np.float32, norm=None):
-     """
-     # from librosa 0.6
-     Compute the sum-square envelope of a window function at a given hop length.
-     This is used to estimate modulation effects induced by windowing
-     observations in short-time fourier transforms.
-     Parameters
-     ----------
-     window : string, tuple, number, callable, or list-like
-         Window specification, as in `get_window`
-     n_frames : int > 0
-         The number of analysis frames
-     hop_length : int > 0
-         The number of samples to advance between frames
-     win_length : [optional]
-         The length of the window function. By default, this matches `n_fft`.
-     n_fft : int > 0
-         The length of each analysis frame.
-     dtype : np.dtype
-         The data type of the output
-     Returns
-     -------
-     wss : np.ndarray, shape=`(n_fft + hop_length * (n_frames - 1))`
-         The sum-squared envelope of the window function
-     """
-     if win_length is None:
-         win_length = n_fft
-
-     n = n_fft + hop_length * (n_frames - 1)
-     x = np.zeros(n, dtype=dtype)
-
-     # Compute the squared window at the desired length
-     win_sq = get_window(window, win_length, fftbins=True)
-     win_sq = librosa_util.normalize(win_sq, norm=norm)**2
-     win_sq = librosa_util.pad_center(win_sq, n_fft)
-
-     # Fill the envelope
-     for i in range(n_frames):
-         sample = i * hop_length
-         x[sample:min(n, sample + n_fft)] += win_sq[:max(0, min(n_fft, n - sample))]
-     return x
-
-
- class STFT(torch.nn.Module):
-     """adapted from Prem Seetharaman's https://github.com/pseeth/pytorch-stft"""
-     def __init__(self, filter_length=800, hop_length=200, win_length=800,
-                  window='hann'):
-         super(STFT, self).__init__()
-         self.filter_length = filter_length
-         self.hop_length = hop_length
-         self.win_length = win_length
-         self.window = window
-         self.forward_transform = None
-         scale = self.filter_length / self.hop_length
-         fourier_basis = np.fft.fft(np.eye(self.filter_length))
-
-         cutoff = int((self.filter_length / 2 + 1))
-         fourier_basis = np.vstack([np.real(fourier_basis[:cutoff, :]),
-                                    np.imag(fourier_basis[:cutoff, :])])
-
-         forward_basis = torch.FloatTensor(fourier_basis[:, None, :])
-         inverse_basis = torch.FloatTensor(
-             np.linalg.pinv(scale * fourier_basis).T[:, None, :])
-
-         if window is not None:
-             assert(filter_length >= win_length)
-             # get window and zero center pad it to filter_length
-             fft_window = get_window(window, win_length, fftbins=True)
-             fft_window = pad_center(fft_window, filter_length)
-             fft_window = torch.from_numpy(fft_window).float()
-
-             # window the bases
-             forward_basis *= fft_window
-             inverse_basis *= fft_window
-
-         self.register_buffer('forward_basis', forward_basis.float())
-         self.register_buffer('inverse_basis', inverse_basis.float())
-
-     def transform(self, input_data):
-         num_batches = input_data.size(0)
-         num_samples = input_data.size(1)
-
-         self.num_samples = num_samples
-
-         # similar to librosa, reflect-pad the input
-         input_data = input_data.view(num_batches, 1, num_samples)
-         input_data = F.pad(
-             input_data.unsqueeze(1),
-             (int(self.filter_length / 2), int(self.filter_length / 2), 0, 0),
-             mode='reflect')
-         input_data = input_data.squeeze(1)
-
-         forward_transform = F.conv1d(
-             input_data,
-             Variable(self.forward_basis, requires_grad=False),
-             stride=self.hop_length,
-             padding=0)
-
-         cutoff = int((self.filter_length / 2) + 1)
-         real_part = forward_transform[:, :cutoff, :]
-         imag_part = forward_transform[:, cutoff:, :]
-
-         magnitude = torch.sqrt(real_part**2 + imag_part**2)
-         phase = torch.autograd.Variable(
-             torch.atan2(imag_part.data, real_part.data))
-
-         return magnitude, phase
-
-     def inverse(self, magnitude, phase):
-         recombine_magnitude_phase = torch.cat(
-             [magnitude*torch.cos(phase), magnitude*torch.sin(phase)], dim=1)
-
-         inverse_transform = F.conv_transpose1d(
-             recombine_magnitude_phase,
-             Variable(self.inverse_basis, requires_grad=False),
-             stride=self.hop_length,
-             padding=0)
-
-         if self.window is not None:
-             window_sum = window_sumsquare(
-                 self.window, magnitude.size(-1), hop_length=self.hop_length,
-                 win_length=self.win_length, n_fft=self.filter_length,
-                 dtype=np.float32)
-             # remove modulation effects
-             approx_nonzero_indices = torch.from_numpy(
-                 np.where(window_sum > tiny(window_sum))[0])
-             window_sum = torch.autograd.Variable(
-                 torch.from_numpy(window_sum), requires_grad=False)
-             # move the window envelope onto the same device as the transform
-             # (`device` is a tensor property, not a method)
-             window_sum = window_sum.to(inverse_transform.device) if magnitude.is_cuda else window_sum
-             inverse_transform[:, :, approx_nonzero_indices] /= window_sum[approx_nonzero_indices]
-
-             # scale by hop ratio
-             inverse_transform *= float(self.filter_length) / self.hop_length
-
-         inverse_transform = inverse_transform[:, :, int(self.filter_length/2):]
-         inverse_transform = inverse_transform[:, :, :-int(self.filter_length/2)]
-
-         return inverse_transform
-
-     def forward(self, input_data):
-         self.magnitude, self.phase = self.transform(input_data)
-         reconstruction = self.inverse(self.magnitude, self.phase)
-         return reconstruction
-
-
- class TorchSTFT(torch.nn.Module):
-     def __init__(self, filter_length=800, hop_length=200, win_length=800, window='hann'):
-         super().__init__()
-         self.filter_length = filter_length
-         self.hop_length = hop_length
-         self.win_length = win_length
-         self.window = torch.from_numpy(get_window(window, win_length, fftbins=True).astype(np.float32))
-
-     def transform(self, input_data):
-         forward_transform = torch.stft(
-             input_data,
-             self.filter_length, self.hop_length, self.win_length, window=self.window,
-             return_complex=True)
-
-         return torch.abs(forward_transform), torch.angle(forward_transform)
-
-     def inverse(self, magnitude, phase):
-         inverse_transform = torch.istft(
-             magnitude * torch.exp(phase * 1j),
-             self.filter_length, self.hop_length, self.win_length, window=self.window.to(magnitude.device))
-
-         return inverse_transform.unsqueeze(-2)  # unsqueeze to stay consistent with conv_transpose1d implementation
-
-     def forward(self, input_data):
-         self.magnitude, self.phase = self.transform(input_data)
-         reconstruction = self.inverse(self.magnitude, self.phase)
-         return reconstruction
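For reference, a minimal round-trip sketch of the modules above (an illustration, not part of the deleted file; the batch and sample sizes are arbitrary, and the 800/200 values match the file's defaults):

    import torch
    # x: batch of mono waveforms, shape (batch, num_samples)
    x = torch.randn(2, 16000)
    stft = TorchSTFT(filter_length=800, hop_length=200, win_length=800)
    magnitude, phase = stft.transform(x)   # (batch, freq_bins, frames)
    y = stft.inverse(magnitude, phase)     # reconstructed waveform, (batch, 1, num_samples)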
 
spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/loss.py DELETED
@@ -1,398 +0,0 @@
- from multiprocessing.sharedctypes import Value
- import torch
- import torch.distributed.nn
- from torch import distributed as dist, nn as nn
- from torch.nn import functional as F
- import numpy as np
- from sklearn.metrics import average_precision_score, roc_auc_score, accuracy_score
-
- try:
-     import horovod.torch as hvd
- except ImportError:
-     hvd = None
-
-
- def gather_features(
-     audio_features,
-     text_features,
-     audio_features_mlp=None,
-     text_features_mlp=None,
-     local_loss=False,
-     gather_with_grad=False,
-     rank=0,
-     world_size=1,
-     use_horovod=False,
-     mlp_loss=False,
- ):
-     if use_horovod:
-         assert hvd is not None, "Please install horovod"
-         if gather_with_grad:
-             all_audio_features = hvd.allgather(audio_features)
-             all_text_features = hvd.allgather(text_features)
-             if mlp_loss:
-                 all_audio_features_mlp = hvd.allgather(audio_features_mlp)
-                 all_text_features_mlp = hvd.allgather(text_features_mlp)
-         else:
-             with torch.no_grad():
-                 all_audio_features = hvd.allgather(audio_features)
-                 all_text_features = hvd.allgather(text_features)
-                 if mlp_loss:
-                     all_audio_features_mlp = hvd.allgather(audio_features_mlp)
-                     all_text_features_mlp = hvd.allgather(text_features_mlp)
-             if not local_loss:
-                 # ensure grads for local rank when all_* features don't have a gradient
-                 gathered_audio_features = list(
-                     all_audio_features.chunk(world_size, dim=0)
-                 )
-                 gathered_text_features = list(
-                     all_text_features.chunk(world_size, dim=0)
-                 )
-                 gathered_audio_features[rank] = audio_features
-                 gathered_text_features[rank] = text_features
-                 all_audio_features = torch.cat(gathered_audio_features, dim=0)
-                 all_text_features = torch.cat(gathered_text_features, dim=0)
-                 if mlp_loss:
-                     gathered_audio_features_mlp = list(
-                         all_audio_features_mlp.chunk(world_size, dim=0)
-                     )
-                     gathered_text_features_mlp = list(
-                         all_text_features_mlp.chunk(world_size, dim=0)
-                     )
-                     gathered_audio_features_mlp[rank] = audio_features_mlp
-                     gathered_text_features_mlp[rank] = text_features_mlp
-                     all_audio_features_mlp = torch.cat(
-                         gathered_audio_features_mlp, dim=0
-                     )
-                     all_text_features_mlp = torch.cat(gathered_text_features_mlp, dim=0)
-     else:
-         # We gather tensors from all gpus
-         if gather_with_grad:
-             all_audio_features = torch.cat(
-                 torch.distributed.nn.all_gather(audio_features), dim=0
-             )
-             all_text_features = torch.cat(
-                 torch.distributed.nn.all_gather(text_features), dim=0
-             )
-             if mlp_loss:
-                 all_audio_features_mlp = torch.cat(
-                     torch.distributed.nn.all_gather(audio_features_mlp), dim=0
-                 )
-                 all_text_features_mlp = torch.cat(
-                     torch.distributed.nn.all_gather(text_features_mlp), dim=0
-                 )
-         else:
-             gathered_audio_features = [
-                 torch.zeros_like(audio_features) for _ in range(world_size)
-             ]
-             gathered_text_features = [
-                 torch.zeros_like(text_features) for _ in range(world_size)
-             ]
-             dist.all_gather(gathered_audio_features, audio_features)
-             dist.all_gather(gathered_text_features, text_features)
-             if mlp_loss:
-                 gathered_audio_features_mlp = [
-                     torch.zeros_like(audio_features_mlp) for _ in range(world_size)
-                 ]
-                 gathered_text_features_mlp = [
-                     torch.zeros_like(text_features_mlp) for _ in range(world_size)
-                 ]
-                 dist.all_gather(gathered_audio_features_mlp, audio_features_mlp)
-                 dist.all_gather(gathered_text_features_mlp, text_features_mlp)
-             if not local_loss:
-                 # ensure grads for local rank when all_* features don't have a gradient
-                 gathered_audio_features[rank] = audio_features
-                 gathered_text_features[rank] = text_features
-                 if mlp_loss:
-                     gathered_audio_features_mlp[rank] = audio_features_mlp
-                     gathered_text_features_mlp[rank] = text_features_mlp
-
-             all_audio_features = torch.cat(gathered_audio_features, dim=0)
-             all_text_features = torch.cat(gathered_text_features, dim=0)
-             if mlp_loss:
-                 all_audio_features_mlp = torch.cat(gathered_audio_features_mlp, dim=0)
-                 all_text_features_mlp = torch.cat(gathered_text_features_mlp, dim=0)
-     if mlp_loss:
-         return (
-             all_audio_features,
-             all_text_features,
-             all_audio_features_mlp,
-             all_text_features_mlp,
-         )
-     else:
-         return all_audio_features, all_text_features
-
-
- class ClipLoss(nn.Module):
-     def __init__(
-         self,
-         local_loss=False,
-         gather_with_grad=False,
-         cache_labels=False,
-         rank=0,
-         world_size=1,
-         use_horovod=False,
-         mlp_loss=False,
-         weight_loss_kappa=0,
-     ):
-         super().__init__()
-         self.local_loss = local_loss
-         self.gather_with_grad = gather_with_grad
-         self.cache_labels = cache_labels
-         self.rank = rank
-         self.world_size = world_size
-         self.use_horovod = use_horovod
-         self.mlp_loss = mlp_loss
-         self.weighted_loss = bool(weight_loss_kappa != 0)
-         self.weight_loss_kappa = weight_loss_kappa
-         # cache state
-         self.prev_num_logits = 0
-         self.labels = {}
-
-     def forward(
-         self,
-         audio_features,
-         text_features,
-         logit_scale_a,
-         logit_scale_t=None,
-         audio_features_mlp=None,
-         text_features_mlp=None,
-     ):
-         device = audio_features.device
-         if self.mlp_loss:
-             if self.world_size > 1:
-                 (
-                     all_audio_features,
-                     all_text_features,
-                     all_audio_features_mlp,
-                     all_text_features_mlp,
-                 ) = gather_features(
-                     audio_features=audio_features,
-                     text_features=text_features,
-                     audio_features_mlp=audio_features_mlp,
-                     text_features_mlp=text_features_mlp,
-                     local_loss=self.local_loss,
-                     gather_with_grad=self.gather_with_grad,
-                     rank=self.rank,
-                     world_size=self.world_size,
-                     use_horovod=self.use_horovod,
-                     mlp_loss=self.mlp_loss,
-                 )
-                 if self.local_loss:
-                     a_logits_per_audio = (
-                         logit_scale_a * audio_features @ all_text_features_mlp.T
-                     )
-                     a_logits_per_text = (
-                         logit_scale_a * text_features_mlp @ all_audio_features.T
-                     )
-                     t_logits_per_audio = (
-                         logit_scale_t * audio_features_mlp @ all_text_features.T
-                     )
-                     t_logits_per_text = (
-                         logit_scale_t * text_features @ all_audio_features_mlp.T
-                     )
-                 else:
-                     a_logits_per_audio = (
-                         logit_scale_a * all_audio_features @ all_text_features_mlp.T
-                     )
-                     a_logits_per_text = a_logits_per_audio.T
-                     t_logits_per_audio = (
-                         logit_scale_t * all_audio_features_mlp @ all_text_features.T
-                     )
-                     t_logits_per_text = t_logits_per_audio.T
-             else:
-                 a_logits_per_audio = (
-                     logit_scale_a * audio_features @ text_features_mlp.T
-                 )
-                 a_logits_per_text = logit_scale_a * text_features_mlp @ audio_features.T
-                 t_logits_per_audio = (
-                     logit_scale_t * audio_features_mlp @ text_features.T
-                 )
-                 t_logits_per_text = logit_scale_t * text_features @ audio_features_mlp.T
-
-             # calculate ground-truth and cache if enabled
-             num_logits = a_logits_per_audio.shape[0]
-             if self.prev_num_logits != num_logits or device not in self.labels:
-                 labels = torch.arange(num_logits, device=device, dtype=torch.long)
-                 if self.world_size > 1 and self.local_loss:
-                     labels = labels + num_logits * self.rank
-                 if self.cache_labels:
-                     self.labels[device] = labels
-                     self.prev_num_logits = num_logits
-             else:
-                 labels = self.labels[device]
-
-             if not self.weighted_loss:
-                 total_loss = (
-                     F.cross_entropy(a_logits_per_audio, labels)
-                     + F.cross_entropy(a_logits_per_text, labels)
-                     + F.cross_entropy(t_logits_per_audio, labels)
-                     + F.cross_entropy(t_logits_per_text, labels)
-                 ) / 4
-             else:
-                 audio_weight = (audio_features @ audio_features.T).detach()
-                 audio_weight = (
-                     torch.exp(
-                         torch.sum(audio_weight, axis=1)
-                         / (self.weight_loss_kappa * len(audio_weight))
-                     )
-                 ).detach()
-                 text_weight = (text_features @ text_features.T).detach()
-                 text_weight = (
-                     torch.exp(
-                         torch.sum(text_weight, axis=1)
-                         / (self.weight_loss_kappa * len(text_features))
-                     )
-                 ).detach()
-                 total_loss = (
-                     F.cross_entropy(a_logits_per_audio, labels, weight=audio_weight)
-                     + F.cross_entropy(a_logits_per_text, labels, weight=audio_weight)
-                     + F.cross_entropy(t_logits_per_audio, labels, weight=text_weight)
-                     + F.cross_entropy(t_logits_per_text, labels, weight=text_weight)
-                 ) / 4
-         else:
-             if self.world_size > 1:
-                 all_audio_features, all_text_features = gather_features(
-                     audio_features=audio_features,
-                     text_features=text_features,
-                     local_loss=self.local_loss,
-                     gather_with_grad=self.gather_with_grad,
-                     rank=self.rank,
-                     world_size=self.world_size,
-                     use_horovod=self.use_horovod,
-                     mlp_loss=self.mlp_loss,
-                 )
-
-                 if self.local_loss:
-                     logits_per_audio = (
-                         logit_scale_a * audio_features @ all_text_features.T
-                     )
-                     logits_per_text = (
-                         logit_scale_a * text_features @ all_audio_features.T
-                     )
-                 else:
-                     logits_per_audio = (
-                         logit_scale_a * all_audio_features @ all_text_features.T
-                     )
-                     logits_per_text = logits_per_audio.T
-             else:
-                 logits_per_audio = logit_scale_a * audio_features @ text_features.T
-                 logits_per_text = logit_scale_a * text_features @ audio_features.T
-
-             # calculate ground-truth and cache if enabled
-             num_logits = logits_per_audio.shape[0]
-             if self.prev_num_logits != num_logits or device not in self.labels:
-                 labels = torch.arange(num_logits, device=device, dtype=torch.long)
-                 if self.world_size > 1 and self.local_loss:
-                     labels = labels + num_logits * self.rank
-                 if self.cache_labels:
-                     self.labels[device] = labels
-                     self.prev_num_logits = num_logits
-             else:
-                 labels = self.labels[device]
-             if not self.weighted_loss:
-                 total_loss = (
-                     F.cross_entropy(logits_per_audio, labels)
-                     + F.cross_entropy(logits_per_text, labels)
-                 ) / 2
-             else:
-                 audio_weight = (all_audio_features @ all_audio_features.T).detach()
-                 audio_weight = (
-                     torch.exp(
-                         torch.sum(audio_weight, axis=1)
-                         / (self.weight_loss_kappa * len(all_audio_features))
-                     )
-                 ).detach()
-                 text_weight = (all_text_features @ all_text_features.T).detach()
-                 text_weight = (
-                     torch.exp(
-                         torch.sum(text_weight, axis=1)
-                         / (self.weight_loss_kappa * len(all_text_features))
-                     )
-                 ).detach()
-                 total_loss = (
-                     F.cross_entropy(logits_per_audio, labels, weight=text_weight)
-                     + F.cross_entropy(logits_per_text, labels, weight=audio_weight)
-                 ) / 2
-         return total_loss
-
-
- def lp_gather_features(pred, target, world_size=1, use_horovod=False):
-     if use_horovod:
-         assert hvd is not None, "Please install horovod"
-         with torch.no_grad():
-             all_preds = hvd.allgather(pred)
-             all_targets = hvd.allgather(target)
-     else:
-         gathered_preds = [torch.zeros_like(pred) for _ in range(world_size)]
-         gathered_targets = [torch.zeros_like(target) for _ in range(world_size)]
-
-         dist.all_gather(gathered_preds, pred)
-         dist.all_gather(gathered_targets, target)
-         all_preds = torch.cat(gathered_preds, dim=0)
-         all_targets = torch.cat(gathered_targets, dim=0)
-
-     return all_preds, all_targets
-
-
- def get_map(pred, target):
-     pred = torch.sigmoid(pred).numpy()
-     target = target.numpy()
-     return np.mean(average_precision_score(target, pred, average=None))
-
-
- def get_acc(pred, target):
-     pred = torch.argmax(pred, 1).numpy()
-     target = torch.argmax(target, 1).numpy()
-     return accuracy_score(target, pred)
-
-
- def get_mauc(pred, target):
-     pred = torch.sigmoid(pred).numpy()
-     target = target.numpy()
-     return np.mean(roc_auc_score(target, pred, average=None))
-
-
- class LPMetrics(object):
-     def __init__(self, metric_names=["map", "acc", "mauc"]):
-         self.metrics = []
-         for name in metric_names:
-             self.metrics.append(self.get_metric(name))
-         self.metric_names = metric_names
-
-     def get_metric(self, name):
-         if name == "map":
-             return get_map
-         elif name == "acc":
-             return get_acc
-         elif name == "mauc":
-             return get_mauc
-         else:
-             raise ValueError(f"the metric should be at least one of [map, acc, mauc]")
-
-     def evaluate_mertics(self, pred, target):
-         metric_dict = {}
-         for i in range(len(self.metric_names)):
-             metric_dict[self.metric_names[i]] = self.metrics[i](pred, target)
-         return metric_dict
-
-
- def calc_celoss(pred, target):
-     target = torch.argmax(target, 1).long()
-     return nn.CrossEntropyLoss()(pred, target)
-
-
- class LPLoss(nn.Module):
-     def __init__(self, loss_name):
-         super().__init__()
-         if loss_name == "bce":
-             self.loss_func = nn.BCEWithLogitsLoss()
-         elif loss_name == "ce":
-             self.loss_func = calc_celoss
-         elif loss_name == "mse":
-             self.loss_func = nn.MSELoss()
-         else:
-             raise ValueError(f"the loss func should be at least one of [bce, ce, mse]")
-
-     def forward(self, pred, target):
-         loss = self.loss_func(pred, target)
-         return loss
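A minimal single-process usage sketch of ClipLoss (illustrative only; the batch size and feature dimension are assumptions, and the scalar plays the role of CLIP's exp(temperature)):

    import torch
    import torch.nn.functional as F
    loss_fn = ClipLoss()  # defaults: world_size=1, no MLP heads, unweighted
    audio_features = F.normalize(torch.randn(8, 512), dim=-1)
    text_features = F.normalize(torch.randn(8, 512), dim=-1)
    logit_scale_a = torch.tensor(100.0)
    loss = loss_fn(audio_features, text_features, logit_scale_a)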
 
spaces/AIFILMS/audioldm-text-to-audio-generation/audioldm/clap/open_clip/openai.py DELETED
@@ -1,156 +0,0 @@
- """ OpenAI pretrained model functions
-
- Adapted from https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI.
- """
-
- import os
- import warnings
- from typing import Union, List
-
- import torch
-
- from .model import build_model_from_openai_state_dict
- from .pretrained import (
-     get_pretrained_url,
-     list_pretrained_tag_models,
-     download_pretrained,
- )
-
- __all__ = ["list_openai_models", "load_openai_model"]
-
-
- def list_openai_models() -> List[str]:
-     """Returns the names of available CLIP models"""
-     return list_pretrained_tag_models("openai")
-
-
- def load_openai_model(
-     name: str,
-     model_cfg,
-     device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu",
-     jit=True,
-     cache_dir=os.path.expanduser("~/.cache/clip"),
-     enable_fusion: bool = False,
-     fusion_type: str = "None",
- ):
-     """Load a CLIP model, preserve its text pretrained part, and set in the CLAP model
-
-     Parameters
-     ----------
-     name : str
-         A model name listed by `clip.available_models()`, or the path to a model checkpoint containing the state_dict
-     device : Union[str, torch.device]
-         The device to put the loaded model
-     jit : bool
-         Whether to load the optimized JIT model (default) or more hackable non-JIT model.
-
-     Returns
-     -------
-     model : torch.nn.Module
-         The CLAP model
-     preprocess : Callable[[PIL.Image], torch.Tensor]
-         A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input
-     """
-     if get_pretrained_url(name, "openai"):
-         model_path = download_pretrained(
-             get_pretrained_url(name, "openai"), root=cache_dir
-         )
-     elif os.path.isfile(name):
-         model_path = name
-     else:
-         raise RuntimeError(
-             f"Model {name} not found; available models = {list_openai_models()}"
-         )
-
-     try:
-         # loading JIT archive
-         model = torch.jit.load(model_path, map_location=device if jit else "cpu").eval()
-         state_dict = None
-     except RuntimeError:
-         # loading saved state dict
-         if jit:
-             warnings.warn(
-                 f"File {model_path} is not a JIT archive. Loading as a state dict instead"
-             )
-             jit = False
-         state_dict = torch.load(model_path, map_location="cpu")
-
-     if not jit:
-         try:
-             model = build_model_from_openai_state_dict(
-                 state_dict or model.state_dict(), model_cfg, enable_fusion, fusion_type
-             ).to(device)
-         except KeyError:
-             sd = {k[7:]: v for k, v in state_dict["state_dict"].items()}
-             model = build_model_from_openai_state_dict(
-                 sd, model_cfg, enable_fusion, fusion_type
-             ).to(device)
-
-         if str(device) == "cpu":
-             model.float()
-         return model
-
-     # patch the device names
-     device_holder = torch.jit.trace(
-         lambda: torch.ones([]).to(torch.device(device)), example_inputs=[]
-     )
-     device_node = [
-         n
-         for n in device_holder.graph.findAllNodes("prim::Constant")
-         if "Device" in repr(n)
-     ][-1]
-
-     def patch_device(module):
-         try:
-             graphs = [module.graph] if hasattr(module, "graph") else []
-         except RuntimeError:
-             graphs = []
-
-         if hasattr(module, "forward1"):
-             graphs.append(module.forward1.graph)
-
-         for graph in graphs:
-             for node in graph.findAllNodes("prim::Constant"):
-                 if "value" in node.attributeNames() and str(node["value"]).startswith(
-                     "cuda"
-                 ):
-                     node.copyAttributes(device_node)
-
-     model.apply(patch_device)
-     patch_device(model.encode_audio)
-     patch_device(model.encode_text)
-
-     # patch dtype to float32 on CPU
-     if str(device) == "cpu":
-         float_holder = torch.jit.trace(
-             lambda: torch.ones([]).float(), example_inputs=[]
-         )
-         float_input = list(float_holder.graph.findNode("aten::to").inputs())[1]
-         float_node = float_input.node()
-
-         def patch_float(module):
-             try:
-                 graphs = [module.graph] if hasattr(module, "graph") else []
-             except RuntimeError:
-                 graphs = []
-
-             if hasattr(module, "forward1"):
-                 graphs.append(module.forward1.graph)
-
-             for graph in graphs:
-                 for node in graph.findAllNodes("aten::to"):
-                     inputs = list(node.inputs())
-                     for i in [
-                         1,
-                         2,
-                     ]:  # dtype can be the second or third argument to aten::to()
-                         if inputs[i].node()["value"] == 5:
-                             inputs[i].node().copyAttributes(float_node)
-
-         model.apply(patch_float)
-         patch_float(model.encode_audio)
-         patch_float(model.encode_text)
-         model.float()
-
-     model.audio_branch.audio_length = model.audio_cfg.audio_length
-     return model
 
spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/os_utils.py DELETED
@@ -1,20 +0,0 @@
- import os
- import subprocess
-
-
- def link_file(from_file, to_file):
-     subprocess.check_call(
-         f'ln -s "`realpath --relative-to="{os.path.dirname(to_file)}" "{from_file}"`" "{to_file}"', shell=True)
-
-
- def move_file(from_file, to_file):
-     subprocess.check_call(f'mv "{from_file}" "{to_file}"', shell=True)
-
-
- def copy_file(from_file, to_file):
-     subprocess.check_call(f'cp -r "{from_file}" "{to_file}"', shell=True)
-
-
- def remove_file(*fns):
-     for f in fns:
-         subprocess.check_call(f'rm -rf "{f}"', shell=True)
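A short usage sketch of these shell wrappers (the paths are hypothetical):

    copy_file("data/raw", "data/backup")        # recursive copy via `cp -r`
    link_file("data/backup", "data/latest")     # relative symlink via `ln -s`
    move_file("data/latest/tmp.txt", "data/tmp.txt")
    remove_file("data/tmp.txt", "data/backup")  # `rm -rf` each argument

Note that all four shell out to POSIX tools (`realpath`, `ln`, `cp`, `mv`, `rm`), so they are not portable to Windows.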
 
spaces/ASJMO/freegpt/client/css/options.css DELETED
@@ -1,10 +0,0 @@
- .options-container {
-     display: flex;
-     flex-wrap: wrap;
- }
-
- @media screen and (max-width: 990px) {
-     .options-container {
-         justify-content: space-between;
-     }
- }
 
spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/models/builders.py DELETED
@@ -1,218 +0,0 @@
- # Copyright (c) Meta Platforms, Inc. and affiliates.
- # All rights reserved.
- #
- # This source code is licensed under the license found in the
- # LICENSE file in the root directory of this source tree.
-
- """
- All the functions to build the relevant models and modules
- from the Hydra config.
- """
-
- import typing as tp
- import warnings
-
- import audiocraft
- import omegaconf
- import torch
-
- from .encodec import CompressionModel, EncodecModel, FlattenedCompressionModel  # noqa
- from .lm import LMModel
- from ..modules.codebooks_patterns import (
-     CodebooksPatternProvider,
-     DelayedPatternProvider,
-     ParallelPatternProvider,
-     UnrolledPatternProvider,
-     VALLEPattern,
-     MusicLMPattern,
- )
- from ..modules.conditioners import (
-     BaseConditioner,
-     ConditioningProvider,
-     LUTConditioner,
-     T5Conditioner,
-     ConditionFuser,
-     ChromaStemConditioner,
- )
- from .. import quantization as qt
- from ..utils.utils import dict_from_config
-
-
- def get_quantizer(quantizer: str, cfg: omegaconf.DictConfig, dimension: int) -> qt.BaseQuantizer:
-     klass = {
-         'no_quant': qt.DummyQuantizer,
-         'rvq': qt.ResidualVectorQuantizer
-     }[quantizer]
-     kwargs = dict_from_config(getattr(cfg, quantizer))
-     if quantizer != 'no_quant':
-         kwargs['dimension'] = dimension
-     return klass(**kwargs)
-
-
- def get_encodec_autoencoder(encoder_name: str, cfg: omegaconf.DictConfig):
-     if encoder_name == 'seanet':
-         kwargs = dict_from_config(getattr(cfg, 'seanet'))
-         encoder_override_kwargs = kwargs.pop('encoder')
-         decoder_override_kwargs = kwargs.pop('decoder')
-         encoder_kwargs = {**kwargs, **encoder_override_kwargs}
-         decoder_kwargs = {**kwargs, **decoder_override_kwargs}
-         encoder = audiocraft.modules.SEANetEncoder(**encoder_kwargs)
-         decoder = audiocraft.modules.SEANetDecoder(**decoder_kwargs)
-         return encoder, decoder
-     else:
-         raise KeyError(f'Unexpected compression model {cfg.compression_model}')
-
-
- def get_compression_model(cfg: omegaconf.DictConfig) -> CompressionModel:
-     """Instantiate a compression model.
-     """
-     if cfg.compression_model == 'encodec':
-         kwargs = dict_from_config(getattr(cfg, 'encodec'))
-         encoder_name = kwargs.pop('autoencoder')
-         quantizer_name = kwargs.pop('quantizer')
-         encoder, decoder = get_encodec_autoencoder(encoder_name, cfg)
-         quantizer = get_quantizer(quantizer_name, cfg, encoder.dimension)
-         frame_rate = kwargs['sample_rate'] // encoder.hop_length
-         renormalize = kwargs.pop('renormalize', None)
-         renorm = kwargs.pop('renorm')
-         if renormalize is None:
-             renormalize = renorm is not None
-             warnings.warn("You are using a deprecated EnCodec model. Please migrate to new renormalization.")
-         return EncodecModel(encoder, decoder, quantizer,
-                             frame_rate=frame_rate, renormalize=renormalize, **kwargs).to(cfg.device)
-     else:
-         raise KeyError(f'Unexpected compression model {cfg.compression_model}')
-
-
- def get_lm_model(cfg: omegaconf.DictConfig) -> LMModel:
-     """Instantiate a transformer LM.
-     """
-     if cfg.lm_model == 'transformer_lm':
-         kwargs = dict_from_config(getattr(cfg, 'transformer_lm'))
-         n_q = kwargs['n_q']
-         q_modeling = kwargs.pop('q_modeling', None)
-         codebooks_pattern_cfg = getattr(cfg, 'codebooks_pattern')
-         attribute_dropout = dict_from_config(getattr(cfg, 'attribute_dropout'))
-         cls_free_guidance = dict_from_config(getattr(cfg, 'classifier_free_guidance'))
-         cfg_prob, cfg_coef = cls_free_guidance["training_dropout"], cls_free_guidance["inference_coef"]
-         fuser = get_condition_fuser(cfg)
-         condition_provider = get_conditioner_provider(kwargs["dim"], cfg).to(cfg.device)
-         if len(fuser.fuse2cond['cross']) > 0:  # enforce cross-attention programmatically
-             kwargs['cross_attention'] = True
-         if codebooks_pattern_cfg.modeling is None:
-             assert q_modeling is not None, \
-                 'LM model should either have a codebook pattern defined or transformer_lm.q_modeling'
-             codebooks_pattern_cfg = omegaconf.OmegaConf.create(
-                 {'modeling': q_modeling, 'delay': {'delays': list(range(n_q))}}
-             )
-         pattern_provider = get_codebooks_pattern_provider(n_q, codebooks_pattern_cfg)
-         return LMModel(
-             pattern_provider=pattern_provider,
-             condition_provider=condition_provider,
-             fuser=fuser,
-             cfg_dropout=cfg_prob,
-             cfg_coef=cfg_coef,
-             attribute_dropout=attribute_dropout,
-             dtype=getattr(torch, cfg.dtype),
-             device=cfg.device,
-             **kwargs
-         ).to(cfg.device)
-     else:
-         raise KeyError(f'Unexpected LM model {cfg.lm_model}')
-
-
- def get_conditioner_provider(output_dim: int, cfg: omegaconf.DictConfig) -> ConditioningProvider:
-     """Instantiate a conditioning model.
-     """
-     device = cfg.device
-     duration = cfg.dataset.segment_duration
-     cfg = getattr(cfg, "conditioners")
-     cfg = omegaconf.OmegaConf.create({}) if cfg is None else cfg
-     conditioners: tp.Dict[str, BaseConditioner] = {}
-     with omegaconf.open_dict(cfg):
-         condition_provider_args = cfg.pop('args', {})
-     for cond, cond_cfg in cfg.items():
-         model_type = cond_cfg["model"]
-         model_args = cond_cfg[model_type]
-         if model_type == "t5":
-             conditioners[str(cond)] = T5Conditioner(output_dim=output_dim, device=device, **model_args)
-         elif model_type == "lut":
-             conditioners[str(cond)] = LUTConditioner(output_dim=output_dim, **model_args)
-         elif model_type == "chroma_stem":
-             model_args.pop('cache_path', None)
-             conditioners[str(cond)] = ChromaStemConditioner(
-                 output_dim=output_dim,
-                 duration=duration,
-                 device=device,
-                 **model_args
-             )
-         else:
-             raise ValueError(f"unrecognized conditioning model: {model_type}")
-     conditioner = ConditioningProvider(conditioners, device=device, **condition_provider_args)
-     return conditioner
-
-
- def get_condition_fuser(cfg: omegaconf.DictConfig) -> ConditionFuser:
-     """Instantiate a condition fuser object.
-     """
-     fuser_cfg = getattr(cfg, "fuser")
-     fuser_methods = ["sum", "cross", "prepend", "input_interpolate"]
-     fuse2cond = {k: fuser_cfg[k] for k in fuser_methods}
-     kwargs = {k: v for k, v in fuser_cfg.items() if k not in fuser_methods}
-     fuser = ConditionFuser(fuse2cond=fuse2cond, **kwargs)
-     return fuser
-
-
- def get_codebooks_pattern_provider(n_q: int, cfg: omegaconf.DictConfig) -> CodebooksPatternProvider:
-     """Instantiate a codebooks pattern provider object.
-     """
-     pattern_providers = {
-         'parallel': ParallelPatternProvider,
-         'delay': DelayedPatternProvider,
-         'unroll': UnrolledPatternProvider,
-         'valle': VALLEPattern,
-         'musiclm': MusicLMPattern,
-     }
-     name = cfg.modeling
-     kwargs = dict_from_config(cfg.get(name)) if hasattr(cfg, name) else {}
-     klass = pattern_providers[name]
-     return klass(n_q, **kwargs)
-
-
- def get_debug_compression_model(device='cpu'):
-     """Instantiate a debug compression model to be used for unit tests.
-     """
-     seanet_kwargs = {
-         'n_filters': 4,
-         'n_residual_layers': 1,
-         'dimension': 32,
-         'ratios': [10, 8, 16]  # 25 Hz at 32kHz
-     }
-     encoder = audiocraft.modules.SEANetEncoder(**seanet_kwargs)
-     decoder = audiocraft.modules.SEANetDecoder(**seanet_kwargs)
-     quantizer = qt.ResidualVectorQuantizer(dimension=32, bins=400, n_q=4)
-     init_x = torch.randn(8, 32, 128)
-     quantizer(init_x, 1)  # initialize kmeans etc.
-     compression_model = EncodecModel(
-         encoder, decoder, quantizer,
-         frame_rate=25, sample_rate=32000, channels=1).to(device)
-     return compression_model.eval()
-
-
- def get_debug_lm_model(device='cpu'):
-     """Instantiate a debug LM to be used for unit tests.
-     """
-     pattern = DelayedPatternProvider(n_q=4)
-     dim = 16
-     providers = {
-         'description': LUTConditioner(n_bins=128, dim=dim, output_dim=dim, tokenizer="whitespace"),
-     }
-     condition_provider = ConditioningProvider(providers)
-     fuser = ConditionFuser(
-         {'cross': ['description'], 'prepend': [],
-          'sum': [], 'input_interpolate': []})
-     lm = LMModel(
-         pattern, condition_provider, fuser,
-         n_q=4, card=400, dim=dim, num_heads=4, custom=True, num_layers=2,
-         cross_attention=True, causal=True)
-     return lm.to(device).eval()
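A quick smoke test of the debug helpers at the end of the file (a sketch; it assumes the audiocraft package and its dependencies are importable, and checks only that construction succeeds):

    comp = get_debug_compression_model()   # tiny EnCodec-style autoencoder, 25 Hz frame rate
    lm = get_debug_lm_model()              # tiny LM with a single text conditioner
    print(type(comp).__name__, type(lm).__name__)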
 
spaces/Abdulkader/Abdulkader-T5-MedRepAnalyzer/app.py DELETED
@@ -1,7 +0,0 @@
- import gradio as gr
- from transformers import pipeline
-
- # Load by Hub repo id; `pipeline()` expects an id, not a full URL.
- # The "summarization" task is assumed from this AutoTrain summarizer's name.
- model = "Abdulkader/autotrain-medical-reports-summarizer-2484176581"
- pipe = pipeline("summarization", model=model)
- gr.Interface.from_pipeline(pipe).launch()
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/utils/Yoyo.js DELETED
@@ -1,2 +0,0 @@
- import Yoyo from '../../../plugins/utils/math/Yoyo.js';
- export default Yoyo;
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/clickoutside/ClickOutside.d.ts DELETED
@@ -1,2 +0,0 @@
- import ClickOutside from '../../../plugins/clickoutside';
- export default ClickOutside;
 
spaces/Alfaxad/BioGalacticModels/model_list.py DELETED
@@ -1,106 +0,0 @@
- from __future__ import annotations
-
- import numpy as np
- import pandas as pd
- import requests
- from huggingface_hub.hf_api import SpaceInfo
-
- url = 'https://docs.google.com/spreadsheets/d/1XH7Jo3LXXfbSJ14z-QrSIQs21ArJMiV6_hMSAwY85PU/edit#gid=0'
- csv_url = url.replace('/edit#gid=', '/export?format=csv&gid=')
-
- class ModelList:
-     def __init__(self):
-         self.table = pd.read_csv(csv_url)
-         self._preprocess_table()
-
-         self.table_header = '''
-             <tr>
-                 <td width="20%">Model Name</td>
-                 <td width="10%">Type</td>
-                 <td width="10%">Year</td>
-                 <td width="10%">Paper</td>
-                 <td width="10%">Code on Github</td>
-                 <td width="10%">Weights on 🤗</td>
-                 <td width="10%">Other Weights</td>
-             </tr>'''
-
-     def _preprocess_table(self) -> None:
-         self.table['name_lowercase'] = self.table.name.str.lower()
-         self.table['year'] = self.table['year'].apply(str)
-
-         rows = []
-         for row in self.table.itertuples():
-             paper = f'<a href="{row.paper}" target="_blank">Paper</a>' if isinstance(
-                 row.paper, str) else ''
-             github = f'<a href="{row.github}" target="_blank">GitHub</a>' if isinstance(
-                 row.github, str) else ''
-             hf_model = f'<a href="{row.hub}" target="_blank">Hub Model</a>' if isinstance(
-                 row.hub, str) else ''
-             other_model = f'<a href="{row.other}" target="_blank">Other Weights</a>' if isinstance(
-                 row.other, str) else ''
-             data_type = f'{row.data_type}' if isinstance(
-                 row.data_type, str) else ''
-             base_model = f'{row.base_model}' if isinstance(
-                 row.base_model, str) else ''
-             year = f'{row.year}' if isinstance(
-                 row.year, str) else ''
-             row = f'''
-             <tr>
-                 <td>{row.name}</td>
-                 <td>{data_type}</td>
-                 <td>{year}</td>
-                 <td>{paper}</td>
-                 <td>{github}</td>
-                 <td>{hf_model}</td>
-                 <td>{other_model}</td>
-             </tr>'''
-             rows.append(row)
-         self.table['html_table_content'] = rows
-
-     def render(self, search_query: str,
-                case_sensitive: bool,
-                filter_names: list[str],
-                data_types: list[str],
-                years: list[str],
-                #model_types: list[str]
-                ) -> tuple[int, str]:
-         df = self.table
-         if search_query:
-             if case_sensitive:
-                 df = df[df.name.str.contains(search_query)]
-             else:
-                 df = df[df.name_lowercase.str.contains(search_query.lower())]
-         has_paper = 'Paper' in filter_names
-         has_github = 'Code' in filter_names
-         has_model = 'Model Weights' in filter_names
-         df = self.filter_table(df, has_paper, has_github, has_model, data_types, years)
-         #df = self.filter_table(df, has_paper, has_github, has_model, data_types, model_types)
-         return len(df), self.to_html(df, self.table_header)
-
-     @staticmethod
-     def filter_table(df: pd.DataFrame, has_paper: bool, has_github: bool,
-                      has_model: bool,
-                      data_types: list[str],
-                      years: list[str],
-                      #model_types: list[str]
-                      ) -> pd.DataFrame:
-         if has_paper:
-             df = df[~df.paper.isna()]
-         if has_github:
-             df = df[~df.github.isna()]
-         if has_model:
-             df = df[~df.hub.isna() | ~df.other.isna()]
-         df = df[df.data_type.isin(set(data_types))]
-         #df = df[df.base_model.isin(set(model_types))]
-         df = df[df.year.isin(set(years))]
-         return df
-
-     @staticmethod
-     def to_html(df: pd.DataFrame, table_header: str) -> str:
-         table_data = ''.join(df.html_table_content)
-         html = f'''
-         <table>
-             {table_header}
-             {table_data}
-         </table>'''
-         return html
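A hedged sketch of how `ModelList.render` might be driven; the filter values (`"LLM"`, `"2023"`) are placeholders, not values known to exist in the backing spreadsheet:

```python
# Illustrative driver for the class above; requires network access to the sheet.
model_list = ModelList()
count, html = model_list.render(
    search_query="bio",
    case_sensitive=False,
    filter_names=["Paper", "Code", "Model Weights"],
    data_types=["LLM"],    # placeholder column value
    years=["2023"],        # placeholder year
)
print(f"{count} matching models")
```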
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_dual_guided.py DELETED
@@ -1,554 +0,0 @@
- # Copyright 2023 The HuggingFace Team. All rights reserved.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- import inspect
- import warnings
- from typing import Callable, List, Optional, Tuple, Union
-
- import numpy as np
- import PIL
- import torch
- import torch.utils.checkpoint
- from transformers import (
-     CLIPImageProcessor,
-     CLIPTextModelWithProjection,
-     CLIPTokenizer,
-     CLIPVisionModelWithProjection,
- )
-
- from ...image_processor import VaeImageProcessor
- from ...models import AutoencoderKL, DualTransformer2DModel, Transformer2DModel, UNet2DConditionModel
- from ...schedulers import KarrasDiffusionSchedulers
- from ...utils import logging, randn_tensor
- from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
- from .modeling_text_unet import UNetFlatConditionModel
-
-
- logger = logging.get_logger(__name__)  # pylint: disable=invalid-name
-
-
- class VersatileDiffusionDualGuidedPipeline(DiffusionPipeline):
-     r"""
-     Pipeline for image-text dual-guided generation using Versatile Diffusion.
-
-     This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
-     implemented for all pipelines (downloading, saving, running on a particular device, etc.).
-
-     Parameters:
-         tokenizer ([`~transformers.CLIPTokenizer`]):
-             A `CLIPTokenizer` to tokenize text prompts.
-         image_feature_extractor ([`~transformers.CLIPImageProcessor`]):
-             A `CLIPImageProcessor` to preprocess image prompts.
-         text_encoder ([`~transformers.CLIPTextModelWithProjection`]):
-             Frozen CLIP text encoder used to embed the text prompt.
-         image_encoder ([`~transformers.CLIPVisionModelWithProjection`]):
-             Frozen CLIP vision encoder used to embed the image prompt.
-         image_unet ([`UNet2DConditionModel`]):
-             A `UNet2DConditionModel` to denoise the encoded image latents.
-         text_unet ([`UNetFlatConditionModel`]):
-             A flat UNet whose transformer blocks supply the text stream of the dual attention.
-         vae ([`AutoencoderKL`]):
-             Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
-         scheduler ([`SchedulerMixin`]):
-             A scheduler to be used in combination with `image_unet` to denoise the encoded image latents. Can be one
-             of [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
-     """
-     tokenizer: CLIPTokenizer
-     image_feature_extractor: CLIPImageProcessor
-     text_encoder: CLIPTextModelWithProjection
-     image_encoder: CLIPVisionModelWithProjection
-     image_unet: UNet2DConditionModel
-     text_unet: UNetFlatConditionModel
-     vae: AutoencoderKL
-     scheduler: KarrasDiffusionSchedulers
-
-     _optional_components = ["text_unet"]
-
-     def __init__(
-         self,
-         tokenizer: CLIPTokenizer,
-         image_feature_extractor: CLIPImageProcessor,
-         text_encoder: CLIPTextModelWithProjection,
-         image_encoder: CLIPVisionModelWithProjection,
-         image_unet: UNet2DConditionModel,
-         text_unet: UNetFlatConditionModel,
-         vae: AutoencoderKL,
-         scheduler: KarrasDiffusionSchedulers,
-     ):
-         super().__init__()
-         self.register_modules(
-             tokenizer=tokenizer,
-             image_feature_extractor=image_feature_extractor,
-             text_encoder=text_encoder,
-             image_encoder=image_encoder,
-             image_unet=image_unet,
-             text_unet=text_unet,
-             vae=vae,
-             scheduler=scheduler,
-         )
-         self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
-         self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
-
-         if self.text_unet is not None and (
-             "dual_cross_attention" not in self.image_unet.config or not self.image_unet.config.dual_cross_attention
-         ):
-             # if loading from a universal checkpoint rather than a saved dual-guided pipeline
-             self._convert_to_dual_attention()
-
-     def remove_unused_weights(self):
-         self.register_modules(text_unet=None)
-
-     def _convert_to_dual_attention(self):
-         """
-         Replace image_unet's `Transformer2DModel` blocks with `DualTransformer2DModel` that contains transformer
-         blocks from both `image_unet` and `text_unet`.
-         """
-         for name, module in self.image_unet.named_modules():
-             if isinstance(module, Transformer2DModel):
-                 parent_name, index = name.rsplit(".", 1)
-                 index = int(index)
-
-                 image_transformer = self.image_unet.get_submodule(parent_name)[index]
-                 text_transformer = self.text_unet.get_submodule(parent_name)[index]
-
-                 config = image_transformer.config
-                 dual_transformer = DualTransformer2DModel(
-                     num_attention_heads=config.num_attention_heads,
-                     attention_head_dim=config.attention_head_dim,
-                     in_channels=config.in_channels,
-                     num_layers=config.num_layers,
-                     dropout=config.dropout,
-                     norm_num_groups=config.norm_num_groups,
-                     cross_attention_dim=config.cross_attention_dim,
-                     attention_bias=config.attention_bias,
-                     sample_size=config.sample_size,
-                     num_vector_embeds=config.num_vector_embeds,
-                     activation_fn=config.activation_fn,
-                     num_embeds_ada_norm=config.num_embeds_ada_norm,
-                 )
-                 dual_transformer.transformers[0] = image_transformer
-                 dual_transformer.transformers[1] = text_transformer
-
-                 self.image_unet.get_submodule(parent_name)[index] = dual_transformer
-                 self.image_unet.register_to_config(dual_cross_attention=True)
-
-     def _revert_dual_attention(self):
-         """
-         Revert the image_unet `DualTransformer2DModel` blocks back to `Transformer2DModel` with image_unet weights.
-         Call this function if you reuse `image_unet` in another pipeline, e.g. `VersatileDiffusionPipeline`.
-         """
-         for name, module in self.image_unet.named_modules():
-             if isinstance(module, DualTransformer2DModel):
-                 parent_name, index = name.rsplit(".", 1)
-                 index = int(index)
-                 self.image_unet.get_submodule(parent_name)[index] = module.transformers[0]
-
-         self.image_unet.register_to_config(dual_cross_attention=False)
-
-     def _encode_text_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance):
-         r"""
-         Encodes the prompt into text encoder hidden states.
-
-         Args:
-             prompt (`str` or `List[str]`):
-                 prompt to be encoded
-             device: (`torch.device`):
-                 torch device
-             num_images_per_prompt (`int`):
-                 number of images that should be generated per prompt
-             do_classifier_free_guidance (`bool`):
-                 whether to use classifier free guidance or not
-         """
-
-         def normalize_embeddings(encoder_output):
-             embeds = self.text_encoder.text_projection(encoder_output.last_hidden_state)
-             embeds_pooled = encoder_output.text_embeds
-             embeds = embeds / torch.norm(embeds_pooled.unsqueeze(1), dim=-1, keepdim=True)
-             return embeds
-
-         batch_size = len(prompt)
-
-         text_inputs = self.tokenizer(
-             prompt,
-             padding="max_length",
-             max_length=self.tokenizer.model_max_length,
-             truncation=True,
-             return_tensors="pt",
-         )
-         text_input_ids = text_inputs.input_ids
-         untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="pt").input_ids
-
-         if not torch.equal(text_input_ids, untruncated_ids):
-             removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])
-             logger.warning(
-                 "The following part of your input was truncated because CLIP can only handle sequences up to"
-                 f" {self.tokenizer.model_max_length} tokens: {removed_text}"
-             )
-
-         if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
-             attention_mask = text_inputs.attention_mask.to(device)
-         else:
-             attention_mask = None
-
-         prompt_embeds = self.text_encoder(
-             text_input_ids.to(device),
-             attention_mask=attention_mask,
-         )
-         prompt_embeds = normalize_embeddings(prompt_embeds)
-
-         # duplicate text embeddings for each generation per prompt, using mps friendly method
-         bs_embed, seq_len, _ = prompt_embeds.shape
-         prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
-         prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
-
-         # get unconditional embeddings for classifier free guidance
-         if do_classifier_free_guidance:
-             uncond_tokens = [""] * batch_size
-             max_length = text_input_ids.shape[-1]
-             uncond_input = self.tokenizer(
-                 uncond_tokens,
-                 padding="max_length",
-                 max_length=max_length,
-                 truncation=True,
-                 return_tensors="pt",
-             )
-
-             if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
-                 attention_mask = uncond_input.attention_mask.to(device)
-             else:
-                 attention_mask = None
-
-             negative_prompt_embeds = self.text_encoder(
-                 uncond_input.input_ids.to(device),
-                 attention_mask=attention_mask,
-             )
-             negative_prompt_embeds = normalize_embeddings(negative_prompt_embeds)
-
-             # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
-             seq_len = negative_prompt_embeds.shape[1]
-             negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
-             negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
-
-             # For classifier free guidance, we need to do two forward passes.
-             # Here we concatenate the unconditional and text embeddings into a single batch
-             # to avoid doing two forward passes
-             prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
-
-         return prompt_embeds
-
-     def _encode_image_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance):
-         r"""
-         Encodes the image prompt into image encoder hidden states.
-
-         Args:
-             prompt (`PIL.Image.Image` or `List[PIL.Image.Image]`):
-                 image prompt to be encoded
-             device: (`torch.device`):
-                 torch device
-             num_images_per_prompt (`int`):
-                 number of images that should be generated per prompt
-             do_classifier_free_guidance (`bool`):
-                 whether to use classifier free guidance or not
-         """
-
-         def normalize_embeddings(encoder_output):
-             embeds = self.image_encoder.vision_model.post_layernorm(encoder_output.last_hidden_state)
-             embeds = self.image_encoder.visual_projection(embeds)
-             embeds_pooled = embeds[:, 0:1]
-             embeds = embeds / torch.norm(embeds_pooled, dim=-1, keepdim=True)
-             return embeds
-
-         batch_size = len(prompt) if isinstance(prompt, list) else 1
-
-         # get prompt image embeddings
-         image_input = self.image_feature_extractor(images=prompt, return_tensors="pt")
-         pixel_values = image_input.pixel_values.to(device).to(self.image_encoder.dtype)
-         image_embeddings = self.image_encoder(pixel_values)
-         image_embeddings = normalize_embeddings(image_embeddings)
-
-         # duplicate image embeddings for each generation per prompt, using mps friendly method
-         bs_embed, seq_len, _ = image_embeddings.shape
-         image_embeddings = image_embeddings.repeat(1, num_images_per_prompt, 1)
-         image_embeddings = image_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
-
-         # get unconditional embeddings for classifier free guidance
-         if do_classifier_free_guidance:
-             uncond_images = [np.zeros((512, 512, 3)) + 0.5] * batch_size
-             uncond_images = self.image_feature_extractor(images=uncond_images, return_tensors="pt")
-             pixel_values = uncond_images.pixel_values.to(device).to(self.image_encoder.dtype)
-             negative_prompt_embeds = self.image_encoder(pixel_values)
-             negative_prompt_embeds = normalize_embeddings(negative_prompt_embeds)
-
-             # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
-             seq_len = negative_prompt_embeds.shape[1]
-             negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
-             negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
-
-             # For classifier free guidance, we need to do two forward passes.
-             # Here we concatenate the unconditional and conditional embeddings into a single batch
-             # to avoid doing two forward passes
-             image_embeddings = torch.cat([negative_prompt_embeds, image_embeddings])
-
-         return image_embeddings
-
-     # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents
-     def decode_latents(self, latents):
-         warnings.warn(
-             "The decode_latents method is deprecated and will be removed in a future version. Please"
-             " use VaeImageProcessor instead",
-             FutureWarning,
-         )
-         latents = 1 / self.vae.config.scaling_factor * latents
-         image = self.vae.decode(latents, return_dict=False)[0]
-         image = (image / 2 + 0.5).clamp(0, 1)
-         # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
-         image = image.cpu().permute(0, 2, 3, 1).float().numpy()
-         return image
-
-     # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
-     def prepare_extra_step_kwargs(self, generator, eta):
-         # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
-         # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
-         # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
-         # and should be between [0, 1]
-
-         accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
-         extra_step_kwargs = {}
-         if accepts_eta:
-             extra_step_kwargs["eta"] = eta
-
-         # check if the scheduler accepts generator
-         accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
-         if accepts_generator:
-             extra_step_kwargs["generator"] = generator
-         return extra_step_kwargs
-
-     def check_inputs(self, prompt, image, height, width, callback_steps):
-         if not isinstance(prompt, str) and not isinstance(prompt, PIL.Image.Image) and not isinstance(prompt, list):
-             raise ValueError(f"`prompt` has to be of type `str` `PIL.Image` or `list` but is {type(prompt)}")
-         if not isinstance(image, str) and not isinstance(image, PIL.Image.Image) and not isinstance(image, list):
-             raise ValueError(f"`image` has to be of type `str` `PIL.Image` or `list` but is {type(image)}")
-
-         if height % 8 != 0 or width % 8 != 0:
-             raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
-         if (callback_steps is None) or (
-             callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
-         ):
-             raise ValueError(
-                 f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
-                 f" {type(callback_steps)}."
-             )
-
-     # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents
-     def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
-         shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
-         if isinstance(generator, list) and len(generator) != batch_size:
-             raise ValueError(
-                 f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
-                 f" size of {batch_size}. Make sure the batch size matches the length of the generators."
-             )
-
-         if latents is None:
-             latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
-         else:
-             latents = latents.to(device)
-
-         # scale the initial noise by the standard deviation required by the scheduler
-         latents = latents * self.scheduler.init_noise_sigma
-         return latents
-
-     def set_transformer_params(self, mix_ratio: float = 0.5, condition_types: Tuple = ("text", "image")):
-         for name, module in self.image_unet.named_modules():
-             if isinstance(module, DualTransformer2DModel):
-                 module.mix_ratio = mix_ratio
-
-                 for i, type in enumerate(condition_types):
-                     if type == "text":
-                         module.condition_lengths[i] = self.text_encoder.config.max_position_embeddings
-                         module.transformer_index_for_condition[i] = 1  # use the second (text) transformer
-                     else:
-                         module.condition_lengths[i] = 257
-                         module.transformer_index_for_condition[i] = 0  # use the first (image) transformer
-
-     @torch.no_grad()
-     def __call__(
-         self,
-         prompt: Union[str, List[str]],
-         image: Union[PIL.Image.Image, List[PIL.Image.Image]],
-         text_to_image_strength: float = 0.5,
-         height: Optional[int] = None,
-         width: Optional[int] = None,
-         num_inference_steps: int = 50,
-         guidance_scale: float = 7.5,
-         num_images_per_prompt: Optional[int] = 1,
-         eta: float = 0.0,
-         generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
-         latents: Optional[torch.FloatTensor] = None,
-         output_type: Optional[str] = "pil",
-         return_dict: bool = True,
-         callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
-         callback_steps: int = 1,
-         **kwargs,
-     ):
-         r"""
-         The call function to the pipeline for generation.
-
-         Args:
-             prompt (`str` or `List[str]`):
-                 The text prompt or prompts to guide image generation.
-             image (`PIL.Image.Image` or `List[PIL.Image.Image]`):
-                 The image prompt or prompts to guide image generation.
-             text_to_image_strength (`float`, *optional*, defaults to 0.5):
-                 How strongly the text prompt is weighted against the image prompt when the two attention streams
-                 are mixed.
-             height (`int`, *optional*, defaults to `self.image_unet.config.sample_size * self.vae_scale_factor`):
-                 The height in pixels of the generated image.
-             width (`int`, *optional*, defaults to `self.image_unet.config.sample_size * self.vae_scale_factor`):
-                 The width in pixels of the generated image.
-             num_inference_steps (`int`, *optional*, defaults to 50):
-                 The number of denoising steps. More denoising steps usually lead to a higher quality image at the
-                 expense of slower inference.
-             guidance_scale (`float`, *optional*, defaults to 7.5):
-                 A higher guidance scale value encourages the model to generate images closely linked to the text
-                 `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
-             num_images_per_prompt (`int`, *optional*, defaults to 1):
-                 The number of images to generate per prompt.
-             eta (`float`, *optional*, defaults to 0.0):
-                 Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
-                 to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
-             generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
-                 A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
-                 generation deterministic.
-             latents (`torch.FloatTensor`, *optional*):
-                 Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
-                 generation. Can be used to tweak the same generation with different prompts. If not provided, a
-                 latents tensor is generated by sampling using the supplied random `generator`.
-             output_type (`str`, *optional*, defaults to `"pil"`):
-                 The output format of the generated image. Choose between `PIL.Image` or `np.array`.
-             return_dict (`bool`, *optional*, defaults to `True`):
-                 Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
-             callback (`Callable`, *optional*):
-                 A function that calls every `callback_steps` steps during inference. The function is called with the
-                 following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
-             callback_steps (`int`, *optional*, defaults to 1):
-                 The frequency at which the `callback` function is called. If not specified, the callback is called at
-                 every step.
-
-         Examples:
-
-         ```py
-         >>> from diffusers import VersatileDiffusionDualGuidedPipeline
-         >>> import torch
-         >>> import requests
-         >>> from io import BytesIO
-         >>> from PIL import Image
-
-         >>> # let's download an initial image
-         >>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg"
-
-         >>> response = requests.get(url)
-         >>> image = Image.open(BytesIO(response.content)).convert("RGB")
-         >>> text = "a red car in the sun"
-
-         >>> pipe = VersatileDiffusionDualGuidedPipeline.from_pretrained(
-         ...     "shi-labs/versatile-diffusion", torch_dtype=torch.float16
-         ... )
-         >>> pipe.remove_unused_weights()
-         >>> pipe = pipe.to("cuda")
-
-         >>> generator = torch.Generator(device="cuda").manual_seed(0)
-         >>> text_to_image_strength = 0.75
-
-         >>> image = pipe(
-         ...     prompt=text, image=image, text_to_image_strength=text_to_image_strength, generator=generator
-         ... ).images[0]
-         >>> image.save("./car_variation.png")
-         ```
-
-         Returns:
-             [`~pipelines.ImagePipelineOutput`] or `tuple`:
-                 If `return_dict` is `True`, [`~pipelines.ImagePipelineOutput`] is returned, otherwise a `tuple` is
-                 returned where the first element is a list with the generated images.
-         """
-         # 0. Default height and width to unet
-         height = height or self.image_unet.config.sample_size * self.vae_scale_factor
-         width = width or self.image_unet.config.sample_size * self.vae_scale_factor
-
-         # 1. Check inputs. Raise error if not correct
-         self.check_inputs(prompt, image, height, width, callback_steps)
-
-         # 2. Define call parameters
-         prompt = [prompt] if not isinstance(prompt, list) else prompt
-         image = [image] if not isinstance(image, list) else image
-         batch_size = len(prompt)
-         device = self._execution_device
-         # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
-         # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
-         # corresponds to doing no classifier free guidance.
-         do_classifier_free_guidance = guidance_scale > 1.0
-
-         # 3. Encode input prompts
-         prompt_embeds = self._encode_text_prompt(prompt, device, num_images_per_prompt, do_classifier_free_guidance)
-         image_embeddings = self._encode_image_prompt(image, device, num_images_per_prompt, do_classifier_free_guidance)
-         dual_prompt_embeddings = torch.cat([prompt_embeds, image_embeddings], dim=1)
-         prompt_types = ("text", "image")
-
-         # 4. Prepare timesteps
-         self.scheduler.set_timesteps(num_inference_steps, device=device)
-         timesteps = self.scheduler.timesteps
-
-         # 5. Prepare latent variables
-         num_channels_latents = self.image_unet.config.in_channels
-         latents = self.prepare_latents(
-             batch_size * num_images_per_prompt,
-             num_channels_latents,
-             height,
-             width,
-             dual_prompt_embeddings.dtype,
-             device,
-             generator,
-             latents,
-         )
-
-         # 6. Prepare extra step kwargs.
-         extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
-
-         # 7. Combine the attention blocks of the image and text UNets
-         self.set_transformer_params(text_to_image_strength, prompt_types)
-
-         # 8. Denoising loop
-         for i, t in enumerate(self.progress_bar(timesteps)):
-             # expand the latents if we are doing classifier free guidance
-             latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
-             latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
-
-             # predict the noise residual
-             noise_pred = self.image_unet(latent_model_input, t, encoder_hidden_states=dual_prompt_embeddings).sample
-
-             # perform guidance
-             if do_classifier_free_guidance:
-                 noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
-                 noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-
-             # compute the previous noisy sample x_t -> x_t-1
-             latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
-
-             # call the callback, if provided
-             if callback is not None and i % callback_steps == 0:
-                 callback(i, t, latents)
-
-         if not output_type == "latent":
-             image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
-         else:
-             image = latents
-
-         image = self.image_processor.postprocess(image, output_type=output_type)
-
-         if not return_dict:
-             return (image,)
-
-         return ImagePipelineOutput(images=image)
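Building on the docstring example inside the file above (`pipe`, `text`, `image`), a short sketch of what `text_to_image_strength` controls; the output file names are arbitrary:

```python
# Sweeping the mix ratio moves generations from image-dominated (near 0.0)
# toward text-dominated (near 1.0) dual guidance.
for strength in (0.25, 0.5, 0.75):
    out = pipe(prompt=text, image=image, text_to_image_strength=strength).images[0]
    out.save(f"dual_guided_{strength:.2f}.png")
```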
spaces/Andy1621/uniformer_image_detection/configs/detectors/htc_r50_sac_1x_coco.py DELETED
@@ -1,8 +0,0 @@
- _base_ = '../htc/htc_r50_fpn_1x_coco.py'
-
- model = dict(
-     backbone=dict(
-         type='DetectoRS_ResNet',
-         conv_cfg=dict(type='ConvAWS'),
-         sac=dict(type='SAC', use_deform=True),
-         stage_with_sac=(False, True, True, True)))
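A minimal sketch of how such a config is consumed, assuming an mmcv (pre-2.0) environment with the mmdetection `configs/` tree checked out; `Config.fromfile` resolves the `_base_` chain and merges the backbone overrides shown above:

```python
from mmcv import Config

cfg = Config.fromfile('configs/detectors/htc_r50_sac_1x_coco.py')
print(cfg.model.backbone.type)            # 'DetectoRS_ResNet'
print(cfg.model.backbone.stage_with_sac)  # (False, True, True, True)
```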
spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/standard_roi_head.py DELETED
@@ -1,295 +0,0 @@
- import torch
-
- from mmdet.core import bbox2result, bbox2roi, build_assigner, build_sampler
- from ..builder import HEADS, build_head, build_roi_extractor
- from .base_roi_head import BaseRoIHead
- from .test_mixins import BBoxTestMixin, MaskTestMixin
-
-
- @HEADS.register_module()
- class StandardRoIHead(BaseRoIHead, BBoxTestMixin, MaskTestMixin):
-     """Simplest base roi head including one bbox head and one mask head."""
-
-     def init_assigner_sampler(self):
-         """Initialize assigner and sampler."""
-         self.bbox_assigner = None
-         self.bbox_sampler = None
-         if self.train_cfg:
-             self.bbox_assigner = build_assigner(self.train_cfg.assigner)
-             self.bbox_sampler = build_sampler(
-                 self.train_cfg.sampler, context=self)
-
-     def init_bbox_head(self, bbox_roi_extractor, bbox_head):
-         """Initialize ``bbox_head``"""
-         self.bbox_roi_extractor = build_roi_extractor(bbox_roi_extractor)
-         self.bbox_head = build_head(bbox_head)
-
-     def init_mask_head(self, mask_roi_extractor, mask_head):
-         """Initialize ``mask_head``"""
-         if mask_roi_extractor is not None:
-             self.mask_roi_extractor = build_roi_extractor(mask_roi_extractor)
-             self.share_roi_extractor = False
-         else:
-             self.share_roi_extractor = True
-             self.mask_roi_extractor = self.bbox_roi_extractor
-         self.mask_head = build_head(mask_head)
-
-     def init_weights(self, pretrained):
-         """Initialize the weights in head.
-
-         Args:
-             pretrained (str, optional): Path to pre-trained weights.
-                 Defaults to None.
-         """
-         if self.with_shared_head:
-             self.shared_head.init_weights(pretrained=pretrained)
-         if self.with_bbox:
-             self.bbox_roi_extractor.init_weights()
-             self.bbox_head.init_weights()
-         if self.with_mask:
-             self.mask_head.init_weights()
-             if not self.share_roi_extractor:
-                 self.mask_roi_extractor.init_weights()
-
-     def forward_dummy(self, x, proposals):
-         """Dummy forward function."""
-         # bbox head
-         outs = ()
-         rois = bbox2roi([proposals])
-         if self.with_bbox:
-             bbox_results = self._bbox_forward(x, rois)
-             outs = outs + (bbox_results['cls_score'],
-                            bbox_results['bbox_pred'])
-         # mask head
-         if self.with_mask:
-             mask_rois = rois[:100]
-             mask_results = self._mask_forward(x, mask_rois)
-             outs = outs + (mask_results['mask_pred'], )
-         return outs
-
-     def forward_train(self,
-                       x,
-                       img_metas,
-                       proposal_list,
-                       gt_bboxes,
-                       gt_labels,
-                       gt_bboxes_ignore=None,
-                       gt_masks=None):
-         """
-         Args:
-             x (list[Tensor]): list of multi-level img features.
-             img_metas (list[dict]): list of image info dict where each dict
-                 has: 'img_shape', 'scale_factor', 'flip', and may also contain
-                 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
-                 For details on the values of these keys see
-                 `mmdet/datasets/pipelines/formatting.py:Collect`.
-             proposal_list (list[Tensors]): list of region proposals.
-             gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
-                 shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
-             gt_labels (list[Tensor]): class indices corresponding to each box
-             gt_bboxes_ignore (None | list[Tensor]): specify which bounding
-                 boxes can be ignored when computing the loss.
-             gt_masks (None | Tensor) : true segmentation masks for each box
-                 used if the architecture supports a segmentation task.
-
-         Returns:
-             dict[str, Tensor]: a dictionary of loss components
-         """
-         # assign gts and sample proposals
-         if self.with_bbox or self.with_mask:
-             num_imgs = len(img_metas)
-             if gt_bboxes_ignore is None:
-                 gt_bboxes_ignore = [None for _ in range(num_imgs)]
-             sampling_results = []
-             for i in range(num_imgs):
-                 assign_result = self.bbox_assigner.assign(
-                     proposal_list[i], gt_bboxes[i], gt_bboxes_ignore[i],
-                     gt_labels[i])
-                 sampling_result = self.bbox_sampler.sample(
-                     assign_result,
-                     proposal_list[i],
-                     gt_bboxes[i],
-                     gt_labels[i],
-                     feats=[lvl_feat[i][None] for lvl_feat in x])
-                 sampling_results.append(sampling_result)
-
-         losses = dict()
-         # bbox head forward and loss
-         if self.with_bbox:
-             bbox_results = self._bbox_forward_train(x, sampling_results,
-                                                     gt_bboxes, gt_labels,
-                                                     img_metas)
-             losses.update(bbox_results['loss_bbox'])
-
-         # mask head forward and loss
-         if self.with_mask:
-             mask_results = self._mask_forward_train(x, sampling_results,
-                                                     bbox_results['bbox_feats'],
-                                                     gt_masks, img_metas)
-             losses.update(mask_results['loss_mask'])
-
-         return losses
-
-     def _bbox_forward(self, x, rois):
-         """Box head forward function used in both training and testing."""
-         # TODO: a more flexible way to decide which feature maps to use
-         bbox_feats = self.bbox_roi_extractor(
-             x[:self.bbox_roi_extractor.num_inputs], rois)
-         if self.with_shared_head:
-             bbox_feats = self.shared_head(bbox_feats)
-         cls_score, bbox_pred = self.bbox_head(bbox_feats)
-
-         bbox_results = dict(
-             cls_score=cls_score, bbox_pred=bbox_pred, bbox_feats=bbox_feats)
-         return bbox_results
-
-     def _bbox_forward_train(self, x, sampling_results, gt_bboxes, gt_labels,
-                             img_metas):
-         """Run forward function and calculate loss for box head in training."""
-         rois = bbox2roi([res.bboxes for res in sampling_results])
-         bbox_results = self._bbox_forward(x, rois)
-
-         bbox_targets = self.bbox_head.get_targets(sampling_results, gt_bboxes,
-                                                   gt_labels, self.train_cfg)
-         loss_bbox = self.bbox_head.loss(bbox_results['cls_score'],
-                                         bbox_results['bbox_pred'], rois,
-                                         *bbox_targets)
-
-         bbox_results.update(loss_bbox=loss_bbox)
-         return bbox_results
-
-     def _mask_forward_train(self, x, sampling_results, bbox_feats, gt_masks,
-                             img_metas):
-         """Run forward function and calculate loss for mask head in
-         training."""
-         if not self.share_roi_extractor:
-             pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results])
-             mask_results = self._mask_forward(x, pos_rois)
-         else:
-             pos_inds = []
-             device = bbox_feats.device
-             for res in sampling_results:
-                 pos_inds.append(
-                     torch.ones(
-                         res.pos_bboxes.shape[0],
-                         device=device,
-                         dtype=torch.uint8))
-                 pos_inds.append(
-                     torch.zeros(
-                         res.neg_bboxes.shape[0],
-                         device=device,
-                         dtype=torch.uint8))
-             pos_inds = torch.cat(pos_inds)
-
-             mask_results = self._mask_forward(
-                 x, pos_inds=pos_inds, bbox_feats=bbox_feats)
-
-         mask_targets = self.mask_head.get_targets(sampling_results, gt_masks,
-                                                   self.train_cfg)
-         pos_labels = torch.cat([res.pos_gt_labels for res in sampling_results])
-         loss_mask = self.mask_head.loss(mask_results['mask_pred'],
-                                         mask_targets, pos_labels)
-
-         mask_results.update(loss_mask=loss_mask, mask_targets=mask_targets)
-         return mask_results
-
-     def _mask_forward(self, x, rois=None, pos_inds=None, bbox_feats=None):
-         """Mask head forward function used in both training and testing."""
-         assert ((rois is not None) ^
-                 (pos_inds is not None and bbox_feats is not None))
-         if rois is not None:
-             mask_feats = self.mask_roi_extractor(
-                 x[:self.mask_roi_extractor.num_inputs], rois)
-             if self.with_shared_head:
-                 mask_feats = self.shared_head(mask_feats)
-         else:
-             assert bbox_feats is not None
-             mask_feats = bbox_feats[pos_inds]
-
-         mask_pred = self.mask_head(mask_feats)
-         mask_results = dict(mask_pred=mask_pred, mask_feats=mask_feats)
-         return mask_results
-
-     async def async_simple_test(self,
-                                 x,
-                                 proposal_list,
-                                 img_metas,
-                                 proposals=None,
-                                 rescale=False):
-         """Async test without augmentation."""
-         assert self.with_bbox, 'Bbox head must be implemented.'
-
-         det_bboxes, det_labels = await self.async_test_bboxes(
-             x, img_metas, proposal_list, self.test_cfg, rescale=rescale)
-         bbox_results = bbox2result(det_bboxes, det_labels,
-                                    self.bbox_head.num_classes)
-         if not self.with_mask:
-             return bbox_results
-         else:
-             segm_results = await self.async_test_mask(
-                 x,
-                 img_metas,
-                 det_bboxes,
-                 det_labels,
-                 rescale=rescale,
-                 mask_test_cfg=self.test_cfg.get('mask'))
-             return bbox_results, segm_results
-
-     def simple_test(self,
-                     x,
-                     proposal_list,
-                     img_metas,
-                     proposals=None,
-                     rescale=False):
-         """Test without augmentation."""
-         assert self.with_bbox, 'Bbox head must be implemented.'
-
-         det_bboxes, det_labels = self.simple_test_bboxes(
-             x, img_metas, proposal_list, self.test_cfg, rescale=rescale)
-         if torch.onnx.is_in_onnx_export():
-             if self.with_mask:
-                 segm_results = self.simple_test_mask(
-                     x, img_metas, det_bboxes, det_labels, rescale=rescale)
-                 return det_bboxes, det_labels, segm_results
-             else:
-                 return det_bboxes, det_labels
-
-         bbox_results = [
-             bbox2result(det_bboxes[i], det_labels[i],
-                         self.bbox_head.num_classes)
-             for i in range(len(det_bboxes))
-         ]
-
-         if not self.with_mask:
-             return bbox_results
-         else:
-             segm_results = self.simple_test_mask(
-                 x, img_metas, det_bboxes, det_labels, rescale=rescale)
-             return list(zip(bbox_results, segm_results))
-
-     def aug_test(self, x, proposal_list, img_metas, rescale=False):
-         """Test with augmentations.
-
-         If rescale is False, then returned bboxes and masks will fit the scale
-         of imgs[0].
-         """
-         det_bboxes, det_labels = self.aug_test_bboxes(x, img_metas,
-                                                       proposal_list,
-                                                       self.test_cfg)
-
-         if rescale:
-             _det_bboxes = det_bboxes
-         else:
-             _det_bboxes = det_bboxes.clone()
-             _det_bboxes[:, :4] *= det_bboxes.new_tensor(
-                 img_metas[0][0]['scale_factor'])
-         bbox_results = bbox2result(_det_bboxes, det_labels,
-                                    self.bbox_head.num_classes)
-
-         # det_bboxes always keep the original scale
-         if self.with_mask:
-             segm_results = self.aug_test_mask(x, img_metas, det_bboxes,
-                                               det_labels)
-             return [(bbox_results, segm_results)]
-         else:
-             return [bbox_results]
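`StandardRoIHead` is normally instantiated through the `HEADS` registry from a config dict. The skeleton below is illustrative only (the extractor and bbox-head settings mirror common Faster R-CNN defaults and are not a complete, validated config):

```python
from mmdet.models.builder import build_head

roi_head_cfg = dict(
    type='StandardRoIHead',
    bbox_roi_extractor=dict(
        type='SingleRoIExtractor',
        roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
        out_channels=256,
        featmap_strides=[4, 8, 16, 32]),
    bbox_head=dict(
        type='Shared2FCBBoxHead',
        in_channels=256,
        fc_out_channels=1024,
        roi_feat_size=7,
        num_classes=80,
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[0., 0., 0., 0.],
            target_stds=[0.1, 0.1, 0.2, 0.2]),
        loss_cls=dict(type='CrossEntropyLoss', loss_weight=1.0),
        loss_bbox=dict(type='L1Loss', loss_weight=1.0)))

roi_head = build_head(roi_head_cfg)  # resolves 'StandardRoIHead' via the registry
```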
spaces/Andy1621/uniformer_image_segmentation/configs/dmnet/dmnet_r50-d8_769x769_40k_cityscapes.py DELETED
@@ -1,9 +0,0 @@
- _base_ = [
-     '../_base_/models/dmnet_r50-d8.py',
-     '../_base_/datasets/cityscapes_769x769.py', '../_base_/default_runtime.py',
-     '../_base_/schedules/schedule_40k.py'
- ]
- model = dict(
-     decode_head=dict(align_corners=True),
-     auxiliary_head=dict(align_corners=True),
-     test_cfg=dict(mode='slide', crop_size=(769, 769), stride=(513, 513)))
spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18_480x480_40k_pascal_context.py DELETED
@@ -1,8 +0,0 @@
- _base_ = [
-     '../_base_/models/fcn_hr18.py', '../_base_/datasets/pascal_context.py',
-     '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py'
- ]
- model = dict(
-     decode_head=dict(num_classes=60),
-     test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320)))
- optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001)
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/fileio/handlers/base.py DELETED
@@ -1,30 +0,0 @@
- # Copyright (c) OpenMMLab. All rights reserved.
- from abc import ABCMeta, abstractmethod
-
-
- class BaseFileHandler(metaclass=ABCMeta):
-     # `str_like` is a flag to indicate whether the type of file object is
-     # str-like object or bytes-like object. Pickle only processes bytes-like
-     # objects but json only processes str-like object. If it is str-like
-     # object, `StringIO` will be used to process the buffer.
-     str_like = True
-
-     @abstractmethod
-     def load_from_fileobj(self, file, **kwargs):
-         pass
-
-     @abstractmethod
-     def dump_to_fileobj(self, obj, file, **kwargs):
-         pass
-
-     @abstractmethod
-     def dump_to_str(self, obj, **kwargs):
-         pass
-
-     def load_from_path(self, filepath, mode='r', **kwargs):
-         with open(filepath, mode) as f:
-             return self.load_from_fileobj(f, **kwargs)
-
-     def dump_to_path(self, obj, filepath, mode='w', **kwargs):
-         with open(filepath, mode) as f:
-             self.dump_to_fileobj(obj, f, **kwargs)
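A minimal concrete handler written against the abstract interface above; it mirrors what a JSON handler typically looks like in mmcv, but is sketched here for illustration:

```python
import json

class JsonHandler(BaseFileHandler):
    # JSON is str-like, so the inherited default str_like = True applies.

    def load_from_fileobj(self, file, **kwargs):
        return json.load(file, **kwargs)

    def dump_to_fileobj(self, obj, file, **kwargs):
        json.dump(obj, file, **kwargs)

    def dump_to_str(self, obj, **kwargs):
        return json.dumps(obj, **kwargs)

handler = JsonHandler()
handler.dump_to_path({'a': 1}, '/tmp/demo.json')   # illustrative path
print(handler.load_from_path('/tmp/demo.json'))    # {'a': 1}
```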
spaces/ArkanDash/rvc-models/app.py DELETED
@@ -1,178 +0,0 @@
- import os
- import json
- import argparse
- import traceback
- import logging
- import gradio as gr
- import numpy as np
- import librosa
- import torch
- import asyncio
- import edge_tts
- from datetime import datetime
- from fairseq import checkpoint_utils
- from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono
- from vc_infer_pipeline import VC
- from config import (
-     is_half,
-     device
- )
- logging.getLogger("numba").setLevel(logging.WARNING)
- limitation = os.getenv("SYSTEM") == "spaces"  # limit audio length in huggingface spaces
-
- def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy):
-     def vc_fn(
-         input_audio,
-         f0_up_key,
-         f0_method,
-         index_rate,
-         tts_mode,
-         tts_text,
-         tts_voice
-     ):
-         try:
-             if tts_mode:
-                 if len(tts_text) > 100 and limitation:
-                     return "Text is too long", None
-                 if tts_text is None or tts_voice is None:
-                     return "You need to enter text and select a voice", None
-                 asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3"))
-                 audio, sr = librosa.load("tts.mp3", sr=16000, mono=True)
-             else:
-                 if input_audio is None:
-                     return "You need to upload an audio", None
-                 sampling_rate, audio = input_audio
-                 duration = audio.shape[0] / sampling_rate
-                 if duration > 20 and limitation:
-                     return "Please upload an audio file that is less than 20 seconds. If you need to generate a longer audio file, please use Colab.", None
-                 audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32)
-                 if len(audio.shape) > 1:
-                     audio = librosa.to_mono(audio.transpose(1, 0))
-                 if sampling_rate != 16000:
-                     audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000)
-             times = [0, 0, 0]
-             f0_up_key = int(f0_up_key)
-             audio_opt = vc.pipeline(
-                 hubert_model,
-                 net_g,
-                 0,
-                 audio,
-                 times,
-                 f0_up_key,
-                 f0_method,
-                 file_index,
-                 file_big_npy,
-                 index_rate,
-                 if_f0,
-             )
-             print(
-                 f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s"
-             )
-             return "Success", (tgt_sr, audio_opt)
-         except Exception:
-             info = traceback.format_exc()
-             print(info)
-             return info, (None, None)
-     return vc_fn
-
- def load_hubert():
-     global hubert_model
-     models, _, _ = checkpoint_utils.load_model_ensemble_and_task(
-         ["hubert_base.pt"],
-         suffix="",
-     )
-     hubert_model = models[0]
-     hubert_model = hubert_model.to(device)
-     if is_half:
-         hubert_model = hubert_model.half()
-     else:
-         hubert_model = hubert_model.float()
-     hubert_model.eval()
-
- def change_to_tts_mode(tts_mode):
-     if tts_mode:
-         return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True)
-     else:
-         return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False)
-
- if __name__ == '__main__':
-     parser = argparse.ArgumentParser()
-     parser.add_argument('--api', action="store_true", default=False)
-     parser.add_argument("--colab", action="store_true", default=False, help="share gradio app")
-     args, unknown = parser.parse_known_args()
-     load_hubert()
-     models = []
-     tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices())
-     voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list]
-     with open("weights/model_info.json", "r", encoding="utf-8") as f:
-         models_info = json.load(f)
-     for name, info in models_info.items():
-         if not info['enable']:
-             continue
-         title = info['title']
-         author = info.get("author", None)
-         cover = f"weights/{name}/{info['cover']}"
-         index = f"weights/{name}/{info['feature_retrieval_library']}"
-         npy = f"weights/{name}/{info['feature_file']}"
-         cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu")
-         tgt_sr = cpt["config"][-1]
-         cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0]  # n_spk
-         if_f0 = cpt.get("f0", 1)
-         if if_f0 == 1:
-             net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half)
-         else:
-             net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"])
-         del net_g.enc_q
-         print(net_g.load_state_dict(cpt["weight"], strict=False))  # without this line the state dict does not load cleanly; odd but required
-         net_g.eval().to(device)
-         if is_half:
-             net_g = net_g.half()
-         else:
-             net_g = net_g.float()
-         vc = VC(tgt_sr, device, is_half)
-         models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy)))
-     with gr.Blocks() as app:
-         gr.Markdown(
-             "# <center> RVC Models (Outdated)\n"
-             "## <center> The input audio should be clean and pure voice without background music.\n"
-             "# <center> [![New RVC Spaces](https://img.shields.io/badge/%F0%9F%A4%97_Spaces-RVC_Models_new-yellow?style=for-the-badge&logo=https%3A%2F%2Fhuggingface.co%2Ffront%2Fassets%2Fhuggingface_logo.svg&logoColor=yellow)](https://huggingface.co/spaces/ArkanDash/rvc-models-new)\n\n"
-             "[![Colab](https://img.shields.io/badge/Colab-RVC_Models-blue?style=for-the-badge&logo=googlecolab)](https://colab.research.google.com/drive/1hx6kKvIuv5XNY1Gai2PEuZhpO5z6xpVh?usp=sharing)\n\n"
-         )
-         with gr.Tabs():
-             for (name, title, author, cover, vc_fn) in models:
-                 with gr.TabItem(name):
-                     with gr.Row():
-                         gr.Markdown(
-                             '<div align="center">'
-                             f'<div>{title}</div>\n'+
-                             (f'<div>Model author: {author}</div>' if author else "")+
-                             (f'<img style="width:auto;height:300px;" src="file/{cover}">' if cover else "")+
-                             '</div>'
-                         )
-                     with gr.Row():
-                         with gr.Column():
-                             vc_input = gr.Audio(label="Input audio"+' (less than 20 seconds)' if limitation else '')
-                             vc_transpose = gr.Number(label="Transpose", value=0)
-                             vc_f0method = gr.Radio(
-                                 label="Pitch extraction algorithm, PM is fast but Harvest is better for low frequencies",
-                                 choices=["pm", "harvest"],
-                                 value="pm",
-                                 interactive=True,
-                             )
-                             vc_index_ratio = gr.Slider(
-                                 minimum=0,
-                                 maximum=1,
-                                 label="Retrieval feature ratio",
-                                 value=0.6,
-                                 interactive=True,
-                             )
-                             tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False)
-                             tts_text = gr.Textbox(visible=False, label="TTS text (100 words limitation)" if limitation else "TTS text")
-                             tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female")
-                             vc_submit = gr.Button("Generate", variant="primary")
-                         with gr.Column():
-                             vc_output1 = gr.Textbox(label="Output Message")
-                             vc_output2 = gr.Audio(label="Output Audio")
-                     vc_submit.click(vc_fn, [vc_input, vc_transpose, vc_f0method, vc_index_ratio, tts_mode, tts_text, tts_voice], [vc_output1, vc_output2])
-                     tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice])
-     app.queue(concurrency_count=1, max_size=20, api_open=args.api).launch(share=args.colab)
spaces/ArtyomKhyan/Detection/app.py DELETED
@@ -1,218 +0,0 @@
1
- import argparse
2
- import time
3
- import gradio as gr
4
- import cv2
5
- import matplotlib.pyplot as plt
6
- import torch
7
- from PIL import Image
8
- from torchvision.datasets import ImageFolder
9
- import matplotlib.pyplot as plt
10
- import torch.backends.cudnn as cudnn
11
- import numpy as np
12
- import matplotlib.pyplot as plt
13
- import torchvision
14
- from torchvision.transforms import transforms
15
- import os
16
- def class_model():
17
- model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet101', pretrained=True)
18
-
19
- class LinearModel(torch.nn.Module):
20
-
21
- def __init__(self):
22
- super(LinearModel, self).__init__()
23
- self.activation = torch.nn.ReLU()
24
- self.linear1 = torch.nn.Linear(1024, 100)
25
- self.linear2 = torch.nn.Linear(100, 3)
26
-
27
- def forward(self, x):
28
- x = self.activation(x)
29
- x = self.linear1(x)
30
- x = self.activation(x)
31
- x = self.linear2(x)
32
- return x
33
- full_c = torch.nn.Linear(in_features = 2048, out_features = 1024)
34
- full_c.load_state_dict(torch.load('so.pt'))
35
- model.fc = full_c
36
- Linear = LinearModel()
37
- Linear.load_state_dict(torch.load('som.pt'))
38
- model = torch.nn.Sequential(model, Linear)
39
- model.eval()
40
- return model
41
-
42
- transform = transforms.Compose([
43
- transforms.ToPILImage(),
44
- transforms.Resize((224, 224)),
45
- transforms.ToTensor(),
46
- transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
47
- ])
48
-
49
- def box_iou(box1, box2):
50
- # https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py
51
- """
52
- Return intersection-over-union (Jaccard index) of boxes.
53
- Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
54
- Arguments:
55
- box1 (Tensor[N, 4])
56
- box2 (Tensor[M, 4])
57
- Returns:
58
- iou (Tensor[N, M]): the NxM matrix containing the pairwise
59
- IoU values for every element in boxes1 and boxes2
60
- """
61
-
62
- def box_area(box):
63
- # box = 4xn
64
- return (box[2] - box[0]) * (box[3] - box[1])
65
-
66
- area1 = box_area(box1.t())
67
- area2 = box_area(box2.t())
68
-
69
- # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2)
70
- inter = (torch.min(box1[:, None, 2:], box2[:, 2:]) - torch.max(box1[:, None, :2], box2[:, :2])).clamp(0).prod(2)
71
- return inter / (area1[:, None] + area2 - inter) # iou = inter / (area1 + area2 - inter)
72
-
73
- def xywh2xyxy(x):
74
- # Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
75
- y = torch.zeros_like(x) if isinstance(x, torch.Tensor) else np.zeros_like(x)
76
- y[:, 0] = x[:, 0] - x[:, 2] / 2 # top left x
77
- y[:, 1] = x[:, 1] - x[:, 3] / 2 # top left y
78
- y[:, 2] = x[:, 0] + x[:, 2] / 2 # bottom right x
79
- y[:, 3] = x[:, 1] + x[:, 3] / 2 # bottom right y
80
- return y
81
- def non_max_suppression(prediction, conf_thres=0.1, iou_thres=0.65, merge=False, classes=None, agnostic=False):
82
- """Performs Non-Maximum Suppression (NMS) on inference results
83
-
84
- Returns:
85
- detections with shape: nx6 (x1, y1, x2, y2, conf, cls)
86
- """
87
- if prediction.dtype is torch.float16:
88
- prediction = prediction.float() # to FP32
89
-
90
- nc = prediction[0].shape[1] - 5 # number of classes
91
- xc = prediction[..., 4] > conf_thres # candidates
92
-
93
- # Settings
94
- min_wh, max_wh = 2, 4096 # (pixels) minimum and maximum box width and height
95
-     max_det = 300  # maximum number of detections per image
-     time_limit = 10.0  # seconds to quit after
-     redundant = True  # require redundant detections
-     multi_label = nc > 1  # multiple labels per box (adds 0.5ms/img)
-
-     t = time.time()
-     output = [None] * prediction.shape[0]
-     for xi, x in enumerate(prediction):  # image index, image inference
-         # Apply constraints
-         # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0  # width-height
-         x = x[xc[xi]]  # confidence
-
-         # If none remain, process next image
-         if not x.shape[0]:
-             continue
-
-         # Compute conf
-         x[:, 5:] *= x[:, 4:5]  # conf = obj_conf * cls_conf
-
-         # Box (center x, center y, width, height) to (x1, y1, x2, y2)
-         box = xywh2xyxy(x[:, :4])
-
-         # Detections matrix nx6 (xyxy, conf, cls)
-         if multi_label:
-             i, j = (x[:, 5:] > conf_thres).nonzero().t()
-             x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
-         else:  # best class only
-             conf, j = x[:, 5:].max(1, keepdim=True)
-             x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]
-
-         # Filter by class
-         if classes:
-             x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
-
-         # Apply finite constraint
-         # if not torch.isfinite(x).all():
-         #     x = x[torch.isfinite(x).all(1)]
-
-         # If none remain, process next image
-         n = x.shape[0]  # number of boxes
-         if not n:
-             continue
-
-         # Sort by confidence
-         # x = x[x[:, 4].argsort(descending=True)]
-
-         # Batched NMS
-         c = x[:, 5:6] * (0 if agnostic else max_wh)  # classes
-         boxes, scores = x[:, :4] + c, x[:, 4]  # boxes (offset by class), scores
-         i = torchvision.ops.boxes.nms(boxes, scores, iou_thres)
-         if i.shape[0] > max_det:  # limit detections
-             i = i[:max_det]
-         if merge and (1 < n < 3E3):  # Merge NMS (boxes merged using weighted mean)
-             try:  # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
-                 iou = box_iou(boxes[i], boxes) > iou_thres  # iou matrix
-                 weights = iou * scores[None]  # box weights
-                 x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(1, keepdim=True)  # merged boxes
-                 if redundant:
-                     i = i[iou.sum(1) > 1]  # require redundancy
-             except Exception:  # possible CUDA error https://github.com/ultralytics/yolov3/issues/1139
-                 print(x, i, x.shape, i.shape)
-
-         output[xi] = x[i]
-         if (time.time() - t) > time_limit:
-             break  # time limit exceeded
-
-     return output
-
-
- def run_detector(img):
-     device = torch.device('cpu')
-     model = torch.load('last_yolov5s_results.pt', map_location=device)['model'].float()
-     model.eval()  # inference mode for BatchNorm/Dropout layers
-     image = img
-     res_image = cv2.resize(image, (416, 416))
-     image = torch.tensor(res_image).permute(2, 0, 1).unsqueeze(0).float() / 255.
-     count_dobri = 0
-     count_all = 0
-     with torch.no_grad():
-         detection = model(image)
-         pred = non_max_suppression(detection[0], conf_thres=0.4, iou_thres=0.6)
-     main_model = class_model()
-     main_model.eval()
-
-     detections = pred[0] if pred[0] is not None else []  # NMS returns None when nothing survives
-     for i in detections:
-         if i[0] > 0 and i[1] > 0 and i[2] > 0 and i[3] > 0:
-             if i[3] > i[1] and i[2] > i[0]:
-                 cropped_image = res_image[int(i[1]):int(i[3]), int(i[0]):int(i[2])]
-                 input_tensor = transform(cropped_image)
-                 input_tensor = input_tensor.reshape(1, 3, 224, 224)
-                 with torch.no_grad():
-                     logits = main_model(input_tensor)
-                 layer = torch.nn.Softmax(dim=1)
-                 output_s = layer(logits)
-                 output = torch.argmax(output_s)
-                 if output == 0 and output_s[0][0] > 0.55:
-                     count_dobri += 1
-                     cv2.rectangle(res_image, (int(i[0]), int(i[1])), (int(i[2]), int(i[3])), (0, 255, 0))
-                 elif output == 1:
-                     count_all += 1
-                     cv2.rectangle(res_image, (int(i[0]), int(i[1])), (int(i[2]), int(i[3])), (0, 0, 255))
-                 elif output == 2:
-                     cv2.rectangle(res_image, (int(i[0]), int(i[1])), (int(i[2]), int(i[3])), (255, 0, 0))
-     count_all += count_dobri
-     global dobry_percent
-     dobry_percent = count_dobri / count_all if count_all else 0.0  # guard against empty detections
-     return res_image
-
-
- def greet(name):
-     res_image = run_detector(name)
-     return res_image, dobry_percent * 100
-
-
- demo = gr.Interface(fn=greet, inputs="image", outputs=['image', 'text'])
-
- demo.launch()
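A quick way to sanity-check the NMS routine above is a small smoke test; this is only a sketch, and the random input and the `85 = 5 + 80 class scores` layout are illustrative assumptions rather than part of the original app:

```python
# Hypothetical smoke test: push random raw predictions through the function
# above and unpack the (x1, y1, x2, y2, conf, cls) columns it produces.
import torch

raw = torch.rand(1, 100, 85)  # 1 image, 100 candidate boxes, xywh + obj conf + 80 classes
dets = non_max_suppression(raw, conf_thres=0.4, iou_thres=0.6)[0]
if dets is not None:
    boxes, scores, labels = dets[:, :4], dets[:, 4], dets[:, 5]
    print(boxes.shape, scores.shape, labels.shape)
```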
 
spaces/Ashrafb/translate/app.py DELETED
@@ -1,33 +0,0 @@
- import gradio as gr
- from transformers import M2M100ForConditionalGeneration
- from tokenization_small100 import SMALL100Tokenizer
-
- langs = """af,am,ar,ast,az,ba,be,bg,bn,br,bs,ca,ceb,cs,cy,da,de,el,en,es,et,fa,ff,fi,fr,fy,ga,gd,gl,gu,ha,he,hi,hr,ht,hu,hy,id,ig,ilo,is,it,ja,jv,ka,kk,km,kn,ko,lb,lg,ln,lo,lt,lv,mg,mk,ml,mn,mr,ms,my,ne,nl,no,ns,oc,or,pa,pl,ps,pt,ro,ru,sd,si,sk,sl,so,sq,sr,ss,su,sv,sw,ta,th,tl,tn,tr,uk,ur,uz,vi,wo,xh,yi,yo,zh,zu"""
- lang_list = langs.split(',')
-
- model = M2M100ForConditionalGeneration.from_pretrained("alirezamsh/small100")
- tokenizer = SMALL100Tokenizer.from_pretrained("alirezamsh/small100")
-
- def translate(lang, text):
-     tokenizer.tgt_lang = lang
-     encoded_text = tokenizer(text, return_tensors="pt")
-     generated_tokens = model.generate(**encoded_text)
-     return tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]
-
- with gr.Blocks(analytics_enabled=False) as app:
-     Source = gr.Textbox(label="Source")
-     Language = gr.Dropdown(lang_list, label="Language")
-     Translate = gr.Button("Translate")
-     Result = gr.Textbox(label="Result")
-
-     Translate.click(
-         translate,
-         inputs=[Language, Source],
-         outputs=[Result],
-         api_name="translate",
-     )
-
- app.queue(concurrency_count=2)  # enable request queuing before launching
- app.launch(inline=True)
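For reference, the same model/tokenizer pair can be exercised without the Gradio UI. This is a minimal sketch, assuming the `alirezamsh/small100` checkpoint and the local `tokenization_small100` module used above are available:

```python
# Minimal sketch: translate one string with the SMALL-100 model used above.
from transformers import M2M100ForConditionalGeneration
from tokenization_small100 import SMALL100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("alirezamsh/small100")
tokenizer = SMALL100Tokenizer.from_pretrained("alirezamsh/small100")

tokenizer.tgt_lang = "fr"  # any code from the language list above
encoded = tokenizer("Hello, world!", return_tensors="pt")
tokens = model.generate(**encoded)
print(tokenizer.batch_decode(tokens, skip_special_tokens=True)[0])
```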
 
spaces/Awesimo/jojogan/e4e/criteria/lpips/__init__.py DELETED
File without changes
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/builtin_datasets.md DELETED
@@ -1 +0,0 @@
- ../../datasets/README.md
 
 
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/docs/tutorials/models.md DELETED
@@ -1,180 +0,0 @@
- # Use Models
-
- ## Build Models from Yacs Config
- From a yacs config object,
- models (and their sub-models) can be built by
- functions such as `build_model`, `build_backbone`, `build_roi_heads`:
- ```python
- from detectron2.modeling import build_model
- model = build_model(cfg)  # returns a torch.nn.Module
- ```
-
- `build_model` only builds the model structure and fills it with random parameters.
- See below for how to load an existing checkpoint to the model and how to use the `model` object.
-
- ### Load/Save a Checkpoint
- ```python
- from detectron2.checkpoint import DetectionCheckpointer
- DetectionCheckpointer(model).load(file_path_or_url)  # load a file, usually from cfg.MODEL.WEIGHTS
-
- checkpointer = DetectionCheckpointer(model, save_dir="output")
- checkpointer.save("model_999")  # save to output/model_999.pth
- ```
-
- Detectron2's checkpointer recognizes models in pytorch's `.pth` format, as well as the `.pkl` files
- in our model zoo.
- See [API doc](../modules/checkpoint.html#detectron2.checkpoint.DetectionCheckpointer)
- for more details about its usage.
-
- The model files can be arbitrarily manipulated using `torch.{load,save}` for `.pth` files or
- `pickle.{dump,load}` for `.pkl` files.
-
- ### Use a Model
-
- A model can be called by `outputs = model(inputs)`, where `inputs` is a `list[dict]`.
- Each dict corresponds to one image, and the required keys
- depend on the type of model and on whether the model is in training or evaluation mode.
- For example, in order to do inference,
- all existing models expect the "image" key, and optionally "height" and "width".
- The detailed format of inputs and outputs of existing models is explained below.
-
- __Training__: When in training mode, all models are required to be used under an `EventStorage`.
- The training statistics will be put into the storage:
- ```python
- from detectron2.utils.events import EventStorage
- with EventStorage() as storage:
-     losses = model(inputs)
- ```
-
- __Inference__: If you only want to do simple inference using an existing model,
- [DefaultPredictor](../modules/engine.html#detectron2.engine.defaults.DefaultPredictor)
- is a wrapper around the model that provides such basic functionality.
- It includes default behavior including model loading and preprocessing,
- and it operates on a single image rather than on batches. See its documentation for usage.
-
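For instance, a minimal `DefaultPredictor` call might look like the following sketch (the config is assumed to be already populated, and the image path is a placeholder):

```python
import cv2
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()               # assume cfg.MODEL.* has been filled in, e.g. from the model zoo
predictor = DefaultPredictor(cfg)
im = cv2.imread("input.jpg")  # a BGR image, the format DefaultPredictor expects by default
outputs = predictor(im)       # outputs follow the format described below
```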
- You can also run inference directly like this:
- ```
- model.eval()
- with torch.no_grad():
-     outputs = model(inputs)
- ```
-
- ### Model Input Format
-
- Users can implement custom models that support any arbitrary input format.
- Here we describe the standard input format that all builtin models support in detectron2.
- They all take a `list[dict]` as the inputs. Each dict
- corresponds to information about one image.
-
- The dict may contain the following keys:
-
- * "image": `Tensor` in (C, H, W) format. The meaning of the channels is defined by `cfg.INPUT.FORMAT`.
-   Image normalization, if any, will be performed inside the model using
-   `cfg.MODEL.PIXEL_{MEAN,STD}`.
- * "height", "width": the **desired** output height and width **in inference**, which is not necessarily the same
-   as the height or width of the `image` field.
-   For example, the `image` field contains the resized image, if resize is used as a preprocessing step.
-   But you may want the outputs to be in **original** resolution.
-   If provided, the model will produce output in this resolution,
-   rather than in the resolution of the `image` as input into the model. This is more efficient and accurate.
- * "instances": an [Instances](../modules/structures.html#detectron2.structures.Instances)
-   object for training, with the following fields:
-   + "gt_boxes": a [Boxes](../modules/structures.html#detectron2.structures.Boxes) object storing N boxes, one for each instance.
-   + "gt_classes": `Tensor` of long type, a vector of N labels, in range [0, num_categories).
-   + "gt_masks": a [PolygonMasks](../modules/structures.html#detectron2.structures.PolygonMasks)
-     or [BitMasks](../modules/structures.html#detectron2.structures.BitMasks) object storing N masks, one for each instance.
-   + "gt_keypoints": a [Keypoints](../modules/structures.html#detectron2.structures.Keypoints)
-     object storing N keypoint sets, one for each instance.
- * "sem_seg": `Tensor[int]` in (H, W) format. The semantic segmentation ground truth for training.
-   Values represent category labels starting from 0.
- * "proposals": an [Instances](../modules/structures.html#detectron2.structures.Instances)
-   object used only in Fast R-CNN style models, with the following fields:
-   + "proposal_boxes": a [Boxes](../modules/structures.html#detectron2.structures.Boxes) object storing P proposal boxes.
-   + "objectness_logits": `Tensor`, a vector of P scores, one for each proposal.
-
- For inference of builtin models, only the "image" key is required, and "width"/"height" are optional.
-
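To make that contract concrete, a minimal inference batch could be sketched as follows (the tensor contents and sizes are placeholders):

```python
import torch

# One dict per image; only "image" is required for inference, while
# "height"/"width" ask for outputs in the original resolution.
inputs = [{
    "image": torch.zeros(3, 480, 640),  # (C, H, W), channel order per cfg.INPUT.FORMAT
    "height": 720,
    "width": 960,
}]
outputs = model(inputs)  # `model` built and switched to eval mode as shown above
```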
- We currently don't define a standard input format for panoptic segmentation training,
- because models now use custom formats produced by custom data loaders.
-
- #### How it connects to data loader:
-
- The output of the default [DatasetMapper](../modules/data.html#detectron2.data.DatasetMapper) is a dict
- that follows the above format.
- After the data loader performs batching, it becomes `list[dict]`, which the builtin models support.
-
-
- ### Model Output Format
-
- When in training mode, the builtin models output a `dict[str->ScalarTensor]` with all the losses.
-
- When in inference mode, the builtin models output a `list[dict]`, one dict for each image.
- Based on the tasks the model is doing, each dict may contain the following fields:
-
- * "instances": [Instances](../modules/structures.html#detectron2.structures.Instances)
-   object with the following fields:
-   * "pred_boxes": [Boxes](../modules/structures.html#detectron2.structures.Boxes) object storing N boxes, one for each detected instance.
-   * "scores": `Tensor`, a vector of N confidence scores.
-   * "pred_classes": `Tensor`, a vector of N labels in range [0, num_categories).
-   + "pred_masks": a `Tensor` of shape (N, H, W), masks for each detected instance.
-   + "pred_keypoints": a `Tensor` of shape (N, num_keypoint, 3).
-     Each row in the last dimension is (x, y, score). Confidence scores are larger than 0.
- * "sem_seg": `Tensor` of (num_categories, H, W), the semantic segmentation prediction.
- * "proposals": [Instances](../modules/structures.html#detectron2.structures.Instances)
-   object with the following fields:
-   * "proposal_boxes": [Boxes](../modules/structures.html#detectron2.structures.Boxes)
-     object storing N boxes.
-   * "objectness_logits": a torch vector of N confidence scores.
- * "panoptic_seg": A tuple of `(pred: Tensor, segments_info: Optional[list[dict]])`.
-   The `pred` tensor has shape (H, W), containing the segment id of each pixel.
-
-   * If `segments_info` exists, each dict describes one segment id in `pred` and has the following fields:
-
-     * "id": the segment id
-     * "isthing": whether the segment is a thing or stuff
-     * "category_id": the category id of this segment.
-
-     If a pixel's id does not exist in `segments_info`, it is considered to carry the void label
-     defined in [Panoptic Segmentation](https://arxiv.org/abs/1801.00868).
-
-   * If `segments_info` is None, all pixel values in `pred` must be ≥ -1.
-     Pixels with value -1 are assigned void labels.
-     Otherwise, the category id of each pixel is obtained by
-     `category_id = pixel // metadata.label_divisor`.
-
-
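Reading those per-image results might look like the following sketch (which fields exist depends on the tasks the model performs):

```python
# One output dict per input image; instance fields as listed above.
for output in outputs:
    instances = output["instances"]
    boxes = instances.pred_boxes      # Boxes object holding N boxes
    scores = instances.scores         # Tensor of N confidence scores
    classes = instances.pred_classes  # Tensor of N labels in [0, num_categories)
```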
- ### Partially execute a model:
-
- Sometimes you may want to obtain an intermediate tensor inside a model,
- such as the input of a certain layer or the output before post-processing.
- Since there are typically hundreds of intermediate tensors, there isn't an API that provides
- the exact intermediate result you need.
- You have the following options:
-
- 1. Write a (sub)model. Following the [tutorial](./write-models.md), you can
-    rewrite a model component (e.g. a head of a model), such that it
-    does the same thing as the existing component, but returns the output
-    you need.
- 2. Partially execute a model. You can create the model as usual,
-    but use custom code to execute it instead of its `forward()`. For example,
-    the following code obtains mask features before the mask head.
-
-    ```python
-    images = ImageList.from_tensors(...)  # preprocessed input tensor
-    model = build_model(cfg)
-    model.eval()
-    features = model.backbone(images.tensor)
-    proposals, _ = model.proposal_generator(images, features)
-    instances, _ = model.roi_heads(images, features, proposals)
-    mask_features = [features[f] for f in model.roi_heads.in_features]
-    mask_features = model.roi_heads.mask_pooler(mask_features, [x.pred_boxes for x in instances])
-    ```
-
- 3. Use [forward hooks](https://pytorch.org/tutorials/beginner/former_torchies/nnft_tutorial.html#forward-and-backward-function-hooks).
-    Forward hooks can help you obtain inputs or outputs of a certain module.
-    If they are not exactly what you want, they can at least be used together with partial execution
-    to obtain other tensors (see the sketch below).
-
- All options require you to read the documentation, and sometimes the code,
- of the existing models to understand their internal logic,
- in order to write code that obtains the internal tensors.
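As a concrete instance of option 3 above, a forward hook that captures one submodule's output can be sketched like this (the choice of `model.backbone` is illustrative and varies by architecture):

```python
captured = {}

def save_output(module, inputs, output):
    captured["backbone"] = output  # stash the tensor(s) for later inspection

handle = model.backbone.register_forward_hook(save_output)
outputs = model(batched_inputs)    # run the model as usual
handle.remove()                    # detach the hook once done
```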
 
 
spaces/Baptlem/UCDR-Net/README.md DELETED
@@ -1,14 +0,0 @@
- ---
- title: UCDR-Net
- emoji: 🚀
- colorFrom: blue
- colorTo: blue
- sdk: gradio
- sdk_version: 3.27.0
- app_file: app.py
- pinned: true
- tags:
-   - jax-diffusers-event
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/Benson/text-generation/Examples/Descargar Bhool Bhulaiyaa 2 Tono De Llamada.md DELETED
@@ -1,78 +0,0 @@
-
- <h1>Bhool Bhulaiyaa 2: A Horror-Comedy Sequel That Will Make You Laugh and Scream</h1>
- <p>If you are looking for a movie that entertains you with both comedy and horror, then you should not miss <strong>Bhool Bhulaiyaa 2</strong>, a Hindi film released on May 20, 2022. The movie is a standalone sequel to <strong>Bhool Bhulaiyaa</strong> (2007), a blockbuster starring Akshay Kumar and Vidya Balan. <strong>Bhool Bhulaiyaa 2</strong> features Tabu, Kartik Aaryan, and Kiara Advani in the lead roles, alongside Rajpal Yadav, Sanjay Mishra, Ashwini Kalsekar, and others in supporting roles. The film is directed by Anees Bazmee, written by Aakash Kaushik and Farhad Samji, and produced by Bhushan Kumar, Murad Khetani, Krishan Kumar, and Anjum Khetani under the banners of T-Series Films and Cine1 Studios.</p>
- <p>The plot of <strong>Bhool Bhulaiyaa 2</strong> follows Ruhaan Randhawa (Kartik Aaryan), a fake psychic who has to deal with the return of Manjulika (Tabu), a malevolent spirit bent on taking revenge on the Thakur family. Ruhaan meets Reet Rathore (Kiara Advani), a reluctant bride on her way to Rajasthan to marry her fiancé Sagar (Amar Upadhyay). Fate leads them to an abandoned mansion where Manjulika has been trapped for 18 years by a group of priests. As Ruhaan tries to help Reet escape her family's pressure and Manjulika's wrath, he uncovers the dark secrets of the past and the truth about his own identity. Can Ruhaan save Reet and himself from Manjulika's curse? Or will he become her next victim?</p>
- <h2>download bhool bhulaiyaa 2 ringtone</h2><br /><p><b><b>Download Zip</b> ->>->>->> <a href="https://bltlly.com/2v6LWr">https://bltlly.com/2v6LWr</a></b></p><br /><br />
- <h2>What Is Bhool Bhulaiyaa 2 About?</h2>
-
- <h3>The Return of Manjulika</h3>
- <p>The film's main villain is Manjulika, a vengeful spirit who was once a dancer in the court of Thakur Vikram Singh (Rajendra Gupta). She was in love with him, but he betrayed her and married another woman. She took her own life and vowed to haunt his family forever. She possessed his daughter Radhika (Vidya Balan) in the first film and tried to kill her husband Siddharth (Shiney Ahuja). She was exorcised by Dr. Aditya Shrivastav (Akshay Kumar), a psychiatrist posing as a priest.</p>
- <p>In <strong>Bhool Bhulaiyaa 2</strong>, Manjulika returns after 18 years, when the priests guarding her tomb are killed by thugs. She escapes her prison and finds a new host in Reet, who is Thakur Vikram Singh's granddaughter. She wants revenge on the Thakur family and also on Ruhaan, who is Dr. Aditya Shrivastav's son. She uses her supernatural powers to manipulate, torment, and kill anyone who stands in her way.</p>
- <p>Manjulika is played by Tabu, one of Bollywood's most versatile and talented actresses. She delivers a brilliant performance as the evil spirit who can switch from seductive to terrifying in a matter of seconds. She also shows off her dancing skills in the song "Ami Je Tomar", a remix of the original <strong>Bhool Bhulaiyaa</strong> track. Tabu has said she enjoyed playing Manjulika, as it was a challenging and fun role for her.</p>
- <h3>The Fake Psychic Ruhaan Randhawa</h3>
- <p>The film's hero is Ruhaan Randhawa, a fake psychic who claims to have supernatural abilities but actually uses tricks and gadgets to fool people. He earns money by performing séances, exorcisms, and readings for his clients. He is also a flirtatious, witty person who likes to have fun and enjoy life.</p>
-
- <p>Ruhaan is played by Kartik Aaryan, one of Bollywood's most popular and charming actors. He gives a hilarious and heroic performance as the fake psychic who has to face his fears and fight Manjulika. He also shows his chemistry with Kiara Advani, who plays Reet, in the romantic scenes and songs. Kartik Aaryan has said he was excited to be part of <strong>Bhool Bhulaiyaa 2</strong>, since the original was one of his favorite films when he was young.</p> <h3>The Reluctant Bride Reet Rathore</h3>
- <p>The film's heroine is Reet Rathore, a reluctant bride who is being forced to marry Sagar, a rich and arrogant businessman who is the son of Thakur Vikram Singh's friend. She does not love him and wants to pursue her career as a fashion designer. She is also a kind and brave person who looks after her family and friends.</p>
- <p></p>
- <p>Reet meets Ruhaan on a train and finds him attractive and funny. She agrees to go with him to Rajasthan to escape her family and Sagar. She also becomes the target of Manjulika, who has possessed her and wants to use her body to kill the Thakur family. She struggles to fight off Manjulika's influence and to express her feelings for Ruhaan.</p>
- <p>Reet is played by Kiara Advani, one of Bollywood's most beautiful and talented actresses. She gives a sweet and strong performance as the reluctant bride who has to face many challenges and dangers. She also looks stunning in the traditional outfits and jewelry she wears in the film. Kiara Advani has said she was honored to be part of <strong>Bhool Bhulaiyaa 2</strong>, as it was a dream come true for her.</p>
- <h2>How Is Bhool Bhulaiyaa 2 Different from Bhool Bhulaiyaa?</h2>
- <p><strong>Bhool Bhulaiyaa 2</strong> is not a direct sequel to <strong>Bhool Bhulaiyaa</strong> but a standalone film with its own story, characters, and style. It differs from its predecessor in many ways, such as:</p>
- <h3>Comedy vs. Horror</h3>
-
- <h3>Standalone vs. Sequel</h3>
- <p>While <strong>Bhool Bhulaiyaa</strong> was a remake of the Malayalam film <strong>Manichitrathazhu</strong> (1993), which was also remade in several other languages, <strong>Bhool Bhulaiyaa 2</strong> is not a remake of any other film but an original story with new characters. The film does not follow the events of the previous movie, but it has some references to and connections with it. It also features cameos by Akshay Kumar and Vidya Balan, who reprise their roles from <strong>Bhool Bhulaiyaa</strong>.</p>
- <h3>Inspiration vs. Originality</h3>
- <p>While <strong>Bhool Bhulaiyaa</strong> was inspired by a Malayalam film and an M.R. James novel called <em>The Mystery of the Yellow Room</em>, <strong>Bhool Bhulaiyaa 2</strong> draws on several sources but also has its own twists and surprises. The film is loosely based on another Malayalam film called <strong>Ezra</strong> (2017), which was also a horror comedy about a haunted mansion and a vengeful spirit. It is also influenced by some Hollywood films such as <strong>The Conjuring</strong>, <strong>The Exorcist</strong>, and <strong>The Shining</strong>. The film also has original elements of its own, such as the fake-psychic character, the Rajasthan setting, and the climax scene.</p>
- <h2>What Are the Highlights of Bhool Bhulaiyaa 2?</h2>
- <p><strong>Bhool Bhulaiyaa 2</strong> has many highlights that make it a must-watch for all kinds of viewers. Some of them are:</p>
- <h3>The Star-Studded Cast</h3>
- <p>The film boasts a star-studded cast that includes some of Bollywood's best actors. Tabu, Kartik Aaryan, and Kiara Advani give excellent performances as Manjulika, Ruhaan, and Reet respectively. They bring their characters to life with their expressions, dialogue, and actions. They also share great chemistry with one another and create memorable scenes together.</p>
-
- <h3>The Catchy Songs</h3>
- <p>The film has a catchy, melodious soundtrack consisting of six songs composed by Pritam and Tanishk Bagchi. The songs are sung by popular singers such as Arijit Singh, Shreya Ghoshal, Jubin Nautiyal, Neha Kakkar, and others, and remixed by DJ Chetas, Lijo George, and others. The songs are a mix of romantic, dance, and horror styles that suit the film's mood and theme.</p>
- <p>The film's title track, "Bhool Bhulaiyaa 2", is a remix of the original <strong>Bhool Bhulaiyaa</strong> song composed by Pritam and sung by Neeraj Shridhar. The new version is sung by Jubin Nautiyal and Tulsi Kumar, with new lyrics by Tanishk Bagchi. It is an energetic, high-spirited number featuring Kartik Aaryan and Kiara Advani dancing on a grand set with many dancers.</p>
- <p>Another popular song from the film is "Ami Je Tomar", also a remix of the original <strong>Bhool Bhulaiyaa</strong> song composed by Pritam and sung by Shreya Ghoshal and K.K. The new version is sung by Arijit Singh and Shreya Ghoshal, with new lyrics by Tanishk Bagchi. It is a romantic, haunting number featuring Tabu performing a classical dance in a traditional costume.</p>
- <h3>The Stunning Locations</h3>
- <p>The film has stunning cinematography that showcases the beauty and mystery of Rajasthan and other places. It was shot in locations such as Jaipur, Jaisalmer, Udaipur, Lucknow, Mumbai, and London, capturing the culture, architecture, and scenery of these places with vivid colors, angles, and lighting. The film also uses special effects and sets to create a realistic, eerie atmosphere for the horror scenes.</p>
-
- <h3>The Thrilling Climax</h3>
- <p>The film has a thrilling climax that will keep you on the edge of your seat. It involves a final showdown between Ruhaan and Manjulika that takes place in the mansion. Ruhaan has to use his wits, courage, and gadgets to fight Manjulika's supernatural powers and save Reet from her clutches. He also has to face his father, Dr. Aditya Shrivastav, who arrives on the scene to help him.</p>
- <p>The climax has many twists and turns that will surprise you and make you gasp, and it reveals shocking secrets about Ruhaan's past and Manjulika's motive. It also offers emotional moments that will touch your heart and make you cry, action sequences that will make you cheer and applaud, and funny moments that will make you laugh and ease the tension.</p>
- <h2>How to Download the Bhool Bhulaiyaa 2 Ringtone</h2>
- <p>If you like the songs from <strong>Bhool Bhulaiyaa 2</strong> and want to set them as your phone's ringtone, you can follow these simple steps:</p>
- <ol>
- <li>Go to <a href="https://www.zedge.net/find/ringtones/bhool%20bhulaiyaa%202">https://www.zedge.net/find/ringtones/bhool%20bhulaiyaa%202</a>, a website that offers free ringtones for various devices.</li>
- <li>Select the song you want to download as a ringtone from the list of options. You can choose from "Bhool Bhulaiyaa 2", "Ami Je Tomar", "Chura Ke Dil Mera 2", "Mere Dholna 2", "Aami Montro Tontro", or "Bhool Bhulaiyaa 2 Theme".</li>
- <li>Click the download button next to the song's name. You will be redirected to another page where you can preview the ringtone and choose the format that suits your device. You can choose between MP3, M4R, OGG, or WAV formats.</li>
- <li>Click the download button again and save the ringtone file to your device. You can also scan the QR code on the page to download the ringtone directly to your phone.</li>
-
- </ol>
- <p>Congratulations! You have successfully downloaded the <strong>Bhool Bhulaiyaa 2</strong> ringtone to your phone. Now you can enjoy the film's catchy tunes every time your phone rings.</p>
- <h1>Conclusion</h1>
- <p><strong>Bhool Bhulaiyaa 2</strong> is a horror comedy that will make you laugh and scream with its hilarious and frightening scenes. The film has a star-studded cast, catchy songs, stunning locations, and a thrilling climax that will keep you entertained until the end. It also differs from <strong>Bhool Bhulaiyaa</strong> in many ways and has its own originality and surprises. The film is a perfect choice for anyone who loves the comedy and horror genres and wants a fun, exciting time at the movies.</p>
- <p>If you are interested in watching <strong>Bhool Bhulaiyaa 2</strong>, you can book your tickets online or visit your nearest theater. You can also download ringtones of the film's songs to your phone and enjoy them anytime, and you can follow the film's official social media pages and website for more updates and news.</p>
- <p>We hope you enjoyed reading this article and learned more about <strong>Bhool Bhulaiyaa 2</strong>. If you have any questions or comments, feel free to leave them in the comments section below. Thank you for your time and attention.</p>
- <h2>FAQs</h2>
- <p>Here are some frequently asked questions about Bhool Bhulaiyaa 2:</p>
- <ul>
- <li><strong>Q: When was Bhool Bhulaiyaa 2 released?</strong></li>
- <li>A: Bhool Bhulaiyaa 2 was released on May 20, 2022 in India and other countries.</li>
- <li><strong>Q: Who are the lead actors in Bhool Bhulaiyaa 2?</strong></li>
- <li>A: The lead actors in Bhool Bhulaiyaa 2 are Tabu, Kartik Aaryan, and Kiara Advani.</li>
- <li><strong>Q: Is Bhool Bhulaiyaa 2 a remake of any other film?</strong></li>
- <li>A: No, Bhool Bhulaiyaa 2 is not a remake of any other film but an original story with new characters.</li>
- <li><strong>Q: What is the genre of Bhool Bhulaiyaa 2?</strong></li>
-
- <li><strong>Q: How can I download the Bhool Bhulaiyaa 2 ringtone to my phone?</strong></li>
- <li>A: You can download the Bhool Bhulaiyaa 2 ringtone to your phone by following these steps:</li>
- <ol>
- <li>Go to <a href="https://www.zedge.net/find/ringtones/bhool%20bhulaiyaa%202">https://www.zedge.net/find/ringtones/bhool%20bhulaiyaa%202</a>, a website that offers free ringtones for various devices.</li>
- <li>Select the song you want to download as a ringtone from the list of options.</li>
- <li>Click the download button next to the song's name and save the ringtone file to your device.</li>
- <li>Go to your phone's settings and select the sound option. Then select the ringtone option and browse to the ringtone file you downloaded.</li>
- <li>Select the file and set it as your default ringtone, or assign it to a specific contact.</li>
- </ol>
- </ul></p> 64aa2da5cf<br />
- <br />
- <br />
 
spaces/Benson/text-generation/Examples/Descargar Facebook Apk Android 4.md DELETED
@@ -1,136 +0,0 @@
-
- <h1>Download Facebook APK Android 4: How to Get the Latest Version of the Social Media App</h1>
- <p>Facebook is one of the most popular social media platforms in the world, with more than 2.8 billion monthly active users as of December 2020. Whether you want to keep in touch with your friends and family, share your photos and videos, join groups and communities, or follow your favorite celebrities and influencers, Facebook has something for everyone.</p>
- <p>But what if you have an older Android device that does not support the latest version of the Facebook app? What if you want access to some features that are not available in the official app? Or what if you simply want to save storage space and data usage on your phone?</p>
- <h2>download facebook apk android 4</h2><br /><p><b><b>Download</b> &raquo;&raquo;&raquo; <a href="https://bltlly.com/2v6JW8">https://bltlly.com/2v6JW8</a></b></p><br /><br />
- <p>If you answered yes to any of these questions, you may want to try downloading Facebook APK Android 4. This is a modified version of the original Facebook app that is compatible with Android devices running version 4.0.3 or higher. In this article, we will explain what Facebook APK Android 4 is, how to download it, how to update it, and how to troubleshoot it. Let's get started!</p>
- <h2>What Is Facebook APK Android 4?</h2>
- <p>An APK (Android Package Kit) is a file format that contains all the components of an Android app, such as the code, resources, assets, and manifest. You can install an APK file on your device without using the Google Play Store, which is the official source for Android apps.</p>
- <p>Facebook APK Android 4 is an unofficial version of the Facebook app that has been modified to run on older Android devices. It has some advantages and disadvantages compared to the official app, which we discuss below.</p>
- <h3>The Benefits of Using Facebook APK Android 4</h3>
- <ul>
- <li>It is compatible with Android devices running version 4.0.3 or higher, which means you can use it on devices that do not support the official app.</li>
-
- <li>It uses less data than the official app, which means you can save some money on your mobile data plan.</li>
- <li>It has some features that are not available in the official app, such as downloading videos, customizing themes, and hiding your online status.</li>
- </ul>
- <h3>The Drawbacks of Using Facebook APK Android 4</h3>
- <ul>
- <li>It is not authorized by Facebook, which means it may violate Facebook's terms of service and privacy policy. You risk losing your account or exposing your personal information if you use it.</li>
- <li>It may not be safe, since it may contain malware, viruses, or spyware that can harm your device or steal your data. You should only download it from trusted sources and scan it with an antivirus before installing it.</li>
- <li>It may not be stable or reliable, since it may crash, freeze, or malfunction at any time. You may also run into bugs or glitches that affect your user experience.</li>
- <li>It may not be updated regularly, which means you may miss out on new features or improvements added to the official app.</li>
- </ul>
- <h2>How to Download Facebook APK Android 4</h2>
- <p>If you want to try Facebook APK Android 4 on your device, you will need to follow these steps:</p>
- <h3>Step 1: Enable Unknown Sources on Your Device</h3>
- <p>By default, Android devices do not allow the installation of apps from sources other than the Google Play Store. This is a security measure to prevent the installation of malicious or harmful apps. However, if you want to install Facebook APK Android 4, you will need to enable the option to install apps from unknown sources on your device. Here is how to do it:</p>
- <ul>
- <li>Go to your device's settings and tap on security or privacy.</li>
- <li>Find the option that says unknown sources or install unknown apps and toggle it on.</li>
-
- </ul>
- <p>You have now enabled unknown sources on your device and can proceed to the next step.</p>
- <p></p>
- <h3>Step 2: Find a Reliable Source for the APK File</h3>
- <p>The next step is to find a reliable source for the Facebook APK Android 4 file. You should be careful when choosing a source, since some websites may offer fake or infected files that can damage your device or data. Here are some tips to help you find a reliable source:</p>
- <ul>
- <li>Look for websites with a good reputation and positive reviews from other users. You can also check the APK file's ratings and comments on the website.</li>
- <li>Avoid websites with pop-ups, ads, or redirects that can lead you to unwanted or harmful pages. You can also use an ad blocker, or a browser with a built-in ad blocker, to avoid these annoyances.</li>
- <li>Check the APK file's details, such as the file size, version, date, and developer. Make sure they match the official Facebook app or the latest version of Facebook APK Android 4.</li>
- <li>Scan the APK file with an antivirus or a malware scanner before downloading it. You can also use online tools such as VirusTotal to check the file for threats.</li>
- </ul>
- <p>One source we recommend for downloading Facebook APK Android 4 is APKPure, a reputable website that offers safe, verified APK files. You can also use other sources you trust, but make sure to follow the tips above.</p>
- <h3>Step 3: Download and Install the APK File</h3>
- <p>The final step is to download and install the Facebook APK Android 4 file on your device. Here is how to do it:</p>
- <ul>
- <li>Open your browser and go to the website where you found the APK file. Tap the download button or link and wait for the file to download.</li>
- <li>Once the download is complete, go to your device's file manager and locate the APK file. Tap on it to open it.</li>
-
- <li>Once the installation is done, you can open the app and sign in with your Facebook account. You can also create a new account if you do not have one.</li>
- </ul>
- <p>Congratulations! You have successfully downloaded and installed Facebook APK Android 4 on your device. Now you can enjoy using the app and its features.</p>
- <h2>How to Update Facebook APK Android 4</h2>
- <p>If you want to keep using Facebook APK Android 4, you will need to update it regularly to get the latest features and improvements. There are two ways to update Facebook APK Android 4: using the built-in updater, or downloading the latest APK file from the official website.</p>
- <h3>Option 1: Use the Built-In Updater</h3>
- <p>Some versions of Facebook APK Android 4 have a built-in updater that lets you check for updates and download them directly from the app. Here is how to use it:</p>
- <ul>
- <li>Open Facebook APK Android 4 and tap the menu icon (three horizontal lines) in the top right corner of the screen.</li>
- <li>Scroll down and tap settings & privacy.</li>
- <li>Tap app updates.</li>
- <li>If an update is available, tap update now and wait for the update to download and install.</li>
- <li>If no update is available, tap check for updates and wait for the app to scan for any new versions.</li>
- </ul>
- <p>This option is convenient and easy, but it may not work for all versions of Facebook APK Android 4. If you do not see the app updates option in your settings & privacy menu, you will have to use option 2 instead.</p>
- <h3>Option 2: Download the Latest APK File from the Official Website</h3>
- <p>Another way to update Facebook APK Android 4 is to download the latest APK file from the official Facebook website. Here is how to do it:</p>
- <ul>
- <li>Open your browser and go to <a href="https://www.facebook.com/android">https://www.facebook.com/android</a>, the official Facebook website for Android devices.</li>
-
- <li>A message will appear asking you to download the APK file. Tap OK or download and wait for the file to download.</li>
- <li>Once the download is complete, go to your device's file manager and locate the APK file. Tap on it to open it.</li>
- <li>A message will appear asking you to install the app. Tap install and wait for the installation process to finish.</li>
- <li>Once the installation is done, you can open the app and sign in with your Facebook account. You can also create a new account if you do not have one.</li>
- </ul>
- <p>This option is reliable and safe, since you are downloading the APK file from the official source. However, it may take longer and consume more data than option 1.</p>
- <h2>How to Troubleshoot Facebook APK Android 4</h2>
- <p>Sometimes you may run into problems or issues when using Facebook APK Android 4. These can include errors, crashes, freezes, or poor performance. Do not worry, as there are ways to troubleshoot Facebook APK Android 4 and fix these problems. Here are some common problems and solutions:</p>
- <h3>Common Problems and Solutions</h3>
- <table>
- <tr>
- <th>Problem</th>
- <th>Solution</th>
- </tr>
- <tr>
- <td>The app will not install or update</td>
- <td>Make sure you have enough storage space on your device and a stable internet connection. Also, check whether you have enabled unknown sources on your device and whether you downloaded the correct APK file for your device.</td>
- </tr>
- <tr>
- <td>The app will not open or load</td>
- <td>Clear the app's cache and data by going to your device's settings, apps, Facebook, storage, and tapping clear cache and clear data. Also, check whether you have the latest version of the app and whether your device meets the minimum requirements to run the app.</td>
- </tr>
- <tr>
- <td>The app crashes or freezes</td>
-
- </tr>
- <tr>
- <td>The app is slow or laggy</td>
- <td>Reduce the app's data usage by going to your device's settings, data usage, Facebook, and toggling off background data. Also, turn off any unnecessary notifications or features that may slow the app down.</td>
- </tr>
- <tr>
- <td>The app shows incorrect or outdated information</td>
- <td>Refresh the app by swiping down on the screen or tapping the refresh icon in the top right corner of the screen. Also, check whether your device's date and time are correct and whether you have a good internet connection.</td>
- </tr>
- </table>
- <h3>Tips and Tricks to Optimize Your Experience</h3>
- <ul>
- <li>Use Facebook Lite instead of Facebook APK Android 4 if you have a low-end device or a slow internet connection. Facebook Lite is a lighter, faster version of Facebook that consumes less data and fewer resources.</li>
- <li>Use Messenger Lite instead of Messenger if you want to chat with your Facebook friends without using the main app. Messenger Lite is a simpler, faster version of Messenger that also consumes less data and fewer resources.</li>
- <li>Use Facebook Web instead of Facebook APK Android 4 if you want to access Facebook from your browser without installing any app. Facebook Web is a mobile version of Facebook that works in any browser.</li>
- <li>Use dark mode instead of light mode if you want to save battery life and reduce eye strain. Dark mode is a feature that changes the app's background color from white to black. You can enable dark mode by going to your device's settings, display, dark mode, and turning it on.</li>
- <li>Use shortcuts instead of menus if you want to reach your favorite features faster. Shortcuts are icons that appear at the bottom of the screen and let you quickly switch between news feed, groups, watch, marketplace, and notifications. You can customize your shortcuts by tapping and holding them until they move, then dragging them to your preferred position.</li>
- </ul>
-
- <p>In this article, we have explained what Facebook APK Android 4 is, how to download it, how to update it, and how to troubleshoot it. We have also shared some tips and tricks to optimize your experience with the app. We hope you found this article helpful and informative.</p>
- <p>Facebook APK Android 4 is a great alternative to the official Facebook app, especially if you have an older Android device or want access to some extra features. However, you should also be aware of the risks and challenges that come with using an unofficial app. You should always download the app from trusted sources, scan it for threats, and update it regularly. You should also follow the troubleshooting steps if you run into any problems or issues with the app.</p>
- <p>If you have any questions or comments about Facebook APK Android 4, feel free to leave a comment below. We would love to hear from you!</p>
- <h2>FAQs</h2>
- <p>Here are some frequently asked questions about Facebook APK Android 4:</p>
- <h3>Is Facebook APK Android 4 Legal?</h3>
- <p>Facebook APK Android 4 is not illegal, but it is not authorized by Facebook either. It may violate Facebook's terms of service and privacy policy, which means you risk losing your account or exposing your personal information if you use it. You should use it at your own discretion and responsibility.</p>
- <h3>Is Facebook APK Android 4 Safe?</h3>
- <p>Facebook APK Android 4 may not be safe, since it may contain malware, viruses, or spyware that can harm your device or steal your data. You should only download it from trusted sources and scan it with an antivirus before installing it. You should also avoid granting the app unnecessary permissions or access.</p>
- <h3>Is Facebook APK Android 4 Free?</h3>
-
- <h3>How Do I Uninstall Facebook APK Android 4?</h3>
- <p>If you want to uninstall Facebook APK Android 4 from your device, you can follow these steps:</p>
- <ul>
- <li>Go to your device's settings and tap on apps or applications.</li>
- <li>Find and tap on Facebook.</li>
- <li>Tap uninstall and confirm your action.</li>
- </ul>
- <p>You can also delete the APK file from your device's file manager if you no longer need it.</p>
- <h3>Can I Use Facebook APK Android 4 on Other Devices?</h3>
- <p>Facebook APK Android 4 is designed for Android devices running version 4.0.3 or higher. You may be able to use it on other devices, such as iOS or Windows, but you would need an emulator or a converter to do so. However, this may not work well, or at all, and may cause compatibility or performance issues. We recommend using the official Facebook app or the web version on other devices.</p> 64aa2da5cf<br />
- <br />
- <br />
 
spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/tz/win.py DELETED
@@ -1,370 +0,0 @@
1
- # -*- coding: utf-8 -*-
2
- """
3
- This module provides an interface to the native time zone data on Windows,
4
- including :py:class:`datetime.tzinfo` implementations.
5
-
6
- Attempting to import this module on a non-Windows platform will raise an
7
- :py:obj:`ImportError`.
8
- """
9
- # This code was originally contributed by Jeffrey Harris.
10
- import datetime
11
- import struct
12
-
13
- from six.moves import winreg
14
- from six import text_type
15
-
16
- try:
17
- import ctypes
18
- from ctypes import wintypes
19
- except ValueError:
20
- # ValueError is raised on non-Windows systems for some horrible reason.
21
- raise ImportError("Running tzwin on non-Windows system")
22
-
23
- from ._common import tzrangebase
24
-
25
- __all__ = ["tzwin", "tzwinlocal", "tzres"]
26
-
27
- ONEWEEK = datetime.timedelta(7)
28
-
29
- TZKEYNAMENT = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Time Zones"
30
- TZKEYNAME9X = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Time Zones"
31
- TZLOCALKEYNAME = r"SYSTEM\CurrentControlSet\Control\TimeZoneInformation"
32
-
33
-
34
- def _settzkeyname():
35
- handle = winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE)
36
- try:
37
- winreg.OpenKey(handle, TZKEYNAMENT).Close()
38
- TZKEYNAME = TZKEYNAMENT
39
- except WindowsError:
40
- TZKEYNAME = TZKEYNAME9X
41
- handle.Close()
42
- return TZKEYNAME
43
-
44
-
45
- TZKEYNAME = _settzkeyname()
46
-
47
-
48
- class tzres(object):
49
- """
50
- Class for accessing ``tzres.dll``, which contains timezone name related
51
- resources.
52
-
53
- .. versionadded:: 2.5.0
54
- """
55
- p_wchar = ctypes.POINTER(wintypes.WCHAR) # Pointer to a wide char
56
-
57
- def __init__(self, tzres_loc='tzres.dll'):
58
- # Load the user32 DLL so we can load strings from tzres
59
- user32 = ctypes.WinDLL('user32')
60
-
61
- # Specify the LoadStringW function
62
- user32.LoadStringW.argtypes = (wintypes.HINSTANCE,
63
- wintypes.UINT,
64
- wintypes.LPWSTR,
65
- ctypes.c_int)
66
-
67
- self.LoadStringW = user32.LoadStringW
68
- self._tzres = ctypes.WinDLL(tzres_loc)
69
- self.tzres_loc = tzres_loc
70
-
71
- def load_name(self, offset):
72
- """
73
- Load a timezone name from a DLL offset (integer).
74
-
75
- >>> from dateutil.tzwin import tzres
76
- >>> tzr = tzres()
77
- >>> print(tzr.load_name(112))
78
- 'Eastern Standard Time'
79
-
80
- :param offset:
81
- A positive integer value referring to a string from the tzres dll.
82
-
83
- .. note::
84
-
85
- Offsets found in the registry are generally of the form
86
- ``@tzres.dll,-114``. The offset in this case is 114, not -114.
87
-
88
- """
89
- resource = self.p_wchar()
90
- lpBuffer = ctypes.cast(ctypes.byref(resource), wintypes.LPWSTR)
91
- nchar = self.LoadStringW(self._tzres._handle, offset, lpBuffer, 0)
92
- return resource[:nchar]
93
-
94
- def name_from_string(self, tzname_str):
95
- """
96
- Parse strings as returned from the Windows registry into the time zone
97
- name as defined in the registry.
98
-
99
- >>> from dateutil.tzwin import tzres
100
- >>> tzr = tzres()
101
- >>> print(tzr.name_from_string('@tzres.dll,-251'))
102
- 'Dateline Daylight Time'
103
- >>> print(tzr.name_from_string('Eastern Standard Time'))
104
- 'Eastern Standard Time'
105
-
106
- :param tzname_str:
107
- A timezone name string as returned from a Windows registry key.
108
-
109
- :return:
110
- Returns the localized timezone string from tzres.dll if the string
111
- is of the form `@tzres.dll,-offset`, else returns the input string.
112
- """
113
- if not tzname_str.startswith('@'):
114
- return tzname_str
115
-
116
- name_splt = tzname_str.split(',-')
117
- try:
118
- offset = int(name_splt[1])
119
- except:
120
- raise ValueError("Malformed timezone string.")
121
-
122
- return self.load_name(offset)
123
-
124
-
125
- class tzwinbase(tzrangebase):
126
- """tzinfo class based on win32's timezones available in the registry."""
127
- def __init__(self):
128
- raise NotImplementedError('tzwinbase is an abstract base class')
129
-
130
- def __eq__(self, other):
131
- # Compare on all relevant dimensions, including name.
132
- if not isinstance(other, tzwinbase):
133
- return NotImplemented
134
-
135
- return (self._std_offset == other._std_offset and
136
- self._dst_offset == other._dst_offset and
137
- self._stddayofweek == other._stddayofweek and
138
- self._dstdayofweek == other._dstdayofweek and
139
- self._stdweeknumber == other._stdweeknumber and
140
- self._dstweeknumber == other._dstweeknumber and
141
- self._stdhour == other._stdhour and
142
- self._dsthour == other._dsthour and
143
- self._stdminute == other._stdminute and
144
- self._dstminute == other._dstminute and
145
- self._std_abbr == other._std_abbr and
146
- self._dst_abbr == other._dst_abbr)
147
-
148
- @staticmethod
149
- def list():
150
- """Return a list of all time zones known to the system."""
151
- with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as handle:
152
- with winreg.OpenKey(handle, TZKEYNAME) as tzkey:
153
- result = [winreg.EnumKey(tzkey, i)
154
- for i in range(winreg.QueryInfoKey(tzkey)[0])]
155
- return result
156
-
157
- def display(self):
158
- """
159
- Return the display name of the time zone.
160
- """
161
- return self._display
162
-
163
- def transitions(self, year):
164
- """
165
- For a given year, get the DST on and off transition times, expressed
166
- always on the standard time side. For zones with no transitions, this
167
- function returns ``None``.
168
-
169
- :param year:
170
- The year whose transitions you would like to query.
171
-
172
- :return:
173
- Returns a :class:`tuple` of :class:`datetime.datetime` objects,
174
- ``(dston, dstoff)`` for zones with an annual DST transition, or
175
- ``None`` for fixed offset zones.
176
- """
177
-
178
- if not self.hasdst:
179
- return None
180
-
181
- dston = picknthweekday(year, self._dstmonth, self._dstdayofweek,
182
- self._dsthour, self._dstminute,
183
- self._dstweeknumber)
184
-
185
- dstoff = picknthweekday(year, self._stdmonth, self._stddayofweek,
186
- self._stdhour, self._stdminute,
187
- self._stdweeknumber)
188
-
189
- # Ambiguous dates default to the STD side
190
- dstoff -= self._dst_base_offset
191
-
192
- return dston, dstoff
193
-
194
- def _get_hasdst(self):
195
- return self._dstmonth != 0
196
-
197
- @property
198
- def _dst_base_offset(self):
199
- return self._dst_base_offset_
200
-
201
-
202
- class tzwin(tzwinbase):
203
- """
204
- Time zone object created from the zone info in the Windows registry
205
-
206
- These are similar to :py:class:`dateutil.tz.tzrange` objects in that
207
- the time zone data is provided in the format of a single offset rule
208
- for either 0 or 2 time zone transitions per year.
209
-
210
- :param: name
211
- The name of a Windows time zone key, e.g. "Eastern Standard Time".
212
- The full list of keys can be retrieved with :func:`tzwin.list`.
213
- """
214
-
215
- def __init__(self, name):
216
- self._name = name
217
-
218
- with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as handle:
219
- tzkeyname = text_type("{kn}\\{name}").format(kn=TZKEYNAME, name=name)
220
- with winreg.OpenKey(handle, tzkeyname) as tzkey:
221
- keydict = valuestodict(tzkey)
222
-
223
- self._std_abbr = keydict["Std"]
224
- self._dst_abbr = keydict["Dlt"]
225
-
226
- self._display = keydict["Display"]
227
-
228
- # See http://ww_winreg.jsiinc.com/SUBA/tip0300/rh0398.htm
229
- tup = struct.unpack("=3l16h", keydict["TZI"])
230
- stdoffset = -tup[0]-tup[1] # Bias + StandardBias * -1
231
- dstoffset = stdoffset-tup[2] # + DaylightBias * -1
232
- self._std_offset = datetime.timedelta(minutes=stdoffset)
233
- self._dst_offset = datetime.timedelta(minutes=dstoffset)
234
-
235
- # for the meaning see the win32 TIME_ZONE_INFORMATION structure docs
236
- # http://msdn.microsoft.com/en-us/library/windows/desktop/ms725481(v=vs.85).aspx
237
- (self._stdmonth,
238
- self._stddayofweek, # Sunday = 0
239
- self._stdweeknumber, # Last = 5
240
- self._stdhour,
241
- self._stdminute) = tup[4:9]
242
-
243
- (self._dstmonth,
244
- self._dstdayofweek, # Sunday = 0
245
- self._dstweeknumber, # Last = 5
246
- self._dsthour,
247
- self._dstminute) = tup[12:17]
248
-
249
- self._dst_base_offset_ = self._dst_offset - self._std_offset
250
- self.hasdst = self._get_hasdst()
251
-
252
- def __repr__(self):
253
- return "tzwin(%s)" % repr(self._name)
254
-
255
- def __reduce__(self):
256
- return (self.__class__, (self._name,))
257
-
258
-
259
- class tzwinlocal(tzwinbase):
260
- """
261
- Class representing the local time zone information in the Windows registry
262
-
263
- While :class:`dateutil.tz.tzlocal` makes system calls (via the :mod:`time`
264
- module) to retrieve time zone information, ``tzwinlocal`` retrieves the
265
- rules directly from the Windows registry and creates an object like
266
- :class:`dateutil.tz.tzwin`.
267
-
268
- Because Windows does not have an equivalent of :func:`time.tzset`, on
269
- Windows, :class:`dateutil.tz.tzlocal` instances will always reflect the
270
- time zone settings *at the time that the process was started*, meaning
271
- changes to the machine's time zone settings during the run of a program
272
- on Windows will **not** be reflected by :class:`dateutil.tz.tzlocal`.
273
- Because ``tzwinlocal`` reads the registry directly, it is unaffected by
274
- this issue.
275
- """
276
- def __init__(self):
277
- with winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE) as handle:
278
- with winreg.OpenKey(handle, TZLOCALKEYNAME) as tzlocalkey:
279
- keydict = valuestodict(tzlocalkey)
280
-
281
- self._std_abbr = keydict["StandardName"]
282
- self._dst_abbr = keydict["DaylightName"]
283
-
284
- try:
285
- tzkeyname = text_type('{kn}\\{sn}').format(kn=TZKEYNAME,
286
- sn=self._std_abbr)
287
- with winreg.OpenKey(handle, tzkeyname) as tzkey:
288
- _keydict = valuestodict(tzkey)
289
- self._display = _keydict["Display"]
290
- except OSError:
291
- self._display = None
292
-
293
- stdoffset = -keydict["Bias"]-keydict["StandardBias"]
294
- dstoffset = stdoffset-keydict["DaylightBias"]
295
-
296
- self._std_offset = datetime.timedelta(minutes=stdoffset)
297
- self._dst_offset = datetime.timedelta(minutes=dstoffset)
298
-
299
- # For reasons unclear, in this particular key, the day of week has been
300
- # moved to the END of the SYSTEMTIME structure.
301
- tup = struct.unpack("=8h", keydict["StandardStart"])
302
-
303
- (self._stdmonth,
304
- self._stdweeknumber, # Last = 5
305
- self._stdhour,
306
- self._stdminute) = tup[1:5]
307
-
308
- self._stddayofweek = tup[7]
309
-
310
- tup = struct.unpack("=8h", keydict["DaylightStart"])
311
-
312
- (self._dstmonth,
313
- self._dstweeknumber, # Last = 5
314
- self._dsthour,
315
- self._dstminute) = tup[1:5]
316
-
317
- self._dstdayofweek = tup[7]
318
-
319
- self._dst_base_offset_ = self._dst_offset - self._std_offset
320
- self.hasdst = self._get_hasdst()
321
-
322
- def __repr__(self):
323
- return "tzwinlocal()"
324
-
325
- def __str__(self):
326
- # str will return the standard name, not the daylight name.
327
- return "tzwinlocal(%s)" % repr(self._std_abbr)
328
-
329
- def __reduce__(self):
330
- return (self.__class__, ())
331
-
332
-
333
- def picknthweekday(year, month, dayofweek, hour, minute, whichweek):
334
- """ dayofweek == 0 means Sunday, whichweek 5 means last instance """
335
- first = datetime.datetime(year, month, 1, hour, minute)
336
-
337
- # This will work if dayofweek is ISO weekday (1-7) or Microsoft-style (0-6),
338
- # Because 7 % 7 = 0
339
- weekdayone = first.replace(day=((dayofweek - first.isoweekday()) % 7) + 1)
340
- wd = weekdayone + ((whichweek - 1) * ONEWEEK)
341
- if (wd.month != month):
342
- wd -= ONEWEEK
343
-
344
- return wd
345
-
346
-
347
- def valuestodict(key):
348
- """Convert a registry key's values to a dictionary."""
349
- dout = {}
350
- size = winreg.QueryInfoKey(key)[1]
351
- tz_res = None
352
-
353
- for i in range(size):
354
- key_name, value, dtype = winreg.EnumValue(key, i)
355
- if dtype == winreg.REG_DWORD or dtype == winreg.REG_DWORD_LITTLE_ENDIAN:
356
- # If it's a DWORD (32-bit integer), it's stored as unsigned - convert
357
- # that to a proper signed integer
358
- if value & (1 << 31):
359
- value = value - (1 << 32)
360
- elif dtype == winreg.REG_SZ:
361
- # If it's a reference to the tzres DLL, load the actual string
362
- if value.startswith('@tzres'):
363
- tz_res = tz_res or tzres()
364
- value = tz_res.name_from_string(value)
365
-
366
- value = value.rstrip('\x00') # Remove trailing nulls
367
-
368
- dout[key_name] = value
369
-
370
- return dout
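
Two details in the code above are easy to miss: valuestodict reinterprets registry DWORDs as signed 32-bit integers, and picknthweekday resolves the Windows month/weekday/week-number convention to a concrete date. A minimal standalone sketch of both behaviors (the registry value below is illustrative, not read from any real key):

# Signed reinterpretation of an unsigned 32-bit DWORD, as done in valuestodict:
value = 0xFFFFFF88              # 4294967176 when read as an unsigned DWORD
if value & (1 << 31):
    value -= 1 << 32
print(value)                    # -120, e.g. a UTC+2 bias expressed in minutes

# picknthweekday uses the Microsoft convention (Sunday == 0, whichweek 5 == last):
# picknthweekday(2023, 11, 0, 2, 0, 1) -> datetime.datetime(2023, 11, 5, 2, 0),
# the first Sunday of November 2023 at 02:00, i.e. the US DST end transition.
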
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/idna/intranges.py DELETED
@@ -1,54 +0,0 @@
1
- """
2
- Given a list of integers, made up of (hopefully) a small number of long runs
3
- of consecutive integers, compute a representation of the form
4
- ((start1, end1), (start2, end2) ...). Then answer the question "was x present
5
- in the original list?" in time O(log(# runs)).
6
- """
7
-
8
- import bisect
9
- from typing import List, Tuple
10
-
11
- def intranges_from_list(list_: List[int]) -> Tuple[int, ...]:
12
- """Represent a list of integers as a sequence of ranges:
13
- ((start_0, end_0), (start_1, end_1), ...), such that the original
14
- integers are exactly those x such that start_i <= x < end_i for some i.
15
-
16
- Ranges are encoded as single integers (start << 32 | end), not as tuples.
17
- """
18
-
19
- sorted_list = sorted(list_)
20
- ranges = []
21
- last_write = -1
22
- for i in range(len(sorted_list)):
23
- if i+1 < len(sorted_list):
24
- if sorted_list[i] == sorted_list[i+1]-1:
25
- continue
26
- current_range = sorted_list[last_write+1:i+1]
27
- ranges.append(_encode_range(current_range[0], current_range[-1] + 1))
28
- last_write = i
29
-
30
- return tuple(ranges)
31
-
32
- def _encode_range(start: int, end: int) -> int:
33
- return (start << 32) | end
34
-
35
- def _decode_range(r: int) -> Tuple[int, int]:
36
- return (r >> 32), (r & ((1 << 32) - 1))
37
-
38
-
39
- def intranges_contain(int_: int, ranges: Tuple[int, ...]) -> bool:
40
- """Determine if `int_` falls into one of the ranges in `ranges`."""
41
- tuple_ = _encode_range(int_, 0)
42
- pos = bisect.bisect_left(ranges, tuple_)
43
- # we could be immediately ahead of a tuple (start, end)
44
- # with start <= int_ < end
45
- if pos > 0:
46
- left, right = _decode_range(ranges[pos-1])
47
- if left <= int_ < right:
48
- return True
49
- # or we could be immediately behind a tuple (int_, end)
50
- if pos < len(ranges):
51
- left, _ = _decode_range(ranges[pos])
52
- if left == int_:
53
- return True
54
- return False
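
A quick usage sketch of the two public helpers above; the expected encodings follow directly from the (start << 32) | end packing described in the docstring:

ranges = intranges_from_list([1, 2, 3, 10, 11])
# Two runs, [1, 4) and [10, 12), each packed into a single integer:
assert ranges == ((1 << 32) | 4, (10 << 32) | 12)
assert intranges_contain(2, ranges)        # inside [1, 4)
assert intranges_contain(10, ranges)       # left edge of [10, 12)
assert not intranges_contain(5, ranges)    # falls in the gap between runs
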
spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/core/util.h DELETED
@@ -1,773 +0,0 @@
1
- /******************************************************************************
2
- * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
3
- *
4
- * Redistribution and use in source and binary forms, with or without
5
- * modification, are permitted provided that the following conditions are met:
6
- * * Redistributions of source code must retain the above copyright
7
- * notice, this list of conditions and the following disclaimer.
8
- * * Redistributions in binary form must reproduce the above copyright
9
- * notice, this list of conditions and the following disclaimer in the
10
- * documentation and/or other materials provided with the distribution.
11
- * * Neither the name of the NVIDIA CORPORATION nor the
12
- * names of its contributors may be used to endorse or promote products
13
- * derived from this software without specific prior written permission.
14
- *
15
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
16
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
17
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
18
- * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY
19
- * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
20
- * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
21
- * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
22
- * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
23
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
24
- * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
25
- *
26
- ******************************************************************************/
27
- #pragma once
28
-
29
- #include <cuda_occupancy.h>
30
- #include <thrust/detail/config.h>
31
- #include <thrust/system/cuda/config.h>
32
- #include <thrust/type_traits/is_contiguous_iterator.h>
33
- #include <thrust/detail/raw_pointer_cast.h>
34
- #include <thrust/system/cuda/detail/util.h>
35
- #include <cub/block/block_load.cuh>
36
- #include <cub/block/block_store.cuh>
37
- #include <cub/block/block_scan.cuh>
38
-
39
- namespace thrust
40
- {
41
-
42
- namespace cuda_cub {
43
- namespace core {
44
-
45
- #ifdef __NVCOMPILER_CUDA__
46
- # if (__NVCOMPILER_CUDA_ARCH__ >= 600)
47
- # define THRUST_TUNING_ARCH sm60
48
- # elif (__NVCOMPILER_CUDA_ARCH__ >= 520)
49
- # define THRUST_TUNING_ARCH sm52
50
- # elif (__NVCOMPILER_CUDA_ARCH__ >= 350)
51
- # define THRUST_TUNING_ARCH sm35
52
- # else
53
- # define THRUST_TUNING_ARCH sm30
54
- # endif
55
- #else
56
- # if (__CUDA_ARCH__ >= 600)
57
- # define THRUST_TUNING_ARCH sm60
58
- # elif (__CUDA_ARCH__ >= 520)
59
- # define THRUST_TUNING_ARCH sm52
60
- # elif (__CUDA_ARCH__ >= 350)
61
- # define THRUST_TUNING_ARCH sm35
62
- # elif (__CUDA_ARCH__ >= 300)
63
- # define THRUST_TUNING_ARCH sm30
64
- # elif !defined (__CUDA_ARCH__)
65
- # define THRUST_TUNING_ARCH sm30
66
- # endif
67
- #endif
68
-
69
- // Typelist - a container of types, supports up to 10 types
70
- // --------------------------------------------------------------------------
71
-
72
- class _;
73
- template <class = _, class = _, class = _, class = _, class = _, class = _, class = _, class = _, class = _, class = _>
74
- struct typelist;
75
-
76
- // -------------------------------------
77
-
78
- // supported SM arch
79
- // ---------------------
80
- struct sm30 { enum { ver = 300, warpSize = 32 }; };
81
- struct sm35 { enum { ver = 350, warpSize = 32 }; };
82
- struct sm52 { enum { ver = 520, warpSize = 32 }; };
83
- struct sm60 { enum { ver = 600, warpSize = 32 }; };
84
-
85
- // list of sm, checked from left to right order
86
- // the rightmost is the lowest sm arch supported
87
- // --------------------------------------------
88
- typedef typelist<sm60,sm52,sm35,sm30> sm_list;
89
-
90
- // lowest supported SM arch
91
- // --------------------------------------------------------------------------
92
-
93
- template<class, class>
94
- struct lowest_supported_sm_arch_impl;
95
-
96
- template <class SM, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9>
97
- struct lowest_supported_sm_arch_impl<SM, typelist<_0, _1, _2, _3, _4, _5, _6, _7, _8, _9> >
98
- : lowest_supported_sm_arch_impl<_0, typelist< _1, _2, _3, _4, _5, _6, _7, _8, _9> > {};
99
- template <class SM>
100
- struct lowest_supported_sm_arch_impl<SM, typelist<> >
101
- {
102
- typedef SM type;
103
- };
104
-
105
- typedef typename lowest_supported_sm_arch_impl<_,sm_list>::type lowest_supported_sm_arch;
106
-
107
- // metafunction to match next viable PtxPlan specialization
108
- // --------------------------------------------------------------------------
109
-
110
- __THRUST_DEFINE_HAS_NESTED_TYPE(has_tuning_t, tuning)
111
- __THRUST_DEFINE_HAS_NESTED_TYPE(has_type_t, type)
112
-
113
- template <template <class> class, class, class>
114
- struct specialize_plan_impl_loop;
115
- template <template <class> class, class>
116
- struct specialize_plan_impl_match;
117
-
118
- // we loop through the sm_list
119
- template <template <class> class P, class SM, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9>
120
- struct specialize_plan_impl_loop<P, SM, typelist<_0, _1, _2, _3, _4, _5, _6, _7, _8, _9> >
121
- : specialize_plan_impl_loop<P, SM, typelist< _1, _2, _3, _4, _5, _6, _7, _8, _9> > {};
122
-
123
- // until we find first lowest match
124
- template <template <class> class P, class SM, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9>
125
- struct specialize_plan_impl_loop <P, SM, typelist<SM, _1, _2, _3, _4, _5, _6, _7, _8, _9> >
126
- : specialize_plan_impl_match<P, typelist<SM, _1, _2, _3, _4, _5, _6, _7, _8, _9> > {};
127
-
128
- template<class, class>
129
- struct has_sm_tuning_impl;
130
-
131
- // specializing for a Tuning which needs 1 arg
132
- template <class SM,
133
- template <class, class> class Tuning,
134
- class _0>
135
- struct has_sm_tuning_impl<SM, Tuning<lowest_supported_sm_arch, _0> > : has_type_t<Tuning<SM, _0> > {};
136
-
137
- // specializing for a Tuning which needs 2 args
138
- template <class SM,
139
- template <class, class,class> class Tuning,
140
- class _0, class _1>
141
- struct has_sm_tuning_impl<SM, Tuning<lowest_supported_sm_arch, _0, _1> > : has_type_t<Tuning<SM, _0, _1> > {};
142
-
143
- template <template <class> class P, class SM>
144
- struct has_sm_tuning : has_sm_tuning_impl<SM, typename P<lowest_supported_sm_arch>::tuning > {};
145
-
146
- // once the first match is found in sm_list, all remaining sm are possible
147
- // candidates for tuning, so pick the first available
148
- // if the plan P has SM-level tuning then pick it,
149
- // otherwise move on to the next sm in the sm_list
150
- template <template <class> class P, class SM, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9>
151
- struct specialize_plan_impl_match<P, typelist<SM, _1, _2, _3, _4, _5, _6, _7, _8, _9> >
152
- : thrust::detail::conditional<
153
- has_sm_tuning<P, SM>::value,
154
- P<SM>,
155
- specialize_plan_impl_match<P, typelist<_1, _2, _3, _4, _5, _6, _7, _8, _9> > >::type {};
156
-
157
- template <template <class> class Plan, class SM = THRUST_TUNING_ARCH>
158
- struct specialize_plan_msvc10_war
159
- {
160
- // if Plan has tuning type, this means it has SM-specific tuning
161
- // so loop through sm_list to find match,
162
- // otherwise just specialize on provided SM
163
- typedef thrust::detail::conditional<has_tuning_t<Plan<lowest_supported_sm_arch> >::value,
164
- specialize_plan_impl_loop<Plan, SM, sm_list>,
165
- Plan<SM> >
166
- type;
167
- };
168
-
169
- template <template <class> class Plan, class SM = THRUST_TUNING_ARCH>
170
- struct specialize_plan : specialize_plan_msvc10_war<Plan,SM>::type::type {};
171
-
172
-
173
- /////////////////////////
174
- /////////////////////////
175
- /////////////////////////
176
-
177
- // retrieve temp storage size from an Agent
178
- // ---------------------------------------------------------------------------
179
- // metafunction introspects Agent, and if it finds TempStorage type
180
- // it will return its size
181
-
182
- __THRUST_DEFINE_HAS_NESTED_TYPE(has_temp_storage, TempStorage)
183
-
184
- template <class Agent, class U>
185
- struct temp_storage_size_impl;
186
-
187
- template <class Agent>
188
- struct temp_storage_size_impl<Agent, thrust::detail::false_type>
189
- {
190
- enum
191
- {
192
- value = 0
193
- };
194
- };
195
-
196
- template <class Agent>
197
- struct temp_storage_size_impl<Agent, thrust::detail::true_type>
198
- {
199
- enum
200
- {
201
- value = sizeof(typename Agent::TempStorage)
202
- };
203
- };
204
-
205
- template <class Agent>
206
- struct temp_storage_size
207
- : temp_storage_size_impl<Agent, typename has_temp_storage<Agent>::type>
208
- {
209
- };
210
-
211
- // check whether all Agents require < MAX_SHMEM shared memory
212
- // ---------------------------------------------------------------------------
213
- // if so, we can use a simpler kernel for dispatch, which assumes that all
214
- // shared memory is on chip.
215
- // Otherwise, a kernel will be compiled which can also accept virtualized
216
- // shared memory, in case there is not enough on chip. This kernel is about
217
- // 10% slower
218
-
219
- template <bool, class, size_t, class>
220
- struct has_enough_shmem_impl;
221
-
222
- template <bool V, class A, size_t S, class _0, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9>
223
- struct has_enough_shmem_impl<V, A, S, typelist<_0, _1, _2, _3, _4, _5, _6, _7, _8, _9> >
224
- : has_enough_shmem_impl<
225
- V && (temp_storage_size<specialize_plan<A::template PtxPlan, _0> >::value <= S),
226
- A,
227
- S,
228
- typelist<_1, _2, _3, _4, _5, _6, _7, _8, _9> >
229
- {
230
- };
231
- template <bool V, class A, size_t S>
232
- struct has_enough_shmem_impl<V, A, S, typelist<> >
233
- {
234
- enum
235
- {
236
- value = V
237
- };
238
- typedef typename thrust::detail::conditional<value,
239
- thrust::detail::true_type,
240
- thrust::detail::false_type>::type type;
241
- };
242
-
243
- template <class Agent, size_t MAX_SHMEM>
244
- struct has_enough_shmem : has_enough_shmem_impl<true, Agent, MAX_SHMEM, sm_list>
245
- {
246
- };
247
-
248
- /////////////////////////
249
- /////////////////////////
250
- /////////////////////////
251
-
252
- // AgentPlan structure and helpers
253
- // --------------------------------
254
-
255
- struct AgentPlan
256
- {
257
- int block_threads;
258
- int items_per_thread;
259
- int items_per_tile;
260
- int shared_memory_size;
261
- int grid_size;
262
-
263
- THRUST_RUNTIME_FUNCTION
264
- AgentPlan() {}
265
-
266
- THRUST_RUNTIME_FUNCTION
267
- AgentPlan(int block_threads_,
268
- int items_per_thread_,
269
- int shared_memory_size_,
270
- int grid_size_ = 0)
271
- : block_threads(block_threads_),
272
- items_per_thread(items_per_thread_),
273
- items_per_tile(items_per_thread * block_threads),
274
- shared_memory_size(shared_memory_size_),
275
- grid_size(grid_size_)
276
- {
277
- }
278
-
279
- THRUST_RUNTIME_FUNCTION
280
- AgentPlan(AgentPlan const& plan)
281
- : block_threads(plan.block_threads),
282
- items_per_thread(plan.items_per_thread),
283
- items_per_tile(plan.items_per_tile),
284
- shared_memory_size(plan.shared_memory_size),
285
- grid_size(plan.grid_size) {}
286
-
287
- template <class PtxPlan>
288
- THRUST_RUNTIME_FUNCTION
289
- AgentPlan(PtxPlan,
290
- typename thrust::detail::disable_if_convertible<
291
- PtxPlan,
292
- AgentPlan>::type* = NULL)
293
- : block_threads(PtxPlan::BLOCK_THREADS),
294
- items_per_thread(PtxPlan::ITEMS_PER_THREAD),
295
- items_per_tile(PtxPlan::ITEMS_PER_TILE),
296
- shared_memory_size(temp_storage_size<PtxPlan>::value),
297
- grid_size(0)
298
- {
299
- }
300
- }; // struct AgentPlan
301
-
302
-
303
- __THRUST_DEFINE_HAS_NESTED_TYPE(has_Plan, Plan)
304
-
305
- template <class Agent>
306
- struct return_Plan
307
- {
308
- typedef typename Agent::Plan type;
309
- };
310
-
311
- template <class Agent>
312
- struct get_plan : thrust::detail::conditional<
313
- has_Plan<Agent>::value,
314
- return_Plan<Agent>,
315
- thrust::detail::identity_<AgentPlan> >::type
316
- {
317
- };
318
-
319
- // returns AgentPlan corresponding to a given ptx version
320
- // ------------------------------------------------------
321
-
322
- template<class, class>
323
- struct get_agent_plan_impl;
324
-
325
- template<class Agent, class SM, class _1, class _2, class _3, class _4, class _5, class _6, class _7, class _8, class _9>
326
- struct get_agent_plan_impl<Agent,typelist<SM,_1,_2,_3,_4,_5,_6,_7,_8,_9> >
327
- {
328
- typedef typename get_plan<Agent>::type Plan;
329
- Plan THRUST_RUNTIME_FUNCTION
330
- static get(int ptx_version)
331
- {
332
- if (ptx_version >= SM::ver)
333
- return Plan(specialize_plan<Agent::template PtxPlan, SM>());
334
- else
335
- return get_agent_plan_impl<Agent,
336
- typelist<_1, _2, _3, _4, _5, _6, _7, _8, _9> >::
337
- get(ptx_version);
338
- }
339
- };
340
-
341
- template<class Agent>
342
- struct get_agent_plan_impl<Agent,typelist<lowest_supported_sm_arch> >
343
- {
344
- typedef typename get_plan<Agent>::type Plan;
345
- Plan THRUST_RUNTIME_FUNCTION
346
- static get(int /* ptx_version */)
347
- {
348
- typedef typename get_plan<Agent>::type Plan;
349
- return Plan(specialize_plan<Agent::template PtxPlan, lowest_supported_sm_arch>());
350
- }
351
- };
352
-
353
- template <class Agent>
354
- typename get_plan<Agent>::type THRUST_RUNTIME_FUNCTION
355
- get_agent_plan(int ptx_version)
356
- {
357
- // Use one path, with Agent::ptx_plan, for device code where device-side
358
- // kernel launches are supported. The other path, with
359
- // get_agent_plan_impl::get(version), is for host code and for device
360
- // code without device-side kernel launches. NVCC and Feta check for
361
- // these situations differently.
362
- #ifdef __NVCOMPILER_CUDA__
363
- #ifdef __THRUST_HAS_CUDART__
364
- if (CUB_IS_DEVICE_CODE) {
365
- return typename get_plan<Agent>::type(typename Agent::ptx_plan());
366
- } else
367
- #endif
368
- {
369
- return get_agent_plan_impl<Agent, sm_list>::get(ptx_version);
370
- }
371
- #else
372
- #if (CUB_PTX_ARCH > 0) && defined(__THRUST_HAS_CUDART__)
373
- typedef typename get_plan<Agent>::type Plan;
374
- THRUST_UNUSED_VAR(ptx_version);
375
- // We're on device, use default policy
376
- return Plan(typename Agent::ptx_plan());
377
- #else
378
- return get_agent_plan_impl<Agent, sm_list>::get(ptx_version);
379
- #endif
380
- #endif
381
- }
382
-
383
- // XXX keep this dead code for now as a gentle reminder
384
- // that a kernel launch which reads plan values is the most robust
385
- // mechanism to extract sm-specific tuning parameters
386
- // TODO: since we are unable to afford kernel launch + cudaMemcpy ON EVERY
387
- // algorithm invocation, we need to design a good caching strategy
388
- // such that when the algorithm is called multiple times, only the
389
- // first invocation will invoke kernel launch + cudaMemcpy, but
390
- // the subsequent invocations, will just read cached values from host mem
391
- // If launched from device, this is just a device-function call
392
- // no caching is required.
393
- // ----------------------------------------------------------------------------
394
- // if we don't know ptx version, we can call kernel
395
- // to retrieve AgentPlan from device code. Slower, but guaranteed to work
396
- // -----------------------------------------------------------------------
397
- #if 0
398
- template<class Agent>
399
- void __global__ get_agent_plan_kernel(AgentPlan *plan);
400
-
401
- static __device__ AgentPlan agent_plan_device;
402
-
403
- template<class Agent>
404
- AgentPlan __device__ get_agent_plan_dev()
405
- {
406
- AgentPlan plan;
407
- plan.block_threads = Agent::ptx_plan::BLOCK_THREADS;
408
- plan.items_per_thread = Agent::ptx_plan::ITEMS_PER_THREAD;
409
- plan.items_per_tile = Agent::ptx_plan::ITEMS_PER_TILE;
410
- plan.shared_memory_size = temp_storage_size<typename Agent::ptx_plan>::value;
411
- return plan;
412
- }
413
-
414
- template <class Agent, class F>
415
- AgentPlan __host__ __device__ __forceinline__
416
- xget_agent_plan_impl(F f, cudaStream_t s, void* d_ptr)
417
- {
418
- AgentPlan plan;
419
- #ifdef __CUDA_ARCH__
420
- plan = get_agent_plan_dev<Agent>();
421
- #else
422
- static cub::Mutex mutex;
423
- bool lock = false;
424
- if (d_ptr == 0)
425
- {
426
- lock = true;
427
- cudaGetSymbolAddress(&d_ptr, agent_plan_device);
428
- }
429
- if (lock)
430
- mutex.Lock();
431
- f<<<1,1,0,s>>>((AgentPlan*)d_ptr);
432
- cudaMemcpyAsync((void*)&plan,
433
- d_ptr,
434
- sizeof(AgentPlan),
435
- cudaMemcpyDeviceToHost,
436
- s);
437
- if (lock)
438
- mutex.Unlock();
439
- cudaStreamSynchronize(s);
440
- #endif
441
- return plan;
442
- }
443
-
444
- template <class Agent>
445
- AgentPlan THRUST_RUNTIME_FUNCTION
446
- get_agent_plan(cudaStream_t s = 0, void *ptr = 0)
447
- {
448
- return xget_agent_plan_impl<Agent>(get_agent_plan_kernel<Agent>,
449
- s,
450
- ptr);
451
- }
452
-
453
- template<class Agent>
454
- void __global__ get_agent_plan_kernel(AgentPlan *plan)
455
- {
456
- *plan = get_agent_plan_dev<Agent>();
457
- }
458
- #endif
459
-
460
- /////////////////////////
461
- /////////////////////////
462
- /////////////////////////
463
-
464
- THRUST_RUNTIME_FUNCTION
465
- int get_sm_count()
466
- {
467
- int dev_id;
468
- cuda_cub::throw_on_error(cudaGetDevice(&dev_id),
469
- "get_sm_count :"
470
- "failed to cudaGetDevice");
471
-
472
- cudaError_t status;
473
- int i32value;
474
- status = cudaDeviceGetAttribute(&i32value,
475
- cudaDevAttrMultiProcessorCount,
476
- dev_id);
477
- cuda_cub::throw_on_error(status,
478
- "get_sm_count:"
479
- "failed to sm_count");
480
- return i32value;
481
- }
482
-
483
- size_t THRUST_RUNTIME_FUNCTION
484
- get_max_shared_memory_per_block()
485
- {
486
- int dev_id;
487
- cuda_cub::throw_on_error(cudaGetDevice(&dev_id),
488
- "get_max_shared_memory_per_block :"
489
- "failed to cudaGetDevice");
490
-
491
- cudaError_t status;
492
- int i32value;
493
- status = cudaDeviceGetAttribute(&i32value,
494
- cudaDevAttrMaxSharedMemoryPerBlock,
495
- dev_id);
496
- cuda_cub::throw_on_error(status,
497
- "get_max_shared_memory_per_block :"
498
- "failed to get max shared memory per block");
499
-
500
- return static_cast<size_t>(i32value);
501
- }
502
-
503
- size_t THRUST_RUNTIME_FUNCTION
504
- virtual_shmem_size(size_t shmem_per_block)
505
- {
506
- size_t max_shmem_per_block = core::get_max_shared_memory_per_block();
507
- if (shmem_per_block > max_shmem_per_block)
508
- return shmem_per_block;
509
- else
510
- return 0;
511
- }
512
-
513
- size_t THRUST_RUNTIME_FUNCTION
514
- vshmem_size(size_t shmem_per_block, size_t num_blocks)
515
- {
516
- size_t max_shmem_per_block = core::get_max_shared_memory_per_block();
517
- if (shmem_per_block > max_shmem_per_block)
518
- return shmem_per_block*num_blocks;
519
- else
520
- return 0;
521
- }
522
-
523
- // LoadIterator
524
- // ------------
525
- // if trivial iterator is passed, wrap loads into LDG
526
- //
527
- template <class PtxPlan, class It>
528
- struct LoadIterator
529
- {
530
- typedef typename iterator_traits<It>::value_type value_type;
531
- typedef typename iterator_traits<It>::difference_type size_type;
532
-
533
- typedef typename thrust::detail::conditional<
534
- is_contiguous_iterator<It>::value,
535
- cub::CacheModifiedInputIterator<PtxPlan::LOAD_MODIFIER,
536
- value_type,
537
- size_type>,
538
- It>::type type;
539
- }; // struct LoadIterator
540
-
541
- template <class PtxPlan, class It>
542
- typename LoadIterator<PtxPlan, It>::type __device__ __forceinline__
543
- make_load_iterator_impl(It it, thrust::detail::true_type /* is_trivial */)
544
- {
545
- return raw_pointer_cast(&*it);
546
- }
547
-
548
- template <class PtxPlan, class It>
549
- typename LoadIterator<PtxPlan, It>::type __device__ __forceinline__
550
- make_load_iterator_impl(It it, thrust::detail::false_type /* is_trivial */)
551
- {
552
- return it;
553
- }
554
-
555
- template <class PtxPlan, class It>
556
- typename LoadIterator<PtxPlan, It>::type __device__ __forceinline__
557
- make_load_iterator(PtxPlan const&, It it)
558
- {
559
- return make_load_iterator_impl<PtxPlan>(
560
- it, typename is_contiguous_iterator<It>::type());
561
- }
562
-
563
- template<class>
564
- struct get_arch;
565
-
566
- template<template<class> class Plan, class Arch>
567
- struct get_arch<Plan<Arch> > { typedef Arch type; };
568
-
569
- // BlockLoad
570
- // -----------
571
- // a helper metaprogram that returns the type of a block loader
572
- template <class PtxPlan,
573
- class It,
574
- class T = typename iterator_traits<It>::value_type>
575
- struct BlockLoad
576
- {
577
- typedef cub::BlockLoad<T,
578
- PtxPlan::BLOCK_THREADS,
579
- PtxPlan::ITEMS_PER_THREAD,
580
- PtxPlan::LOAD_ALGORITHM,
581
- 1,
582
- 1,
583
- get_arch<PtxPlan>::type::ver>
584
-
585
-
586
- type;
587
- };
588
-
589
- // BlockStore
590
- // -----------
591
- // a helper metaprogram that returns the type of a block store
592
- template <class PtxPlan,
593
- class It,
594
- class T = typename iterator_traits<It>::value_type>
595
- struct BlockStore
596
- {
597
- typedef cub::BlockStore<T,
598
- PtxPlan::BLOCK_THREADS,
599
- PtxPlan::ITEMS_PER_THREAD,
600
- PtxPlan::STORE_ALGORITHM,
601
- 1,
602
- 1,
603
- get_arch<PtxPlan>::type::ver>
604
- type;
605
- };
606
- // cuda_optional
607
- // --------------
608
- // used for functions that return cudaError_t along with the result
609
- //
610
- template <class T>
611
- class cuda_optional
612
- {
613
- cudaError_t status_;
614
- T value_;
615
-
616
- public:
617
- __host__ __device__
618
- cuda_optional() : status_(cudaSuccess) {}
619
-
620
- __host__ __device__
621
- cuda_optional(T v, cudaError_t status = cudaSuccess) : status_(status), value_(v) {}
622
-
623
- bool __host__ __device__
624
- isValid() const { return cudaSuccess == status_; }
625
-
626
- cudaError_t __host__ __device__
627
- status() const { return status_; }
628
-
629
- __host__ __device__ T const &
630
- value() const { return value_; }
631
-
632
- __host__ __device__ operator T const &() const { return value_; }
633
- };
634
-
635
- cuda_optional<int> THRUST_RUNTIME_FUNCTION
636
- get_ptx_version()
637
- {
638
- int ptx_version = 0;
639
- cudaError_t status = cub::PtxVersion(ptx_version);
640
- return cuda_optional<int>(ptx_version, status);
641
- }
642
-
643
- cudaError_t THRUST_RUNTIME_FUNCTION
644
- sync_stream(cudaStream_t stream)
645
- {
646
- return cub::SyncStream(stream);
647
- }
648
-
649
- inline void __device__ sync_threadblock()
650
- {
651
- cub::CTA_SYNC();
652
- }
653
-
654
- #define CUDA_CUB_RET_IF_FAIL(e) \
655
- { \
656
- auto const error = (e); \
657
- if (cub::Debug(error, __FILE__, __LINE__)) return error; \
658
- }
659
-
660
- // uninitialized
661
- // -------
662
- // stores type in uninitialized form
663
- //
664
- template <class T>
665
- struct uninitialized
666
- {
667
- typedef typename cub::UnitWord<T>::DeviceWord DeviceWord;
668
-
669
- enum
670
- {
671
- WORDS = sizeof(T) / sizeof(DeviceWord)
672
- };
673
-
674
- DeviceWord storage[WORDS];
675
-
676
- __host__ __device__ __forceinline__ T& get()
677
- {
678
- return reinterpret_cast<T&>(*this);
679
- }
680
-
681
- __host__ __device__ __forceinline__ operator T&() { return get(); }
682
- };
683
-
684
- // array
685
- // --------------
686
- // fixed-size array allocated on the stack
687
- template<class T, size_t N>
688
- struct array
689
- {
690
- typedef T value_type;
691
- typedef T ref[N];
692
- enum {SIZE = N};
693
- private:
694
- T data_[N];
695
-
696
- public:
697
- __host__ __device__ T* data() { return data_; }
698
- __host__ __device__ const T* data() const { return data_; }
699
- __host__ __device__ T& operator[](unsigned int idx) { return ((T*)data_)[idx]; }
700
- __host__ __device__ T const& operator[](unsigned int idx) const { return ((T*)data_)[idx]; }
701
- __host__ __device__ unsigned int size() const { return N; }
702
- __host__ __device__ operator ref&() { return data_; }
703
- };
704
-
705
-
706
- // uninitialized_array
707
- // --------------
708
- // allocates uninitialized data on stack
709
- template<class T, size_t N>
710
- struct uninitialized_array
711
- {
712
- typedef T value_type;
713
- typedef T ref[N];
714
- enum {SIZE = N};
715
- private:
716
- char data_[N * sizeof(T)];
717
-
718
- public:
719
- __host__ __device__ T* data() { return data_; }
720
- __host__ __device__ const T* data() const { return data_; }
721
- __host__ __device__ T& operator[](unsigned int idx) { return ((T*)data_)[idx]; }
722
- __host__ __device__ T const& operator[](unsigned int idx) const { return ((T*)data_)[idx]; }
723
- __host__ __device__ T& operator[](int idx) { return ((T*)data_)[idx]; }
724
- __host__ __device__ T const& operator[](int idx) const { return ((T*)data_)[idx]; }
725
- __host__ __device__ unsigned int size() const { return N; }
726
- __host__ __device__ operator ref&() { return *reinterpret_cast<ref*>(data_); }
727
- __host__ __device__ ref& get_ref() { return (ref&)*this; }
728
- };
729
-
730
- __host__ __device__ __forceinline__ size_t align_to(size_t n, size_t align)
731
- {
732
- return ((n+align-1)/align) * align;
733
- }
734
-
735
- namespace host {
736
- inline cuda_optional<size_t> get_max_shared_memory_per_block()
737
- {
738
- cudaError_t status = cudaSuccess;
739
- int dev_id = 0;
740
- status = cudaGetDevice(&dev_id);
741
- if (status != cudaSuccess) return cuda_optional<size_t>(0, status);
742
-
743
- int max_shmem = 0;
744
- status = cudaDeviceGetAttribute(&max_shmem,
745
- cudaDevAttrMaxSharedMemoryPerBlock,
746
- dev_id);
747
- if (status != cudaSuccess) return cuda_optional<size_t>(0, status);
748
- return cuda_optional<size_t>(max_shmem, status);
749
- }
750
- }
751
-
752
- template <int ALLOCATIONS>
753
- THRUST_RUNTIME_FUNCTION cudaError_t
754
- alias_storage(void* storage_ptr,
755
- size_t& storage_size,
756
- void* (&allocations)[ALLOCATIONS],
757
- size_t (&allocation_sizes)[ALLOCATIONS])
758
- {
759
- return cub::AliasTemporaries(storage_ptr,
760
- storage_size,
761
- allocations,
762
- allocation_sizes);
763
- }
764
-
765
-
766
- } // namespace core
767
- using core::sm60;
768
- using core::sm52;
769
- using core::sm35;
770
- using core::sm30;
771
- } // namespace cuda_cub
772
-
773
- } // end namespace thrust
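
The typelist machinery above reduces to a simple selection rule: walk sm_list from the highest architecture down and take the first entry whose version does not exceed the runtime PTX version, falling back to the lowest supported arch. A Python sketch of that rule (an illustration of get_agent_plan_impl's recursion, not part of Thrust):

SM_LIST = (600, 520, 350, 300)    # mirrors typelist<sm60, sm52, sm35, sm30>

def pick_tuning_arch(ptx_version):
    # First arch in descending order that the device's PTX version satisfies.
    for ver in SM_LIST:
        if ptx_version >= ver:
            return ver
    return SM_LIST[-1]            # lowest_supported_sm_arch fallback

assert pick_tuning_arch(700) == 600
assert pick_tuning_arch(520) == 520
assert pick_tuning_arch(200) == 300   # below the minimum: lowest arch wins
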
spaces/CVPR/lama-example/bin/predict_inner_features.py DELETED
@@ -1,119 +0,0 @@
1
- #!/usr/bin/env python3
2
-
3
- # Example command:
4
- # ./bin/predict.py \
5
- # model.path=<path to checkpoint, prepared by make_checkpoint.py> \
6
- # indir=<path to input data> \
7
- # outdir=<where to store predicts>
8
-
9
- import logging
10
- import os
11
- import sys
12
- import traceback
13
-
14
- from saicinpainting.evaluation.utils import move_to_device
15
-
16
- os.environ['OMP_NUM_THREADS'] = '1'
17
- os.environ['OPENBLAS_NUM_THREADS'] = '1'
18
- os.environ['MKL_NUM_THREADS'] = '1'
19
- os.environ['VECLIB_MAXIMUM_THREADS'] = '1'
20
- os.environ['NUMEXPR_NUM_THREADS'] = '1'
21
-
22
- import cv2
23
- import hydra
24
- import numpy as np
25
- import torch
26
- import tqdm
27
- import yaml
28
- from omegaconf import OmegaConf
29
- from torch.utils.data._utils.collate import default_collate
30
-
31
- from saicinpainting.training.data.datasets import make_default_val_dataset
32
- from saicinpainting.training.trainers import load_checkpoint, DefaultInpaintingTrainingModule
33
- from saicinpainting.utils import register_debug_signal_handlers, get_shape
34
-
35
- LOGGER = logging.getLogger(__name__)
36
-
37
-
38
- @hydra.main(config_path='../configs/prediction', config_name='default_inner_features.yaml')
39
- def main(predict_config: OmegaConf):
40
- try:
41
- register_debug_signal_handlers() # kill -10 <pid> will result in traceback dumped into log
42
-
43
- device = torch.device(predict_config.device)
44
-
45
- train_config_path = os.path.join(predict_config.model.path, 'config.yaml')
46
- with open(train_config_path, 'r') as f:
47
- train_config = OmegaConf.create(yaml.safe_load(f))
48
-
49
- checkpoint_path = os.path.join(predict_config.model.path, 'models', predict_config.model.checkpoint)
50
- model = load_checkpoint(train_config, checkpoint_path, strict=False)
51
- model.freeze()
52
- model.to(device)
53
-
54
- assert isinstance(model, DefaultInpaintingTrainingModule), 'Only DefaultInpaintingTrainingModule is supported'
55
- assert isinstance(getattr(model.generator, 'model', None), torch.nn.Sequential)
56
-
57
- if not predict_config.indir.endswith('/'):
58
- predict_config.indir += '/'
59
-
60
- dataset = make_default_val_dataset(predict_config.indir, **predict_config.dataset)
61
-
62
- max_level = max(predict_config.levels)
63
-
64
- with torch.no_grad():
65
- for img_i in tqdm.trange(len(dataset)):
66
- mask_fname = dataset.mask_filenames[img_i]
67
- cur_out_fname = os.path.join(predict_config.outdir, os.path.splitext(mask_fname[len(predict_config.indir):])[0])
68
- os.makedirs(os.path.dirname(cur_out_fname), exist_ok=True)
69
-
70
- batch = move_to_device(default_collate([dataset[img_i]]), device)
71
-
72
- img = batch['image']
73
- mask = batch['mask']
74
- mask[:] = 0
75
- mask_h, mask_w = mask.shape[-2:]
76
- mask[:, :,
77
- mask_h // 2 - predict_config.hole_radius : mask_h // 2 + predict_config.hole_radius,
78
- mask_w // 2 - predict_config.hole_radius : mask_w // 2 + predict_config.hole_radius] = 1
79
-
80
- masked_img = torch.cat([img * (1 - mask), mask], dim=1)
81
-
82
- feats = masked_img
83
- for level_i, level in enumerate(model.generator.model):
84
- feats = level(feats)
85
- if level_i in predict_config.levels:
86
- cur_feats = torch.cat([f for f in feats if torch.is_tensor(f)], dim=1) \
87
- if isinstance(feats, tuple) else feats
88
-
89
- if predict_config.slice_channels:
90
- cur_feats = cur_feats[:, slice(*predict_config.slice_channels)]
91
-
92
- cur_feat = cur_feats.pow(2).mean(1).pow(0.5).clone()
93
- cur_feat -= cur_feat.min()
94
- cur_feat /= cur_feat.std()
95
- cur_feat = cur_feat.clamp(0, 1)
96
- cur_feat = cur_feat.cpu().numpy()[0]
97
- cur_feat *= 255
98
- cur_feat = np.clip(cur_feat, 0, 255).astype('uint8')
99
- cv2.imwrite(cur_out_fname + f'_lev{level_i:02d}_norm.png', cur_feat)
100
-
101
- # for channel_i in predict_config.channels:
102
- #
103
- # cur_feat = cur_feats[0, channel_i].clone().detach().cpu().numpy()
104
- # cur_feat -= cur_feat.min()
105
- # cur_feat /= cur_feat.max()
106
- # cur_feat *= 255
107
- # cur_feat = np.clip(cur_feat, 0, 255).astype('uint8')
108
- # cv2.imwrite(cur_out_fname + f'_lev{level_i}_ch{channel_i}.png', cur_feat)
109
- elif level_i >= max_level:
110
- break
111
- except KeyboardInterrupt:
112
- LOGGER.warning('Interrupted by user')
113
- except Exception as ex:
114
- LOGGER.critical(f'Prediction failed due to {ex}:\n{traceback.format_exc()}')
115
- sys.exit(1)
116
-
117
-
118
- if __name__ == '__main__':
119
- main()
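
The per-level visualization in the loop above collapses the channel dimension to a per-pixel RMS magnitude (pow(2).mean(1).pow(0.5)), shifts it to a zero minimum, scales by the standard deviation, then clamps and converts to bytes. The same steps in plain NumPy, on a random stand-in tensor:

import numpy as np

feats = np.random.randn(64, 128, 128)       # (C, H, W), stand-in for cur_feats[0]
mag = np.sqrt((feats ** 2).mean(axis=0))    # per-pixel RMS over channels
mag -= mag.min()                            # shift to a zero minimum
mag /= mag.std()                            # scale by the standard deviation
img = (np.clip(mag, 0, 1) * 255).astype('uint8')   # ready for cv2.imwrite
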
spaces/Chen-Beer/LLMing/app.py DELETED
@@ -1,56 +0,0 @@
1
- import gradio as gr
2
- import openai
3
- from PyPDF2 import PdfReader
4
-
5
- def funfunc(Name, CV, Adventurous, Years):
6
- # Years = 5
7
- reader = PdfReader(CV.name)
8
- page = reader.pages[0]
9
- CV_text = page.extract_text()
10
- MODEL = "gpt-3.5-turbo-16k"
11
- response = openai.ChatCompletion.create(
12
- model=MODEL,
13
- messages=[
14
- {"role": "system", "content": f"You are an assistant which helps people to find their best future career "
15
- f"path based on their current CV. Your job is to answer the question: where do "
16
- f"you see this person {Years} years from now? by taking into account his\hers "
17
- f"current CV and suggest a professional trajectory for these upcoming {Years} "
18
- f"years. The current year is 2023, so your forecasting should be until 2023+{Years}."},
19
- {"role": "user", "content": f"My name is: {Name}"},
20
- {"role": "system", "content": "There is a spectrum of adventurousness according to which a career path can "
21
- "be suggested. The adventurousness parameter ranges from 0 to 5, where 0 is not "
22
- "adventurous at all (continue on the same expected track, no surprises, "
23
- "no professional development) and 5 is the most adventurous trajectory you can think of "
24
- "(explore and suggest fascinating related directions). "},
25
- {"role": "user", "content": f"My adventurous level is: {Adventurous}. \nMy CV is listed below: \n {CV_text}"},
26
- {"role": "system", "content": f"Please output the answers to the following questions: \n 1. Where do you "
27
- f"see {Name} in {Years} years? \n- Use 'they' instead of 'he' or 'she.' "
28
- f"\n- Mention their title, company, and briefly explain what they will do. "
29
- f"\n- Consider work-life balance. \n \n2. which roles will they perform during "
30
- f"these upcoming {Years} years? \n- Pick one leading option and be specific. "
31
- f"\n- Provide the precise years, title, and company for each role in a resume-like list. "
32
- f"\n- Each role should last between 6 months and 2.5 years on average."},
33
- {"role": "system", "content": f"Your response should follow the template: \n {Years} years from now, "
34
- f"{Name} will ... \nDuring these {Years} years, {Name} will perform the "
35
- f"following positions: \n<from year> - <to year>: <job title>, <company> \n... "
36
- f"\n Ensure that you start from the current year (2023) and go {Years} years ahead "
37
- f"(for example, 5 years from now the year will be 2028. \nMake sure that the last "
38
- f"position is aligned with the details you provided earlier."}
39
- # {"role": "system", "content": f"Here's an example for desired output: \nIn 3 years, Emily will be a Software Development Team Lead at ABC Tech, managing a team of developers and overseeing software development projects. They will have a healthy work-life balance, with opportunities for growth and innovation. \n \nDuring these 3 years, Emily will perform the following positions: \n2023-2024: Junior Software Developer, XYZ Company \n2024-2025: Software Developer, XYZ Company \n2025-2026: Senior Software Developer, ABC Tech \n2026-: Software Development Team Lead, ABC Tech"}
40
- ],
41
- max_tokens=500,
42
- temperature=1
43
- )
44
-
45
- return response['choices'][0]['message']['content']
46
-
47
-
48
- iface = gr.Interface(
49
- fn=funfunc,
50
- inputs=["text", "file", gr.Slider(minimum=0, maximum=5, step=1), gr.Slider(minimum=1, maximum=10, step=1)],
51
- # inputs=["text", "file", gr.Slider(minimum=0, maximum=5, step=1)],
52
- outputs=["text"]
53
- )
54
-
55
-
56
- iface.launch()
spaces/CikeyQI/meme-api/docker/start.sh DELETED
@@ -1,7 +0,0 @@
1
- #! /usr/bin/env bash
2
-
3
- mkdir -p ~/.config/meme_generator
4
-
5
- envsubst < /app/config.toml.template > ~/.config/meme_generator/config.toml
6
-
7
- exec python -m meme_generator.app
spaces/CofAI/CalculatorUI/index.html DELETED
@@ -1,63 +0,0 @@
1
- <!DOCTYPE html> <html> <head> <title>CalculatorCofAI</title> <style> body { font-family: Arial, sans-serif; margin: 0; padding: 20px; }
2
- h1 {
3
- text-align: center;
4
- }
5
-
6
- .calculator {
7
- margin: 20px auto;
8
- text-align: center;
9
- }
10
-
11
- .calculator input {
12
- width: 100%;
13
- padding: 10px;
14
- margin-bottom: 10px;
15
- }
16
-
17
- .calculator button {
18
- padding: 10px 20px;
19
- background-color: #4CAF50;
20
- color: white;
21
- border: none;
22
- cursor: pointer;
23
- }
24
-
25
- .calculator button:hover {
26
- background-color: #45a049;
27
- }
28
-
29
- .result {
30
- text-align: center;
31
- }
32
-
33
- .result p {
34
- font-size: 20px;
35
- }
36
- </style> </head> <body> <h1>Calculator CofAI</h1>
37
-
38
- <div class="calculator"> <input type="number" id="num1" placeholder="Number 1"> <input type="number" id="num2" placeholder="Number 2"> <button onclick="add()">+</button> <button onclick="subtract()">-</button> <button onclick="multiply()">*</button> <button onclick="divide()">/</button> </div>
39
-
40
- <div class="result"> <p id="result"></p> </div>
41
-
42
- <script> function add() { var num1 = parseInt(document.getElementById("num1").value); var num2 = parseInt(document.getElementById("num2").value); var result = num1 + num2; document.getElementById("result").innerHTML = "= " + result; }
43
- function subtract() {
44
- var num1 = parseInt(document.getElementById("num1").value);
45
- var num2 = parseInt(document.getElementById("num2").value);
46
- var result = num1 - num2;
47
- document.getElementById("result").innerHTML = "= " + result;
48
- }
49
-
50
- function multiply() {
51
- var num1 = parseInt(document.getElementById("num1").value);
52
- var num2 = parseInt(document.getElementById("num2").value);
53
- var result = num1 * num2;
54
- document.getElementById("result").innerHTML = "= " + result;
55
- }
56
-
57
- function divide() {
58
- var num1 = parseInt(document.getElementById("num1").value);
59
- var num2 = parseInt(document.getElementById("num2").value);
60
- var result = num1 / num2;
61
- document.getElementById("result").innerHTML = "= " + result;
62
- }
63
- </script> </body> </html>
spaces/CofAI/LengthConverter/index.html DELETED
@@ -1,108 +0,0 @@
1
- <!DOCTYPE html>
2
- <html>
3
- <head>
4
- <title>Length Converter</title>
5
- </head>
6
- <body>
7
- <h1>Length Converter</h1>
8
-
9
- <label for="input">Meaning:</label>
10
- <input type="number" id="input" placeholder="Enter value">
11
- <p></p>
12
- <label for="from">From:</label>
13
- <select id="from">
14
- <option value="meter">Meters</option>
15
- <option value="kilometer">Kilometers</option>
16
- <option value="millimeter">Millimeters</option></option>
17
- <option value="decimeter">Decimeters</option>
18
- <option value="centimeter">Centimeters</option>
19
- </select>
20
-
21
- <label for="to">To:</label>
22
- <select id="to">
23
- <option value="meter">Meters</option>
24
- <option value="kilometer">Kilometers</option>
25
- <option value="millimeter">Millimeters</option>
26
- <option value="decimeter">Decimeters</option>
27
- <option value="centimeter">Centimeters</option>
28
- </select>
29
- <p></p>
30
- <button onclick="convert()">Convert</button>
31
-
32
- <p id="result"></p>
33
-
34
- <script>
35
- function convert() {
36
- var input = document.getElementById("input").value;
37
- var from = document.getElementById("from").value;
38
- var to = document.getElementById("to").value;
39
-
40
- var result;
41
-
42
- if (from === "meter") {
43
- if (to === "meter") {
44
- result = input;
45
- } else if (to === "kilometer") {
46
- result = input / 1000;
47
- } else if (to === "millimeter") {
48
- result = input * 1000;
49
- } else if (to === "decimeter") {
50
- result = input * 10;
51
- } else if (to === "centimeter") {
52
- result = input * 100;
53
- }
54
- } else if (from === "kilometer") {
55
- if (to === "meter") {
56
- result = input * 1000;
57
- } else if (to === "kilometer") {
58
- result = input;
59
- } else if (to === "millimeter") {
60
- result = input * 1000000;
61
- } else if (to === "decimeter") {
62
- result = input * 10000;
63
- } else if (to === "centimeter") {
64
- result = input * 100000;
65
- }
66
- } else if (from === "millimeter") {
67
- if (to === "meter") {
68
- result = input / 1000;
69
- } else if (to === "kilometer") {
70
- result = input / 1000000;
71
- } else if (to === "millimeter") {
72
- result = input;
73
- } else if (to === "decimeter") {
74
- result = input / 100;
75
- } else if (to === "centimeter") {
76
- result = input / 10;
77
- }
78
- } else if (from === "decimeter") {
79
- if (to === "meter") {
80
- result = input / 10;
81
- } else if (to === "kilometer") {
82
- result = input / 10000;
83
- } else if (to === "millimeter") {
84
- result = input * 100;
85
- } else if (to === "decimeter") {
86
- result = input;
87
- } else if (to === "centimeter") {
88
- result = input * 10;
89
- }
90
- } else if (from === "centimeter") {
91
- if (to === "meter") {
92
- result = input / 100;
93
- } else if (to === "kilometer") {
94
- result = input / 100000;
95
- } else if (to === "millimeter") {
96
- result = input * 10;
97
- } else if (to === "decimeter") {
98
- result = input / 10;
99
- } else if (to === "centimeter") {
100
- result = input;
101
- }
102
- }
103
-
104
- document.getElementById("result").innerHTML = result;
105
- }
106
- </script>
107
- </body>
108
- </html>
spaces/CofAI/chat.b4/g4f/Provider/Providers/Xiaor.py DELETED
@@ -1,39 +0,0 @@
1
- import requests
2
- import os
3
- import json
4
- from ...typing import sha256, Dict, get_type_hints
5
-
6
- url = 'https://xiaor.eu.org'
7
- model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k',
8
- 'gpt-3.5-turbo-16k-0613', 'gpt-3.5-turbo-0613']
9
- supports_stream = True
10
- needs_auth = False
11
-
12
-
13
- def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs):
14
- headers = {
15
- 'Content-Type': 'application/json',
16
- }
17
- data = {
18
- 'model': model,
19
- 'temperature': temperature,
20
- 'presence_penalty': 0,
21
- 'messages': messages,
22
- }
23
- response = requests.post(url + '/p1/v1/chat/completions',
24
- json=data, stream=True)
25
-
26
- if stream:
27
- for chunk in response.iter_content(chunk_size=None):
28
- chunk = chunk.decode('utf-8')
29
- if chunk.strip():
30
- message = json.loads(chunk)['choices'][0]['message']['content']
31
- yield message
32
- else:
33
- message = response.json()['choices'][0]['message']['content']
34
- yield message
35
-
36
-
37
- params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
38
- '(%s)' % ', '.join(
39
- [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
spaces/CofAI/chat/g4f/Provider/Providers/Wewordle.py DELETED
@@ -1,75 +0,0 @@
1
- import os
2
- import requests
3
- import json
4
- import random
5
- import time
6
- import string
7
- from ...typing import sha256, Dict, get_type_hints
8
-
9
- url = "https://wewordle.org/gptapi/v1/android/turbo"
10
- model = ['gpt-3.5-turbo']
11
- supports_stream = False
12
- needs_auth = False
13
-
14
-
15
- def _create_completion(model: str, messages: list, stream: bool, **kwargs):
16
- base = ''
17
- for message in messages:
18
- base += '%s: %s\n' % (message['role'], message['content'])
19
- base += 'assistant:'
20
- # randomize user id and app id
21
- _user_id = ''.join(random.choices(
22
- f'{string.ascii_lowercase}{string.digits}', k=16))
23
- _app_id = ''.join(random.choices(
24
- f'{string.ascii_lowercase}{string.digits}', k=31))
25
- # build the current date string in UTC format
26
- _request_date = time.strftime("%Y-%m-%dT%H:%M:%S.000Z", time.gmtime())
27
- headers = {
28
- 'accept': '*/*',
29
- 'pragma': 'no-cache',
30
- 'Content-Type': 'application/json',
31
- 'Connection': 'keep-alive'
32
- }
33
- data = {
34
- "user": _user_id,
35
- "messages": [
36
- {"role": "user", "content": base}
37
- ],
38
- "subscriber": {
39
- "originalPurchaseDate": None,
40
- "originalApplicationVersion": None,
41
- "allPurchaseDatesMillis": {},
42
- "entitlements": {
43
- "active": {},
44
- "all": {}
45
- },
46
- "allPurchaseDates": {},
47
- "allExpirationDatesMillis": {},
48
- "allExpirationDates": {},
49
- "originalAppUserId": f"$RCAnonymousID:{_app_id}",
50
- "latestExpirationDate": None,
51
- "requestDate": _request_date,
52
- "latestExpirationDateMillis": None,
53
- "nonSubscriptionTransactions": [],
54
- "originalPurchaseDateMillis": None,
55
- "managementURL": None,
56
- "allPurchasedProductIdentifiers": [],
57
- "firstSeen": _request_date,
58
- "activeSubscriptions": []
59
- }
60
- }
61
- response = requests.post(url, headers=headers, data=json.dumps(data))
62
- if response.status_code == 200:
63
- _json = response.json()
64
- if 'message' in _json:
65
- message_content = _json['message']['content']
66
- message_content = message_content.replace('**assistant:** ', '')
67
- yield message_content
68
- else:
69
- print(f"Error Occurred::{response.status_code}")
70
- return None
71
-
72
-
73
- params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
74
- '(%s)' % ', '.join(
75
- [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
spaces/CorvaeOboro/gen_ability_icon/dnnlib/__init__.py DELETED
@@ -1,9 +0,0 @@
1
- # Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
2
- #
3
- # NVIDIA CORPORATION and its licensors retain all intellectual property
4
- # and proprietary rights in and to this software, related documentation
5
- # and any modifications thereto. Any use, reproduction, disclosure or
6
- # distribution of this software and related documentation without an express
7
- # license agreement from NVIDIA CORPORATION is strictly prohibited.
8
-
9
- from .util import EasyDict, make_cache_dir_path
spaces/Cran-May/ygVI/app.py DELETED
@@ -1,250 +0,0 @@
1
- from typing import Iterator
2
-
3
- import gradio as gr
4
-
5
-
6
- from model import run
7
-
8
- DEFAULT_SYSTEM_PROMPT = ""
9
- MAX_MAX_NEW_TOKENS = 2048
10
- DEFAULT_MAX_NEW_TOKENS = 1024
11
- MAX_INPUT_TOKEN_LENGTH = 4000
12
-
13
- DESCRIPTION = """
14
- # 玉刚六号改/yugangVI-Chat
15
- """
16
- LICENSE="基于Baichuan-13B-Chat以及https://github.com/ouwei2013/baichuan13b.cpp"
17
-
18
-
19
-
20
- def clear_and_save_textbox(message: str) -> tuple[str, str]:
21
- return '', message
22
-
23
-
24
- def display_input(message: str,
25
- history: list[tuple[str, str]]) -> list[tuple[str, str]]:
26
- history.append((message, ''))
27
- return history
28
-
29
-
30
- def delete_prev_fn(
31
- history: list[tuple[str, str]]) -> tuple[list[tuple[str, str]], str]:
32
- try:
33
- message, _ = history.pop()
34
- except IndexError:
35
- message = ''
36
- return history, message or ''
37
-
38
-
39
- def generate(
40
- message: str,
41
- history_with_input: list[tuple[str, str]],
42
- system_prompt: str,
43
- max_new_tokens: int,
44
- temperature: float,
45
- top_p: float,
46
- top_k: int,
47
- ) -> Iterator[list[tuple[str, str]]]:
48
-
49
- history = history_with_input[:-1]
50
- generator = run(message, history, system_prompt, max_new_tokens, temperature, top_p, top_k)
51
- for response in generator:
52
- yield history + [(message, response)]
53
-
54
-
55
- def process_example(message: str) -> tuple[str, list[tuple[str, str]]]:
56
- generator = generate(message, [], DEFAULT_SYSTEM_PROMPT, 8192, 1, 0.95, 50)
57
- for x in generator:
58
- pass
59
- return '', x
60
-
61
-
62
- def check_input_token_length(message: str, chat_history: list[tuple[str, str]], system_prompt: str) -> None:
63
- a = 1
64
-
65
-
66
- with gr.Blocks(css='style.css') as demo:
67
- gr.Markdown(DESCRIPTION)
68
- gr.DuplicateButton(value='Duplicate Space for private use',
69
- elem_id='duplicate-button')
70
-
71
- with gr.Group():
72
- chatbot = gr.Chatbot(label='Chatbot')
73
- with gr.Row():
74
- textbox = gr.Textbox(
75
- container=False,
76
- show_label=False,
77
- placeholder='请输入/Type a message...',
78
- scale=10,
79
- )
80
- submit_button = gr.Button('提交/Submit',
81
- variant='primary',
82
- scale=1,
83
- min_width=0)
84
- with gr.Row():
85
- retry_button = gr.Button('🔄 重来/Retry', variant='secondary')
86
- undo_button = gr.Button('↩️ 撤销/Undo', variant='secondary')
87
- clear_button = gr.Button('🗑️ 清除/Clear', variant='secondary')
88
-
89
- saved_input = gr.State()
90
-
91
- with gr.Accordion(label='进阶设置/Advanced options', open=False):
92
- system_prompt = gr.Textbox(label='预设引导词/System prompt',
93
- value=DEFAULT_SYSTEM_PROMPT,
94
- lines=6)
95
- max_new_tokens = gr.Slider(
96
- label='Max new tokens',
97
- minimum=1,
98
- maximum=MAX_MAX_NEW_TOKENS,
99
- step=1,
100
- value=DEFAULT_MAX_NEW_TOKENS,
101
- )
102
- temperature = gr.Slider(
103
- label='情感温度/Temperature',
104
- minimum=0.1,
105
- maximum=4.0,
106
- step=0.1,
107
- value=0.3,
108
- )
109
- top_p = gr.Slider(
110
- label='Top-p (nucleus sampling)',
111
- minimum=0.05,
112
- maximum=1.0,
113
- step=0.05,
114
- value=0.85,
115
- )
116
- top_k = gr.Slider(
117
- label='Top-k',
118
- minimum=1,
119
- maximum=1000,
120
- step=1,
121
- value=5,
122
- )
123
-
124
- gr.Examples(
125
- examples=[
126
- '中华人民共和国的首都是?',
127
-
128
- ],
129
- inputs=textbox,
130
- outputs=[textbox, chatbot],
131
- fn=process_example,
132
- cache_examples=True,
133
- )
134
-
135
- gr.Markdown(LICENSE)
136
-
137
- textbox.submit(
138
- fn=clear_and_save_textbox,
139
- inputs=textbox,
140
- outputs=[textbox, saved_input],
141
- api_name=False,
142
- queue=False,
143
- ).then(
144
- fn=display_input,
145
- inputs=[saved_input, chatbot],
146
- outputs=chatbot,
147
- api_name=False,
148
- queue=False,
149
- ).then(
150
- fn=check_input_token_length,
151
- inputs=[saved_input, chatbot, system_prompt],
152
- api_name=False,
153
- queue=False,
154
- ).success(
155
- fn=generate,
156
- inputs=[
157
- saved_input,
158
- chatbot,
159
- system_prompt,
160
- max_new_tokens,
161
- temperature,
162
- top_p,
163
- top_k,
164
- ],
165
- outputs=chatbot,
166
- api_name=False,
167
- )
168
-
169
- button_event_preprocess = submit_button.click(
170
- fn=clear_and_save_textbox,
171
- inputs=textbox,
172
- outputs=[textbox, saved_input],
173
- api_name=False,
174
- queue=False,
175
- ).then(
176
- fn=display_input,
177
- inputs=[saved_input, chatbot],
178
- outputs=chatbot,
179
- api_name=False,
180
- queue=False,
181
- ).then(
182
- fn=check_input_token_length,
183
- inputs=[saved_input, chatbot, system_prompt],
184
- api_name=False,
185
- queue=False,
186
- ).success(
187
- fn=generate,
188
- inputs=[
189
- saved_input,
190
- chatbot,
191
- system_prompt,
192
- max_new_tokens,
193
- temperature,
194
- top_p,
195
- top_k,
196
- ],
197
- outputs=chatbot,
198
- api_name=False,
199
- )
200
-
201
- retry_button.click(
202
- fn=delete_prev_fn,
203
- inputs=chatbot,
204
- outputs=[chatbot, saved_input],
205
- api_name=False,
206
- queue=False,
207
- ).then(
208
- fn=display_input,
209
- inputs=[saved_input, chatbot],
210
- outputs=chatbot,
211
- api_name=False,
212
- queue=False,
213
- ).then(
214
- fn=generate,
215
- inputs=[
216
- saved_input,
217
- chatbot,
218
- system_prompt,
219
- max_new_tokens,
220
- temperature,
221
- top_p,
222
- top_k,
223
- ],
224
- outputs=chatbot,
225
- api_name=False,
226
- )
227
-
228
- undo_button.click(
229
-
230
- fn=delete_prev_fn,
231
- inputs=chatbot,
232
- outputs=[chatbot, saved_input],
233
- api_name=False,
234
- queue=False,
235
- ).then(
236
- fn=lambda x: x,
237
- inputs=[saved_input],
238
- outputs=textbox,
239
- api_name=False,
240
- queue=False,
241
- )
242
-
243
- clear_button.click(
244
- fn=lambda: ([], ''),
245
- outputs=[chatbot, saved_input],
246
- queue=False,
247
- api_name=False,
248
- )
249
-
250
- demo.queue(max_size=20).launch()
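Each submission above chains four steps: stash-and-clear the textbox, echo the user turn, validate, then stream generate's partial replies into the chat. A minimal sketch of the same pattern, assuming the Gradio 3.x Blocks API used here and a stand-in bot generator in place of run:

    import time
    import gradio as gr

    def add_user_turn(message, history):
        # Stash the message into the history and clear the textbox.
        return "", history + [(message, "")]

    def bot(history):
        # Stand-in for run(): stream a canned reply into the last turn.
        reply = "hello from the bot"
        for i in range(1, len(reply) + 1):
            history[-1] = (history[-1][0], reply[:i])
            time.sleep(0.02)
            yield history

    with gr.Blocks() as demo:
        chatbot = gr.Chatbot()
        textbox = gr.Textbox()
        textbox.submit(add_user_turn, [textbox, chatbot], [textbox, chatbot],
                       queue=False).then(bot, chatbot, chatbot)

    demo.queue().launch()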
 
spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/dtype.py DELETED
@@ -1,39 +0,0 @@
1
- #coding=utf-8
2
- '''
3
- Created on 2016-09-27
4
- @author: dengdan
5
- '''
6
- import numpy as np
7
-
8
- float32 = 'float32'
9
- floatX = float32
10
- int32 = 'int32'
11
- uint8 = 'uint8'
12
- string = 'str'
13
-
14
- def cast(obj, dtype):
15
- if isinstance(obj, list):
16
- return np.asarray(obj, dtype=dtype)  # honor the requested dtype (was hard-coded to floatX)
17
- return np.cast[dtype](obj)  # note: np.cast was removed in numpy 2.0; use np.asarray(obj, dtype) there
18
-
19
- def int(obj):
20
- return cast(obj, 'int')
21
-
22
- def double(obj):
23
- return cast(obj, 'double')
24
-
25
- def is_number(obj):
26
- try:
27
- obj + 1
28
- except Exception:  # anything non-numeric lands here; bare except also swallowed KeyboardInterrupt
29
- return False
30
- return True
31
-
32
- def is_str(s):
33
- return type(s) == str
34
-
35
- def is_list(s):
36
- return type(s) == list
37
-
38
- def is_tuple(s):
39
- return type(s) == tuple
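A quick sanity check of the helpers above (assumes numpy < 2.0, where np.cast still exists, plus the dtype fix noted in cast):

    print(cast([1, 2, 3], int32))       # -> array([1, 2, 3], dtype=int32)
    print(double(2))                    # -> 2.0
    print(is_number('x'), is_str('x'))  # -> False True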
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/DdsImagePlugin.py DELETED
@@ -1,291 +0,0 @@
1
- """
2
- A Pillow loader for .dds files (S3TC-compressed aka DXTC)
3
- Jerome Leclanche <[email protected]>
4
-
5
- Documentation:
6
- https://web.archive.org/web/20170802060935/http://oss.sgi.com/projects/ogl-sample/registry/EXT/texture_compression_s3tc.txt
7
-
8
- The contents of this file are hereby released in the public domain (CC0)
9
- Full text of the CC0 license:
10
- https://creativecommons.org/publicdomain/zero/1.0/
11
- """
12
-
13
- import struct
14
- from io import BytesIO
15
-
16
- from . import Image, ImageFile
17
- from ._binary import o32le as o32
18
-
19
- # Magic ("DDS ")
20
- DDS_MAGIC = 0x20534444
21
-
22
- # DDS flags
23
- DDSD_CAPS = 0x1
24
- DDSD_HEIGHT = 0x2
25
- DDSD_WIDTH = 0x4
26
- DDSD_PITCH = 0x8
27
- DDSD_PIXELFORMAT = 0x1000
28
- DDSD_MIPMAPCOUNT = 0x20000
29
- DDSD_LINEARSIZE = 0x80000
30
- DDSD_DEPTH = 0x800000
31
-
32
- # DDS caps
33
- DDSCAPS_COMPLEX = 0x8
34
- DDSCAPS_TEXTURE = 0x1000
35
- DDSCAPS_MIPMAP = 0x400000
36
-
37
- DDSCAPS2_CUBEMAP = 0x200
38
- DDSCAPS2_CUBEMAP_POSITIVEX = 0x400
39
- DDSCAPS2_CUBEMAP_NEGATIVEX = 0x800
40
- DDSCAPS2_CUBEMAP_POSITIVEY = 0x1000
41
- DDSCAPS2_CUBEMAP_NEGATIVEY = 0x2000
42
- DDSCAPS2_CUBEMAP_POSITIVEZ = 0x4000
43
- DDSCAPS2_CUBEMAP_NEGATIVEZ = 0x8000
44
- DDSCAPS2_VOLUME = 0x200000
45
-
46
- # Pixel Format
47
- DDPF_ALPHAPIXELS = 0x1
48
- DDPF_ALPHA = 0x2
49
- DDPF_FOURCC = 0x4
50
- DDPF_PALETTEINDEXED8 = 0x20
51
- DDPF_RGB = 0x40
52
- DDPF_LUMINANCE = 0x20000
53
-
54
-
55
- # dds.h
56
-
57
- DDS_FOURCC = DDPF_FOURCC
58
- DDS_RGB = DDPF_RGB
59
- DDS_RGBA = DDPF_RGB | DDPF_ALPHAPIXELS
60
- DDS_LUMINANCE = DDPF_LUMINANCE
61
- DDS_LUMINANCEA = DDPF_LUMINANCE | DDPF_ALPHAPIXELS
62
- DDS_ALPHA = DDPF_ALPHA
63
- DDS_PAL8 = DDPF_PALETTEINDEXED8
64
-
65
- DDS_HEADER_FLAGS_TEXTURE = DDSD_CAPS | DDSD_HEIGHT | DDSD_WIDTH | DDSD_PIXELFORMAT
66
- DDS_HEADER_FLAGS_MIPMAP = DDSD_MIPMAPCOUNT
67
- DDS_HEADER_FLAGS_VOLUME = DDSD_DEPTH
68
- DDS_HEADER_FLAGS_PITCH = DDSD_PITCH
69
- DDS_HEADER_FLAGS_LINEARSIZE = DDSD_LINEARSIZE
70
-
71
- DDS_HEIGHT = DDSD_HEIGHT
72
- DDS_WIDTH = DDSD_WIDTH
73
-
74
- DDS_SURFACE_FLAGS_TEXTURE = DDSCAPS_TEXTURE
75
- DDS_SURFACE_FLAGS_MIPMAP = DDSCAPS_COMPLEX | DDSCAPS_MIPMAP
76
- DDS_SURFACE_FLAGS_CUBEMAP = DDSCAPS_COMPLEX
77
-
78
- DDS_CUBEMAP_POSITIVEX = DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_POSITIVEX
79
- DDS_CUBEMAP_NEGATIVEX = DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_NEGATIVEX
80
- DDS_CUBEMAP_POSITIVEY = DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_POSITIVEY
81
- DDS_CUBEMAP_NEGATIVEY = DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_NEGATIVEY
82
- DDS_CUBEMAP_POSITIVEZ = DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_POSITIVEZ
83
- DDS_CUBEMAP_NEGATIVEZ = DDSCAPS2_CUBEMAP | DDSCAPS2_CUBEMAP_NEGATIVEZ
84
-
85
-
86
- # DXT1
87
- DXT1_FOURCC = 0x31545844
88
-
89
- # DXT3
90
- DXT3_FOURCC = 0x33545844
91
-
92
- # DXT5
93
- DXT5_FOURCC = 0x35545844
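Each FourCC constant above is just the four ASCII tag bytes read as a little-endian uint32, which a one-liner confirms:

    import struct
    assert struct.pack("<I", DXT1_FOURCC) == b"DXT1"  # 0x31545844 -> 'D','X','T','1'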
94
-
95
-
96
- # dxgiformat.h
97
-
98
- DXGI_FORMAT_R8G8B8A8_TYPELESS = 27
99
- DXGI_FORMAT_R8G8B8A8_UNORM = 28
100
- DXGI_FORMAT_R8G8B8A8_UNORM_SRGB = 29
101
- DXGI_FORMAT_BC5_TYPELESS = 82
102
- DXGI_FORMAT_BC5_UNORM = 83
103
- DXGI_FORMAT_BC5_SNORM = 84
104
- DXGI_FORMAT_BC6H_UF16 = 95
105
- DXGI_FORMAT_BC6H_SF16 = 96
106
- DXGI_FORMAT_BC7_TYPELESS = 97
107
- DXGI_FORMAT_BC7_UNORM = 98
108
- DXGI_FORMAT_BC7_UNORM_SRGB = 99
109
-
110
-
111
- class DdsImageFile(ImageFile.ImageFile):
112
- format = "DDS"
113
- format_description = "DirectDraw Surface"
114
-
115
- def _open(self):
116
- if not _accept(self.fp.read(4)):
117
- msg = "not a DDS file"
118
- raise SyntaxError(msg)
119
- (header_size,) = struct.unpack("<I", self.fp.read(4))
120
- if header_size != 124:
121
- msg = f"Unsupported header size {repr(header_size)}"
122
- raise OSError(msg)
123
- header_bytes = self.fp.read(header_size - 4)
124
- if len(header_bytes) != 120:
125
- msg = f"Incomplete header: {len(header_bytes)} bytes"
126
- raise OSError(msg)
127
- header = BytesIO(header_bytes)
128
-
129
- flags, height, width = struct.unpack("<3I", header.read(12))
130
- self._size = (width, height)
131
- self.mode = "RGBA"
132
-
133
- pitch, depth, mipmaps = struct.unpack("<3I", header.read(12))
134
- struct.unpack("<11I", header.read(44)) # reserved
135
-
136
- # pixel format
137
- pfsize, pfflags = struct.unpack("<2I", header.read(8))
138
- fourcc = header.read(4)
139
- (bitcount,) = struct.unpack("<I", header.read(4))
140
- masks = struct.unpack("<4I", header.read(16))
141
- if pfflags & DDPF_LUMINANCE:
142
- # Texture contains uncompressed L or LA data
143
- if pfflags & DDPF_ALPHAPIXELS:
144
- self.mode = "LA"
145
- else:
146
- self.mode = "L"
147
-
148
- self.tile = [("raw", (0, 0) + self.size, 0, (self.mode, 0, 1))]
149
- elif pfflags & DDPF_RGB:
150
- # Texture contains uncompressed RGB data
151
- masks = {mask: ["R", "G", "B", "A"][i] for i, mask in enumerate(masks)}
152
- rawmode = ""
153
- if pfflags & DDPF_ALPHAPIXELS:
154
- rawmode += masks[0xFF000000]
155
- else:
156
- self.mode = "RGB"
157
- rawmode += masks[0xFF0000] + masks[0xFF00] + masks[0xFF]
158
-
159
- self.tile = [("raw", (0, 0) + self.size, 0, (rawmode[::-1], 0, 1))]
160
- else:
161
- data_start = header_size + 4
162
- n = 0
163
- if fourcc == b"DXT1":
164
- self.pixel_format = "DXT1"
165
- n = 1
166
- elif fourcc == b"DXT3":
167
- self.pixel_format = "DXT3"
168
- n = 2
169
- elif fourcc == b"DXT5":
170
- self.pixel_format = "DXT5"
171
- n = 3
172
- elif fourcc == b"ATI1":
173
- self.pixel_format = "BC4"
174
- n = 4
175
- self.mode = "L"
176
- elif fourcc == b"ATI2":
177
- self.pixel_format = "BC5"
178
- n = 5
179
- self.mode = "RGB"
180
- elif fourcc == b"BC5S":
181
- self.pixel_format = "BC5S"
182
- n = 5
183
- self.mode = "RGB"
184
- elif fourcc == b"DX10":
185
- data_start += 20
186
- # ignoring flags which pertain to volume textures and cubemaps
187
- (dxgi_format,) = struct.unpack("<I", self.fp.read(4))
188
- self.fp.read(16)
189
- if dxgi_format in (DXGI_FORMAT_BC5_TYPELESS, DXGI_FORMAT_BC5_UNORM):
190
- self.pixel_format = "BC5"
191
- n = 5
192
- self.mode = "RGB"
193
- elif dxgi_format == DXGI_FORMAT_BC5_SNORM:
194
- self.pixel_format = "BC5S"
195
- n = 5
196
- self.mode = "RGB"
197
- elif dxgi_format == DXGI_FORMAT_BC6H_UF16:
198
- self.pixel_format = "BC6H"
199
- n = 6
200
- self.mode = "RGB"
201
- elif dxgi_format == DXGI_FORMAT_BC6H_SF16:
202
- self.pixel_format = "BC6HS"
203
- n = 6
204
- self.mode = "RGB"
205
- elif dxgi_format in (DXGI_FORMAT_BC7_TYPELESS, DXGI_FORMAT_BC7_UNORM):
206
- self.pixel_format = "BC7"
207
- n = 7
208
- elif dxgi_format == DXGI_FORMAT_BC7_UNORM_SRGB:
209
- self.pixel_format = "BC7"
210
- self.info["gamma"] = 1 / 2.2
211
- n = 7
212
- elif dxgi_format in (
213
- DXGI_FORMAT_R8G8B8A8_TYPELESS,
214
- DXGI_FORMAT_R8G8B8A8_UNORM,
215
- DXGI_FORMAT_R8G8B8A8_UNORM_SRGB,
216
- ):
217
- self.tile = [("raw", (0, 0) + self.size, 0, ("RGBA", 0, 1))]
218
- if dxgi_format == DXGI_FORMAT_R8G8B8A8_UNORM_SRGB:
219
- self.info["gamma"] = 1 / 2.2
220
- return
221
- else:
222
- msg = f"Unimplemented DXGI format {dxgi_format}"
223
- raise NotImplementedError(msg)
224
- else:
225
- msg = f"Unimplemented pixel format {repr(fourcc)}"
226
- raise NotImplementedError(msg)
227
-
228
- self.tile = [
229
- ("bcn", (0, 0) + self.size, data_start, (n, self.pixel_format))
230
- ]
231
-
232
- def load_seek(self, pos):
233
- pass
234
-
235
-
236
- def _save(im, fp, filename):
237
- if im.mode not in ("RGB", "RGBA", "L", "LA"):
238
- msg = f"cannot write mode {im.mode} as DDS"
239
- raise OSError(msg)
240
-
241
- rawmode = im.mode
242
- masks = [0xFF0000, 0xFF00, 0xFF]
243
- if im.mode in ("L", "LA"):
244
- pixel_flags = DDPF_LUMINANCE
245
- else:
246
- pixel_flags = DDPF_RGB
247
- rawmode = rawmode[::-1]
248
- if im.mode in ("LA", "RGBA"):
249
- pixel_flags |= DDPF_ALPHAPIXELS
250
- masks.append(0xFF000000)
251
-
252
- bitcount = len(masks) * 8
253
- while len(masks) < 4:
254
- masks.append(0)
255
-
256
- fp.write(
257
- o32(DDS_MAGIC)
258
- + o32(124) # header size
259
- + o32(
260
- DDSD_CAPS | DDSD_HEIGHT | DDSD_WIDTH | DDSD_PITCH | DDSD_PIXELFORMAT
261
- ) # flags
262
- + o32(im.height)
263
- + o32(im.width)
264
- + o32((im.width * bitcount + 7) // 8) # pitch
265
- + o32(0) # depth
266
- + o32(0) # mipmaps
267
- + o32(0) * 11 # reserved
268
- + o32(32) # pfsize
269
- + o32(pixel_flags) # pfflags
270
- + o32(0) # fourcc
271
- + o32(bitcount) # bitcount
272
- + b"".join(o32(mask) for mask in masks) # rgbabitmask
273
- + o32(DDSCAPS_TEXTURE) # dwCaps
274
- + o32(0) # dwCaps2
275
- + o32(0) # dwCaps3
276
- + o32(0) # dwCaps4
277
- + o32(0) # dwReserved2
278
- )
279
- if im.mode == "RGBA":
280
- r, g, b, a = im.split()
281
- im = Image.merge("RGBA", (a, r, g, b))
282
- ImageFile._save(im, fp, [("raw", (0, 0) + im.size, 0, (rawmode, 0, 1))])
283
-
284
-
285
- def _accept(prefix):
286
- return prefix[:4] == b"DDS "
287
-
288
-
289
- Image.register_open(DdsImageFile.format, DdsImageFile, _accept)
290
- Image.register_save(DdsImageFile.format, _save)
291
- Image.register_extension(DdsImageFile.format, ".dds")
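Since registration happens at import time (the three register_* calls above), reading and writing DDS goes through the ordinary Image API; texture.dds below is a placeholder path:

    from PIL import Image

    im = Image.open("texture.dds")
    print(im.format, im.mode, im.size)  # e.g. DDS RGBA (256, 256)
    im.save("copy.dds")                 # uncompressed output via _save above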
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/attr/_funcs.py DELETED
@@ -1,477 +0,0 @@
1
- # SPDX-License-Identifier: MIT
2
-
3
-
4
- import copy
5
-
6
- from ._compat import PY_3_9_PLUS, get_generic_base
7
- from ._make import NOTHING, _obj_setattr, fields
8
- from .exceptions import AttrsAttributeNotFoundError
9
-
10
-
11
- def asdict(
12
- inst,
13
- recurse=True,
14
- filter=None,
15
- dict_factory=dict,
16
- retain_collection_types=False,
17
- value_serializer=None,
18
- ):
19
- """
20
- Return the *attrs* attribute values of *inst* as a dict.
21
-
22
- Optionally recurse into other *attrs*-decorated classes.
23
-
24
- :param inst: Instance of an *attrs*-decorated class.
25
- :param bool recurse: Recurse into classes that are also
26
- *attrs*-decorated.
27
- :param callable filter: A callable whose return value determines whether an
28
- attribute or element is included (``True``) or dropped (``False``). Is
29
- called with the `attrs.Attribute` as the first argument and the
30
- value as the second argument.
31
- :param callable dict_factory: A callable to produce dictionaries from. For
32
- example, to produce ordered dictionaries instead of normal Python
33
- dictionaries, pass in ``collections.OrderedDict``.
34
- :param bool retain_collection_types: Do not convert to ``list`` when
35
- encountering an attribute whose type is ``tuple`` or ``set``. Only
36
- meaningful if ``recurse`` is ``True``.
37
- :param Optional[callable] value_serializer: A hook that is called for every
38
- attribute or dict key/value. It receives the current instance, field
39
- and value and must return the (updated) value. The hook is run *after*
40
- the optional *filter* has been applied.
41
-
42
- :rtype: return type of *dict_factory*
43
-
44
- :raise attrs.exceptions.NotAnAttrsClassError: If *cls* is not an *attrs*
45
- class.
46
-
47
- .. versionadded:: 16.0.0 *dict_factory*
48
- .. versionadded:: 16.1.0 *retain_collection_types*
49
- .. versionadded:: 20.3.0 *value_serializer*
50
- .. versionadded:: 21.3.0 If a dict has a collection for a key, it is
51
- serialized as a tuple.
52
- """
53
- attrs = fields(inst.__class__)
54
- rv = dict_factory()
55
- for a in attrs:
56
- v = getattr(inst, a.name)
57
- if filter is not None and not filter(a, v):
58
- continue
59
-
60
- if value_serializer is not None:
61
- v = value_serializer(inst, a, v)
62
-
63
- if recurse is True:
64
- if has(v.__class__):
65
- rv[a.name] = asdict(
66
- v,
67
- recurse=True,
68
- filter=filter,
69
- dict_factory=dict_factory,
70
- retain_collection_types=retain_collection_types,
71
- value_serializer=value_serializer,
72
- )
73
- elif isinstance(v, (tuple, list, set, frozenset)):
74
- cf = v.__class__ if retain_collection_types is True else list
75
- rv[a.name] = cf(
76
- [
77
- _asdict_anything(
78
- i,
79
- is_key=False,
80
- filter=filter,
81
- dict_factory=dict_factory,
82
- retain_collection_types=retain_collection_types,
83
- value_serializer=value_serializer,
84
- )
85
- for i in v
86
- ]
87
- )
88
- elif isinstance(v, dict):
89
- df = dict_factory
90
- rv[a.name] = df(
91
- (
92
- _asdict_anything(
93
- kk,
94
- is_key=True,
95
- filter=filter,
96
- dict_factory=df,
97
- retain_collection_types=retain_collection_types,
98
- value_serializer=value_serializer,
99
- ),
100
- _asdict_anything(
101
- vv,
102
- is_key=False,
103
- filter=filter,
104
- dict_factory=df,
105
- retain_collection_types=retain_collection_types,
106
- value_serializer=value_serializer,
107
- ),
108
- )
109
- for kk, vv in v.items()
110
- )
111
- else:
112
- rv[a.name] = v
113
- else:
114
- rv[a.name] = v
115
- return rv
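Illustrative use of asdict through the public attrs API, including a filter that drops a field:

    import attr

    @attr.s(auto_attribs=True)
    class C:
        x: int
        secret: str = "hide me"

    print(attr.asdict(C(1)))                                          # {'x': 1, 'secret': 'hide me'}
    print(attr.asdict(C(1), filter=lambda a, v: a.name != "secret"))  # {'x': 1}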
116
-
117
-
118
- def _asdict_anything(
119
- val,
120
- is_key,
121
- filter,
122
- dict_factory,
123
- retain_collection_types,
124
- value_serializer,
125
- ):
126
- """
127
- ``asdict`` only works on attrs instances, this works on anything.
128
- """
129
- if getattr(val.__class__, "__attrs_attrs__", None) is not None:
130
- # Attrs class.
131
- rv = asdict(
132
- val,
133
- recurse=True,
134
- filter=filter,
135
- dict_factory=dict_factory,
136
- retain_collection_types=retain_collection_types,
137
- value_serializer=value_serializer,
138
- )
139
- elif isinstance(val, (tuple, list, set, frozenset)):
140
- if retain_collection_types is True:
141
- cf = val.__class__
142
- elif is_key:
143
- cf = tuple
144
- else:
145
- cf = list
146
-
147
- rv = cf(
148
- [
149
- _asdict_anything(
150
- i,
151
- is_key=False,
152
- filter=filter,
153
- dict_factory=dict_factory,
154
- retain_collection_types=retain_collection_types,
155
- value_serializer=value_serializer,
156
- )
157
- for i in val
158
- ]
159
- )
160
- elif isinstance(val, dict):
161
- df = dict_factory
162
- rv = df(
163
- (
164
- _asdict_anything(
165
- kk,
166
- is_key=True,
167
- filter=filter,
168
- dict_factory=df,
169
- retain_collection_types=retain_collection_types,
170
- value_serializer=value_serializer,
171
- ),
172
- _asdict_anything(
173
- vv,
174
- is_key=False,
175
- filter=filter,
176
- dict_factory=df,
177
- retain_collection_types=retain_collection_types,
178
- value_serializer=value_serializer,
179
- ),
180
- )
181
- for kk, vv in val.items()
182
- )
183
- else:
184
- rv = val
185
- if value_serializer is not None:
186
- rv = value_serializer(None, None, rv)
187
-
188
- return rv
189
-
190
-
191
- def astuple(
192
- inst,
193
- recurse=True,
194
- filter=None,
195
- tuple_factory=tuple,
196
- retain_collection_types=False,
197
- ):
198
- """
199
- Return the *attrs* attribute values of *inst* as a tuple.
200
-
201
- Optionally recurse into other *attrs*-decorated classes.
202
-
203
- :param inst: Instance of an *attrs*-decorated class.
204
- :param bool recurse: Recurse into classes that are also
205
- *attrs*-decorated.
206
- :param callable filter: A callable whose return value determines whether an
207
- attribute or element is included (``True``) or dropped (``False``). Is
208
- called with the `attrs.Attribute` as the first argument and the
209
- value as the second argument.
210
- :param callable tuple_factory: A callable to produce tuples from. For
211
- example, to produce lists instead of tuples.
212
- :param bool retain_collection_types: Do not convert to ``list``
213
- or ``dict`` when encountering an attribute whose type is
214
- ``tuple``, ``dict`` or ``set``. Only meaningful if ``recurse`` is
215
- ``True``.
216
-
217
- :rtype: return type of *tuple_factory*
218
-
219
- :raise attrs.exceptions.NotAnAttrsClassError: If *cls* is not an *attrs*
220
- class.
221
-
222
- .. versionadded:: 16.2.0
223
- """
224
- attrs = fields(inst.__class__)
225
- rv = []
226
- retain = retain_collection_types # Very long. :/
227
- for a in attrs:
228
- v = getattr(inst, a.name)
229
- if filter is not None and not filter(a, v):
230
- continue
231
- if recurse is True:
232
- if has(v.__class__):
233
- rv.append(
234
- astuple(
235
- v,
236
- recurse=True,
237
- filter=filter,
238
- tuple_factory=tuple_factory,
239
- retain_collection_types=retain,
240
- )
241
- )
242
- elif isinstance(v, (tuple, list, set, frozenset)):
243
- cf = v.__class__ if retain is True else list
244
- rv.append(
245
- cf(
246
- [
247
- astuple(
248
- j,
249
- recurse=True,
250
- filter=filter,
251
- tuple_factory=tuple_factory,
252
- retain_collection_types=retain,
253
- )
254
- if has(j.__class__)
255
- else j
256
- for j in v
257
- ]
258
- )
259
- )
260
- elif isinstance(v, dict):
261
- df = v.__class__ if retain is True else dict
262
- rv.append(
263
- df(
264
- (
265
- astuple(
266
- kk,
267
- tuple_factory=tuple_factory,
268
- retain_collection_types=retain,
269
- )
270
- if has(kk.__class__)
271
- else kk,
272
- astuple(
273
- vv,
274
- tuple_factory=tuple_factory,
275
- retain_collection_types=retain,
276
- )
277
- if has(vv.__class__)
278
- else vv,
279
- )
280
- for kk, vv in v.items()
281
- )
282
- )
283
- else:
284
- rv.append(v)
285
- else:
286
- rv.append(v)
287
-
288
- return rv if tuple_factory is list else tuple_factory(rv)
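The tuple counterpart behaves the same way:

    import attr

    @attr.s(auto_attribs=True)
    class Point:
        x: int
        y: int

    print(attr.astuple(Point(1, 2)))                                     # (1, 2)
    print(attr.astuple(Point(1, 2), filter=lambda a, v: a.name != "y"))  # (1,)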
289
-
290
-
291
- def has(cls):
292
- """
293
- Check whether *cls* is a class with *attrs* attributes.
294
-
295
- :param type cls: Class to introspect.
296
- :raise TypeError: If *cls* is not a class.
297
-
298
- :rtype: bool
299
- """
300
- attrs = getattr(cls, "__attrs_attrs__", None)
301
- if attrs is not None:
302
- return True
303
-
304
- # No attrs, maybe it's a specialized generic (A[str])?
305
- generic_base = get_generic_base(cls)
306
- if generic_base is not None:
307
- generic_attrs = getattr(generic_base, "__attrs_attrs__", None)
308
- if generic_attrs is not None:
309
- # Stick it on here for speed next time.
310
- cls.__attrs_attrs__ = generic_attrs
311
- return generic_attrs is not None
312
- return False
313
-
314
-
315
- def assoc(inst, **changes):
316
- """
317
- Copy *inst* and apply *changes*.
318
-
319
- This is different from `evolve` that applies the changes to the arguments
320
- that create the new instance.
321
-
322
- `evolve`'s behavior is preferable, but there are `edge cases`_ where it
323
- doesn't work. Therefore `assoc` is deprecated, but will not be removed.
324
-
325
- .. _`edge cases`: https://github.com/python-attrs/attrs/issues/251
326
-
327
- :param inst: Instance of a class with *attrs* attributes.
328
- :param changes: Keyword changes in the new copy.
329
-
330
- :return: A copy of inst with *changes* incorporated.
331
-
332
- :raise attrs.exceptions.AttrsAttributeNotFoundError: If *attr_name*
333
- couldn't be found on *cls*.
334
- :raise attrs.exceptions.NotAnAttrsClassError: If *cls* is not an *attrs*
335
- class.
336
-
337
- .. deprecated:: 17.1.0
338
- Use `attrs.evolve` instead if you can.
339
- This function will not be removed due to the slightly different approach
340
- compared to `attrs.evolve`.
341
- """
342
- new = copy.copy(inst)
343
- attrs = fields(inst.__class__)
344
- for k, v in changes.items():
345
- a = getattr(attrs, k, NOTHING)
346
- if a is NOTHING:
347
- raise AttrsAttributeNotFoundError(
348
- f"{k} is not an attrs attribute on {new.__class__}."
349
- )
350
- _obj_setattr(new, k, v)
351
- return new
352
-
353
-
354
- def evolve(*args, **changes):
355
- """
356
- Create a new instance, based on the first positional argument with
357
- *changes* applied.
358
-
359
- :param inst: Instance of a class with *attrs* attributes.
360
- :param changes: Keyword changes in the new copy.
361
-
362
- :return: A copy of inst with *changes* incorporated.
363
-
364
- :raise TypeError: If *attr_name* couldn't be found in the class
365
- ``__init__``.
366
- :raise attrs.exceptions.NotAnAttrsClassError: If *cls* is not an *attrs*
367
- class.
368
-
369
- .. versionadded:: 17.1.0
370
- .. deprecated:: 23.1.0
371
- It is now deprecated to pass the instance using the keyword argument
372
- *inst*. It will raise a warning until at least April 2024, after which
373
- it will become an error. Always pass the instance as a positional
374
- argument.
375
- """
376
- # Try to get instance by positional argument first.
377
- # Use changes otherwise and warn it'll break.
378
- if args:
379
- try:
380
- (inst,) = args
381
- except ValueError:
382
- raise TypeError(
383
- f"evolve() takes 1 positional argument, but {len(args)} "
384
- "were given"
385
- ) from None
386
- else:
387
- try:
388
- inst = changes.pop("inst")
389
- except KeyError:
390
- raise TypeError(
391
- "evolve() missing 1 required positional argument: 'inst'"
392
- ) from None
393
-
394
- import warnings
395
-
396
- warnings.warn(
397
- "Passing the instance per keyword argument is deprecated and "
398
- "will stop working in, or after, April 2024.",
399
- DeprecationWarning,
400
- stacklevel=2,
401
- )
402
-
403
- cls = inst.__class__
404
- attrs = fields(cls)
405
- for a in attrs:
406
- if not a.init:
407
- continue
408
- attr_name = a.name # To deal with private attributes.
409
- init_name = a.alias
410
- if init_name not in changes:
411
- changes[init_name] = getattr(inst, attr_name)
412
-
413
- return cls(**changes)
414
-
415
-
416
- def resolve_types(
417
- cls, globalns=None, localns=None, attribs=None, include_extras=True
418
- ):
419
- """
420
- Resolve any strings and forward annotations in type annotations.
421
-
422
- This is only required if you need concrete types in `Attribute`'s *type*
423
- field. In other words, you don't need to resolve your types if you only
424
- use them for static type checking.
425
-
426
- With no arguments, names will be looked up in the module in which the class
427
- was created. If this is not what you want, e.g. if the name only exists
428
- inside a method, you may pass *globalns* or *localns* to specify other
429
- dictionaries in which to look up these names. See the docs of
430
- `typing.get_type_hints` for more details.
431
-
432
- :param type cls: Class to resolve.
433
- :param Optional[dict] globalns: Dictionary containing global variables.
434
- :param Optional[dict] localns: Dictionary containing local variables.
435
- :param Optional[list] attribs: List of attribs for the given class.
436
- This is necessary when calling from inside a ``field_transformer``
437
- since *cls* is not an *attrs* class yet.
438
- :param bool include_extras: Resolve more accurately, if possible.
439
- Pass ``include_extras`` to ``typing.get_type_hints``, if supported by the
440
- typing module. On supported Python versions (3.9+), this resolves the
441
- types more accurately.
442
-
443
- :raise TypeError: If *cls* is not a class.
444
- :raise attrs.exceptions.NotAnAttrsClassError: If *cls* is not an *attrs*
445
- class and you didn't pass any attribs.
446
- :raise NameError: If types cannot be resolved because of missing variables.
447
-
448
- :returns: *cls* so you can use this function also as a class decorator.
449
- Please note that you have to apply it **after** `attrs.define`. That
450
- means the decorator has to come in the line **before** `attrs.define`.
451
-
452
- .. versionadded:: 20.1.0
453
- .. versionadded:: 21.1.0 *attribs*
454
- .. versionadded:: 23.1.0 *include_extras*
455
-
456
- """
457
- # Since calling get_type_hints is expensive we cache whether we've
458
- # done it already.
459
- if getattr(cls, "__attrs_types_resolved__", None) != cls:
460
- import typing
461
-
462
- kwargs = {"globalns": globalns, "localns": localns}
463
-
464
- if PY_3_9_PLUS:
465
- kwargs["include_extras"] = include_extras
466
-
467
- hints = typing.get_type_hints(cls, **kwargs)
468
- for field in fields(cls) if attribs is None else attribs:
469
- if field.name in hints:
470
- # Since fields have been frozen we must work around it.
471
- _obj_setattr(field, "type", hints[field.name])
472
- # We store the class we resolved so that subclasses know they haven't
473
- # been resolved.
474
- cls.__attrs_types_resolved__ = cls
475
-
476
- # Return the class so you can use it as a decorator too.
477
- return cls
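And evolve, the supported replacement for the deprecated assoc above:

    import attr

    @attr.s(auto_attribs=True)
    class Config:
        host: str
        port: int = 8080

    base = Config("localhost")
    print(attr.evolve(base, port=9090))  # Config(host='localhost', port=9090)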
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/S__i_l_f.py DELETED
@@ -1,1037 +0,0 @@
1
- from fontTools.misc import sstruct
2
- from fontTools.misc.fixedTools import floatToFixedToStr
3
- from fontTools.misc.textTools import byteord, safeEval
4
-
5
- # from itertools import *
6
- from . import DefaultTable
7
- from . import grUtils
8
- from array import array
9
- from functools import reduce
10
- import struct, re, sys
11
-
12
- Silf_hdr_format = """
13
- >
14
- version: 16.16F
15
- """
16
-
17
- Silf_hdr_format_3 = """
18
- >
19
- version: 16.16F
20
- compilerVersion: L
21
- numSilf: H
22
- x
23
- x
24
- """
25
-
26
- Silf_part1_format_v3 = """
27
- >
28
- ruleVersion: 16.16F
29
- passOffset: H
30
- pseudosOffset: H
31
- """
32
-
33
- Silf_part1_format = """
34
- >
35
- maxGlyphID: H
36
- extraAscent: h
37
- extraDescent: h
38
- numPasses: B
39
- iSubst: B
40
- iPos: B
41
- iJust: B
42
- iBidi: B
43
- flags: B
44
- maxPreContext: B
45
- maxPostContext: B
46
- attrPseudo: B
47
- attrBreakWeight: B
48
- attrDirectionality: B
49
- attrMirroring: B
50
- attrSkipPasses: B
51
- numJLevels: B
52
- """
53
-
54
- Silf_justify_format = """
55
- >
56
- attrStretch: B
57
- attrShrink: B
58
- attrStep: B
59
- attrWeight: B
60
- runto: B
61
- x
62
- x
63
- x
64
- """
65
-
66
- Silf_part2_format = """
67
- >
68
- numLigComp: H
69
- numUserDefn: B
70
- maxCompPerLig: B
71
- direction: B
72
- attCollisions: B
73
- x
74
- x
75
- x
76
- numCritFeatures: B
77
- """
78
-
79
- Silf_pseudomap_format = """
80
- >
81
- unicode: L
82
- nPseudo: H
83
- """
84
-
85
- Silf_pseudomap_format_h = """
86
- >
87
- unicode: H
88
- nPseudo: H
89
- """
90
-
91
- Silf_classmap_format = """
92
- >
93
- numClass: H
94
- numLinear: H
95
- """
96
-
97
- Silf_lookupclass_format = """
98
- >
99
- numIDs: H
100
- searchRange: H
101
- entrySelector: H
102
- rangeShift: H
103
- """
104
-
105
- Silf_lookuppair_format = """
106
- >
107
- glyphId: H
108
- index: H
109
- """
110
-
111
- Silf_pass_format = """
112
- >
113
- flags: B
114
- maxRuleLoop: B
115
- maxRuleContext: B
116
- maxBackup: B
117
- numRules: H
118
- fsmOffset: H
119
- pcCode: L
120
- rcCode: L
121
- aCode: L
122
- oDebug: L
123
- numRows: H
124
- numTransitional: H
125
- numSuccess: H
126
- numColumns: H
127
- """
128
-
129
- aCode_info = (
130
- ("NOP", 0),
131
- ("PUSH_BYTE", "b"),
132
- ("PUSH_BYTE_U", "B"),
133
- ("PUSH_SHORT", ">h"),
134
- ("PUSH_SHORT_U", ">H"),
135
- ("PUSH_LONG", ">L"),
136
- ("ADD", 0),
137
- ("SUB", 0),
138
- ("MUL", 0),
139
- ("DIV", 0),
140
- ("MIN", 0),
141
- ("MAX", 0),
142
- ("NEG", 0),
143
- ("TRUNC8", 0),
144
- ("TRUNC16", 0),
145
- ("COND", 0),
146
- ("AND", 0), # x10
147
- ("OR", 0),
148
- ("NOT", 0),
149
- ("EQUAL", 0),
150
- ("NOT_EQ", 0),
151
- ("LESS", 0),
152
- ("GTR", 0),
153
- ("LESS_EQ", 0),
154
- ("GTR_EQ", 0),
155
- ("NEXT", 0),
156
- ("NEXT_N", "b"),
157
- ("COPY_NEXT", 0),
158
- ("PUT_GLYPH_8BIT_OBS", "B"),
159
- ("PUT_SUBS_8BIT_OBS", "bBB"),
160
- ("PUT_COPY", "b"),
161
- ("INSERT", 0),
162
- ("DELETE", 0), # x20
163
- ("ASSOC", -1),
164
- ("CNTXT_ITEM", "bB"),
165
- ("ATTR_SET", "B"),
166
- ("ATTR_ADD", "B"),
167
- ("ATTR_SUB", "B"),
168
- ("ATTR_SET_SLOT", "B"),
169
- ("IATTR_SET_SLOT", "BB"),
170
- ("PUSH_SLOT_ATTR", "Bb"),
171
- ("PUSH_GLYPH_ATTR_OBS", "Bb"),
172
- ("PUSH_GLYPH_METRIC", "Bbb"),
173
- ("PUSH_FEAT", "Bb"),
174
- ("PUSH_ATT_TO_GATTR_OBS", "Bb"),
175
- ("PUSH_ATT_TO_GLYPH_METRIC", "Bbb"),
176
- ("PUSH_ISLOT_ATTR", "Bbb"),
177
- ("PUSH_IGLYPH_ATTR", "Bbb"),
178
- ("POP_RET", 0), # x30
179
- ("RET_ZERO", 0),
180
- ("RET_TRUE", 0),
181
- ("IATTR_SET", "BB"),
182
- ("IATTR_ADD", "BB"),
183
- ("IATTR_SUB", "BB"),
184
- ("PUSH_PROC_STATE", "B"),
185
- ("PUSH_VERSION", 0),
186
- ("PUT_SUBS", ">bHH"),
187
- ("PUT_SUBS2", 0),
188
- ("PUT_SUBS3", 0),
189
- ("PUT_GLYPH", ">H"),
190
- ("PUSH_GLYPH_ATTR", ">Hb"),
191
- ("PUSH_ATT_TO_GLYPH_ATTR", ">Hb"),
192
- ("BITOR", 0),
193
- ("BITAND", 0),
194
- ("BITNOT", 0), # x40
195
- ("BITSET", ">HH"),
196
- ("SET_FEAT", "Bb"),
197
- )
198
- aCode_map = dict([(x[0], (i, x[1])) for i, x in enumerate(aCode_info)])
199
-
200
-
201
- def disassemble(aCode):
202
- codelen = len(aCode)
203
- pc = 0
204
- res = []
205
- while pc < codelen:
206
- opcode = byteord(aCode[pc : pc + 1])
207
- if opcode >= len(aCode_info):  # >=: aCode_info[len(aCode_info)] would raise IndexError
208
- instr = aCode_info[0]
209
- else:
210
- instr = aCode_info[opcode]
211
- pc += 1
212
- if instr[1] != 0 and pc >= codelen:
213
- return res
214
- if instr[1] == -1:
215
- count = byteord(aCode[pc])
216
- fmt = "%dB" % count
217
- pc += 1
218
- elif instr[1] == 0:
219
- fmt = ""
220
- else:
221
- fmt = instr[1]
222
- if fmt == "":
223
- res.append(instr[0])
224
- continue
225
- parms = struct.unpack_from(fmt, aCode[pc:])
226
- res.append(instr[0] + "(" + ", ".join(map(str, parms)) + ")")
227
- pc += struct.calcsize(fmt)
228
- return res
229
-
230
-
231
- instre = re.compile(r"^\s*([^(]+)\s*(?:\(([^)]+)\))?")
232
-
233
-
234
- def assemble(instrs):
235
- res = b""
236
- for inst in instrs:
237
- m = instre.match(inst)
238
- if not m or not m.group(1) in aCode_map:
239
- continue
240
- opcode, parmfmt = aCode_map[m.group(1)]
241
- res += struct.pack("B", opcode)
242
- if m.group(2):
243
- if parmfmt == 0:
244
- continue
245
- parms = [int(x) for x in re.split(r",\s*", m.group(2))]
246
- if parmfmt == -1:
247
- l = len(parms)
248
- res += struct.pack(("%dB" % (l + 1)), l, *parms)
249
- else:
250
- res += struct.pack(parmfmt, *parms)
251
- return res
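A round trip through the two helpers above, using opcode names from aCode_info:

    code = assemble(["PUSH_BYTE(3)", "PUSH_BYTE(4)", "ADD", "POP_RET"])
    print(code.hex())         # '010301040630'
    print(disassemble(code))  # ['PUSH_BYTE(3)', 'PUSH_BYTE(4)', 'ADD', 'POP_RET']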
252
-
253
-
254
- def writecode(tag, writer, instrs):
255
- writer.begintag(tag)
256
- writer.newline()
257
- for l in disassemble(instrs):
258
- writer.write(l)
259
- writer.newline()
260
- writer.endtag(tag)
261
- writer.newline()
262
-
263
-
264
- def readcode(content):
265
- res = []
266
- for e in content_string(content).split("\n"):
267
- e = e.strip()
268
- if not len(e):
269
- continue
270
- res.append(e)
271
- return assemble(res)
272
-
273
-
274
- attrs_info = (
275
- "flags",
276
- "extraAscent",
277
- "extraDescent",
278
- "maxGlyphID",
279
- "numLigComp",
280
- "numUserDefn",
281
- "maxCompPerLig",
282
- "direction",
283
- "lbGID",
284
- )
285
- attrs_passindexes = ("iSubst", "iPos", "iJust", "iBidi")
286
- attrs_contexts = ("maxPreContext", "maxPostContext")
287
- attrs_attributes = (
288
- "attrPseudo",
289
- "attrBreakWeight",
290
- "attrDirectionality",
291
- "attrMirroring",
292
- "attrSkipPasses",
293
- "attCollisions",
294
- )
295
- pass_attrs_info = (
296
- "flags",
297
- "maxRuleLoop",
298
- "maxRuleContext",
299
- "maxBackup",
300
- "minRulePreContext",
301
- "maxRulePreContext",
302
- "collisionThreshold",
303
- )
304
- pass_attrs_fsm = ("numRows", "numTransitional", "numSuccess", "numColumns")
305
-
306
-
307
- def writesimple(tag, self, writer, *attrkeys):
308
- attrs = dict([(k, getattr(self, k)) for k in attrkeys])
309
- writer.simpletag(tag, **attrs)
310
- writer.newline()
311
-
312
-
313
- def getSimple(self, attrs, *attr_list):
314
- for k in attr_list:
315
- if k in attrs:
316
- setattr(self, k, int(safeEval(attrs[k])))
317
-
318
-
319
- def content_string(contents):
320
- res = ""
321
- for element in contents:
322
- if isinstance(element, tuple):
323
- continue
324
- res += element
325
- return res.strip()
326
-
327
-
328
- def wrapline(writer, dat, length=80):
329
- currline = ""
330
- for d in dat:
331
- if len(currline) > length:
332
- writer.write(currline[:-1])
333
- writer.newline()
334
- currline = ""
335
- currline += d + " "
336
- if len(currline):
337
- writer.write(currline[:-1])
338
- writer.newline()
339
-
340
-
341
- class _Object:
342
- pass
343
-
344
-
345
- class table_S__i_l_f(DefaultTable.DefaultTable):
346
- """Silf table support"""
347
-
348
- def __init__(self, tag=None):
349
- DefaultTable.DefaultTable.__init__(self, tag)
350
- self.silfs = []
351
-
352
- def decompile(self, data, ttFont):
353
- sstruct.unpack2(Silf_hdr_format, data, self)
354
- self.version = float(floatToFixedToStr(self.version, precisionBits=16))
355
- if self.version >= 5.0:
356
- (data, self.scheme) = grUtils.decompress(data)
357
- sstruct.unpack2(Silf_hdr_format_3, data, self)
358
- base = sstruct.calcsize(Silf_hdr_format_3)
359
- elif self.version < 3.0:
360
- (self.numSilf,) = struct.unpack(">H", data[4:6])  # unpack returns a tuple; keep the scalar
361
- self.scheme = 0
362
- self.compilerVersion = 0
363
- base = 8
364
- else:
365
- self.scheme = 0
366
- sstruct.unpack2(Silf_hdr_format_3, data, self)
367
- base = sstruct.calcsize(Silf_hdr_format_3)
368
-
369
- silfoffsets = struct.unpack_from((">%dL" % self.numSilf), data[base:])
370
- for offset in silfoffsets:
371
- s = Silf()
372
- self.silfs.append(s)
373
- s.decompile(data[offset:], ttFont, self.version)
374
-
375
- def compile(self, ttFont):
376
- self.numSilf = len(self.silfs)
377
- if self.version < 3.0:
378
- hdr = sstruct.pack(Silf_hdr_format, self)
379
- hdr += struct.pack(">HH", self.numSilf, 0)
380
- else:
381
- hdr = sstruct.pack(Silf_hdr_format_3, self)
382
- offset = len(hdr) + 4 * self.numSilf
383
- data = b""
384
- for s in self.silfs:
385
- hdr += struct.pack(">L", offset)
386
- subdata = s.compile(ttFont, self.version)
387
- offset += len(subdata)
388
- data += subdata
389
- if self.version >= 5.0:
390
- return grUtils.compress(self.scheme, hdr + data)
391
- return hdr + data
392
-
393
- def toXML(self, writer, ttFont):
394
- writer.comment("Attributes starting with _ are informative only")
395
- writer.newline()
396
- writer.simpletag(
397
- "version",
398
- version=self.version,
399
- compilerVersion=self.compilerVersion,
400
- compressionScheme=self.scheme,
401
- )
402
- writer.newline()
403
- for s in self.silfs:
404
- writer.begintag("silf")
405
- writer.newline()
406
- s.toXML(writer, ttFont, self.version)
407
- writer.endtag("silf")
408
- writer.newline()
409
-
410
- def fromXML(self, name, attrs, content, ttFont):
411
- if name == "version":
412
- self.scheme = int(safeEval(attrs["compressionScheme"]))
413
- self.version = float(safeEval(attrs["version"]))
414
- self.compilerVersion = int(safeEval(attrs["compilerVersion"]))
415
- return
416
- if name == "silf":
417
- s = Silf()
418
- self.silfs.append(s)
419
- for element in content:
420
- if not isinstance(element, tuple):
421
- continue
422
- tag, attrs, subcontent = element
423
- s.fromXML(tag, attrs, subcontent, ttFont, self.version)
424
-
425
-
426
- class Silf(object):
427
- """A particular Silf subtable"""
428
-
429
- def __init__(self):
430
- self.passes = []
431
- self.scriptTags = []
432
- self.critFeatures = []
433
- self.jLevels = []
434
- self.pMap = {}
435
-
436
- def decompile(self, data, ttFont, version=2.0):
437
- if version >= 3.0:
438
- _, data = sstruct.unpack2(Silf_part1_format_v3, data, self)
439
- self.ruleVersion = float(
440
- floatToFixedToStr(self.ruleVersion, precisionBits=16)
441
- )
442
- _, data = sstruct.unpack2(Silf_part1_format, data, self)
443
- for jlevel in range(self.numJLevels):
444
- j, data = sstruct.unpack2(Silf_justify_format, data, _Object())
445
- self.jLevels.append(j)
446
- _, data = sstruct.unpack2(Silf_part2_format, data, self)
447
- if self.numCritFeatures:
448
- self.critFeatures = struct.unpack_from(
449
- (">%dH" % self.numCritFeatures), data
450
- )
451
- data = data[self.numCritFeatures * 2 + 1 :]
452
- (numScriptTag,) = struct.unpack_from("B", data)
453
- if numScriptTag:
454
- self.scriptTags = [
455
- struct.unpack("4s", data[x : x + 4])[0].decode("ascii")
456
- for x in range(1, 1 + 4 * numScriptTag, 4)
457
- ]
458
- data = data[1 + 4 * numScriptTag :]
459
- (self.lbGID,) = struct.unpack(">H", data[:2])
460
- if self.numPasses:
461
- self.oPasses = struct.unpack(
462
- (">%dL" % (self.numPasses + 1)), data[2 : 6 + 4 * self.numPasses]
463
- )
464
- data = data[6 + 4 * self.numPasses :]
465
- (numPseudo,) = struct.unpack(">H", data[:2])
466
- for i in range(numPseudo):
467
- if version >= 3.0:
468
- pseudo = sstruct.unpack(
469
- Silf_pseudomap_format, data[8 + 6 * i : 14 + 6 * i], _Object()
470
- )
471
- else:
472
- pseudo = sstruct.unpack(
473
- Silf_pseudomap_format_h, data[8 + 4 * i : 12 + 4 * i], _Object()
474
- )
475
- self.pMap[pseudo.unicode] = ttFont.getGlyphName(pseudo.nPseudo)
476
- data = data[8 + 6 * numPseudo :]
477
- currpos = (
478
- sstruct.calcsize(Silf_part1_format)
479
- + sstruct.calcsize(Silf_justify_format) * self.numJLevels
480
- + sstruct.calcsize(Silf_part2_format)
481
- + 2 * self.numCritFeatures
482
- + 1
483
- + 1
484
- + 4 * numScriptTag
485
- + 6
486
- + 4 * self.numPasses
487
- + 8
488
- + 6 * numPseudo
489
- )
490
- if version >= 3.0:
491
- currpos += sstruct.calcsize(Silf_part1_format_v3)
492
- self.classes = Classes()
493
- self.classes.decompile(data, ttFont, version)
494
- for i in range(self.numPasses):
495
- p = Pass()
496
- self.passes.append(p)
497
- p.decompile(
498
- data[self.oPasses[i] - currpos : self.oPasses[i + 1] - currpos],
499
- ttFont,
500
- version,
501
- )
502
-
503
- def compile(self, ttFont, version=2.0):
504
- self.numPasses = len(self.passes)
505
- self.numJLevels = len(self.jLevels)
506
- self.numCritFeatures = len(self.critFeatures)
507
- numPseudo = len(self.pMap)
508
- data = b""
509
- if version >= 3.0:
510
- hdroffset = sstruct.calcsize(Silf_part1_format_v3)
511
- else:
512
- hdroffset = 0
513
- data += sstruct.pack(Silf_part1_format, self)
514
- for j in self.jLevels:
515
- data += sstruct.pack(Silf_justify_format, j)
516
- data += sstruct.pack(Silf_part2_format, self)
517
- if self.numCritFeatures:
518
- data += struct.pack((">%dH" % self.numCritFeatures), *self.critFeatures)  # fixed NameError: was numCritFeaturs
519
- data += struct.pack("BB", 0, len(self.scriptTags))
520
- if len(self.scriptTags):
521
- tdata = [struct.pack("4s", x.encode("ascii")) for x in self.scriptTags]
522
- data += b"".join(tdata)
523
- data += struct.pack(">H", self.lbGID)
524
- self.passOffset = len(data)
525
-
526
- data1 = grUtils.bininfo(numPseudo, 6)
527
- currpos = hdroffset + len(data) + 4 * (self.numPasses + 1)
528
- self.pseudosOffset = currpos + len(data1)
529
- for u, p in sorted(self.pMap.items()):
530
- data1 += struct.pack(
531
- (">LH" if version >= 3.0 else ">HH"), u, ttFont.getGlyphID(p)
532
- )
533
- data1 += self.classes.compile(ttFont, version)
534
- currpos += len(data1)
535
- data2 = b""
536
- datao = b""
537
- for i, p in enumerate(self.passes):
538
- base = currpos + len(data2)
539
- datao += struct.pack(">L", base)
540
- data2 += p.compile(ttFont, base, version)
541
- datao += struct.pack(">L", currpos + len(data2))
542
-
543
- if version >= 3.0:
544
- data3 = sstruct.pack(Silf_part1_format_v3, self)
545
- else:
546
- data3 = b""
547
- return data3 + data + datao + data1 + data2
548
-
549
- def toXML(self, writer, ttFont, version=2.0):
550
- if version >= 3.0:
551
- writer.simpletag("version", ruleVersion=self.ruleVersion)
552
- writer.newline()
553
- writesimple("info", self, writer, *attrs_info)
554
- writesimple("passindexes", self, writer, *attrs_passindexes)
555
- writesimple("contexts", self, writer, *attrs_contexts)
556
- writesimple("attributes", self, writer, *attrs_attributes)
557
- if len(self.jLevels):
558
- writer.begintag("justifications")
559
- writer.newline()
560
- jformat, jnames, jfixes = sstruct.getformat(Silf_justify_format)
561
- for i, j in enumerate(self.jLevels):
562
- attrs = dict([(k, getattr(j, k)) for k in jnames])
563
- writer.simpletag("justify", **attrs)
564
- writer.newline()
565
- writer.endtag("justifications")
566
- writer.newline()
567
- if len(self.critFeatures):
568
- writer.begintag("critFeatures")
569
- writer.newline()
570
- writer.write(" ".join(map(str, self.critFeatures)))
571
- writer.newline()
572
- writer.endtag("critFeatures")
573
- writer.newline()
574
- if len(self.scriptTags):
575
- writer.begintag("scriptTags")
576
- writer.newline()
577
- writer.write(" ".join(self.scriptTags))
578
- writer.newline()
579
- writer.endtag("scriptTags")
580
- writer.newline()
581
- if self.pMap:
582
- writer.begintag("pseudoMap")
583
- writer.newline()
584
- for k, v in sorted(self.pMap.items()):
585
- writer.simpletag("pseudo", unicode=hex(k), pseudo=v)
586
- writer.newline()
587
- writer.endtag("pseudoMap")
588
- writer.newline()
589
- self.classes.toXML(writer, ttFont, version)
590
- if len(self.passes):
591
- writer.begintag("passes")
592
- writer.newline()
593
- for i, p in enumerate(self.passes):
594
- writer.begintag("pass", _index=i)
595
- writer.newline()
596
- p.toXML(writer, ttFont, version)
597
- writer.endtag("pass")
598
- writer.newline()
599
- writer.endtag("passes")
600
- writer.newline()
601
-
602
- def fromXML(self, name, attrs, content, ttFont, version=2.0):
603
- if name == "version":
604
- self.ruleVersion = float(safeEval(attrs.get("ruleVersion", "0")))
605
- if name == "info":
606
- getSimple(self, attrs, *attrs_info)
607
- elif name == "passindexes":
608
- getSimple(self, attrs, *attrs_passindexes)
609
- elif name == "contexts":
610
- getSimple(self, attrs, *attrs_contexts)
611
- elif name == "attributes":
612
- getSimple(self, attrs, *attrs_attributes)
613
- elif name == "justifications":
614
- for element in content:
615
- if not isinstance(element, tuple):
616
- continue
617
- (tag, attrs, subcontent) = element
618
- if tag == "justify":
619
- j = _Object()
620
- for k, v in attrs.items():
621
- setattr(j, k, int(v))
622
- self.jLevels.append(j)
623
- elif name == "critFeatures":
624
- self.critFeatures = []
625
- element = content_string(content)
626
- self.critFeatures.extend(map(int, element.split()))
627
- elif name == "scriptTags":
628
- self.scriptTags = []
629
- element = content_string(content)
630
- for n in element.split():
631
- self.scriptTags.append(n)
632
- elif name == "pseudoMap":
633
- self.pMap = {}
634
- for element in content:
635
- if not isinstance(element, tuple):
636
- continue
637
- (tag, attrs, subcontent) = element
638
- if tag == "pseudo":
639
- k = int(attrs["unicode"], 16)
640
- v = attrs["pseudo"]
641
- self.pMap[k] = v
642
- elif name == "classes":
643
- self.classes = Classes()
644
- for element in content:
645
- if not isinstance(element, tuple):
646
- continue
647
- tag, attrs, subcontent = element
648
- self.classes.fromXML(tag, attrs, subcontent, ttFont, version)
649
- elif name == "passes":
650
- for element in content:
651
- if not isinstance(element, tuple):
652
- continue
653
- tag, attrs, subcontent = element
654
- if tag == "pass":
655
- p = Pass()
656
- for e in subcontent:
657
- if not isinstance(e, tuple):
658
- continue
659
- p.fromXML(e[0], e[1], e[2], ttFont, version)
660
- self.passes.append(p)
661
-
662
-
663
- class Classes(object):
664
- def __init__(self):
665
- self.linear = []
666
- self.nonLinear = []
667
-
668
- def decompile(self, data, ttFont, version=2.0):
669
- sstruct.unpack2(Silf_classmap_format, data, self)
670
- if version >= 4.0:
671
- oClasses = struct.unpack(
672
- (">%dL" % (self.numClass + 1)), data[4 : 8 + 4 * self.numClass]
673
- )
674
- else:
675
- oClasses = struct.unpack(
676
- (">%dH" % (self.numClass + 1)), data[4 : 6 + 2 * self.numClass]
677
- )
678
- for s, e in zip(oClasses[: self.numLinear], oClasses[1 : self.numLinear + 1]):
679
- self.linear.append(
680
- ttFont.getGlyphName(x)
681
- for x in struct.unpack((">%dH" % ((e - s) / 2)), data[s:e])
682
- )
683
- for s, e in zip(
684
- oClasses[self.numLinear : self.numClass],
685
- oClasses[self.numLinear + 1 : self.numClass + 1],
686
- ):
687
- nonLinids = [
688
- struct.unpack(">HH", data[x : x + 4]) for x in range(s + 8, e, 4)
689
- ]
690
- nonLin = dict([(ttFont.getGlyphName(x[0]), x[1]) for x in nonLinids])
691
- self.nonLinear.append(nonLin)
692
-
693
- def compile(self, ttFont, version=2.0):
694
- data = b""
695
- oClasses = []
696
- if version >= 4.0:
697
- offset = 8 + 4 * (len(self.linear) + len(self.nonLinear))
698
- else:
699
- offset = 6 + 2 * (len(self.linear) + len(self.nonLinear))
700
- for l in self.linear:
701
- oClasses.append(len(data) + offset)
702
- gs = [ttFont.getGlyphID(x) for x in l]
703
- data += struct.pack((">%dH" % len(l)), *gs)
704
- for l in self.nonLinear:
705
- oClasses.append(len(data) + offset)
706
- gs = [(ttFont.getGlyphID(x[0]), x[1]) for x in l.items()]
707
- data += grUtils.bininfo(len(gs))
708
- data += b"".join([struct.pack(">HH", *x) for x in sorted(gs)])
709
- oClasses.append(len(data) + offset)
710
- self.numClass = len(oClasses) - 1
711
- self.numLinear = len(self.linear)
712
- return (
713
- sstruct.pack(Silf_classmap_format, self)
714
- + struct.pack(
715
- ((">%dL" if version >= 4.0 else ">%dH") % len(oClasses)), *oClasses
716
- )
717
- + data
718
- )
719
-
720
- def toXML(self, writer, ttFont, version=2.0):
721
- writer.begintag("classes")
722
- writer.newline()
723
- writer.begintag("linearClasses")
724
- writer.newline()
725
- for i, l in enumerate(self.linear):
726
- writer.begintag("linear", _index=i)
727
- writer.newline()
728
- wrapline(writer, l)
729
- writer.endtag("linear")
730
- writer.newline()
731
- writer.endtag("linearClasses")
732
- writer.newline()
733
- writer.begintag("nonLinearClasses")
734
- writer.newline()
735
- for i, l in enumerate(self.nonLinear):
736
- writer.begintag("nonLinear", _index=i + self.numLinear)
737
- writer.newline()
738
- for inp, ind in l.items():
739
- writer.simpletag("map", glyph=inp, index=ind)
740
- writer.newline()
741
- writer.endtag("nonLinear")
742
- writer.newline()
743
- writer.endtag("nonLinearClasses")
744
- writer.newline()
745
- writer.endtag("classes")
746
- writer.newline()
747
-
748
- def fromXML(self, name, attrs, content, ttFont, version=2.0):
749
- if name == "linearClasses":
750
- for element in content:
751
- if not isinstance(element, tuple):
752
- continue
753
- tag, attrs, subcontent = element
754
- if tag == "linear":
755
- l = content_string(subcontent).split()
756
- self.linear.append(l)
757
- elif name == "nonLinearClasses":
758
- for element in content:
759
- if not isinstance(element, tuple):
760
- continue
761
- tag, attrs, subcontent = element
762
- if tag == "nonLinear":
763
- l = {}
764
- for e in subcontent:
765
- if not isinstance(e, tuple):
766
- continue
767
- tag, attrs, subsubcontent = e
768
- if tag == "map":
769
- l[attrs["glyph"]] = int(safeEval(attrs["index"]))
770
- self.nonLinear.append(l)
771
-
772
-
773
- class Pass(object):
774
- def __init__(self):
775
- self.colMap = {}
776
- self.rules = []
777
- self.rulePreContexts = []
778
- self.ruleSortKeys = []
779
- self.ruleConstraints = []
780
- self.passConstraints = b""
781
- self.actions = []
782
- self.stateTrans = []
783
- self.startStates = []
784
-
785
- def decompile(self, data, ttFont, version=2.0):
786
- _, data = sstruct.unpack2(Silf_pass_format, data, self)
787
- (numRange, _, _, _) = struct.unpack(">4H", data[:8])
788
- data = data[8:]
789
- for i in range(numRange):
790
- (first, last, col) = struct.unpack(">3H", data[6 * i : 6 * i + 6])
791
- for g in range(first, last + 1):
792
- self.colMap[ttFont.getGlyphName(g)] = col
793
- data = data[6 * numRange :]
794
- oRuleMap = struct.unpack_from((">%dH" % (self.numSuccess + 1)), data)
795
- data = data[2 + 2 * self.numSuccess :]
796
- rules = struct.unpack_from((">%dH" % oRuleMap[-1]), data)
797
- self.rules = [rules[s:e] for (s, e) in zip(oRuleMap, oRuleMap[1:])]
798
- data = data[2 * oRuleMap[-1] :]
799
- (self.minRulePreContext, self.maxRulePreContext) = struct.unpack("BB", data[:2])
800
- numStartStates = self.maxRulePreContext - self.minRulePreContext + 1
801
- self.startStates = struct.unpack(
802
- (">%dH" % numStartStates), data[2 : 2 + numStartStates * 2]
803
- )
804
- data = data[2 + numStartStates * 2 :]
805
- self.ruleSortKeys = struct.unpack(
806
- (">%dH" % self.numRules), data[: 2 * self.numRules]
807
- )
808
- data = data[2 * self.numRules :]
809
- self.rulePreContexts = struct.unpack(
810
- ("%dB" % self.numRules), data[: self.numRules]
811
- )
812
- data = data[self.numRules :]
813
- (self.collisionThreshold, pConstraint) = struct.unpack(">BH", data[:3])
814
- oConstraints = list(
815
- struct.unpack(
816
- (">%dH" % (self.numRules + 1)), data[3 : 5 + self.numRules * 2]
817
- )
818
- )
819
- data = data[5 + self.numRules * 2 :]
820
- oActions = list(
821
- struct.unpack((">%dH" % (self.numRules + 1)), data[: 2 + self.numRules * 2])
822
- )
823
- data = data[2 * self.numRules + 2 :]
824
- for i in range(self.numTransitional):
825
- a = array(
826
- "H", data[i * self.numColumns * 2 : (i + 1) * self.numColumns * 2]
827
- )
828
- if sys.byteorder != "big":
829
- a.byteswap()
830
- self.stateTrans.append(a)
831
- data = data[self.numTransitional * self.numColumns * 2 + 1 :]
832
- self.passConstraints = data[:pConstraint]
833
- data = data[pConstraint:]
834
- for i in range(len(oConstraints) - 2, -1, -1):
835
- if oConstraints[i] == 0:
836
- oConstraints[i] = oConstraints[i + 1]
837
- self.ruleConstraints = [
838
- (data[s:e] if (e - s > 1) else b"")
839
- for (s, e) in zip(oConstraints, oConstraints[1:])
840
- ]
841
- data = data[oConstraints[-1] :]
842
- self.actions = [
843
- (data[s:e] if (e - s > 1) else b"") for (s, e) in zip(oActions, oActions[1:])  # b"" keeps actions all-bytes for compile()'s b"".join
844
- ]
845
- data = data[oActions[-1] :]
846
- # not using debug
847
-
848
- def compile(self, ttFont, base, version=2.0):
849
- # build it all up backwards
850
- oActions = reduce(
851
- lambda a, x: (a[0] + len(x), a[1] + [a[0]]), self.actions + [b""], (0, [])
852
- )[1]
853
- oConstraints = reduce(
854
- lambda a, x: (a[0] + len(x), a[1] + [a[0]]),
855
- self.ruleConstraints + [b""],
856
- (1, []),
857
- )[1]
858
- constraintCode = b"\000" + b"".join(self.ruleConstraints)
859
- transes = []
860
- for t in self.stateTrans:
861
- if sys.byteorder != "big":
862
- t.byteswap()
863
- transes.append(t.tobytes())
864
- if sys.byteorder != "big":
865
- t.byteswap()
866
- if not len(transes):
867
- self.startStates = [0]
868
- oRuleMap = reduce(
869
- lambda a, x: (a[0] + len(x), a[1] + [a[0]]), self.rules + [[]], (0, [])
870
- )[1]
871
- passRanges = []
872
- gidcolmap = dict([(ttFont.getGlyphID(x[0]), x[1]) for x in self.colMap.items()])
873
- for e in grUtils.entries(gidcolmap, sameval=True):
874
- if e[1]:
875
- passRanges.append((e[0], e[0] + e[1] - 1, e[2][0]))
876
- self.numRules = len(self.actions)
877
- self.fsmOffset = (
878
- sstruct.calcsize(Silf_pass_format)
879
- + 8
880
- + len(passRanges) * 6
881
- + len(oRuleMap) * 2
882
- + 2 * oRuleMap[-1]
883
- + 2
884
- + 2 * len(self.startStates)
885
- + 3 * self.numRules
886
- + 3
887
- + 4 * self.numRules
888
- + 4
889
- )
890
- self.pcCode = (
891
- self.fsmOffset + 2 * self.numTransitional * self.numColumns + 1 + base
892
- )
893
- self.rcCode = self.pcCode + len(self.passConstraints)
894
- self.aCode = self.rcCode + len(constraintCode)
895
- self.oDebug = 0
896
- # now generate output
897
- data = sstruct.pack(Silf_pass_format, self)
898
- data += grUtils.bininfo(len(passRanges), 6)
899
- data += b"".join(struct.pack(">3H", *p) for p in passRanges)
900
- data += struct.pack((">%dH" % len(oRuleMap)), *oRuleMap)
901
- flatrules = reduce(lambda a, x: a + x, self.rules, [])
902
- data += struct.pack((">%dH" % oRuleMap[-1]), *flatrules)
903
- data += struct.pack("BB", self.minRulePreContext, self.maxRulePreContext)
904
- data += struct.pack((">%dH" % len(self.startStates)), *self.startStates)
905
- data += struct.pack((">%dH" % self.numRules), *self.ruleSortKeys)
906
- data += struct.pack(("%dB" % self.numRules), *self.rulePreContexts)
907
- data += struct.pack(">BH", self.collisionThreshold, len(self.passConstraints))
908
- data += struct.pack((">%dH" % (self.numRules + 1)), *oConstraints)
909
- data += struct.pack((">%dH" % (self.numRules + 1)), *oActions)
910
- return (
911
- data
912
- + b"".join(transes)
913
- + struct.pack("B", 0)
914
- + self.passConstraints
915
- + constraintCode
916
- + b"".join(self.actions)
917
- )
918
-
919
- def toXML(self, writer, ttFont, version=2.0):
920
- writesimple("info", self, writer, *pass_attrs_info)
921
- writesimple("fsminfo", self, writer, *pass_attrs_fsm)
922
- writer.begintag("colmap")
923
- writer.newline()
924
- wrapline(
925
- writer,
926
- [
927
- "{}={}".format(*x)
928
- for x in sorted(
929
- self.colMap.items(), key=lambda x: ttFont.getGlyphID(x[0])
930
- )
931
- ],
932
- )
933
- writer.endtag("colmap")
934
- writer.newline()
935
- writer.begintag("staterulemap")
936
- writer.newline()
937
- for i, r in enumerate(self.rules):
938
- writer.simpletag(
939
- "state",
940
- number=self.numRows - self.numSuccess + i,
941
- rules=" ".join(map(str, r)),
942
- )
943
- writer.newline()
944
- writer.endtag("staterulemap")
945
- writer.newline()
946
- writer.begintag("rules")
947
- writer.newline()
948
- for i in range(len(self.actions)):
949
- writer.begintag(
950
- "rule",
951
- index=i,
952
- precontext=self.rulePreContexts[i],
953
- sortkey=self.ruleSortKeys[i],
954
- )
955
- writer.newline()
956
- if len(self.ruleConstraints[i]):
957
- writecode("constraint", writer, self.ruleConstraints[i])
958
- writecode("action", writer, self.actions[i])
959
- writer.endtag("rule")
960
- writer.newline()
961
- writer.endtag("rules")
962
- writer.newline()
963
- if len(self.passConstraints):
964
- writecode("passConstraint", writer, self.passConstraints)
965
- if len(self.stateTrans):
966
- writer.begintag("fsm")
967
- writer.newline()
968
- writer.begintag("starts")
969
- writer.write(" ".join(map(str, self.startStates)))
970
- writer.endtag("starts")
971
- writer.newline()
972
- for i, s in enumerate(self.stateTrans):
973
- writer.begintag("row", _i=i)
974
- # no newlines here
975
- writer.write(" ".join(map(str, s)))
976
- writer.endtag("row")
977
- writer.newline()
978
- writer.endtag("fsm")
979
- writer.newline()
980
-
981
- def fromXML(self, name, attrs, content, ttFont, version=2.0):
982
- if name == "info":
983
- getSimple(self, attrs, *pass_attrs_info)
984
- elif name == "fsminfo":
985
- getSimple(self, attrs, *pass_attrs_fsm)
986
- elif name == "colmap":
987
- e = content_string(content)
988
- for w in e.split():
989
- x = w.split("=")
990
- if len(x) != 2 or x[0] == "" or x[1] == "":
991
- continue
992
- self.colMap[x[0]] = int(x[1])
993
- elif name == "staterulemap":
994
- for e in content:
995
- if not isinstance(e, tuple):
996
- continue
997
- tag, a, c = e
998
- if tag == "state":
999
- self.rules.append([int(x) for x in a["rules"].split(" ")])
1000
- elif name == "rules":
1001
- for element in content:
1002
- if not isinstance(element, tuple):
1003
- continue
1004
- tag, a, c = element
1005
- if tag != "rule":
1006
- continue
1007
- self.rulePreContexts.append(int(a["precontext"]))
1008
- self.ruleSortKeys.append(int(a["sortkey"]))
1009
- con = b""
1010
- act = b""
1011
- for e in c:
1012
- if not isinstance(e, tuple):
1013
- continue
1014
- tag, a, subc = e
1015
- if tag == "constraint":
1016
- con = readcode(subc)
1017
- elif tag == "action":
1018
- act = readcode(subc)
1019
- self.actions.append(act)
1020
- self.ruleConstraints.append(con)
1021
- elif name == "passConstraint":
1022
- self.passConstraints = readcode(content)
1023
- elif name == "fsm":
1024
- for element in content:
1025
- if not isinstance(element, tuple):
1026
- continue
1027
- tag, a, c = element
1028
- if tag == "row":
1029
- s = array("H")
1030
- e = content_string(c)
1031
- s.extend(map(int, e.split()))
1032
- self.stateTrans.append(s)
1033
- elif tag == "starts":
1034
- s = []
1035
- e = content_string(c)
1036
- s.extend(map(int, e.split()))
1037
- self.startStates = s
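To exercise the table end to end, load any Graphite-enabled font; the path below is a placeholder:

    from fontTools.ttLib import TTFont

    font = TTFont("CharisSIL-Regular.ttf")  # placeholder: any font carrying Graphite tables
    if "Silf" in font:
        silf = font["Silf"]                 # decompile() above runs on access
        print(silf.version, len(silf.silfs), "subtable(s)")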