Commit 602d10d · Parent(s): 117c236
Update parquet files (step 89 of 397)
This view is limited to 50 files because it contains too many changes.
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Audioease Altiverb 7 Xl 7 2 6 Vst Aax X86 X64 2016 46 How to Create Realistic Acoustic Spaces with Ease.md +0 -105
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Get Reaction Mechanism in Organic Chemistry by Mukul C Ray PDF Download and Boost Your Organic Chemistry Skills.md +0 -116
- spaces/1gistliPinn/ChatGPT4/Examples/Corel.PDF.Fusion.v1.10.Bilingual.Incl.Keymaker-CORE Full PATCHED Version.md +0 -6
- spaces/1gistliPinn/ChatGPT4/Examples/FULL Stellar Phoenix Video Repair 3.0.0.0 Crack [CracksNow].md +0 -6
- spaces/1phancelerku/anime-remove-background/Cmo Descargar e Instalar Universal Truck Simulator APK MOD con Dinero Infinito y Acceder a los Camiones ms Espectaculares.md +0 -90
- spaces/2ndelement/voicevox/voicevox_engine/metas/__init__.py +0 -6
- spaces/4Taps/SadTalker/src/face3d/util/util.py +0 -208
- spaces/801artistry/RVC801/train/utils.py +0 -500
- spaces/A00001/bingothoo/src/lib/bots/bing/sr.ts +0 -106
- spaces/AI-Hobbyist/Hoyo-RVC/train/losses.py +0 -59
- spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/logger.py +0 -87
- spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/diffusionmodules/util.py +0 -267
- spaces/AP123/CerealBoxMaker/app.py +0 -69
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet152_cifar.py +0 -16
- spaces/Abhilashvj/planogram-compliance/utils/downloads.py +0 -139
- spaces/Adapting/TrendFlow/mypages/sidebar.py +0 -91
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/LayoutChildren.js +0 -6
- spaces/Ajay-user/Optical-Character-Recognition/utils.py +0 -110
- spaces/AlekseyKorshuk/instagram-filter-removal/app.py +0 -80
- spaces/Amrrs/DragGan-Inversion/gui_utils/imgui_utils.py +0 -207
- spaces/Amrrs/DragGan-Inversion/training/dataset.py +0 -252
- spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/decoder/vgg.py +0 -225
- spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x1024_80k_cityscapes.py +0 -2
- spaces/Aravindsssss/gradin/app.py +0 -34
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/command_context.py +0 -27
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/traceback.py +0 -756
- spaces/Atualli/yoloxTeste/yoloxdetect2/helpers.py +0 -111
- spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/backbone.py +0 -53
- spaces/Bajr/softly/run.sh +0 -7
- spaces/Benson/text-generation/Examples/Coche De Carreras De Deriva 2 1.22.0 Apk.md +0 -69
- spaces/Benson/text-generation/Examples/Descargar 60 Segundos Reatomized Pc.md +0 -161
- spaces/BetterAPI/BetterChat/src/lib/types/Settings.ts +0 -13
- spaces/Big-Web/MMSD/README.md +0 -12
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/commands/freeze.py +0 -97
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/install/__init__.py +0 -2
- spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/data/catalog.py +0 -221
- spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/roi_align_rotated.py +0 -88
- spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/datasets/vqa/vqa_loader.py +0 -347
- spaces/CVPR/LIVE/thrust/dependencies/cub/CHANGELOG.md +0 -848
- spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/scatter.h +0 -44
- spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/merge.h +0 -70
- spaces/CVPR/lama-example/models/ade20k/__init__.py +0 -1
- spaces/CVPR/lama-example/saicinpainting/training/data/masks.py +0 -332
- spaces/CVPR/regionclip-demo/detectron2/solver/lr_scheduler.py +0 -238
- spaces/Chemsseddine/summarisation/README.md +0 -12
- spaces/Chris4K/llms_compare/Adobe-Media-Encoder-Cs4-Portablerar.md +0 -68
- spaces/ChrisPreston/diff-svc_minato_aqua/utils/pitch_utils.py +0 -76
- spaces/ClaudioX/mg_sd_esp/README.md +0 -13
- spaces/CofAI/chat/g4f/Provider/Providers/ChatgptAi.py +0 -51
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/ttFont.py +0 -1145
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Audioease Altiverb 7 Xl 7 2 6 Vst Aax X86 X64 2016 46 How to Create Realistic Acoustic Spaces with Ease.md
DELETED
@@ -1,105 +0,0 @@
-
-<h1>Audioease Altiverb 7 XL: The Industry Standard Convolution Reverb Plug-in</h1>
-<p>If you are looking for a reverb plug-in that can create realistic and natural-sounding reverbs from real spaces, you should consider Audioease Altiverb 7 XL. Altiverb 7 XL is the industry-standard convolution reverb plug-in for music and sound professionals. It uses top-quality samples of real spaces to create reverb, ranging from the Sydney Opera House to the cockpit of a jumbo jet. In this article, we review the features, benefits, and drawbacks of Altiverb 7 XL, along with tips on how to use it.</p>
-<h2>Audioease Altiverb 7 Xl 7 2 6 Vst Aax X86 X64 2016 46</h2><br /><p><b><b>DOWNLOAD</b> ✶✶✶ <a href="https://byltly.com/2uKzEd">https://byltly.com/2uKzEd</a></b></p><br /><br />
-<h2>Features of Altiverb 7 XL</h2>
-<h3>High-quality samples of real spaces to create reverb</h3>
-<p>Altiverb 7 XL uses convolution reverb technology, which means it captures the sound of a real space and applies it to your audio signal. This way, you can recreate the acoustics of any location you want without having to go there or record it yourself. You can choose from hundreds of impulse responses (IRs) included with Altiverb 7 XL, or download new ones for free from the Audioease website. You can also create your own IRs using a microphone or a sweep-tone generator.</p>
-<h3>Efficient on the CPU and total-recall automatable</h3>
-<p>Altiverb 7 XL is designed to be efficient on the CPU, so you can run multiple instances without slowing down your system. It also supports total-recall automation, meaning all of the plug-in's settings are saved and recalled with your DAW project. You can also use snapshots to switch between different reverb settings quickly.</p>
-<h3>Supports up to 5.1 surround input and output and up to 384 kHz sampling rates</h3>
-<p>Altiverb 7 XL is suitable not only for stereo tracks but also for surround-sound projects. It supports up to 5.1 surround input and output, so you can apply reverb to each channel separately or together. It also supports sampling rates up to 384 kHz, so you can use it with high-resolution audio formats.</p>
-<h3>Compatible with various plug-in formats on Windows and Mac OS X</h3>
-<p>Altiverb 7 XL is compatible with various plug-in formats on Windows and Mac OS X, so you can use it with your preferred DAW software. On Windows, it supports the AAX Native and VST formats. On Mac OS X, it supports the AAX Native, Audio Unit, MAS, VST, RTAS, and TDM formats. Note, however, that the TDM plug-in is only available to Pro Tools HD users.</p>
-<h2>Impulse Responses Library of Altiverb 7 XL</h2>
-<h3>The most sought-after spaces for music and audio post</h3>
-<p>The Impulse Responses Library of Altiverb 7 XL contains the most sought-after spaces for music and audio post-production. You can find the main concert halls of Berlin, Los Angeles, Vienna, and Amsterdam for your orchestral work; legendary rock studios from New York and Paris; French cathedrals; the Gol Gumbaz of India; and London's Wembley Stadium. You can also find IRs for specific applications such as car interiors, phone booths, bathrooms, and closets.</p>
-<h3>Vintage reverb gear and echo chambers</h3>
-<p>If you are looking for a classic reverb sound, you can also find IRs of vintage reverb gear and purpose-built echo chambers in Altiverb 7 XL. You'll find all the EMT plates you want, spring reverbs, and classic digital gear such as the Lexicon 480L, the Lexicon PCM70, the AMS RMX16, and the EMT 250. Add the Frank Sinatra and Beach Boys echo chambers and you have everything you need to recreate those classic sounds.</p>
-<p>Audioease Altiverb 7 XL convolution reverb plugin<br />
-How to install Audioease Altiverb 7 XL on Windows 10<br />
-Audioease Altiverb 7 XL review and tutorial<br />
-Best settings for Audioease Altiverb 7 XL in FL Studio<br />
-Audioease Altiverb 7 XL vs Waves IR1<br />
-Download Audioease Altiverb 7 XL full version free<br />
-Audioease Altiverb 7 XL presets and impulse responses<br />
-Audioease Altiverb 7 XL compatibility with Pro Tools<br />
-Audioease Altiverb 7 XL crack and serial key<br />
-Audioease Altiverb 7 XL features and specifications<br />
-Audioease Altiverb 7 XL price and discount<br />
-Audioease Altiverb 7 XL system requirements and performance<br />
-Audioease Altiverb 7 XL demo and trial version<br />
-Audioease Altiverb 7 XL user manual and guide<br />
-Audioease Altiverb 7 XL support and customer service<br />
-Audioease Altiverb 7 XL alternatives and competitors<br />
-Audioease Altiverb 7 XL testimonials and feedback<br />
-Audioease Altiverb 7 XL tips and tricks<br />
-Audioease Altiverb 7 XL updates and upgrades<br />
-Audioease Altiverb 7 XL license and activation<br />
-Audioease Altiverb 7 XL refund policy and guarantee<br />
-Audioease Altiverb 7 XL forum and community<br />
-Audioease Altiverb 7 XL video and audio examples<br />
-Audioease Altiverb 7 XL comparison with other Audioease products<br />
-Audioease Altiverb 7 XL benefits and advantages<br />
-Audioease Altiverb 7 XL drawbacks and limitations<br />
-Audioease Altiverb 7 XL history and development<br />
-Audioease Altiverb 7 XL awards and recognition<br />
-Audioease Altiverb 7 XL FAQ and troubleshooting<br />
-Audioease Altiverb 7 XL blog and news<br />
-How to use Audioease Altiverb 7 XL with Ableton Live<br />
-How to create realistic spaces with Audioease Altiverb 7 XL<br />
-How to optimize CPU usage with Audioease Altiverb 7 XL<br />
-How to customize impulse responses with Audioease Altiverb 7 XL<br />
-How to automate parameters with Audioease Altiverb 7 XL<br />
-How to mix vocals with Audioease Altiverb 7 XL<br />
-How to add depth and dimension with Audioease Altiverb 7 XL<br />
-How to enhance drums with Audioease Altiverb 7 XL<br />
-How to emulate classic reverbs with Audioease Altiverb 7 XL<br />
-How to achieve spatial effects with Audioease Altiverb 7 XL<br />
-How to blend dry and wet signals with Audioease Altiverb 7 XL<br />
-How to control reverb tail and decay with Audioease Altiverb 7 XL<br />
-How to adjust reverb size and distance with Audioease Altiverb 7 XL<br />
-How to apply EQ and modulation with Audioease Altiverb 7 XL<br />
-How to use the snapshot feature of Audioease Altiverb 7 XL<br />
-How to import and export impulse responses with Audioease Altiverb 7 XL<br />
-How to access the online library of impulse responses with Audioease Altiverb 7 XL<br />
-How to use the surround mode of Audioease Altiverb 7 XL<br />
-How to solve common problems with Audioease Altiverb 7 XL</p>
-<h3>New impulse responses added regularly and available for free</h3>
-<p>One of the best things about Altiverb 7 XL is that you can always get new impulse responses for free from Audioease. Every month they add new IRs to their library, based on their travels around the world or on requests from users. You can download them directly from within the plug-in using the visual browser or the keyword search field.</p>
-<h2>How to Use Altiverb 7 XL</h2>
-<h3>The visual browser and the keyword search field</h3>
-<p>The visual browser makes it easy to find and select impulse responses in Altiverb 7 XL. You can browse through IRs by clicking photos of rooms or categories, sort them by size or name, or use filters to narrow down your choices. If you know what you are looking for, you can also use the keyword search field to type in a name or a tag.</p>
-<h3>The parameters to tweak the reverb sound</h3>
-<p>Once you have selected an impulse response, you can tweak its sound using various parameters in Altiverb 7 XL. You can adjust the wet/dry mix, pre-delay, reverb time, early reflections, late reflections, EQ, damping, modulation, and more. You can also reverse or invert the IR, or use the stereo width controls to change its spatial characteristics.</p>
-<h3>The automation and presets options</h3>
-<p>Altiverb 7 XL lets you automate any parameter using your DAW's automation features. You can also use snapshots to store and recall different settings within a single instance of Altiverb 7 XL. Additionally, you can save your own presets or load presets from other users or from Audioease.</p>
-<h2>Pros and Cons of Altiverb 7 XL</h2>
-<h3>Pros:</h3>
-<ul>
-<li>Sound quality: Altiverb 7 XL delivers realistic and natural-sounding reverbs that are hard to achieve with other plug-ins.</li>
-<li>Flexibility: Altiverb 7 XL offers a wide range of impulse responses for different genres and applications.</li>
-<li>Ease of operation: Altiverb 7 XL has a user-friendly interface that makes it easy to find and tweak impulse responses.</li>
-<li>Support: Audioease provides excellent customer support and regular updates for Altiverb 7 XL.</li>
-<li>Updates: Audioease adds new impulse responses every month and makes them available for free to Altiverb 7 XL users.</li>
-</ul>
-<h3>Cons:</h3>
-<ul>
-<li>Price: Altiverb 7 XL is not cheap compared to other reverb plug-ins. It costs €849 ($995) for the full version and €499 ($585) for an upgrade from previous versions.</li>
-<li>iLok key requirement: Altiverb requires an iLok key (2nd generation or later) to run, which adds extra cost and inconvenience for some users.</li>
-<li>TDM plug-in only for Pro Tools HD: Altiverb's TDM plug-in is only available to Pro Tools HD users, which limits its compatibility.</li>
-</ul>
-<h2>Conclusion</h2>
-<p>Altiverb 7 XL is a convolution reverb plug-in that can create realistic and natural-sounding reverbs from real spaces. It has many features and benefits that make it the industry standard for music and sound professionals, as well as some drawbacks you should consider before buying it. If you are looking for a reverb plug-in that can give you the sound of any location you want, Altiverb 7 XL is a great choice.</p>
-<h2>FAQs</h2>
-<h3>Q: How can I get Altiverb 7 XL?</h3>
-<p>A: You can buy Altiverb 7 XL from the Audioease website or from a local store. You can also download a demo version to try it out before buying.</p>
-<h3>Q: How can I create my own impulse responses for Altiverb 7 XL?</h3>
-<p>A: You can create your own impulse responses using a microphone or a sweep-tone generator. You can find detailed instructions on the Audioease website or in the Altiverb 7 manual.</p>
-<h3>Q: How can I get more impulse responses for Altiverb 7 XL?</h3>
-<p>A: You can download new impulse responses for free from the Audioease website every month. You can also buy additional IRs from third-party developers or exchange them with other users.</p>
-<h3>Q: How can I use Altiverb 7 XL with surround sound?</h3>
-<p>A: Altiverb 7 XL supports up to 5.1 surround input and output. You can use it with surround tracks or buses in your DAW, and you can use the surround panner to position your sound source in the surround field.</p>
-<h3>Q: How can I get support for Altiverb 7 XL?</h3>
-<p>A: You can get support for Altiverb 7 XL by contacting Audioease via email or phone. You can also find answers to common questions and issues on their website or in the Altiverb 7 manual.</p>
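The convolution-reverb idea described in the deleted article above (capture a space's impulse response, then convolve it with the dry signal) can be sketched generically. This is not Altiverb code; it is a minimal illustration of the underlying technique, with the function name, parameters, and toy impulse response all invented for the example, and it assumes only NumPy is available.

```python
import numpy as np

def convolve_reverb(dry: np.ndarray, ir: np.ndarray, wet: float = 0.5) -> np.ndarray:
    """Apply a convolution reverb: convolve the dry signal with an
    impulse response (IR) and blend the wet and dry signals."""
    # FFT-based linear convolution; rfft zero-pads both inputs to length n.
    n = len(dry) + len(ir) - 1
    wet_sig = np.fft.irfft(np.fft.rfft(dry, n) * np.fft.rfft(ir, n), n)
    # Pad the dry signal out to the reverb tail's length, then mix.
    dry_padded = np.pad(dry, (0, n - len(dry)))
    return (1.0 - wet) * dry_padded + wet * wet_sig

# A unit impulse ("click") through a toy decaying IR: with wet=1.0 the
# output is just the IR itself, ringing for the IR's full length.
impulse = np.zeros(8)
impulse[0] = 1.0
ir = np.array([1.0, 0.5, 0.25, 0.125])
out = convolve_reverb(impulse, ir, wet=1.0)
```

A real IR would be a recorded sweep or clap deconvolved from the room response; the mixing step corresponds to the plug-in's wet/dry control.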
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Get Reaction Mechanism in Organic Chemistry by Mukul C Ray PDF Download and Boost Your Organic Chemistry Skills.md
DELETED
@@ -1,116 +0,0 @@
-<br />
-<h1>Reaction Mechanisms in Organic Chemistry by Mukul C. Ray PDF Download</h1>
-<p>If you are looking for a comprehensive and accessible book on reaction mechanisms in organic chemistry, you might be interested in <strong>Reaction Mechanisms in Organic Chemistry by Mukul C. Ray</strong>. In this article, we give you an overview of the book, its contents, features, target audience and level, and how to download it as a PDF file.</p>
-<h2>Introduction</h2>
-<h3>What are reaction mechanisms?</h3>
-<p>Reaction mechanisms are the detailed steps that show how a chemical reaction occurs at the molecular level. They involve the breaking and forming of bonds, the movement of electrons, the formation and disappearance of intermediates, and the role of catalysts and reagents. Reaction mechanisms help us understand how and why a reaction happens, what factors affect its rate and selectivity, and what products are formed.</p>
-<h2>reaction mechanism in organic chemistry by mukul c ray pdf download</h2><br /><p><b><b>Download Zip</b> ✓✓✓ <a href="https://byltly.com/2uKxMn">https://byltly.com/2uKxMn</a></b></p><br /><br />
-<h3>Why are reaction mechanisms important?</h3>
-<p>Reaction mechanisms are important for several reasons. First, they provide a logical framework for learning and applying organic chemistry: by knowing the common patterns and principles of reaction mechanisms, we can predict the outcome of new reactions, design synthetic routes to desired compounds, and explain experimental observations. Second, they reveal the underlying connections between different reactions and functional groups: by comparing and contrasting reaction mechanisms, we can appreciate the similarities and differences among various organic compounds and their reactivities. Third, they enable us to explore the frontiers of organic chemistry: by proposing and testing new reaction mechanisms, we can discover novel reactions, synthesize complex molecules, and develop new theories and concepts.</p>
-<h2>Overview of the book</h2>
-<h3>Author and publication details</h3>
-<p>The author of the book is <strong>Mukul C. Ray</strong>, a professor of chemistry at the Indian Institute of Technology (IIT) Delhi. He has over 30 years of teaching and research experience in organic chemistry, with special interests in synthetic methodology, natural products, heterocyclic chemistry, and organometallic chemistry. He has published more than 100 research papers in reputed journals and has received several awards and honors for his contributions to the field.</p>
-<p>The book was first published in 2015 by <strong>MTG Learning Media Pvt Ltd</strong>, a leading publisher of books for competitive exams in India. The book has 608 pages and is divided into 16 chapters. The ISBN of the book is 978-9385966350.</p>
-<h3>Contents and features</h3>
-<p>The book covers all the major topics of reaction mechanisms in organic chemistry, such as nucleophiles, bases, leaving groups, reaction intermediates, nucleophilic substitution reactions, elimination reactions, free-radical reactions, electrophilic and nucleophilic addition reactions, substitution on aromatic rings, reactions of acid derivatives, pericyclic reactions, photochemical reactions, oxidation-reduction reactions, rearrangements, named reactions, and reagents.</p>
-<p>The book has several features that make it useful for learning and revision:</p>
-<ul>
-<li>Each chapter begins with an introduction that summarizes the main concepts and objectives.</li>
-<li>The theory is explained in a clear and concise manner with examples and illustrations.</li>
-<li>The mechanisms are shown with curved arrows that indicate the movement of electrons.</li>
-<li>The intermediates are highlighted with boxes that show their structure and stability.</li>
-<li>The factors that influence the rate and selectivity of reactions are discussed with relevant examples.</li>
-<li>Common errors and misconceptions are pointed out with warnings.</li>
-<li>Each chapter ends with a summary that reviews the key points.</li>
-<li>The exercises include multiple-choice questions (MCQs), short-answer questions (SAQs), long-answer questions (LAQs), assertion-reason questions (ARQs), matching questions (MQs), fill-in-the-blanks (FIBs), true-false questions (TFQs), etc.</li>
-<li>Answers and solutions to all the exercises are given at the end of the book.</li>
-<li>The appendices include tables of common reagents, functional groups, named reactions, etc.</li>
-</ul>
-<h3>Target audience and level</h3>
-<p>The book is intended for students preparing for various competitive exams in India such as NEET, JEE Main & Advanced, PETs, GATE, JAM, and CSIR-UGC NET. It is also suitable for undergraduate students studying organic chemistry as part of their curriculum. The book assumes that readers have some basic knowledge of organic chemistry, such as nomenclature, structure, and bonding, and covers both basic and advanced topics of reaction mechanisms with appropriate depth and difficulty.</p>
-<h2>How to download the book</h2>
-<h3>Online sources and links</h3>
-<p>If you want to download the book as a PDF file for free or at a low cost, you can try some online sources that offer this service. However, you should be careful about the quality and legality of these sources, as they may not be authorized by the author or publisher. Some possible online sources are:</p>
-<p>reaction mechanism in organic chemistry by mukul c ray ebook free<br />
-download reaction mechanism in organic chemistry by mukul c ray pdf online<br />
-reaction mechanism in organic chemistry by mukul c ray book review<br />
-how to get reaction mechanism in organic chemistry by mukul c ray pdf for free<br />
-reaction mechanism in organic chemistry by mukul c ray solutions manual pdf<br />
-reaction mechanism in organic chemistry by mukul c ray pdf google drive<br />
-reaction mechanism in organic chemistry by mukul c ray pdf reddit<br />
-best books on reaction mechanism in organic chemistry like mukul c ray<br />
-reaction mechanism in organic chemistry by mukul c ray lecture notes pdf<br />
-reaction mechanism in organic chemistry by mukul c ray pdf quora<br />
-reaction mechanism in organic chemistry by mukul c ray pdf 4shared<br />
-reaction mechanism in organic chemistry by mukul c ray pdf scribd<br />
-reaction mechanism in organic chemistry by mukul c ray pdf slideshare<br />
-reaction mechanism in organic chemistry by mukul c ray pdf libgen<br />
-reaction mechanism in organic chemistry by mukul c ray pdf torrent<br />
-reaction mechanism in organic chemistry by mukul c ray pdf flipkart<br />
-reaction mechanism in organic chemistry by mukul c ray pdf amazon<br />
-buy reaction mechanism in organic chemistry by mukul c ray hardcover<br />
-reaction mechanism in organic chemistry by mukul c ray summary and analysis<br />
-reaction mechanism in organic chemistry by mukul c ray pdf chapter wise<br />
-reaction mechanism in organic chemistry by mukul c ray mcq pdf<br />
-reaction mechanism in organic chemistry by mukul c ray objective questions pdf<br />
-reaction mechanism in organic chemistry by mukul c ray previous year questions pdf<br />
-reaction mechanism in organic chemistry by mukul c ray practice problems pdf<br />
-reaction mechanism in organic chemistry by mukul c ray video lectures<br />
-reaction mechanism in organic chemistry by mukul c ray youtube playlist<br />
-reaction mechanism in organic chemistry by mukul c ray course syllabus<br />
-reaction mechanism in organic chemistry by mukul c ray reference books<br />
-reaction mechanism in organic chemistry by mukul c ray recommended books<br />
-reaction mechanism in organic chemistry by mukul c ray related books<br />
-compare and contrast reaction mechanism in organic chemistry by mukul c ray and other books<br />
-advantages and disadvantages of reaction mechanism in organic chemistry by mukul c ray book<br />
-pros and cons of reaction mechanism in organic chemistry by mukul c ray book<br />
-benefits and drawbacks of reaction mechanism in organic chemistry by mukul c ray book<br />
-strengths and weaknesses of reaction mechanism in organic chemistry by mukul c ray book<br />
-features and specifications of reaction mechanism in organic chemistry by mukul c ray book<br />
-feedback and testimonials of reaction mechanism in organic chemistry by mukul c ray book<br />
-ratings and reviews of reaction mechanism in organic chemistry by mukul c ray book<br />
-popularity and demand of reaction mechanism in organic chemistry by mukul c ray book<br />
-quality and reliability of reaction mechanism in organic chemistry by mukul c ray book<br />
-price and value of reaction mechanism in organic chemistry by mukul c ray book<br />
-availability and accessibility of reaction mechanism in organic chemistry by mukul c ray book<br />
-edition and publication of reaction mechanism in organic chemistry by mukul c ray book<br />
-author and publisher of reaction mechanism in organic chemistry by mukul c ray book<br />
-biography and achievements of mukul c ray author of reaction mechanism in organic chemistry book<br />
-awards and recognition of mukul c ray author of reaction mechanism in organic chemistry book<br />
-interviews and articles of mukul c ray author of reaction mechanism in organic chemistry book<br />
-quotes and insights of mukul c ray author of reaction mechanism in organic chemistry book<br />
-contact and social media of mukul c ray author of reaction mechanism in organic chemistry book</p>
83 |
-
<ul>
|
84 |
-
<li><a href="https://www.goodreads.com/book/show/25962750-reaction-mechanisms-in-organic-chemistry">Goodreads</a>: This is a popular website where you can find reviews, ratings, summaries etc. of various books. You can also join groups or forums where you can discuss books with other readers. Sometimes you can find links to download books as PDF files from other websites or users.</li>
|
85 |
-
<li><a href="https://mtg.in/engineering-entrance-exams/jee-exam-books-main-and-advanced/chemistry-books/reaction-mechanisms-in-organic-chemistry/">MTG Learning Media</a>: This is the official website of the publisher where you can find information about the book such as price, availability etc. You can also order the book online or find a nearby bookstore where you can buy it.</li>
|
86 |
-
<li><a href="https://www.pdfdrive.com/reaction-mechanisms-in-organic-chemistry-by-mukul-c-ray-ebooks.html">PDF Drive</a>: This is a website where you can search for PDF files of various books or documents. You can download them for free or at a low cost depending on the source. However, you should be aware that some files may be incomplete or inaccurate or may contain viruses or malware.</li>
|
87 |
-
</ul>
|
88 |
-
<h3>Advantages and disadvantages of downloading</h3>
|
89 |
-
<p>Downloading the book as a PDF file has some advantages and disadvantages that you should consider before doing so. Some advantages are:</p>
|
90 |
-
<ul>
|
91 |
-
<li>You can access the book anytime anywhere without carrying a physical copy.</li>
|
92 |
-
<li>You can save money by not buying a printed copy.</li>
|
93 |
-
<li>You can highlight or annotate parts of the book using software tools.</li>
|
94 |
-
<li>You can search for keywords or phrases within the book using software tools.</li>
|
95 |
-
</ul>
|
96 |
-
<p>Some disadvantages are:</p>
|
97 |
-
<ul>
|
98 |
-
<li>You may not get the latest or correct version of the book as it may be outdated or modified by unauthorized sources.</li>
|
99 |
-
<li>You may violate the intellectual property rights of the author or publisher by downloading or sharing an illegal copy.</li>
|
100 |
-
<li>You may harm your device or data by downloading files that contain viruses or malware.</li>
|
101 |
-
<li>You may lose your concentration or interest by reading on a screen rather than on paper.</li>
|
102 |
-
</ul>
|
103 |
-
<h3>Alternatives and recommendations</h3>
|
104 |
-
<p>If you do not want to download the book as a PDF file or if you cannot find a reliable source to do so, you can try some alternatives or recommendations that may help you learn reaction mechanisms in organic chemistry better. Some alternatives or recommendations are:</p>
|
105 |
-
<ul>
<li>Borrowing or buying a physical copy: You can borrow or buy a physical copy of the book from your library or bookstore if it is available. This way you can enjoy reading on paper without worrying about quality or legality issues.</li>
<ul>
<li><strong>Online courses or lectures</strong>: You can enroll in online courses or watch online lectures on reaction mechanisms in organic chemistry from reputable sources such as Khan Academy, the IIT Delhi Chemistry Department, NPTEL Organic Chemistry II, Harvard University's Principles of Organic Chemistry, etc.</li>
<li><strong>Books or textbooks</strong>: You can read books or textbooks on reaction mechanisms in organic chemistry that cover the subject in depth and detail. Some examples are: Reaction Mechanisms in Organic Chemistry by Mukul C. Ray; Organic Chemistry by Clayden, Greeves, Warren and Wothers; Organic Chemistry by Bruice; Organic Chemistry by Carey and Sundberg; Advanced Organic Chemistry by March; etc.</li>
<li><strong>Websites or blogs</strong>: You can visit websites or blogs that provide information, tutorials, tips, tricks, etc. on reaction mechanisms in organic chemistry. Some examples are: Master Organic Chemistry, Chemguide, Organic Chemistry Portal, The Curious Wavefunction, etc.</li>
<li><strong>YouTube videos or podcasts</strong>: You can watch YouTube videos or listen to podcasts that explain or demonstrate reaction mechanisms in organic chemistry in an engaging and entertaining way. Some examples are: Leah4Sci, The Organic Chemistry Tutor, Professor Dave Explains, The Skeptics' Guide to the Universe, etc.</li>
<li><strong>Online forums or communities</strong>: You can join online forums or communities where you can discuss, ask questions, share ideas, etc. on reaction mechanisms in organic chemistry with other students or experts. Some examples are: Reddit's r/organicchemistry, Stack Exchange Chemistry, Quora's Organic Chemistry topic, etc.</li>
</ul>
</p>
spaces/1gistliPinn/ChatGPT4/Examples/Corel.PDF.Fusion.v1.10.Bilingual.Incl.Keymaker-CORE Full PATCHED Version.md
DELETED
@@ -1,6 +0,0 @@
<p>In sum, health care facilities seem to have more bilingual staff than translation services [33], but most of these employees remain untrained in interpreting [34, 35], and the quality of interpreting is often below international standards [26, 36].</p>
<h2>Corel.PDF.Fusion.v1.10.Bilingual.Incl.Keymaker-CORE full version</h2><br /><p><b><b>Download Zip</b> 🔗 <a href="https://imgfil.com/2uxYaZ">https://imgfil.com/2uxYaZ</a></b></p><br /><br />
<p>Insufficient use of professional interpreters can lead to various situations with potentially negative consequences for quality of care (see Fig. 2; values given for overall participants). Participants were therefore presented with a list of situations potentially arising as a consequence of unaddressed language barriers and asked to indicate all those they had encountered at least once over the past year that could have been mitigated if a professional interpreter had been present. Of the 504 respondents answering these questions on potential consequences, four fifths of PCP and of FD felt they had not been able to provide appropriate care for patient and family due to the language barrier (total 77.6%; FD: 75.5%, PCP: 80.2%, <i>p</i>=0.215; overall participants: 65.3%). Nearly two thirds of respondents (62.3%) reported difficulty determining the right diagnosis due to difficulties in obtaining a full patient history, with FD more often concerned (FD: 74.8%, PCP: 58.1% of respondents, <i>p</i>=0.001). It is therefore not surprising that they more often confirmed having ordered additional exams (FD: 38.5% vs. PCP: 28.6% of respondents, <i>p</i>=0.021) due to an insufficient patient history. Extrapolated to all questionnaire participants, this would represent 28.9% of physicians having ordered extra exams at least once a year due to the language barrier.</p>
spaces/1gistliPinn/ChatGPT4/Examples/FULL Stellar Phoenix Video Repair 3.0.0.0 Crack [CracksNow].md
DELETED
@@ -1,6 +0,0 @@
<h2>FULL Stellar Phoenix Video Repair 3.0.0.0 Crack [CracksNow]</h2><br /><p><b><b>Download Zip</b> ✦✦✦ <a href="https://imgfil.com/2uxXpt">https://imgfil.com/2uxXpt</a></b></p><br /><br />
<br />
CRACK Ntuit QuickBooks Enterprise 19.2.1 R3 License Key ... FULL Stellar Phoenix Video Repair 3.0.0.0 Crack [CracksNow]. 2d Text Preset ...<br />
<br />
<br />
<p></p>
spaces/1phancelerku/anime-remove-background/Cmo Descargar e Instalar Universal Truck Simulator APK MOD con Dinero Infinito y Acceder a los Camiones ms Espectaculares.md
DELETED
@@ -1,90 +0,0 @@
<br />
<h1>Universal Truck Simulator APK Mod with Unlimited Money: What Is It and How Do You Download It?</h1>
<h2>Introduction</h2>
<p>Do you enjoy driving-simulation games? Would you like to drive a truck through different settings and take on all kinds of missions? If so, you will love <strong>Universal Truck Simulator</strong>, a truck-simulation game for Android devices that offers a realistic, immersive experience.</p>
<p>If you want to get even more out of the game, you may be interested to know that there is an <strong>unlimited-money mod</strong> that gives you as much in-game money as you want. With it you can buy and upgrade every truck you like, customize them to your taste and access all of the game's features without restrictions.</p>
<h2>universal truck simulator apk mod dinero infinito</h2><br /><p><b><b>Download</b> ○ <a href="https://jinyurl.com/2uNTb0">https://jinyurl.com/2uNTb0</a></b></p><br /><br />
<p>In this article we explain what the unlimited-money mod for Universal Truck Simulator is and how to download and install it. We also cover the game's main features and answer some frequently asked questions. Read on to learn everything you need to know about this game!</p>
<h2>Game features</h2>
<p>Universal Truck Simulator is a truck-simulation game that lets you drive a wide variety of trucks across different settings and complete all kinds of missions. Its features make it very appealing and fun:</p>
<h3>Realistic, detailed graphics</h3>
<p>The game's impressive graphics make you feel as if you were driving a real truck. You can appreciate the detail of the trucks, the trailers, the scenery, the weather, the traffic and much more. The game also has realistic physics that make driving more challenging and exciting.</p>
<h3>A variety of trucks and trailers</h3>
<p>The game offers a wide range of trucks and trailers to choose from. You can drive anything from classic trucks to modern ones, from small to giant, from American to European. Each truck has its own characteristics and performance, as well as its own sound and handling. You can also choose between different trailer types, such as box trailers, tankers, containers, timber carriers, car carriers and more.</p>
<h3>Game modes and missions</h3>
<p>The game has several modes so you never get bored. You can play in free mode, where you explore the maps at will; in career mode, where you complete missions and earn money; or in multiplayer mode, where you compete or cooperate with other players online. Missions are varied and can involve hauling cargo, delivering packages, towing broken-down vehicles or taking part in races.</p>
<h3>Customization and upgrades</h3>
<p>The game lets you customize and upgrade your trucks to your liking. You can change the color, wheels, headlights, mirrors, bumpers, decals and more. You can also upgrade the engine, transmission, brakes, suspension and other mechanical components. With the unlimited-money mod you can do all of this without spending a cent.</p>
<h2>How to download and install the unlimited-money mod</h2>
<p>To download and install the unlimited-money mod for Universal Truck Simulator, you need to follow a few simple steps. First, though, keep some requirements and precautions in mind:</p>
<p>descargar universal truck simulator apk mod dinero ilimitado<br />
universal truck simulator apk mod unlimited money and gems<br />
como instalar universal truck simulator apk mod con dinero infinito<br />
universal truck simulator apk mod hack version download<br />
universal truck simulator apk mod latest version 2021<br />
universal truck simulator apk mod free shopping and upgrades<br />
universal truck simulator apk mod mega mod menu<br />
universal truck simulator apk mod gameplay and review<br />
universal truck simulator apk mod offline mode<br />
universal truck simulator apk mod all trucks unlocked<br />
universal truck simulator apk mod android 1 com<br />
universal truck simulator apk mod rexdl com<br />
universal truck simulator apk mod revdl com<br />
universal truck simulator apk mod happymod com<br />
universal truck simulator apk mod apkpure com<br />
universal truck simulator apk mod mediafire com<br />
universal truck simulator apk mod mega nz<br />
universal truck simulator apk mod google drive<br />
universal truck simulator apk mod zippyshare com<br />
universal truck simulator apk mod 4shared com<br />
universal truck simulator apk mod no root required<br />
universal truck simulator apk mod anti ban protection<br />
universal truck simulator apk mod no ads or popups<br />
universal truck simulator apk mod easy installation guide<br />
universal truck simulator apk mod full hd graphics<br />
universal truck simulator apk mod realistic physics and sounds<br />
universal truck simulator apk mod custom skins and decals<br />
universal truck simulator apk mod new maps and routes<br />
universal truck simulator apk mod best settings and tips<br />
universal truck simulator apk mod cheats and tricks<br />
universal truck simulator apk mod online multiplayer mode<br />
universal truck simulator apk mod support for gamepad and steering wheel<br />
universal truck simulator apk mod high fps and smooth performance<br />
universal truck simulator apk mod low mb and battery saver<br />
universal truck simulator apk mod compatible with all devices and android versions<br />
universal truck simulator apk mod update and patch notes<br />
universal truck simulator apk mod bug fixes and improvements<br />
universal truck simulator apk mod features and benefits<br />
universal truck simulator apk mod pros and cons<br />
universal truck simulator apk mod ratings and reviews</p>
<h3>Requirements and precautions</h3>
<ul>
<li>You need an Android device with at least 4 GB of RAM and 500 MB of free space.</li>
<li>You need a stable internet connection to download the mod and play online.</li>
<li>You need to uninstall the original version of the game if you have it installed.</li>
<li>You need to enable the "Unknown sources" option in your device's security settings in order to install the mod.</li>
<li>You need to watch out for viruses or malware on some of the websites that offer the mod. We recommend using an antivirus or downloading the mod from a trusted source.</li>
</ul>
<h3>Steps to download and install the unlimited-money mod</h3>
<ol>
<li>Search online for a website that offers the unlimited-money mod for Universal Truck Simulator. You can use the Bing search engine to find one.</li>
<li>Download the mod's APK file to your device. Make sure it is compatible with the latest version of the game.</li>
<li>Open the APK file and follow the instructions to install the mod on your device.</li>
<li>Launch the game and enjoy having all the money you want.</li>
</ol>
<h2>Conclusion</h2>
<p>Universal Truck Simulator is a truck-simulation game that offers a realistic, immersive experience. You can drive a wide variety of trucks across different settings and complete all kinds of missions. The game has impressive graphics, realistic physics, several game modes and many customization and upgrade options. If you want to get even more out of it, you can download and install the unlimited-money mod, which gives you as much in-game money as you want, so you can buy and upgrade every truck you like, customize them to your taste and access all of the game's features without restrictions. You only need to follow a few simple steps and keep some requirements and precautions in mind. We hope this article has been useful and that you enjoy this game.</p>
<h2>Frequently asked questions</h2>
<p>Below are some frequently asked questions about the game and the unlimited-money mod:</p>
<h4>Is it safe to use the unlimited-money mod?</h4>
<p>Using the unlimited-money mod carries some risks, such as infecting your device with viruses or malware, or being banned from the game for cheating. We therefore recommend using an antivirus or downloading the mod from a trusted source, and not abusing the mod so as not to draw the attention of the developers or other players.</p>
<h4>What other advantages does the unlimited-money mod have?</h4>
<p>Besides unlimited in-game money, the mod also unlocks every truck, trailer, part and accessory in the game, so you can try everything available and pick what you like best.</p>
<h4>What other mods exist for Universal Truck Simulator?</h4>
<p>Other mods for Universal Truck Simulator can improve your experience. Some examples: the unlimited-fuel mod, which lets you drive without worrying about the fuel level; the unlimited-speed mod, which lets you go as fast as you want; the reduced-damage mod, which protects your truck and trailer from damage; or the full-map mod, which reveals every location and mission in the game. You can find these mods online and download them following the same steps as for the unlimited-money mod.</p>
<h4>What other truck-simulation games are there for Android?</h4>
<p>If you like truck-simulation games, you can also try others on your Android device. Some examples: <strong>Truck Simulator 2018: Europe</strong>, which lets you drive across Europe's roads; <strong>World Truck Driving Simulator</strong>, which offers a wide variety of trucks and settings; or <strong>Grand Truck Simulator 2</strong>, which has a more arcade-like, playful style.</p>
<h4>What other simulation games are there for Android?</h4>
<p>If you like simulation games in general, there are many others you can enjoy on your Android device. Some examples: <strong>SimCity BuildIt</strong>, where you build and manage your own city; <strong>Farming Simulator 20</strong>, where you grow crops and tend your farm; or <strong>Flight Pilot Simulator 3D</strong>, where you fly different aircraft and complete air missions.</p>
<br />
<br />
spaces/2ndelement/voicevox/voicevox_engine/metas/__init__.py
DELETED
@@ -1,6 +0,0 @@
from . import Metas, MetasStore

__all__ = [
    "Metas",
    "MetasStore",
]
spaces/4Taps/SadTalker/src/face3d/util/util.py
DELETED
@@ -1,208 +0,0 @@
"""This script contains basic utilities for Deep3DFaceRecon_pytorch
"""
from __future__ import print_function
import numpy as np
import torch
from PIL import Image
import os
import importlib
import argparse
from argparse import Namespace
import torchvision


def str2bool(v):
    if isinstance(v, bool):
        return v
    if v.lower() in ('yes', 'true', 't', 'y', '1'):
        return True
    elif v.lower() in ('no', 'false', 'f', 'n', '0'):
        return False
    else:
        raise argparse.ArgumentTypeError('Boolean value expected.')


def copyconf(default_opt, **kwargs):
    conf = Namespace(**vars(default_opt))
    for key in kwargs:
        setattr(conf, key, kwargs[key])
    return conf


def genvalconf(train_opt, **kwargs):
    conf = Namespace(**vars(train_opt))
    attr_dict = train_opt.__dict__
    for key, value in attr_dict.items():
        if 'val' in key and key.split('_')[0] in attr_dict:
            setattr(conf, key.split('_')[0], value)

    for key in kwargs:
        setattr(conf, key, kwargs[key])

    return conf


def find_class_in_module(target_cls_name, module):
    target_cls_name = target_cls_name.replace('_', '').lower()
    clslib = importlib.import_module(module)
    cls = None
    for name, clsobj in clslib.__dict__.items():
        if name.lower() == target_cls_name:
            cls = clsobj

    assert cls is not None, "In %s, there should be a class whose name matches %s in lowercase without underscore(_)" % (module, target_cls_name)

    return cls


def tensor2im(input_image, imtype=np.uint8):
    """Converts a Tensor array into a numpy image array.

    Parameters:
        input_image (tensor) -- the input image tensor array, range (0, 1)
        imtype (type)        -- the desired type of the converted numpy array
    """
    if not isinstance(input_image, np.ndarray):
        if isinstance(input_image, torch.Tensor):  # get the data from a variable
            image_tensor = input_image.data
        else:
            return input_image
        image_numpy = image_tensor.clamp(0.0, 1.0).cpu().float().numpy()  # convert it into a numpy array
        if image_numpy.shape[0] == 1:  # grayscale to RGB
            image_numpy = np.tile(image_numpy, (3, 1, 1))
        image_numpy = np.transpose(image_numpy, (1, 2, 0)) * 255.0  # post-processing: transpose and scaling
    else:  # if it is a numpy array, do nothing
        image_numpy = input_image
    return image_numpy.astype(imtype)


def diagnose_network(net, name='network'):
    """Calculate and print the mean of average absolute(gradients)

    Parameters:
        net (torch network) -- Torch network
        name (str)          -- the name of the network
    """
    mean = 0.0
    count = 0
    for param in net.parameters():
        if param.grad is not None:
            mean += torch.mean(torch.abs(param.grad.data))
            count += 1
    if count > 0:
        mean = mean / count
    print(name)
    print(mean)


def save_image(image_numpy, image_path, aspect_ratio=1.0):
    """Save a numpy image to the disk

    Parameters:
        image_numpy (numpy array) -- input numpy array
        image_path (str)          -- the path of the image
    """

    image_pil = Image.fromarray(image_numpy)
    h, w, _ = image_numpy.shape

    # Note: PIL's resize expects (width, height), so these branches assume
    # square inputs, as in the original code.
    if aspect_ratio is None:
        pass
    elif aspect_ratio > 1.0:
        image_pil = image_pil.resize((h, int(w * aspect_ratio)), Image.BICUBIC)
    elif aspect_ratio < 1.0:
        image_pil = image_pil.resize((int(h / aspect_ratio), w), Image.BICUBIC)
    image_pil.save(image_path)


def print_numpy(x, val=True, shp=False):
    """Print the mean, min, max, median, std, and size of a numpy array

    Parameters:
        val (bool) -- if print the values of the numpy array
        shp (bool) -- if print the shape of the numpy array
    """
    x = x.astype(np.float64)
    if shp:
        print('shape,', x.shape)
    if val:
        x = x.flatten()
        print('mean = %3.3f, min = %3.3f, max = %3.3f, median = %3.3f, std=%3.3f' % (
            np.mean(x), np.min(x), np.max(x), np.median(x), np.std(x)))


def mkdirs(paths):
    """create empty directories if they don't exist

    Parameters:
        paths (str list) -- a list of directory paths
    """
    if isinstance(paths, list) and not isinstance(paths, str):
        for path in paths:
            mkdir(path)
    else:
        mkdir(paths)


def mkdir(path):
    """create a single empty directory if it didn't exist

    Parameters:
        path (str) -- a single directory path
    """
    if not os.path.exists(path):
        os.makedirs(path)


def correct_resize_label(t, size):
    device = t.device
    t = t.detach().cpu()
    resized = []
    for i in range(t.size(0)):
        one_t = t[i, :1]
        one_np = np.transpose(one_t.numpy().astype(np.uint8), (1, 2, 0))
        one_np = one_np[:, :, 0]
        one_image = Image.fromarray(one_np).resize(size, Image.NEAREST)
        resized_t = torch.from_numpy(np.array(one_image)).long()
        resized.append(resized_t)
    return torch.stack(resized, dim=0).to(device)


def correct_resize(t, size, mode=Image.BICUBIC):
    device = t.device
    t = t.detach().cpu()
    resized = []
    for i in range(t.size(0)):
        one_t = t[i:i + 1]
        one_image = Image.fromarray(tensor2im(one_t)).resize(size, mode)  # honor the `mode` argument
        resized_t = torchvision.transforms.functional.to_tensor(one_image) * 2 - 1.0
        resized.append(resized_t)
    return torch.stack(resized, dim=0).to(device)


def draw_landmarks(img, landmark, color='r', step=2):
    """
    Return:
        img -- numpy.array, (B, H, W, 3) img with landmark, RGB order, range (0, 255)


    Parameters:
        img      -- numpy.array, (B, H, W, 3), RGB order, range (0, 255)
        landmark -- numpy.array, (B, 68, 2), y direction is opposite to v direction
        color    -- str, 'r' or 'b' (red or blue)
    """
    if color == 'r':
        c = np.array([255., 0, 0])
    else:
        c = np.array([0, 0, 255.])

    _, H, W, _ = img.shape
    img, landmark = img.copy(), landmark.copy()
    landmark[..., 1] = H - 1 - landmark[..., 1]
    landmark = np.round(landmark).astype(np.int32)
    for i in range(landmark.shape[1]):
        x, y = landmark[:, i, 0], landmark[:, i, 1]
        for j in range(-step, step):
            for k in range(-step, step):
                u = np.clip(x + j, 0, W - 1)
                v = np.clip(y + k, 0, H - 1)
                for m in range(landmark.shape[0]):
                    img[m, v[m], u[m]] = c
    return img
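The `str2bool` helper above follows a common argparse idiom: accepting free-form boolean spellings on the command line via `type=`. A minimal self-contained sketch of that pattern (the parser and the `--use_gpu` flag are illustrative, not part of the original file):

```python
import argparse


def str2bool(v):
    # Accept real booleans as-is, and common textual spellings otherwise.
    if isinstance(v, bool):
        return v
    if v.lower() in ("yes", "true", "t", "y", "1"):
        return True
    if v.lower() in ("no", "false", "f", "n", "0"):
        return False
    raise argparse.ArgumentTypeError("Boolean value expected.")


parser = argparse.ArgumentParser()
# `type=str2bool` lets users write --use_gpu yes / --use_gpu 0 and so on.
parser.add_argument("--use_gpu", type=str2bool, default=False)

print(parser.parse_args(["--use_gpu", "yes"]).use_gpu)  # True
print(parser.parse_args(["--use_gpu", "0"]).use_gpu)    # False
```

The advantage over `type=bool` is that `bool("False")` is truthy, so a plain `bool` converter silently accepts any non-empty string as `True`; the custom converter rejects unrecognized spellings with a proper argparse error.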
spaces/801artistry/RVC801/train/utils.py
DELETED
@@ -1,500 +0,0 @@
import os, traceback
import glob
import sys
import argparse
import logging
import json
import subprocess
import numpy as np
from scipy.io.wavfile import read
import torch

MATPLOTLIB_FLAG = False

logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
logger = logging


def load_checkpoint_d(checkpoint_path, combd, sbd, optimizer=None, load_opt=1):
    assert os.path.isfile(checkpoint_path)
    checkpoint_dict = torch.load(checkpoint_path, map_location="cpu")

    ##################
    def go(model, bkey):
        saved_state_dict = checkpoint_dict[bkey]
        if hasattr(model, "module"):
            state_dict = model.module.state_dict()
        else:
            state_dict = model.state_dict()
        new_state_dict = {}
        for k, v in state_dict.items():  # iterate over the shapes the model expects
            try:
                new_state_dict[k] = saved_state_dict[k]
                if saved_state_dict[k].shape != state_dict[k].shape:
                    print(
                        "shape-%s-mismatch|need-%s|get-%s"
                        % (k, state_dict[k].shape, saved_state_dict[k].shape)
                    )
                    raise KeyError
            except Exception:
                # logger.info(traceback.format_exc())
                logger.info("%s is not in the checkpoint" % k)  # key missing from the pretrained weights
                new_state_dict[k] = v  # keep the model's own randomly initialized value
        if hasattr(model, "module"):
            model.module.load_state_dict(new_state_dict, strict=False)
        else:
            model.load_state_dict(new_state_dict, strict=False)

    go(combd, "combd")
    go(sbd, "sbd")
    #############
    logger.info("Loaded model weights")

    iteration = checkpoint_dict["iteration"]
    learning_rate = checkpoint_dict["learning_rate"]
    if (
        optimizer is not None and load_opt == 1
    ):  # If the optimizer state cannot be loaded (e.g. it is empty), it is re-initialized;
        # this may also affect the LR schedule, so the train script catches errors at its outermost level.
        # try:
        optimizer.load_state_dict(checkpoint_dict["optimizer"])
        # except:
        #     traceback.print_exc()
    logger.info("Loaded checkpoint '{}' (epoch {})".format(checkpoint_path, iteration))
    return combd, sbd, optimizer, learning_rate, iteration  # the loaded discriminators plus training state


# def load_checkpoint(checkpoint_path, model, optimizer=None):
#     assert os.path.isfile(checkpoint_path)
#     checkpoint_dict = torch.load(checkpoint_path, map_location='cpu')
#     iteration = checkpoint_dict['iteration']
#     learning_rate = checkpoint_dict['learning_rate']
#     if optimizer is not None:
#         optimizer.load_state_dict(checkpoint_dict['optimizer'])
#     saved_state_dict = checkpoint_dict['model']
#
#     if hasattr(model, 'module'):
#         state_dict = model.module.state_dict()
#     else:
#         state_dict = model.state_dict()
#     new_state_dict = {}
#     for k, v in state_dict.items():
#         try:
#             new_state_dict[k] = saved_state_dict[k]
#         except:
#             logger.info("%s is not in the checkpoint" % k)
#             new_state_dict[k] = v
#     if hasattr(model, 'module'):
#         model.module.load_state_dict(new_state_dict)
#     else:
#         model.load_state_dict(new_state_dict)
#     logger.info("Loaded checkpoint '{}' (epoch {})".format(
#         checkpoint_path, iteration))
#     return model, optimizer, learning_rate, iteration
def load_checkpoint(checkpoint_path, model, optimizer=None, load_opt=1):
    assert os.path.isfile(checkpoint_path)
    checkpoint_dict = torch.load(checkpoint_path, map_location="cpu")

    saved_state_dict = checkpoint_dict["model"]
    if hasattr(model, "module"):
        state_dict = model.module.state_dict()
    else:
        state_dict = model.state_dict()
    new_state_dict = {}
    for k, v in state_dict.items():  # iterate over the shapes the model expects
        try:
            new_state_dict[k] = saved_state_dict[k]
            if saved_state_dict[k].shape != state_dict[k].shape:
                print(
                    "shape-%s-mismatch|need-%s|get-%s"
                    % (k, state_dict[k].shape, saved_state_dict[k].shape)
                )
                raise KeyError
        except Exception:
            # logger.info(traceback.format_exc())
            logger.info("%s is not in the checkpoint" % k)  # key missing from the pretrained weights
            new_state_dict[k] = v  # keep the model's own randomly initialized value
    if hasattr(model, "module"):
        model.module.load_state_dict(new_state_dict, strict=False)
    else:
        model.load_state_dict(new_state_dict, strict=False)
    logger.info("Loaded model weights")

    iteration = checkpoint_dict["iteration"]
    learning_rate = checkpoint_dict["learning_rate"]
    if (
        optimizer is not None and load_opt == 1
    ):  # If the optimizer state cannot be loaded (e.g. it is empty), it is re-initialized;
        # this may also affect the LR schedule, so the train script catches errors at its outermost level.
        # try:
        optimizer.load_state_dict(checkpoint_dict["optimizer"])
        # except:
        #     traceback.print_exc()
    logger.info("Loaded checkpoint '{}' (epoch {})".format(checkpoint_path, iteration))
    return model, optimizer, learning_rate, iteration
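The loading loop above performs a shape-tolerant merge: every key the model expects is filled from the checkpoint when the key exists there with a matching shape, and falls back to the model's own (randomly initialized) value otherwise. A minimal sketch of the same pattern using plain dicts in place of PyTorch state dicts (shapes shown as tuples; all names here are illustrative):

```python
def merge_state_dicts(model_state, saved_state):
    """Fill each key the model expects from the checkpoint when the key is
    present with a matching shape; otherwise keep the model's own value."""
    merged = {}
    for key, own_value in model_state.items():
        saved = saved_state.get(key)
        if saved is not None and saved["shape"] == own_value["shape"]:
            merged[key] = saved        # take the pretrained weight
        else:
            merged[key] = own_value    # missing key or shape mismatch: keep own value
    return merged


model = {
    "enc.weight": {"shape": (256, 80), "src": "init"},
    "dec.weight": {"shape": (80, 256), "src": "init"},
}
ckpt = {
    "enc.weight": {"shape": (256, 80), "src": "ckpt"},
    "dec.weight": {"shape": (80, 512), "src": "ckpt"},  # mismatched shape
}
merged = merge_state_dicts(model, ckpt)
print(merged["enc.weight"]["src"], merged["dec.weight"]["src"])  # ckpt init
```

Combined with `load_state_dict(..., strict=False)`, this lets a partially compatible pretrained checkpoint initialize whatever layers it can while the rest keep their fresh initialization.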

def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path):
    logger.info(
        "Saving model and optimizer state at epoch {} to {}".format(
            iteration, checkpoint_path
        )
    )
    if hasattr(model, "module"):
        state_dict = model.module.state_dict()
    else:
        state_dict = model.state_dict()
    torch.save(
        {
            "model": state_dict,
            "iteration": iteration,
            "optimizer": optimizer.state_dict(),
            "learning_rate": learning_rate,
        },
        checkpoint_path,
    )


def save_checkpoint_d(combd, sbd, optimizer, learning_rate, iteration, checkpoint_path):
    logger.info(
        "Saving model and optimizer state at epoch {} to {}".format(
            iteration, checkpoint_path
        )
    )
    if hasattr(combd, "module"):
        state_dict_combd = combd.module.state_dict()
    else:
        state_dict_combd = combd.state_dict()
    if hasattr(sbd, "module"):
        state_dict_sbd = sbd.module.state_dict()
    else:
        state_dict_sbd = sbd.state_dict()
    torch.save(
        {
            "combd": state_dict_combd,
            "sbd": state_dict_sbd,
            "iteration": iteration,
            "optimizer": optimizer.state_dict(),
            "learning_rate": learning_rate,
        },
        checkpoint_path,
    )


def summarize(
    writer,
    global_step,
    scalars={},
    histograms={},
    images={},
    audios={},
    audio_sampling_rate=22050,
):
    for k, v in scalars.items():
        writer.add_scalar(k, v, global_step)
    for k, v in histograms.items():
        writer.add_histogram(k, v, global_step)
    for k, v in images.items():
        writer.add_image(k, v, global_step, dataformats="HWC")
    for k, v in audios.items():
        writer.add_audio(k, v, global_step, audio_sampling_rate)


def latest_checkpoint_path(dir_path, regex="G_*.pth"):
    f_list = glob.glob(os.path.join(dir_path, regex))
    f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f))))
    x = f_list[-1]
    print(x)
    return x


def plot_spectrogram_to_numpy(spectrogram):
    global MATPLOTLIB_FLAG
    if not MATPLOTLIB_FLAG:
        import matplotlib

        matplotlib.use("Agg")
        MATPLOTLIB_FLAG = True
        mpl_logger = logging.getLogger("matplotlib")
        mpl_logger.setLevel(logging.WARNING)
    import matplotlib.pylab as plt
    import numpy as np

    fig, ax = plt.subplots(figsize=(10, 2))
    im = ax.imshow(spectrogram, aspect="auto", origin="lower", interpolation="none")
    plt.colorbar(im, ax=ax)
    plt.xlabel("Frames")
    plt.ylabel("Channels")
    plt.tight_layout()

    fig.canvas.draw()
    # np.fromstring on binary data is deprecated; np.frombuffer is the supported equivalent
    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
    data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
    plt.close()
    return data


def plot_alignment_to_numpy(alignment, info=None):
    global MATPLOTLIB_FLAG
    if not MATPLOTLIB_FLAG:
        import matplotlib

        matplotlib.use("Agg")
        MATPLOTLIB_FLAG = True
        mpl_logger = logging.getLogger("matplotlib")
        mpl_logger.setLevel(logging.WARNING)
    import matplotlib.pylab as plt
    import numpy as np

    fig, ax = plt.subplots(figsize=(6, 4))
    im = ax.imshow(
        alignment.transpose(), aspect="auto", origin="lower", interpolation="none"
    )
    fig.colorbar(im, ax=ax)
    xlabel = "Decoder timestep"
    if info is not None:
        xlabel += "\n\n" + info
    plt.xlabel(xlabel)
    plt.ylabel("Encoder timestep")
    plt.tight_layout()

    fig.canvas.draw()
    # np.fromstring on binary data is deprecated; np.frombuffer is the supported equivalent
    data = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
    data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,))
    plt.close()
    return data


def load_wav_to_torch(full_path):
    sampling_rate, data = read(full_path)
    return torch.FloatTensor(data.astype(np.float32)), sampling_rate
-
|
273 |
-
def load_filepaths_and_text(filename, split="|"):
|
274 |
-
with open(filename, encoding='utf-8') as f:
|
275 |
-
filepaths_and_text = [line.strip().split(split) for line in f]
|
276 |
-
filepaths_and_text = [item for item in filepaths_and_text if len(item) == 5] # ensure there are 5 items.
|
277 |
-
return filepaths_and_text
|
278 |
-
|
279 |
-
|
280 |
-
def get_hparams(init=True):
|
281 |
-
"""
|
282 |
-
todo:
|
283 |
-
结尾七人组:
|
284 |
-
保存频率、总epoch done
|
285 |
-
bs done
|
286 |
-
pretrainG、pretrainD done
|
287 |
-
卡号:os.en["CUDA_VISIBLE_DEVICES"] done
|
288 |
-
if_latest done
|
289 |
-
模型:if_f0 done
|
290 |
-
采样率:自动选择config done
|
291 |
-
是否缓存数据集进GPU:if_cache_data_in_gpu done
|
292 |
-
|
293 |
-
-m:
|
294 |
-
自动决定training_files路径,改掉train_nsf_load_pretrain.py里的hps.data.training_files done
|
295 |
-
-c不要了
|
296 |
-
"""
|
297 |
-
parser = argparse.ArgumentParser()
|
298 |
-
# parser.add_argument('-c', '--config', type=str, default="configs/40k.json",help='JSON file for configuration')
|
299 |
-
parser.add_argument(
|
300 |
-
"-se",
|
301 |
-
"--save_every_epoch",
|
302 |
-
type=int,
|
303 |
-
required=True,
|
304 |
-
help="checkpoint save frequency (epoch)",
|
305 |
-
)
|
306 |
-
parser.add_argument(
|
307 |
-
"-te", "--total_epoch", type=int, required=True, help="total_epoch"
|
308 |
-
)
|
309 |
-
parser.add_argument(
|
310 |
-
"-pg", "--pretrainG", type=str, default="", help="Pretrained Discriminator path"
|
311 |
-
)
|
312 |
-
parser.add_argument(
|
313 |
-
"-pd", "--pretrainD", type=str, default="", help="Pretrained Generator path"
|
314 |
-
)
|
315 |
-
parser.add_argument("-g", "--gpus", type=str, default="0", help="split by -")
|
316 |
-
parser.add_argument(
|
317 |
-
"-bs", "--batch_size", type=int, required=True, help="batch size"
|
318 |
-
)
|
319 |
-
parser.add_argument(
|
320 |
-
"-e", "--experiment_dir", type=str, required=True, help="experiment dir"
|
321 |
-
) # -m
|
322 |
-
parser.add_argument(
|
323 |
-
"-sr", "--sample_rate", type=str, required=True, help="sample rate, 32k/40k/48k"
|
324 |
-
)
|
325 |
-
parser.add_argument(
|
326 |
-
"-sw",
|
327 |
-
"--save_every_weights",
|
328 |
-
type=str,
|
329 |
-
default="0",
|
330 |
-
help="save the extracted model in weights directory when saving checkpoints",
|
331 |
-
)
|
332 |
-
parser.add_argument(
|
333 |
-
"-v", "--version", type=str, required=True, help="model version"
|
334 |
-
)
|
335 |
-
parser.add_argument(
|
336 |
-
"-f0",
|
337 |
-
"--if_f0",
|
338 |
-
type=int,
|
339 |
-
required=True,
|
340 |
-
help="use f0 as one of the inputs of the model, 1 or 0",
|
341 |
-
)
|
342 |
-
parser.add_argument(
|
343 |
-
"-l",
|
344 |
-
"--if_latest",
|
345 |
-
type=int,
|
346 |
-
required=True,
|
347 |
-
help="if only save the latest G/D pth file, 1 or 0",
|
348 |
-
)
|
349 |
-
parser.add_argument(
|
350 |
-
"-c",
|
351 |
-
"--if_cache_data_in_gpu",
|
352 |
-
type=int,
|
353 |
-
required=True,
|
354 |
-
help="if caching the dataset in GPU memory, 1 or 0",
|
355 |
-
)
|
356 |
-
parser.add_argument(
|
357 |
-
"-li", "--log_interval", type=int, required=True, help="log interval"
|
358 |
-
)
|
359 |
-
|
360 |
-
args = parser.parse_args()
|
361 |
-
name = args.experiment_dir
|
362 |
-
experiment_dir = os.path.join("./logs", args.experiment_dir)
|
363 |
-
|
364 |
-
if not os.path.exists(experiment_dir):
|
365 |
-
os.makedirs(experiment_dir)
|
366 |
-
|
367 |
-
if args.version == "v1" or args.sample_rate == "40k":
|
368 |
-
config_path = "configs/%s.json" % args.sample_rate
|
369 |
-
else:
|
370 |
-
config_path = "configs/%s_v2.json" % args.sample_rate
|
371 |
-
config_save_path = os.path.join(experiment_dir, "config.json")
|
372 |
-
if init:
|
373 |
-
with open(config_path, "r") as f:
|
374 |
-
data = f.read()
|
375 |
-
with open(config_save_path, "w") as f:
|
376 |
-
f.write(data)
|
377 |
-
else:
|
378 |
-
with open(config_save_path, "r") as f:
|
379 |
-
data = f.read()
|
380 |
-
config = json.loads(data)
|
381 |
-
|
382 |
-
hparams = HParams(**config)
|
383 |
-
hparams.model_dir = hparams.experiment_dir = experiment_dir
|
384 |
-
hparams.save_every_epoch = args.save_every_epoch
|
385 |
-
hparams.name = name
|
386 |
-
hparams.total_epoch = args.total_epoch
|
387 |
-
hparams.pretrainG = args.pretrainG
|
388 |
-
hparams.pretrainD = args.pretrainD
|
389 |
-
hparams.version = args.version
|
390 |
-
hparams.gpus = args.gpus
|
391 |
-
hparams.train.batch_size = args.batch_size
|
392 |
-
hparams.sample_rate = args.sample_rate
|
393 |
-
hparams.if_f0 = args.if_f0
|
394 |
-
hparams.if_latest = args.if_latest
|
395 |
-
hparams.save_every_weights = args.save_every_weights
|
396 |
-
hparams.if_cache_data_in_gpu = args.if_cache_data_in_gpu
|
397 |
-
hparams.data.training_files = "%s/filelist.txt" % experiment_dir
|
398 |
-
|
399 |
-
hparams.train.log_interval = args.log_interval
|
400 |
-
|
401 |
-
# Update log_interval in the 'train' section of the config dictionary
|
402 |
-
config["train"]["log_interval"] = args.log_interval
|
403 |
-
|
404 |
-
# Save the updated config back to the config_save_path
|
405 |
-
with open(config_save_path, "w") as f:
|
406 |
-
json.dump(config, f, indent=4)
|
407 |
-
|
408 |
-
return hparams
|
409 |
-
|
410 |
-
|
411 |
-
def get_hparams_from_dir(model_dir):
|
412 |
-
config_save_path = os.path.join(model_dir, "config.json")
|
413 |
-
with open(config_save_path, "r") as f:
|
414 |
-
data = f.read()
|
415 |
-
config = json.loads(data)
|
416 |
-
|
417 |
-
hparams = HParams(**config)
|
418 |
-
hparams.model_dir = model_dir
|
419 |
-
return hparams
|
420 |
-
|
421 |
-
|
422 |
-
def get_hparams_from_file(config_path):
|
423 |
-
with open(config_path, "r") as f:
|
424 |
-
data = f.read()
|
425 |
-
config = json.loads(data)
|
426 |
-
|
427 |
-
hparams = HParams(**config)
|
428 |
-
return hparams
|
429 |
-
|
430 |
-
|
431 |
-
def check_git_hash(model_dir):
|
432 |
-
source_dir = os.path.dirname(os.path.realpath(__file__))
|
433 |
-
if not os.path.exists(os.path.join(source_dir, ".git")):
|
434 |
-
logger.warn(
|
435 |
-
"{} is not a git repository, therefore hash value comparison will be ignored.".format(
|
436 |
-
source_dir
|
437 |
-
)
|
438 |
-
)
|
439 |
-
return
|
440 |
-
|
441 |
-
cur_hash = subprocess.getoutput("git rev-parse HEAD")
|
442 |
-
|
443 |
-
path = os.path.join(model_dir, "githash")
|
444 |
-
if os.path.exists(path):
|
445 |
-
saved_hash = open(path).read()
|
446 |
-
if saved_hash != cur_hash:
|
447 |
-
logger.warn(
|
448 |
-
"git hash values are different. {}(saved) != {}(current)".format(
|
449 |
-
saved_hash[:8], cur_hash[:8]
|
450 |
-
)
|
451 |
-
)
|
452 |
-
else:
|
453 |
-
open(path, "w").write(cur_hash)
|
454 |
-
|
455 |
-
|
456 |
-
def get_logger(model_dir, filename="train.log"):
|
457 |
-
global logger
|
458 |
-
logger = logging.getLogger(os.path.basename(model_dir))
|
459 |
-
logger.setLevel(logging.DEBUG)
|
460 |
-
|
461 |
-
formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s")
|
462 |
-
if not os.path.exists(model_dir):
|
463 |
-
os.makedirs(model_dir)
|
464 |
-
h = logging.FileHandler(os.path.join(model_dir, filename))
|
465 |
-
h.setLevel(logging.DEBUG)
|
466 |
-
h.setFormatter(formatter)
|
467 |
-
logger.addHandler(h)
|
468 |
-
return logger
|
469 |
-
|
470 |
-
|
471 |
-
class HParams:
|
472 |
-
def __init__(self, **kwargs):
|
473 |
-
for k, v in kwargs.items():
|
474 |
-
if type(v) == dict:
|
475 |
-
v = HParams(**v)
|
476 |
-
self[k] = v
|
477 |
-
|
478 |
-
def keys(self):
|
479 |
-
return self.__dict__.keys()
|
480 |
-
|
481 |
-
def items(self):
|
482 |
-
return self.__dict__.items()
|
483 |
-
|
484 |
-
def values(self):
|
485 |
-
return self.__dict__.values()
|
486 |
-
|
487 |
-
def __len__(self):
|
488 |
-
return len(self.__dict__)
|
489 |
-
|
490 |
-
def __getitem__(self, key):
|
491 |
-
return getattr(self, key)
|
492 |
-
|
493 |
-
def __setitem__(self, key, value):
|
494 |
-
return setattr(self, key, value)
|
495 |
-
|
496 |
-
def __contains__(self, key):
|
497 |
-
return key in self.__dict__
|
498 |
-
|
499 |
-
def __repr__(self):
|
500 |
-
return self.__dict__.__repr__()
|
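The deleted config plumbing above revolves around the `HParams` class, which turns nested JSON config dicts into attribute-style objects (so `hparams.train.batch_size` works). A minimal standalone sketch of that idea, with a hypothetical config dict shaped like the `configs/*.json` files `get_hparams` loads:

```python
class HParams:
    """Minimal sketch of the HParams wrapper above: nested config
    dicts become nested HParams objects, readable via attributes
    or dict-style indexing."""

    def __init__(self, **kwargs):
        for k, v in kwargs.items():
            # recurse into sub-dicts so nested sections are also HParams
            self.__dict__[k] = HParams(**v) if isinstance(v, dict) else v

    def __getitem__(self, key):
        return getattr(self, key)

    def __repr__(self):
        return repr(self.__dict__)


# Hypothetical config, shaped like the JSON loaded by get_hparams.
config = {"train": {"batch_size": 4, "log_interval": 200}, "sample_rate": "40k"}
hp = HParams(**config)
print(hp.train.batch_size)  # nested attribute access -> 4
print(hp["sample_rate"])    # dict-style access still works -> 40k
```

This is why `get_hparams` can write `hparams.train.batch_size = args.batch_size` after constructing `HParams(**config)`: the `train` section of the JSON became an `HParams` instance itself.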
spaces/A00001/bingothoo/src/lib/bots/bing/sr.ts
DELETED
@@ -1,106 +0,0 @@
-// @ts-ignore
-const SpeechRecognitionPolyfill: typeof webkitSpeechRecognition = typeof window !== 'undefined' ? (
-  // @ts-ignore
-  window.SpeechRecognition ||
-  window.webkitSpeechRecognition ||
-  // @ts-ignore
-  window.mozSpeechRecognition ||
-  // @ts-ignore
-  window.msSpeechRecognition ||
-  // @ts-ignore
-  window.oSpeechRecognition
-) as typeof webkitSpeechRecognition : undefined
-
-type subscriber = (msg: string, command?: string) => void
-
-export class SR {
-  recognition?: SpeechRecognition
-  onchange?: subscriber
-  transcript: boolean = false
-  listening: boolean = false
-  private commandsRe?: RegExp
-  constructor(commands: string[]) {
-    this.recognition = SpeechRecognitionPolyfill ? new SpeechRecognitionPolyfill() : undefined
-    if (!this.recognition) {
-      return
-    }
-    this.configuration('zh-CN')
-    if (commands.length) {
-      this.commandsRe = new RegExp(`^(${commands.join('|')})。?$`)
-    }
-    this.recognition.onresult = this.speechRecognition
-    this.recognition.onerror = (err) => {
-      console.log('err', err.error)
-      this.stop()
-    }
-    this.recognition.onend = () => {
-      if (this.recognition && this.listening) {
-        this.recognition.start()
-      }
-    }
-  }
-
-  speechRecognition = (event: SpeechRecognitionEvent) => {
-    if (!this.listening) return
-    for (var i = event.resultIndex; i < event.results.length; i++) {
-      let result = event.results[i]
-      if (result.isFinal) {
-        var alt = result[0]
-        const text = alt.transcript.trim()
-        if (this.commandsRe && this.commandsRe.test(text)) {
-          return this.onchange?.('', RegExp.$1)
-        }
-        if (!this.transcript) return
-        this.onchange?.(text)
-      }
-    }
-  }
-
-  private configuration = async (lang: string = 'zh-CN') => {
-    return new Promise((resolve) => {
-      if (this.recognition) {
-        this.recognition.continuous = true
-        this.recognition.lang = lang
-        this.recognition.onstart = resolve
-      }
-    })
-  }
-
-  start = async () => {
-    if (this.recognition && !this.listening) {
-      await this.recognition.start()
-      this.transcript = true
-      this.listening = true
-    }
-  }
-
-  stop = () => {
-    if (this.recognition) {
-      this.recognition.stop()
-      this.transcript = false
-      this.listening = false
-    }
-  }
-
-  pause = () => {
-    if (this.recognition) {
-      this.transcript = false
-    }
-  }
-
-  resume = () => {
-    if (this.recognition) {
-      this.transcript = true
-    }
-  }
-
-  abort = () => {
-    if (this.recognition && this.transcript) {
-      this.recognition.abort()
-      this.transcript = false
-      this.listening = false
-    }
-  }
-}
-
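The deleted `SR` class joins its voice-command list into the pattern `^(cmd1|cmd2)。?$`, so a trailing Chinese full stop appended by the recognizer is tolerated while any other extra text disqualifies the match. The same matching logic, sketched in Python for illustration (the command strings here are hypothetical):

```python
import re

# Hypothetical voice commands; the TS code builds ^(a|b)。?$ from its list.
commands = ["停止", "重新开始"]
commands_re = re.compile("^({})。?$".format("|".join(commands)))

m = commands_re.match("停止。")
print(m.group(1))                  # the group strips the trailing 。
print(commands_re.match("停止了"))  # None: extra text is not a command
```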
spaces/AI-Hobbyist/Hoyo-RVC/train/losses.py
DELETED
@@ -1,59 +0,0 @@
-import torch
-from torch.nn import functional as F
-
-
-def feature_loss(fmap_r, fmap_g):
-    loss = 0
-    for dr, dg in zip(fmap_r, fmap_g):
-        for rl, gl in zip(dr, dg):
-            rl = rl.float().detach()
-            gl = gl.float()
-            loss += torch.mean(torch.abs(rl - gl))
-
-    return loss * 2
-
-
-def discriminator_loss(disc_real_outputs, disc_generated_outputs):
-    loss = 0
-    r_losses = []
-    g_losses = []
-    for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
-        dr = dr.float()
-        dg = dg.float()
-        r_loss = torch.mean((1 - dr) ** 2)
-        g_loss = torch.mean(dg**2)
-        loss += r_loss + g_loss
-        r_losses.append(r_loss.item())
-        g_losses.append(g_loss.item())
-
-    return loss, r_losses, g_losses
-
-
-def generator_loss(disc_outputs):
-    loss = 0
-    gen_losses = []
-    for dg in disc_outputs:
-        dg = dg.float()
-        l = torch.mean((1 - dg) ** 2)
-        gen_losses.append(l)
-        loss += l
-
-    return loss, gen_losses
-
-
-def kl_loss(z_p, logs_q, m_p, logs_p, z_mask):
-    """
-    z_p, logs_q: [b, h, t_t]
-    m_p, logs_p: [b, h, t_t]
-    """
-    z_p = z_p.float()
-    logs_q = logs_q.float()
-    m_p = m_p.float()
-    logs_p = logs_p.float()
-    z_mask = z_mask.float()
-
-    kl = logs_p - logs_q - 0.5
-    kl += 0.5 * ((z_p - m_p) ** 2) * torch.exp(-2.0 * logs_p)
-    kl = torch.sum(kl * z_mask)
-    l = kl / torch.sum(z_mask)
-    return l
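The `discriminator_loss` and `generator_loss` functions above implement least-squares GAN objectives: real scores are pushed toward 1, fake scores toward 0 for the discriminator, and toward 1 for the generator. A torch-free sketch of the same arithmetic on plain floats, to make the fixed points visible:

```python
def mean(xs):
    return sum(xs) / len(xs)

def discriminator_loss(real_scores, fake_scores):
    # LSGAN: real scores penalized toward 1, fake scores toward 0
    r_loss = mean([(1 - r) ** 2 for r in real_scores])
    g_loss = mean([f ** 2 for f in fake_scores])
    return r_loss + g_loss

def generator_loss(fake_scores):
    # the generator wants the discriminator to score fakes as 1
    return mean([(1 - f) ** 2 for f in fake_scores])

print(discriminator_loss([1.0, 1.0], [0.0, 0.0]))  # 0.0 at D's optimum
print(generator_loss([1.0]))                        # 0.0 when D is fully fooled
```

The torch versions additionally return per-discriminator loss lists (`r_losses`, `g_losses`, `gen_losses`) for logging, but the scalar objective is the same.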
spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/losses_audio/vggishish/logger.py
DELETED
@@ -1,87 +0,0 @@
-import logging
-import os
-import time
-from shutil import copytree, ignore_patterns
-
-import torch
-from omegaconf import OmegaConf
-from torch.utils.tensorboard import SummaryWriter, summary
-
-
-class LoggerWithTBoard(SummaryWriter):
-
-    def __init__(self, cfg):
-        # current time stamp and experiment log directory
-        self.start_time = time.strftime('%y-%m-%dT%H-%M-%S', time.localtime())
-        self.logdir = os.path.join(cfg.logdir, self.start_time)
-        # init tboard
-        super().__init__(self.logdir)
-        # backup the cfg
-        OmegaConf.save(cfg, os.path.join(self.log_dir, 'cfg.yaml'))
-        # backup the code state
-        if cfg.log_code_state:
-            dest_dir = os.path.join(self.logdir, 'code')
-            copytree(os.getcwd(), dest_dir, ignore=ignore_patterns(*cfg.patterns_to_ignore))
-
-        # init logger which handles printing and logging mostly same things to the log file
-        self.print_logger = logging.getLogger('main')
-        self.print_logger.setLevel(logging.INFO)
-        msgfmt = '[%(levelname)s] %(asctime)s - %(name)s \n %(message)s'
-        datefmt = '%d %b %Y %H:%M:%S'
-        formatter = logging.Formatter(msgfmt, datefmt)
-        # stdout
-        sh = logging.StreamHandler()
-        sh.setLevel(logging.DEBUG)
-        sh.setFormatter(formatter)
-        self.print_logger.addHandler(sh)
-        # log file
-        fh = logging.FileHandler(os.path.join(self.log_dir, 'log.txt'))
-        fh.setLevel(logging.INFO)
-        fh.setFormatter(formatter)
-        self.print_logger.addHandler(fh)
-
-        self.print_logger.info(f'Saving logs and checkpoints @ {self.logdir}')
-
-    def log_param_num(self, model):
-        param_num = sum(p.numel() for p in model.parameters() if p.requires_grad)
-        self.print_logger.info(f'The number of parameters: {param_num/1e+6:.3f} mil')
-        self.add_scalar('num_params', param_num, 0)
-        return param_num
-
-    def log_iter_loss(self, loss, iter, phase):
-        self.add_scalar(f'{phase}/loss_iter', loss, iter)
-
-    def log_epoch_loss(self, loss, epoch, phase):
-        self.add_scalar(f'{phase}/loss', loss, epoch)
-        self.print_logger.info(f'{phase} ({epoch}): loss {loss:.3f};')
-
-    def log_epoch_metrics(self, metrics_dict, epoch, phase):
-        for metric, val in metrics_dict.items():
-            self.add_scalar(f'{phase}/{metric}', val, epoch)
-        metrics_dict = {k: round(v, 4) for k, v in metrics_dict.items()}
-        self.print_logger.info(f'{phase} ({epoch}) metrics: {metrics_dict};')
-
-    def log_test_metrics(self, metrics_dict, hparams_dict, best_epoch):
-        allowed_types = (int, float, str, bool, torch.Tensor)
-        hparams_dict = {k: v for k, v in hparams_dict.items() if isinstance(v, allowed_types)}
-        metrics_dict = {f'test/{k}': round(v, 4) for k, v in metrics_dict.items()}
-        exp, ssi, sei = summary.hparams(hparams_dict, metrics_dict)
-        self.file_writer.add_summary(exp)
-        self.file_writer.add_summary(ssi)
-        self.file_writer.add_summary(sei)
-        for k, v in metrics_dict.items():
-            self.add_scalar(k, v, best_epoch)
-        self.print_logger.info(f'test ({best_epoch}) metrics: {metrics_dict};')
-
-    def log_best_model(self, model, loss, epoch, optimizer, metrics_dict):
-        model_name = model.__class__.__name__
-        self.best_model_path = os.path.join(self.logdir, f'{model_name}-{self.start_time}.pt')
-        checkpoint = {
-            'loss': loss,
-            'metrics': metrics_dict,
-            'epoch': epoch,
-            'optimizer': optimizer.state_dict(),
-            'model': model.state_dict(),
-        }
-        torch.save(checkpoint, self.best_model_path)
-        self.print_logger.info(f'Saved model in {self.best_model_path}')
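`LoggerWithTBoard.__init__` above wires one `logging.Formatter` into both a stdout `StreamHandler` and a `FileHandler`, so console output and the `log.txt` file carry the same lines. A standalone sketch of that dual-handler setup (the logger name and directory here are illustrative, not the ones used above):

```python
import logging
import os
import tempfile

logdir = tempfile.mkdtemp()  # stand-in for the experiment log directory
logger = logging.getLogger("main-demo")
logger.setLevel(logging.INFO)

fmt = logging.Formatter("[%(levelname)s] %(name)s - %(message)s")

sh = logging.StreamHandler()       # console
sh.setFormatter(fmt)
fh = logging.FileHandler(os.path.join(logdir, "log.txt"))  # file copy
fh.setFormatter(fmt)
logger.addHandler(sh)
logger.addHandler(fh)

logger.info("Saving logs and checkpoints @ %s", logdir)
fh.close()  # flush so the file can be read back immediately

with open(os.path.join(logdir, "log.txt")) as f:
    line = f.read().strip()
print(line)
```

The class above additionally sets per-handler levels (DEBUG on the stream, INFO on the file), which lets the console show more than the persisted log.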
spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/diffusionmodules/util.py
DELETED
@@ -1,267 +0,0 @@
|
|
1 |
-
# adopted from
|
2 |
-
# https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py
|
3 |
-
# and
|
4 |
-
# https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py
|
5 |
-
# and
|
6 |
-
# https://github.com/openai/guided-diffusion/blob/0ba878e517b276c45d1195eb29f6f5f72659a05b/guided_diffusion/nn.py
|
7 |
-
#
|
8 |
-
# thanks!
|
9 |
-
|
10 |
-
|
11 |
-
import os
|
12 |
-
import math
|
13 |
-
import torch
|
14 |
-
import torch.nn as nn
|
15 |
-
import numpy as np
|
16 |
-
from einops import repeat
|
17 |
-
|
18 |
-
from ldm.util import instantiate_from_config
|
19 |
-
|
20 |
-
|
21 |
-
def make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3):
|
22 |
-
if schedule == "linear":
|
23 |
-
betas = (
|
24 |
-
torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2
|
25 |
-
)
|
26 |
-
|
27 |
-
elif schedule == "cosine":
|
28 |
-
timesteps = (
|
29 |
-
torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s
|
30 |
-
)
|
31 |
-
alphas = timesteps / (1 + cosine_s) * np.pi / 2
|
32 |
-
alphas = torch.cos(alphas).pow(2)
|
33 |
-
alphas = alphas / alphas[0]
|
34 |
-
betas = 1 - alphas[1:] / alphas[:-1]
|
35 |
-
betas = np.clip(betas, a_min=0, a_max=0.999)
|
36 |
-
|
37 |
-
elif schedule == "sqrt_linear":
|
38 |
-
betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64)
|
39 |
-
elif schedule == "sqrt":
|
40 |
-
betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5
|
41 |
-
else:
|
42 |
-
raise ValueError(f"schedule '{schedule}' unknown.")
|
43 |
-
return betas.numpy()
|
44 |
-
|
45 |
-
|
46 |
-
def make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True):
|
47 |
-
if ddim_discr_method == 'uniform':
|
48 |
-
c = num_ddpm_timesteps // num_ddim_timesteps
|
49 |
-
ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c)))
|
50 |
-
elif ddim_discr_method == 'quad':
|
51 |
-
ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int)
|
52 |
-
else:
|
53 |
-
raise NotImplementedError(f'There is no ddim discretization method called "{ddim_discr_method}"')
|
54 |
-
|
55 |
-
# assert ddim_timesteps.shape[0] == num_ddim_timesteps
|
56 |
-
# add one to get the final alpha values right (the ones from first scale to data during sampling)
|
57 |
-
steps_out = ddim_timesteps + 1
|
58 |
-
if verbose:
|
59 |
-
print(f'Selected timesteps for ddim sampler: {steps_out}')
|
60 |
-
return steps_out
|
61 |
-
|
62 |
-
|
63 |
-
def make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True):
|
64 |
-
# select alphas for computing the variance schedule
|
65 |
-
alphas = alphacums[ddim_timesteps]
|
66 |
-
alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist())
|
67 |
-
|
68 |
-
# according the the formula provided in https://arxiv.org/abs/2010.02502
|
69 |
-
sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev))
|
70 |
-
if verbose:
|
71 |
-
print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}')
|
72 |
-
print(f'For the chosen value of eta, which is {eta}, '
|
73 |
-
f'this results in the following sigma_t schedule for ddim sampler {sigmas}')
|
74 |
-
return sigmas, alphas, alphas_prev
|
75 |
-
|
76 |
-
|
77 |
-
def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999):
|
78 |
-
"""
|
79 |
-
Create a beta schedule that discretizes the given alpha_t_bar function,
|
80 |
-
which defines the cumulative product of (1-beta) over time from t = [0,1].
|
81 |
-
:param num_diffusion_timesteps: the number of betas to produce.
|
82 |
-
:param alpha_bar: a lambda that takes an argument t from 0 to 1 and
|
83 |
-
produces the cumulative product of (1-beta) up to that
|
84 |
-
part of the diffusion process.
|
85 |
-
:param max_beta: the maximum beta to use; use values lower than 1 to
|
86 |
-
prevent singularities.
|
87 |
-
"""
|
88 |
-
betas = []
|
89 |
-
for i in range(num_diffusion_timesteps):
|
90 |
-
t1 = i / num_diffusion_timesteps
|
91 |
-
t2 = (i + 1) / num_diffusion_timesteps
|
92 |
-
betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta))
|
93 |
-
return np.array(betas)
|
94 |
-
|
95 |
-
|
96 |
-
def extract_into_tensor(a, t, x_shape):
|
97 |
-
b, *_ = t.shape
|
98 |
-
out = a.gather(-1, t)
|
99 |
-
return out.reshape(b, *((1,) * (len(x_shape) - 1)))
|
100 |
-
|
101 |
-
|
102 |
-
def checkpoint(func, inputs, params, flag):
|
103 |
-
"""
|
104 |
-
Evaluate a function without caching intermediate activations, allowing for
|
105 |
-
reduced memory at the expense of extra compute in the backward pass.
|
106 |
-
:param func: the function to evaluate.
|
107 |
-
:param inputs: the argument sequence to pass to `func`.
|
108 |
-
:param params: a sequence of parameters `func` depends on but does not
|
109 |
-
explicitly take as arguments.
|
110 |
-
:param flag: if False, disable gradient checkpointing.
|
111 |
-
"""
|
112 |
-
if flag:
|
113 |
-
args = tuple(inputs) + tuple(params)
|
114 |
-
return CheckpointFunction.apply(func, len(inputs), *args)
|
115 |
-
else:
|
116 |
-
return func(*inputs)
|
117 |
-
|
118 |
-
|
119 |
-
class CheckpointFunction(torch.autograd.Function):
|
120 |
-
@staticmethod
|
121 |
-
def forward(ctx, run_function, length, *args):
|
122 |
-
ctx.run_function = run_function
|
123 |
-
ctx.input_tensors = list(args[:length])
|
124 |
-
ctx.input_params = list(args[length:])
|
125 |
-
|
126 |
-
with torch.no_grad():
|
127 |
-
output_tensors = ctx.run_function(*ctx.input_tensors)
|
128 |
-
return output_tensors
|
129 |
-
|
130 |
-
@staticmethod
|
131 |
-
def backward(ctx, *output_grads):
|
132 |
-
ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors]
|
133 |
-
with torch.enable_grad():
|
134 |
-
# Fixes a bug where the first op in run_function modifies the
|
135 |
-
# Tensor storage in place, which is not allowed for detach()'d
|
136 |
-
# Tensors.
|
137 |
-
shallow_copies = [x.view_as(x) for x in ctx.input_tensors]
|
138 |
-
output_tensors = ctx.run_function(*shallow_copies)
|
139 |
-
input_grads = torch.autograd.grad(
|
140 |
-
output_tensors,
|
141 |
-
ctx.input_tensors + ctx.input_params,
|
142 |
-
output_grads,
|
143 |
-
allow_unused=True,
|
144 |
-
)
|
145 |
-
del ctx.input_tensors
|
146 |
-
del ctx.input_params
|
147 |
-
del output_tensors
|
148 |
-
return (None, None) + input_grads
|
149 |
-
|
150 |
-
|
151 |
-
def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False):
|
152 |
-
"""
|
153 |
-
Create sinusoidal timestep embeddings.
|
154 |
-
:param timesteps: a 1-D Tensor of N indices, one per batch element.
|
155 |
-
These may be fractional.
|
156 |
-
:param dim: the dimension of the output.
|
157 |
-
:param max_period: controls the minimum frequency of the embeddings.
|
158 |
-
:return: an [N x dim] Tensor of positional embeddings.
|
159 |
-
"""
|
160 |
-
if not repeat_only:
|
161 |
-
half = dim // 2
|
162 |
-
freqs = torch.exp(
|
163 |
-
-math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half
|
164 |
-
).to(device=timesteps.device)
|
165 |
-
args = timesteps[:, None].float() * freqs[None]
|
166 |
-
embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1)
|
167 |
-
if dim % 2:
|
168 |
-
embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1)
|
169 |
-
else:
|
170 |
-
embedding = repeat(timesteps, 'b -> b d', d=dim)
|
171 |
-
return embedding
|
172 |
-
|
173 |
def zero_module(module):
    """
    Zero out the parameters of a module and return it.
    """
    for p in module.parameters():
        p.detach().zero_()
    return module


def scale_module(module, scale):
    """
    Scale the parameters of a module and return it.
    """
    for p in module.parameters():
        p.detach().mul_(scale)
    return module


def mean_flat(tensor):
    """
    Take the mean over all non-batch dimensions.
    """
    return tensor.mean(dim=list(range(1, len(tensor.shape))))


def normalization(channels):
    """
    Make a standard normalization layer.
    :param channels: number of input channels.
    :return: an nn.Module for normalization.
    """
    return GroupNorm32(32, channels)


# PyTorch 1.7 has SiLU, but we support PyTorch 1.5.
class SiLU(nn.Module):
    def forward(self, x):
        return x * torch.sigmoid(x)


class GroupNorm32(nn.GroupNorm):
    def forward(self, x):
        return super().forward(x.float()).type(x.dtype)


def conv_nd(dims, *args, **kwargs):
    """
    Create a 1D, 2D, or 3D convolution module.
    """
    if dims == 1:
        return nn.Conv1d(*args, **kwargs)
    elif dims == 2:
        return nn.Conv2d(*args, **kwargs)
    elif dims == 3:
        return nn.Conv3d(*args, **kwargs)
    raise ValueError(f"unsupported dimensions: {dims}")


def linear(*args, **kwargs):
    """
    Create a linear module.
    """
    return nn.Linear(*args, **kwargs)


def avg_pool_nd(dims, *args, **kwargs):
    """
    Create a 1D, 2D, or 3D average pooling module.
    """
    if dims == 1:
        return nn.AvgPool1d(*args, **kwargs)
    elif dims == 2:
        return nn.AvgPool2d(*args, **kwargs)
    elif dims == 3:
        return nn.AvgPool3d(*args, **kwargs)
    raise ValueError(f"unsupported dimensions: {dims}")


class HybridConditioner(nn.Module):

    def __init__(self, c_concat_config, c_crossattn_config):
        super().__init__()
        self.concat_conditioner = instantiate_from_config(c_concat_config)
        self.crossattn_conditioner = instantiate_from_config(c_crossattn_config)

    def forward(self, c_concat, c_crossattn):
        c_concat = self.concat_conditioner(c_concat)
        c_crossattn = self.crossattn_conditioner(c_crossattn)
        return {'c_concat': [c_concat], 'c_crossattn': [c_crossattn]}


def noise_like(shape, device, repeat=False):
    repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1)))
    noise = lambda: torch.randn(shape, device=device)
    return repeat_noise() if repeat else noise()
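The shape bookkeeping in `mean_flat` above is easy to check in isolation. Below is a minimal NumPy re-creation of the deleted helper — an illustrative sketch, not the original torch code (which is identical apart from using `dim=list(range(1, len(tensor.shape)))`):

```python
import numpy as np

def mean_flat(tensor):
    # Mean over every axis except the batch axis (axis 0),
    # mirroring the deleted torch helper.
    return tensor.mean(axis=tuple(range(1, tensor.ndim)))

# A (2, 3, 4) batch collapses to one scalar per batch element.
x = np.arange(24, dtype=float).reshape(2, 3, 4)
per_sample = mean_flat(x)  # shape (2,)
```

This is the reduction typically used to turn a per-pixel diffusion loss into one loss value per sample before averaging over the batch.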
spaces/AP123/CerealBoxMaker/app.py
DELETED
@@ -1,69 +0,0 @@
import gradio as gr
import torch
import numpy as np
from PIL import Image
import random
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16")
pipeline.load_lora_weights("ostris/super-cereal-sdxl-lora")
pipeline.to("cuda:0")

MAX_SEED = np.iinfo(np.int32).max

def text_to_image(prompt):
    seed = random.randint(0, MAX_SEED)
    negative_prompt = "ugly, blurry, nsfw, gore, blood"
    output = pipeline(prompt=prompt, negative_prompt=negative_prompt, width=1024, height=1024, guidance_scale=7.0, num_inference_steps=25, generator=torch.Generator().manual_seed(seed))
    generated_img = output.images[0]
    generated_img_array = np.array(generated_img)
    return generated_img_array

def create_cereal_box(input_image):
    cover_img = Image.fromarray(input_image.astype('uint8'), 'RGB')
    template_img = Image.open("template.jpeg")
    scaling_factor = 1.5
    rect_height = int(template_img.height * 0.32)
    new_width = int(rect_height * 0.70)
    cover_resized = cover_img.resize((new_width, rect_height), Image.LANCZOS)
    new_width_scaled = int(new_width * scaling_factor)
    new_height_scaled = int(rect_height * scaling_factor)
    cover_resized_scaled = cover_resized.resize((new_width_scaled, new_height_scaled), Image.LANCZOS)
    left_x = int(template_img.width * 0.085)
    left_y = int((template_img.height - new_height_scaled) // 2 + template_img.height * 0.012)
    left_position = (left_x, left_y)
    right_x = int(template_img.width * 0.82) - new_width_scaled
    right_y = left_y
    right_position = (right_x, right_y)
    template_copy = template_img.copy()
    template_copy.paste(cover_resized_scaled, left_position)
    template_copy.paste(cover_resized_scaled, right_position)
    template_copy_array = np.array(template_copy)
    return template_copy_array

def combined_function(prompt):
    generated_img_array = text_to_image(prompt)
    final_img = create_cereal_box(generated_img_array)
    return final_img

with gr.Blocks() as app:
    gr.HTML("<div style='text-align: center;'><h1>Cereal Box Maker 🥣</h1></div>")
    gr.HTML("<div style='text-align: center;'><p>This application uses StableDiffusion XL to create any cereal box you could ever imagine!</p></div>")
    gr.HTML("<div style='text-align: center;'><h3>Instructions:</h3><ol><li>Describe the cereal box you want to create and hit generate!</li><li>Print it out, cut the outside, fold the lines, and then tape!</li></ol></div>")
    gr.HTML("<div style='text-align: center;'><p>A space by AP 🐧, follow me on <a href='https://twitter.com/angrypenguinPNG'>Twitter</a>! H/T to <a href='https://twitter.com/ostrisai'>OstrisAI</a> for their Cereal Box LoRA!</p></div>")

    with gr.Row():
        textbox = gr.Textbox(label="Describe your cereal box: Ex: 'Avengers Cereal'")
        btn_generate = gr.Button("Generate", label="Generate")

    with gr.Row():
        output_img = gr.Image(label="Your Custom Cereal Box")

    btn_generate.click(
        combined_function,
        inputs=[textbox],
        outputs=[output_img]
    )

app.queue(max_size=20, api_open=False)
app.launch()
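The placement arithmetic in `create_cereal_box` can be exercised without PIL or a template image. The sketch below (with the hypothetical helper name `panel_geometry`) reproduces just that math — the scaled panel size and the left/right paste positions — for a given template size:

```python
def panel_geometry(tpl_w, tpl_h, scaling_factor=1.5):
    # Same constants as create_cereal_box: the cover fills 32% of the
    # template height at a 0.70 aspect ratio, then is scaled up 1.5x.
    rect_height = int(tpl_h * 0.32)
    new_width = int(rect_height * 0.70)
    w = int(new_width * scaling_factor)
    h = int(rect_height * scaling_factor)
    left = (int(tpl_w * 0.085), int((tpl_h - h) // 2 + tpl_h * 0.012))
    right = (int(tpl_w * 0.82) - w, left[1])
    return (w, h), left, right

size, left, right = panel_geometry(1000, 1000)
```

For a 1000x1000 template this yields a 336x480 panel pasted at (85, 272) and (484, 272) — the two faces of the folded box.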
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnet152_cifar.py
DELETED
@@ -1,16 +0,0 @@
# model settings
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='ResNet_CIFAR',
        depth=152,
        num_stages=4,
        out_indices=(3, ),
        style='pytorch'),
    neck=dict(type='GlobalAveragePooling'),
    head=dict(
        type='LinearClsHead',
        num_classes=10,
        in_channels=2048,
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
    ))
spaces/Abhilashvj/planogram-compliance/utils/downloads.py
DELETED
@@ -1,139 +0,0 @@
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
"""
Download utils
"""

import logging
import os
import subprocess
import urllib
from pathlib import Path

import requests
import torch


def is_url(url, check=True):
    # Check if string is URL and check if URL exists
    try:
        url = str(url)
        result = urllib.parse.urlparse(url)
        assert all([result.scheme, result.netloc])  # check if is url
        return (
            (urllib.request.urlopen(url).getcode() == 200) if check else True
        )  # check if exists online
    except (AssertionError, urllib.request.HTTPError):
        return False


def gsutil_getsize(url=""):
    # gs://bucket/file size https://cloud.google.com/storage/docs/gsutil/commands/du
    s = subprocess.check_output(f"gsutil du {url}", shell=True).decode("utf-8")
    return eval(s.split(" ")[0]) if len(s) else 0  # bytes


def url_getsize(url="https://ultralytics.com/images/bus.jpg"):
    # Return downloadable file size in bytes
    response = requests.head(url, allow_redirects=True)
    return int(response.headers.get("content-length", -1))


def safe_download(file, url, url2=None, min_bytes=1e0, error_msg=""):
    # Attempts to download file from url or url2, checks and removes incomplete downloads < min_bytes
    from utils.general import LOGGER

    file = Path(file)
    assert_msg = f"Downloaded file '{file}' does not exist or size is < min_bytes={min_bytes}"
    try:  # url1
        LOGGER.info(f"Downloading {url} to {file}...")
        torch.hub.download_url_to_file(
            url, str(file), progress=LOGGER.level <= logging.INFO
        )
        assert (
            file.exists() and file.stat().st_size > min_bytes
        ), assert_msg  # check
    except Exception as e:  # url2
        if file.exists():
            file.unlink()  # remove partial downloads
        LOGGER.info(f"ERROR: {e}\nRe-attempting {url2 or url} to {file}...")
        os.system(
            f"curl -# -L '{url2 or url}' -o '{file}' --retry 3 -C -"
        )  # curl download, retry and resume on fail
    finally:
        if not file.exists() or file.stat().st_size < min_bytes:  # check
            if file.exists():
                file.unlink()  # remove partial downloads
            LOGGER.info(f"ERROR: {assert_msg}\n{error_msg}")
        LOGGER.info("")


def attempt_download(file, repo="ultralytics/yolov5", release="v7.0"):
    # Attempt file download from GitHub release assets if not found locally. release = 'latest', 'v7.0', etc.
    from utils.general import LOGGER

    def github_assets(repository, version="latest"):
        # Return GitHub repo tag (i.e. 'v7.0') and assets (i.e. ['yolov5s.pt', 'yolov5m.pt', ...])
        if version != "latest":
            version = f"tags/{version}"  # i.e. tags/v7.0
        response = requests.get(
            f"https://api.github.com/repos/{repository}/releases/{version}"
        ).json()  # github api
        return response["tag_name"], [
            x["name"] for x in response["assets"]
        ]  # tag, assets

    file = Path(str(file).strip().replace("'", ""))
    if not file.exists():
        # URL specified
        name = Path(
            urllib.parse.unquote(str(file))
        ).name  # decode '%2F' to '/' etc.
        if str(file).startswith(("http:/", "https:/")):  # download
            url = str(file).replace(":/", "://")  # Pathlib turns :// -> :/
            file = name.split("?")[
                0
            ]  # parse authentication https://url.com/file.txt?auth...
            if Path(file).is_file():
                LOGGER.info(
                    f"Found {url} locally at {file}"
                )  # file already exists
            else:
                safe_download(file=file, url=url, min_bytes=1e5)
            return file

        # GitHub assets
        assets = [
            f"yolov5{size}{suffix}.pt"
            for size in "nsmlx"
            for suffix in ("", "6", "-cls", "-seg")
        ]  # default
        try:
            tag, assets = github_assets(repo, release)
        except Exception:
            try:
                tag, assets = github_assets(repo)  # latest release
            except Exception:
                try:
                    tag = (
                        subprocess.check_output(
                            "git tag", shell=True, stderr=subprocess.STDOUT
                        )
                        .decode()
                        .split()[-1]
                    )
                except Exception:
                    tag = release

        file.parent.mkdir(
            parents=True, exist_ok=True
        )  # make parent dir (if required)
        if name in assets:
            url3 = "https://drive.google.com/drive/folders/1EFQTEUeXWSFww0luse2jB9M1QNZQGwNl"  # backup gdrive mirror
            safe_download(
                file,
                url=f"https://github.com/{repo}/releases/download/{tag}/{name}",
                min_bytes=1e5,
                error_msg=f"{file} missing, try downloading from https://github.com/{repo}/releases/{tag} or {url3}",
            )

    return str(file)
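The URL-validation part of `is_url` above works without any network access when `check=False`. A minimal offline sketch of that path — not the full helper, which also probes the URL over HTTP:

```python
import urllib.parse

def is_url(url, check=False):
    # Offline variant of the deleted helper: a string counts as a URL
    # only if urlparse finds both a scheme and a network location.
    result = urllib.parse.urlparse(str(url))
    return all([result.scheme, result.netloc])

ok = is_url("https://ultralytics.com/images/bus.jpg")
bad = is_url("yolov5s.pt")  # a bare filename has no scheme/netloc
```

With `check=True`, the original additionally opens the URL and requires an HTTP 200, so it can return `False` for syntactically valid but unreachable links.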
spaces/Adapting/TrendFlow/mypages/sidebar.py
DELETED
@@ -1,91 +0,0 @@
import streamlit as st
import datetime
# from .utils import PACKAGE_ROOT
# from lrt.utils.functions import template

APP_VERSION = 'v0.1.0'


def render_sidebar():
    icons = f'''
    <center>
    <a href="https://github.com/leoxiang66/research-trends-analysis"><img src = "https://cdn-icons-png.flaticon.com/512/733/733609.png" width="23"></img></a> <a href="mailto:[email protected]"><img src="https://cdn-icons-png.flaticon.com/512/646/646094.png" alt="email" width = "27" ></a>
    </center>
    '''

    sidebar_markdown = f'''

    <center>
    <h1>
    TrendFlow
    </h1>


    <code>
    {APP_VERSION}
    </code>


    </center>


    {icons}

    ---

    ## Choose the Paper Search Platforms'''
    st.sidebar.markdown(sidebar_markdown, unsafe_allow_html=True)
    # elvsier = st.sidebar.checkbox('Elvsier',value=True)
    # IEEE = st.sidebar.checkbox('IEEE',value=False)
    # google = st.sidebar.checkbox('Google Scholar')
    platforms = st.sidebar.multiselect('Platforms', options=
    [
        # 'Elvsier',
        'IEEE',
        # 'Google Scholar',
        'Arxiv',
        'Paper with Code'
    ], default=[
        # 'Elvsier',
        'IEEE',
        # 'Google Scholar',
        'Arxiv',
        'Paper with Code'
    ])

    st.sidebar.markdown('## Choose the max number of papers to search')
    number_papers = st.sidebar.slider('number', 10, 100, 20, 5)

    st.sidebar.markdown('## Choose the start year of publication')
    this_year = datetime.date.today().year
    start_year = st.sidebar.slider('year start:', 2000, this_year, 2010, 1)

    st.sidebar.markdown('## Choose the end year of publication')
    end_year = st.sidebar.slider('year end:', 2000, this_year, this_year, 1)

    with st.sidebar:
        st.markdown('## Adjust hyperparameters')
        with st.expander('Clustering Options'):
            standardization = st.selectbox('1) Standardization before clustering', options=['no', 'yes'], index=0)
            dr = st.selectbox('2) Dimension reduction', options=['none', 'pca'], index=0)
            tmp = min(number_papers, 15)
            max_k = st.slider('3) Max number of clusters', 2, tmp, tmp // 2)
            cluster_model = st.selectbox('4) Clustering model', options=['Gaussian Mixture Model', 'K-means'], index=0)

        with st.expander('Keyphrases Generation Options'):
            model_cpt = st.selectbox(label='Model checkpoint', options=['KeyBart', 'KeyBartAdapter', 'keyphrase-transformer'], index=0)

        st.markdown('---')
        st.markdown(icons, unsafe_allow_html=True)
        st.markdown(f'''<center>Copyright © 2022 - {datetime.datetime.now().year} by Tao Xiang</center>''', unsafe_allow_html=True)

    # st.sidebar.markdown('## Choose the number of clusters')
    # k = st.sidebar.slider('number',1,10,3)

    return platforms, number_papers, start_year, end_year, dict(
        dimension_reduction=dr,
        max_k=max_k,
        model_cpt=model_cpt,
        standardization=True if standardization == 'yes' else False,
        cluster_model='gmm' if cluster_model == 'Gaussian Mixture Model' else 'kmeans-euclidean'
    )
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/LayoutChildren.js
DELETED
@@ -1,6 +0,0 @@
// Override
var LayoutChildren = function () {

}

export default LayoutChildren;
spaces/Ajay-user/Optical-Character-Recognition/utils.py
DELETED
@@ -1,110 +0,0 @@
import matplotlib.pyplot as plt
import numpy as np
import cv2
import easyocr


class OCR:
    def __init__(self, image) -> None:
        self.image = image
        self.reader = easyocr.Reader(
            lang_list=['en'],
            gpu=False,
            model_storage_directory='./EasyOCR/model/',
            download_enabled=False,
            user_network_directory='./EasyOCR/user_network/'
        )

    def detection(self):
        img_arr = np.array(self.image, dtype=np.uint8)
        response = self.reader.readtext(image=img_arr)
        plot_inputs = []

        for box, text, conf in response:
            plot_inputs.append((text, np.array(box, dtype=np.float32)))

        fig, ax = plt.subplots(nrows=1, ncols=1)
        plot = self.drawAnnotations(
            image=img_arr, predictions=plot_inputs, ax=ax)
        return fig

    def drawAnnotations(self, image, predictions, ax=None):
        """Draw text annotations onto image.

        Args:
            image: The image on which to draw
            predictions: The predictions as provided by `pipeline.recognize`.
            ax: A matplotlib axis on which to draw.
        """
        if ax is None:
            _, ax = plt.subplots()
        ax.imshow(self.drawBoxes(image=image, boxes=predictions,
                                 boxes_format="predictions"))
        predictions = sorted(predictions, key=lambda p: p[1][:, 1].min())
        left = []
        right = []
        for word, box in predictions:
            if box[:, 0].min() < image.shape[1] / 2:
                left.append((word, box))
            else:
                right.append((word, box))
        ax.set_yticks([])
        ax.set_xticks([])
        for side, group in zip(["left", "right"], [left, right]):
            for index, (text, box) in enumerate(group):
                y = 1 - (index / len(group))
                xy = box[0] / np.array([image.shape[1], image.shape[0]])
                xy[1] = 1 - xy[1]
                ax.annotate(
                    text=text,
                    xy=xy,
                    xytext=(-0.05 if side == "left" else 1.05, y),
                    xycoords="axes fraction",
                    arrowprops={"arrowstyle": "->", "color": "r"},
                    color="r",
                    fontsize=14,
                    horizontalalignment="right" if side == "left" else "left",
                )
        return ax

    def drawBoxes(self, image, boxes, color=(255, 0, 0), thickness=1, boxes_format="boxes"):
        """Draw boxes onto an image.

        Args:
            image: The image on which to draw the boxes.
            boxes: The boxes to draw.
            color: The color for each box.
            thickness: The thickness for each box.
            boxes_format: The format used for providing the boxes. Options are
                "boxes" which indicates an array with shape(N, 4, 2) where N is the
                number of boxes and each box is a list of four points) as provided
                by `keras_ocr.detection.Detector.detect`, "lines" (a list of
                lines where each line itself is a list of (box, character) tuples) as
                provided by `keras_ocr.data_generation.get_image_generator`,
                or "predictions" where boxes is by itself a list of (word, box) tuples
                as provided by `keras_ocr.pipeline.Pipeline.recognize` or
                `keras_ocr.recognition.Recognizer.recognize_from_boxes`.
        """
        if len(boxes) == 0:
            return image
        canvas = image.copy()
        if boxes_format == "lines":
            revised_boxes = []
            for line in boxes:
                for box, _ in line:
                    revised_boxes.append(box)
            boxes = revised_boxes
        if boxes_format == "predictions":
            revised_boxes = []
            for _, box in boxes:
                revised_boxes.append(box)
            boxes = revised_boxes
        for box in boxes:
            cv2.polylines(
                img=canvas,
                pts=box[np.newaxis].astype("int32"),
                color=color,
                thickness=thickness,
                isClosed=True,
            )
        return canvas
spaces/AlekseyKorshuk/instagram-filter-removal/app.py
DELETED
@@ -1,80 +0,0 @@
import requests
import os
import gradio as gr
import numpy as np
import torch
import torchvision.models as models

from configs.default import get_cfg_defaults
from modeling.build import build_model
from utils.data_utils import linear_scaling


url = "https://www.dropbox.com/s/y97z812sxa1kvrg/ifrnet.pth?dl=1"
r = requests.get(url, stream=True)
if not os.path.exists("ifrnet.pth"):
    with open("ifrnet.pth", 'wb') as f:
        for data in r:
            f.write(data)

cfg = get_cfg_defaults()
cfg.MODEL.CKPT = "ifrnet.pth"
net, _ = build_model(cfg)
net = net.eval()
vgg16 = models.vgg16(pretrained=True).features.eval()


def load_checkpoints_from_ckpt(ckpt_path):
    checkpoints = torch.load(ckpt_path, map_location=torch.device('cpu'))
    net.load_state_dict(checkpoints["ifr"])


load_checkpoints_from_ckpt(cfg.MODEL.CKPT)


def filter_removal(img):
    arr = np.expand_dims(np.transpose(img, (2, 0, 1)), axis=0)
    arr = torch.tensor(arr).float() / 255.
    arr = linear_scaling(arr)
    with torch.no_grad():
        feat = vgg16(arr)
        out, _ = net(arr, feat)
        out = torch.clamp(out, max=1., min=0.)
    return out.squeeze(0).permute(1, 2, 0).numpy()


title = "Instagram Filter Removal on Fashionable Images"
description = "This is the demo for IFRNet, filter removal on fashionable images on Instagram. " \
              "To use it, simply upload your filtered image, or click one of the examples to load them."
article = "<p style='text-align: center'><a href='https://openaccess.thecvf.com/content/CVPR2021W/NTIRE/papers/Kinli_Instagram_Filter_Removal_on_Fashionable_Images_CVPRW_2021_paper.pdf'>Paper</a> | <a href='https://github.com/birdortyedi/instagram-filter-removal-pytorch'>Github</a></p>"

gr.Interface(
    filter_removal,
    gr.inputs.Image(shape=(256, 256)),
    gr.outputs.Image(),
    title=title,
    description=description,
    article=article,
    allow_flagging=False,
    examples_per_page=17,
    enable_queue=True,
    examples=[
        ["images/examples/98_He-Fe.jpg"],
        ["images/examples/2_Brannan.jpg"],
        ["images/examples/12_Toaster.jpg"],
        ["images/examples/18_Gingham.jpg"],
        ["images/examples/11_Sutro.jpg"],
        ["images/examples/9_Lo-Fi.jpg"],
        ["images/examples/3_Mayfair.jpg"],
        ["images/examples/4_Hudson.jpg"],
        ["images/examples/5_Amaro.jpg"],
        ["images/examples/6_1977.jpg"],
        ["images/examples/8_Valencia.jpg"],
        ["images/examples/16_Lo-Fi.jpg"],
        ["images/examples/10_Nashville.jpg"],
        ["images/examples/15_X-ProII.jpg"],
        ["images/examples/14_Willow.jpg"],
        ["images/examples/30_Perpetua.jpg"],
        ["images/examples/1_Clarendon.jpg"],
    ]
).launch()
spaces/Amrrs/DragGan-Inversion/gui_utils/imgui_utils.py
DELETED
@@ -1,207 +0,0 @@
|
|
1 |
-
# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
|
2 |
-
#
|
3 |
-
# NVIDIA CORPORATION and its licensors retain all intellectual property
|
4 |
-
# and proprietary rights in and to this software, related documentation
|
5 |
-
# and any modifications thereto. Any use, reproduction, disclosure or
|
6 |
-
# distribution of this software and related documentation without an express
|
7 |
-
# license agreement from NVIDIA CORPORATION is strictly prohibited.
|
8 |
-
|
9 |
-
import contextlib
|
10 |
-
import imgui
|
11 |
-
|
12 |
-
# ----------------------------------------------------------------------------
|
13 |
-
|
14 |
-
|
15 |
-
def set_default_style(color_scheme='dark', spacing=9, indent=23, scrollbar=27):
|
16 |
-
s = imgui.get_style()
|
17 |
-
s.window_padding = [spacing, spacing]
|
18 |
-
s.item_spacing = [spacing, spacing]
|
19 |
-
s.item_inner_spacing = [spacing, spacing]
|
20 |
-
s.columns_min_spacing = spacing
|
21 |
-
s.indent_spacing = indent
|
22 |
-
s.scrollbar_size = scrollbar
|
23 |
-
s.frame_padding = [4, 3]
|
24 |
-
s.window_border_size = 1
|
25 |
-
s.child_border_size = 1
|
26 |
-
s.popup_border_size = 1
|
27 |
-
s.frame_border_size = 1
|
28 |
-
s.window_rounding = 0
|
29 |
-
s.child_rounding = 0
|
30 |
-
s.popup_rounding = 3
|
31 |
-
s.frame_rounding = 3
|
32 |
-
s.scrollbar_rounding = 3
|
33 |
-
s.grab_rounding = 3
|
34 |
-
|
35 |
-
getattr(imgui, f'style_colors_{color_scheme}')(s)
|
36 |
-
c0 = s.colors[imgui.COLOR_MENUBAR_BACKGROUND]
|
37 |
-
c1 = s.colors[imgui.COLOR_FRAME_BACKGROUND]
|
38 |
-
s.colors[imgui.COLOR_POPUP_BACKGROUND] = [
|
39 |
-
x * 0.7 + y * 0.3 for x, y in zip(c0, c1)][:3] + [1]
|
40 |
-
|
41 |
-
# ----------------------------------------------------------------------------
|
42 |
-
|
43 |
-
|
44 |
-
@contextlib.contextmanager
|
45 |
-
def grayed_out(cond=True):
|
46 |
-
if cond:
|
47 |
-
s = imgui.get_style()
|
48 |
-
text = s.colors[imgui.COLOR_TEXT_DISABLED]
|
49 |
-
grab = s.colors[imgui.COLOR_SCROLLBAR_GRAB]
|
50 |
-
back = s.colors[imgui.COLOR_MENUBAR_BACKGROUND]
|
51 |
-
imgui.push_style_color(imgui.COLOR_TEXT, *text)
|
52 |
-
imgui.push_style_color(imgui.COLOR_CHECK_MARK, *grab)
|
53 |
-
imgui.push_style_color(imgui.COLOR_SLIDER_GRAB, *grab)
|
54 |
-
imgui.push_style_color(imgui.COLOR_SLIDER_GRAB_ACTIVE, *grab)
|
55 |
-
imgui.push_style_color(imgui.COLOR_FRAME_BACKGROUND, *back)
|
56 |
-
imgui.push_style_color(imgui.COLOR_FRAME_BACKGROUND_HOVERED, *back)
|
57 |
-
imgui.push_style_color(imgui.COLOR_FRAME_BACKGROUND_ACTIVE, *back)
|
58 |
-
imgui.push_style_color(imgui.COLOR_BUTTON, *back)
|
59 |
-
imgui.push_style_color(imgui.COLOR_BUTTON_HOVERED, *back)
|
60 |
-
imgui.push_style_color(imgui.COLOR_BUTTON_ACTIVE, *back)
|
61 |
-
imgui.push_style_color(imgui.COLOR_HEADER, *back)
|
62 |
-
imgui.push_style_color(imgui.COLOR_HEADER_HOVERED, *back)
|
63 |
-
imgui.push_style_color(imgui.COLOR_HEADER_ACTIVE, *back)
|
64 |
-
imgui.push_style_color(imgui.COLOR_POPUP_BACKGROUND, *back)
|
65 |
-
yield
|
66 |
-
imgui.pop_style_color(14)
|
67 |
-
else:
|
68 |
-
yield
|
69 |
-
|
70 |
-
# ----------------------------------------------------------------------------
|
71 |
-
|
72 |
-
|
73 |
-
@contextlib.contextmanager
|
74 |
-
def item_width(width=None):
|
75 |
-
if width is not None:
|
76 |
-
imgui.push_item_width(width)
|
77 |
-
yield
|
78 |
-
imgui.pop_item_width()
|
79 |
-
else:
|
80 |
-
yield
|
81 |
-
|
82 |
-
# ----------------------------------------------------------------------------
|
83 |
-
|
84 |
-
|
85 |
-
def scoped_by_object_id(method):
|
86 |
-
def decorator(self, *args, **kwargs):
|
87 |
-
imgui.push_id(str(id(self)))
|
88 |
-
res = method(self, *args, **kwargs)
|
89 |
-
imgui.pop_id()
|
90 |
-
return res
|
91 |
-
return decorator
|
92 |
-
|
93 |
-
# ----------------------------------------------------------------------------
|
94 |
-
|
95 |
-
|
96 |
-
def button(label, width=0, enabled=True):
|
97 |
-
with grayed_out(not enabled):
|
98 |
-
clicked = imgui.button(label, width=width)
|
99 |
-
clicked = clicked and enabled
|
100 |
-
return clicked
|
101 |
-
|
102 |
-
# ----------------------------------------------------------------------------
|
103 |
-
|
104 |
-
|
105 |
-
def collapsing_header(text, visible=None, flags=0, default=False, enabled=True, show=True):
|
106 |
-
expanded = False
|
107 |
-
if show:
|
108 |
-
if default:
|
109 |
-
flags |= imgui.TREE_NODE_DEFAULT_OPEN
|
110 |
-
if not enabled:
|
111 |
-
flags |= imgui.TREE_NODE_LEAF
|
112 |
-
with grayed_out(not enabled):
|
113 |
-
expanded, visible = imgui.collapsing_header(
|
114 |
-
text, visible=visible, flags=flags)
|
115 |
-
expanded = expanded and enabled
|
116 |
-
return expanded, visible
|
117 |
-
|
118 |
-
# ----------------------------------------------------------------------------
|
def popup_button(label, width=0, enabled=True):
    if button(label, width, enabled):
        imgui.open_popup(label)
    opened = imgui.begin_popup(label)
    return opened

# ----------------------------------------------------------------------------


def input_text(label, value, buffer_length, flags, width=None, help_text=''):
    old_value = value
    color = list(imgui.get_style().colors[imgui.COLOR_TEXT])
    if value == '':
        color[-1] *= 0.5
    with item_width(width):
        imgui.push_style_color(imgui.COLOR_TEXT, *color)
        value = value if value != '' else help_text
        changed, value = imgui.input_text(label, value, buffer_length, flags)
        value = value if value != help_text else ''
        imgui.pop_style_color(1)
    if not flags & imgui.INPUT_TEXT_ENTER_RETURNS_TRUE:
        changed = (value != old_value)
    return changed, value

# ----------------------------------------------------------------------------


def drag_previous_control(enabled=True):
    dragging = False
    dx = 0
    dy = 0
    if imgui.begin_drag_drop_source(imgui.DRAG_DROP_SOURCE_NO_PREVIEW_TOOLTIP):
        if enabled:
            dragging = True
            dx, dy = imgui.get_mouse_drag_delta()
            imgui.reset_mouse_drag_delta()
        imgui.end_drag_drop_source()
    return dragging, dx, dy

# ----------------------------------------------------------------------------


def drag_button(label, width=0, enabled=True):
    clicked = button(label, width=width, enabled=enabled)
    dragging, dx, dy = drag_previous_control(enabled=enabled)
    return clicked, dragging, dx, dy

# ----------------------------------------------------------------------------


def drag_hidden_window(label, x, y, width, height, enabled=True):
    imgui.push_style_color(imgui.COLOR_WINDOW_BACKGROUND, 0, 0, 0, 0)
    imgui.push_style_color(imgui.COLOR_BORDER, 0, 0, 0, 0)
    imgui.set_next_window_position(x, y)
    imgui.set_next_window_size(width, height)
    imgui.begin(label, closable=False, flags=(
        imgui.WINDOW_NO_TITLE_BAR | imgui.WINDOW_NO_RESIZE | imgui.WINDOW_NO_MOVE))
    dragging, dx, dy = drag_previous_control(enabled=enabled)
    imgui.end()
    imgui.pop_style_color(2)
    return dragging, dx, dy

# ----------------------------------------------------------------------------


def click_hidden_window(label, x, y, width, height, img_w, img_h, enabled=True):
    imgui.push_style_color(imgui.COLOR_WINDOW_BACKGROUND, 0, 0, 0, 0)
    imgui.push_style_color(imgui.COLOR_BORDER, 0, 0, 0, 0)
    imgui.set_next_window_position(x, y)
    imgui.set_next_window_size(width, height)
    imgui.begin(label, closable=False, flags=(
        imgui.WINDOW_NO_TITLE_BAR | imgui.WINDOW_NO_RESIZE | imgui.WINDOW_NO_MOVE))
    clicked, down = False, False
    img_x, img_y = 0, 0
    if imgui.is_mouse_down():
        posx, posy = imgui.get_mouse_pos()
        if posx >= x and posx < x + width and posy >= y and posy < y + height:
            if imgui.is_mouse_clicked():
                clicked = True
            down = True
            img_x = round((posx - x) / (width - 1) * (img_w - 1))
            img_y = round((posy - y) / (height - 1) * (img_h - 1))
    imgui.end()
    imgui.pop_style_color(2)
    return clicked, down, img_x, img_y

# ----------------------------------------------------------------------------
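The window-to-image coordinate remapping that `click_hidden_window` performs can be sketched in isolation (hypothetical helper name, no imgui dependency): it linearly rescales a mouse position inside an on-screen rectangle to the nearest pixel of the underlying image.

```python
def window_to_image(posx, posy, x, y, width, height, img_w, img_h):
    """Map a mouse position inside an on-screen window of size (width, height)
    at origin (x, y) to the nearest pixel of an image of size (img_w, img_h).
    Mirrors the arithmetic in click_hidden_window."""
    img_x = round((posx - x) / (width - 1) * (img_w - 1))
    img_y = round((posy - y) / (height - 1) * (img_h - 1))
    return img_x, img_y

# Corners map to corners regardless of the scale factor.
print(window_to_image(0, 0, 0, 0, 200, 100, 512, 256))     # → (0, 0)
print(window_to_image(199, 99, 0, 0, 200, 100, 512, 256))  # → (511, 255)
```

Note the `width - 1` / `img_w - 1` denominators: the mapping is between pixel centers, so the last window pixel lands exactly on the last image pixel.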
spaces/Amrrs/DragGan-Inversion/training/dataset.py
DELETED
@@ -1,252 +0,0 @@
# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.

"""Streaming images and labels from datasets created with dataset_tool.py."""

import os
import numpy as np
import zipfile
import PIL.Image
import json
import torch
import dnnlib

try:
    import pyspng
except ImportError:
    pyspng = None

# ----------------------------------------------------------------------------


class Dataset(torch.utils.data.Dataset):
    def __init__(self,
                 name,       # Name of the dataset.
                 raw_shape,  # Shape of the raw image data (NCHW).
                 # Artificially limit the size of the dataset. None = no limit. Applied before xflip.
                 max_size=None,
                 # Enable conditioning labels? False = label dimension is zero.
                 use_labels=False,
                 # Artificially double the size of the dataset via x-flips. Applied after max_size.
                 xflip=False,
                 # Random seed to use when applying max_size.
                 random_seed=0,
                 ):
        self._name = name
        self._raw_shape = list(raw_shape)
        self._use_labels = use_labels
        self._raw_labels = None
        self._label_shape = None

        # Apply max_size.
        self._raw_idx = np.arange(self._raw_shape[0], dtype=np.int64)
        if (max_size is not None) and (self._raw_idx.size > max_size):
            np.random.RandomState(random_seed).shuffle(self._raw_idx)
            self._raw_idx = np.sort(self._raw_idx[:max_size])

        # Apply xflip.
        self._xflip = np.zeros(self._raw_idx.size, dtype=np.uint8)
        if xflip:
            self._raw_idx = np.tile(self._raw_idx, 2)
            self._xflip = np.concatenate(
                [self._xflip, np.ones_like(self._xflip)])

    def _get_raw_labels(self):
        if self._raw_labels is None:
            self._raw_labels = self._load_raw_labels() if self._use_labels else None
            if self._raw_labels is None:
                self._raw_labels = np.zeros(
                    [self._raw_shape[0], 0], dtype=np.float32)
            assert isinstance(self._raw_labels, np.ndarray)
            assert self._raw_labels.shape[0] == self._raw_shape[0]
            assert self._raw_labels.dtype in [np.float32, np.int64]
            if self._raw_labels.dtype == np.int64:
                assert self._raw_labels.ndim == 1
                assert np.all(self._raw_labels >= 0)
        return self._raw_labels

    def close(self):  # to be overridden by subclass
        pass

    def _load_raw_image(self, raw_idx):  # to be overridden by subclass
        raise NotImplementedError

    def _load_raw_labels(self):  # to be overridden by subclass
        raise NotImplementedError

    def __getstate__(self):
        return dict(self.__dict__, _raw_labels=None)

    def __del__(self):
        try:
            self.close()
        except:
            pass

    def __len__(self):
        return self._raw_idx.size

    def __getitem__(self, idx):
        image = self._load_raw_image(self._raw_idx[idx])
        assert isinstance(image, np.ndarray)
        assert list(image.shape) == self.image_shape
        assert image.dtype == np.uint8
        if self._xflip[idx]:
            assert image.ndim == 3  # CHW
            image = image[:, :, ::-1]
        return image.copy(), self.get_label(idx)

    def get_label(self, idx):
        label = self._get_raw_labels()[self._raw_idx[idx]]
        if label.dtype == np.int64:
            onehot = np.zeros(self.label_shape, dtype=np.float32)
            onehot[label] = 1
            label = onehot
        return label.copy()

    def get_details(self, idx):
        d = dnnlib.EasyDict()
        d.raw_idx = int(self._raw_idx[idx])
        d.xflip = (int(self._xflip[idx]) != 0)
        d.raw_label = self._get_raw_labels()[d.raw_idx].copy()
        return d

    @property
    def name(self):
        return self._name

    @property
    def image_shape(self):
        return list(self._raw_shape[1:])

    @property
    def num_channels(self):
        assert len(self.image_shape) == 3  # CHW
        return self.image_shape[0]

    @property
    def resolution(self):
        assert len(self.image_shape) == 3  # CHW
        assert self.image_shape[1] == self.image_shape[2]
        return self.image_shape[1]

    @property
    def label_shape(self):
        if self._label_shape is None:
            raw_labels = self._get_raw_labels()
            if raw_labels.dtype == np.int64:
                self._label_shape = [int(np.max(raw_labels)) + 1]
            else:
                self._label_shape = raw_labels.shape[1:]
        return list(self._label_shape)

    @property
    def label_dim(self):
        assert len(self.label_shape) == 1
        return self.label_shape[0]

    @property
    def has_labels(self):
        return any(x != 0 for x in self.label_shape)

    @property
    def has_onehot_labels(self):
        return self._get_raw_labels().dtype == np.int64

# ----------------------------------------------------------------------------


class ImageFolderDataset(Dataset):
    def __init__(self,
                 path,  # Path to directory or zip.
                 # Ensure specific resolution, None = highest available.
                 resolution=None,
                 # Additional arguments for the Dataset base class.
                 **super_kwargs,
                 ):
        self._path = path
        self._zipfile = None

        if os.path.isdir(self._path):
            self._type = 'dir'
            self._all_fnames = {os.path.relpath(os.path.join(
                root, fname), start=self._path) for root, _dirs, files in os.walk(self._path) for fname in files}
        elif self._file_ext(self._path) == '.zip':
            self._type = 'zip'
            self._all_fnames = set(self._get_zipfile().namelist())
        else:
            raise IOError('Path must point to a directory or zip')

        PIL.Image.init()
        self._image_fnames = sorted(
            fname for fname in self._all_fnames if self._file_ext(fname) in PIL.Image.EXTENSION)
        if len(self._image_fnames) == 0:
            raise IOError('No image files found in the specified path')

        name = os.path.splitext(os.path.basename(self._path))[0]
        raw_shape = [len(self._image_fnames)] + \
            list(self._load_raw_image(0).shape)
        if resolution is not None and (raw_shape[2] != resolution or raw_shape[3] != resolution):
            raise IOError('Image files do not match the specified resolution')
        super().__init__(name=name, raw_shape=raw_shape, **super_kwargs)

    @staticmethod
    def _file_ext(fname):
        return os.path.splitext(fname)[1].lower()

    def _get_zipfile(self):
        assert self._type == 'zip'
        if self._zipfile is None:
            self._zipfile = zipfile.ZipFile(self._path)
        return self._zipfile

    def _open_file(self, fname):
        if self._type == 'dir':
            return open(os.path.join(self._path, fname), 'rb')
        if self._type == 'zip':
            return self._get_zipfile().open(fname, 'r')
        return None

    def close(self):
        try:
            if self._zipfile is not None:
                self._zipfile.close()
        finally:
            self._zipfile = None

    def __getstate__(self):
        return dict(super().__getstate__(), _zipfile=None)

    def _load_raw_image(self, raw_idx):
        fname = self._image_fnames[raw_idx]
        with self._open_file(fname) as f:
            if pyspng is not None and self._file_ext(fname) == '.png':
                image = pyspng.load(f.read())
            else:
                image = np.array(PIL.Image.open(f))
        if image.ndim == 2:
            image = image[:, :, np.newaxis]  # HW => HWC
        image = image.transpose(2, 0, 1)  # HWC => CHW
        return image

    def _load_raw_labels(self):
        fname = 'dataset.json'
        if fname not in self._all_fnames:
            return None
        with self._open_file(fname) as f:
            labels = json.load(f)['labels']
        if labels is None:
            return None
        labels = dict(labels)
        labels = [labels[fname.replace('\\', '/')]
                  for fname in self._image_fnames]
        labels = np.array(labels)
        labels = labels.astype({1: np.int64, 2: np.float32}[labels.ndim])
        return labels

# ----------------------------------------------------------------------------
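The index bookkeeping in `Dataset.__init__` (subsample with `max_size`, then double with `xflip`) can be sketched without numpy; note that stdlib `random` stands in for `np.random.RandomState` here, so the particular subsampled indices may differ from the original.

```python
import random

def build_index(num_images, max_size=None, xflip=False, seed=0):
    """Pure-Python sketch of Dataset.__init__'s index bookkeeping.
    Returns (raw_idx, flips): which raw image each dataset item maps to,
    and whether that item should be mirrored."""
    raw_idx = list(range(num_images))
    # Apply max_size: random subsample, then restore sorted order.
    if max_size is not None and len(raw_idx) > max_size:
        rng = random.Random(seed)
        rng.shuffle(raw_idx)
        raw_idx = sorted(raw_idx[:max_size])
    # Apply xflip: append a mirrored copy of every surviving index.
    flips = [0] * len(raw_idx)
    if xflip:
        raw_idx = raw_idx * 2
        flips = flips + [1] * len(flips)
    return raw_idx, flips

idx, flips = build_index(5, max_size=3, xflip=True)
print(len(idx))  # → 6 (3 subsampled originals + 3 mirrored copies)
```

Because `max_size` is applied before `xflip`, the flipped half always mirrors exactly the subsampled set, never the full dataset.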
spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/decoder/vgg.py
DELETED
@@ -1,225 +0,0 @@
import torch
import torch.nn as nn
import torch.nn.functional as F
from collections import namedtuple
import torchvision.models as models


# pytorch pretrained vgg
class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        # pretrained vgg19
        vgg19 = models.vgg19(weights='DEFAULT').features.to(device)

        self.relu1_1 = vgg19[:2]
        self.relu2_1 = vgg19[2:7]
        self.relu3_1 = vgg19[7:12]
        self.relu4_1 = vgg19[12:21]

        # fix parameters
        self.requires_grad_(False)

    def forward(self, x):
        _output = namedtuple('output', ['relu1_1', 'relu2_1', 'relu3_1', 'relu4_1'])
        relu1_1 = self.relu1_1(x)
        relu2_1 = self.relu2_1(relu1_1)
        relu3_1 = self.relu3_1(relu2_1)
        relu4_1 = self.relu4_1(relu3_1)
        output = _output(relu1_1, relu2_1, relu3_1, relu4_1)

        return output


class Decoder(nn.Module):
    """
    starting from relu 4_1
    """

    def __init__(self, ckpt_path=None):
        super().__init__()

        self.layers = nn.Sequential(
            # nn.Conv2d(512, 256, 3, padding=1, padding_mode='reflect'),
            # nn.ReLU(),
            # nn.Upsample(scale_factor=2, mode='nearest'),  # relu4-1
            nn.Conv2d(256, 256, 3, padding=1, padding_mode='reflect'),
            nn.ReLU(),  # relu3-4
            nn.Conv2d(256, 256, 3, padding=1, padding_mode='reflect'),
            nn.ReLU(),  # relu3-3
            nn.Conv2d(256, 256, 3, padding=1, padding_mode='reflect'),
            nn.ReLU(),  # relu3-2
            nn.Conv2d(256, 128, 3, padding=1, padding_mode='reflect'),
            nn.ReLU(),
            nn.Upsample(scale_factor=2, mode='nearest'),  # relu3-1
            nn.Conv2d(128, 128, 3, padding=1, padding_mode='reflect'),
            nn.ReLU(),  # relu2-2
            nn.Conv2d(128, 64, 3, padding=1, padding_mode='reflect'),
            nn.ReLU(),
            nn.Upsample(scale_factor=2, mode='nearest'),  # relu2-1
            nn.Conv2d(64, 64, 3, padding=1, padding_mode='reflect'),
            nn.ReLU(),  # relu1-2
            nn.Conv2d(64, 3, 3, padding=1, padding_mode='reflect'),
        )

        if ckpt_path is not None:
            self.load_state_dict(torch.load(ckpt_path))

    def forward(self, x):
        return self.layers(x)


### high-res unet feature map decoder


class DownBlock(nn.Module):

    def __init__(self, in_dim, out_dim, down='conv'):
        super(DownBlock, self).__init__()

        if down == 'conv':
            self.down_conv = nn.Sequential(
                nn.Conv2d(in_dim, out_dim, 3, 2, 1),
                nn.LeakyReLU(),
                nn.Conv2d(out_dim, out_dim, 3, 1, 1),
                nn.LeakyReLU(),
            )
        elif down == 'mean':
            self.down_conv = nn.AvgPool2d(2)
        else:
            raise NotImplementedError(
                '[ERROR] invalid downsampling operator: {:s}'.format(down)
            )

    def forward(self, x):
        x = self.down_conv(x)
        return x


class UpBlock(nn.Module):

    def __init__(self, in_dim, out_dim, skip_dim=None, up='nearest'):
        super(UpBlock, self).__init__()

        if up == 'conv':
            self.up_conv = nn.Sequential(
                nn.ConvTranspose2d(in_dim, out_dim, 3, 2, 1, 1),
                nn.ReLU(),
            )
        else:
            assert up in ('bilinear', 'nearest'), \
                '[ERROR] invalid upsampling mode: {:s}'.format(up)
            self.up_conv = nn.Sequential(
                nn.Upsample(scale_factor=2, mode=up),
                nn.Conv2d(in_dim, out_dim, 3, 1, 1),
                nn.ReLU(),
            )

        in_dim = out_dim
        if skip_dim is not None:
            in_dim += skip_dim
        self.conv = nn.Sequential(
            nn.Conv2d(in_dim, out_dim, 3, 1, 1),
            nn.ReLU(),
        )

    def _pad(self, x, y):
        dh = y.size(-2) - x.size(-2)
        dw = y.size(-1) - x.size(-1)
        if dh == 0 and dw == 0:
            return x
        if dh < 0:
            x = x[..., :dh, :]
        if dw < 0:
            x = x[..., :, :dw]
        if dh > 0 or dw > 0:
            x = F.pad(
                x,
                pad=(dw // 2, dw - dw // 2, dh // 2, dh - dh // 2),
                mode='reflect'
            )
        return x

    def forward(self, x, skip=None):
        x = self.up_conv(x)
        if skip is not None:
            x = torch.cat([self._pad(x, skip), skip], 1)
        x = self.conv(x)
        return x


class UNetDecoder(nn.Module):

    def __init__(self, in_dim=256):
        super(UNetDecoder, self).__init__()

        self.down_layers = nn.ModuleList()
        self.skip_convs = nn.ModuleList()
        self.up_layers = nn.ModuleList()

        in_dim = in_dim
        self.n_levels = 2
        self.up = 1

        for i in range(self.n_levels):
            self.down_layers.append(
                DownBlock(
                    in_dim, in_dim,
                )
            )
            out_dim = in_dim // 2 ** (self.n_levels - i)
            self.skip_convs.append(nn.Conv2d(in_dim, out_dim, 1))
            self.up_layers.append(
                UpBlock(
                    out_dim * 2, out_dim, out_dim,
                )
            )

        out_dim = in_dim // 2 ** self.n_levels
        self.out_conv = nn.Sequential(
            nn.Conv2d(out_dim, out_dim, 3, 1, 1),
            nn.ReLU(),
            nn.Conv2d(out_dim, 3, 1, 1),
        )

    def forward(self, feats):
        skips = []
        for i in range(self.n_levels):
            skips.append(self.skip_convs[i](feats))
            feats = self.down_layers[i](feats)
        for i in range(self.n_levels - 1, -1, -1):
            feats = self.up_layers[i](feats, skips[i])
        rgb = self.out_conv(feats)
        return rgb


### high-res feature map decoder

class PlainDecoder(nn.Module):
    def __init__(self) -> None:
        super().__init__()

        self.layers = nn.Sequential(
            nn.Conv2d(256, 256, 3, padding=1, padding_mode='reflect'),
            nn.ReLU(),  # relu3-4
            nn.Conv2d(256, 256, 3, padding=1, padding_mode='reflect'),
            nn.ReLU(),  # relu3-3
            nn.Conv2d(256, 256, 3, padding=1, padding_mode='reflect'),
            nn.ReLU(),  # relu3-2
            nn.Conv2d(256, 128, 3, padding=1, padding_mode='reflect'),
            nn.ReLU(),
            nn.Conv2d(128, 128, 3, padding=1, padding_mode='reflect'),
            nn.ReLU(),  # relu2-2
            nn.Conv2d(128, 64, 3, padding=1, padding_mode='reflect'),
            nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1, padding_mode='reflect'),
            nn.ReLU(),  # relu1-2
            nn.Conv2d(64, 3, 3, padding=1, padding_mode='reflect'),
        )

    def forward(self, x):
        return self.layers(x)
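The size-matching rule in `UpBlock._pad` reduces, per spatial axis, to plain arithmetic: crop when the upsampled map is larger than the skip connection, reflect-pad when it is smaller, putting the extra row or column on the far side. A standalone sketch (hypothetical helper, no torch dependency):

```python
def pad_amounts(x_size, y_size):
    """Given 1-D sizes of the upsampled tensor (x) and the skip tensor (y),
    return (crop, (pad_before, pad_after)) matching UpBlock._pad's split:
    d // 2 before, d - d // 2 after."""
    d = y_size - x_size
    crop = min(d, 0)  # negative -> elements to drop from the end of x
    pad = (max(d, 0) // 2, max(d, 0) - max(d, 0) // 2)
    return crop, pad

print(pad_amounts(31, 32))  # → (0, (0, 1)): odd difference pads the far side
print(pad_amounts(30, 32))  # → (0, (1, 1)): even difference splits evenly
print(pad_amounts(33, 32))  # → (-1, (0, 0)): x is cropped by one
```

This guarantees the concatenation `torch.cat([self._pad(x, skip), skip], 1)` in `forward` always sees matching spatial shapes, even when odd input sizes make the down/up path off by one.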
spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x1024_80k_cityscapes.py
DELETED
@@ -1,2 +0,0 @@
_base_ = './danet_r50-d8_512x1024_80k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
spaces/Aravindsssss/gradin/app.py
DELETED
@@ -1,34 +0,0 @@
import os
import gradio as gr
from langchain.chat_models import ChatOpenAI
from langchain import LLMChain, PromptTemplate
from langchain.memory import ConversationBufferMemory

OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')

template = """You are a helpful assistant to answer all user queries.
{chat_history}
User: {user_message}
Chatbot:"""

prompt = PromptTemplate(
    input_variables=["chat_history", "user_message"], template=template
)

memory = ConversationBufferMemory(memory_key="chat_history")

llm_chain = LLMChain(
    llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"),
    prompt=prompt,
    verbose=True,
    memory=memory,
)

def get_text_response(user_message, history):
    response = llm_chain.predict(user_message=user_message)
    return response

demo = gr.ChatInterface(get_text_response)

if __name__ == "__main__":
    demo.launch()  # To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
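The `PromptTemplate` above is, for an f-string-style template like this one, essentially `str.format` substitution. The rendered prompt can be sketched without any langchain dependency (hypothetical helper name):

```python
template = """You are a helpful assistant to answer all user queries.
{chat_history}
User: {user_message}
Chatbot:"""

def render_prompt(chat_history, user_message):
    # Equivalent of prompt.format(...) for this two-variable template.
    return template.format(chat_history=chat_history, user_message=user_message)

text = render_prompt("User: hi\nChatbot: hello!", "What can you do?")
print(text.splitlines()[-1])  # → Chatbot:
```

The trailing `Chatbot:` line is what cues the model to continue as the assistant; `ConversationBufferMemory` then feeds each completed turn back in as the next `{chat_history}`.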
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/cli/command_context.py
DELETED
@@ -1,27 +0,0 @@
from contextlib import ExitStack, contextmanager
from typing import ContextManager, Generator, TypeVar

_T = TypeVar("_T", covariant=True)


class CommandContextMixIn:
    def __init__(self) -> None:
        super().__init__()
        self._in_main_context = False
        self._main_context = ExitStack()

    @contextmanager
    def main_context(self) -> Generator[None, None, None]:
        assert not self._in_main_context

        self._in_main_context = True
        try:
            with self._main_context:
                yield
        finally:
            self._in_main_context = False

    def enter_context(self, context_provider: ContextManager[_T]) -> _T:
        assert self._in_main_context

        return self._main_context.enter_context(context_provider)
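`CommandContextMixIn` defers all cleanup to `contextlib.ExitStack`: contexts entered at any point while the command runs stay open until the single `main_context` exits, and then close in LIFO order. The underlying behavior can be demonstrated with the stack alone:

```python
from contextlib import ExitStack, contextmanager

events = []

@contextmanager
def resource(name):
    events.append(f"open {name}")
    yield name
    events.append(f"close {name}")

# Contexts entered mid-command stay open until the one outer stack exits,
# then unwind in reverse (LIFO) order -- the property the mixin relies on.
with ExitStack() as stack:
    stack.enter_context(resource("a"))
    stack.enter_context(resource("b"))
    events.append("work")

print(events)  # → ['open a', 'open b', 'work', 'close b', 'close a']
```

The two `assert`s in the mixin simply enforce the protocol: `enter_context` may only be called while `main_context` is active, and `main_context` itself is not reentrant.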
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/traceback.py
DELETED
@@ -1,756 +0,0 @@
from __future__ import absolute_import

import linecache
import os
import platform
import sys
from dataclasses import dataclass, field
from traceback import walk_tb
from types import ModuleType, TracebackType
from typing import (
    Any,
    Callable,
    Dict,
    Iterable,
    List,
    Optional,
    Sequence,
    Tuple,
    Type,
    Union,
)

from pip._vendor.pygments.lexers import guess_lexer_for_filename
from pip._vendor.pygments.token import Comment, Keyword, Name, Number, Operator, String
from pip._vendor.pygments.token import Text as TextToken
from pip._vendor.pygments.token import Token
from pip._vendor.pygments.util import ClassNotFound

from . import pretty
from ._loop import loop_last
from .columns import Columns
from .console import Console, ConsoleOptions, ConsoleRenderable, RenderResult, group
from .constrain import Constrain
from .highlighter import RegexHighlighter, ReprHighlighter
from .panel import Panel
from .scope import render_scope
from .style import Style
from .syntax import Syntax
from .text import Text
from .theme import Theme

WINDOWS = platform.system() == "Windows"

LOCALS_MAX_LENGTH = 10
LOCALS_MAX_STRING = 80


def install(
    *,
    console: Optional[Console] = None,
    width: Optional[int] = 100,
    extra_lines: int = 3,
    theme: Optional[str] = None,
    word_wrap: bool = False,
    show_locals: bool = False,
    locals_max_length: int = LOCALS_MAX_LENGTH,
    locals_max_string: int = LOCALS_MAX_STRING,
    locals_hide_dunder: bool = True,
    locals_hide_sunder: Optional[bool] = None,
    indent_guides: bool = True,
    suppress: Iterable[Union[str, ModuleType]] = (),
    max_frames: int = 100,
) -> Callable[[Type[BaseException], BaseException, Optional[TracebackType]], Any]:
    """Install a rich traceback handler.

    Once installed, any tracebacks will be printed with syntax highlighting and rich formatting.

    Args:
        console (Optional[Console], optional): Console to write exception to. Default uses internal Console instance.
        width (Optional[int], optional): Width (in characters) of traceback. Defaults to 100.
        extra_lines (int, optional): Extra lines of code. Defaults to 3.
        theme (Optional[str], optional): Pygments theme to use in traceback. Defaults to ``None`` which will pick
            a theme appropriate for the platform.
        word_wrap (bool, optional): Enable word wrapping of long lines. Defaults to False.
        show_locals (bool, optional): Enable display of local variables. Defaults to False.
        locals_max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation.
            Defaults to 10.
        locals_max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to 80.
        locals_hide_dunder (bool, optional): Hide locals prefixed with double underscore. Defaults to True.
        locals_hide_sunder (bool, optional): Hide locals prefixed with single underscore. Defaults to False.
        indent_guides (bool, optional): Enable indent guides in code and locals. Defaults to True.
        suppress (Sequence[Union[str, ModuleType]]): Optional sequence of modules or paths to exclude from traceback.

    Returns:
        Callable: The previous exception handler that was replaced.

    """
    traceback_console = Console(stderr=True) if console is None else console

    locals_hide_sunder = (
        True
        if (traceback_console.is_jupyter and locals_hide_sunder is None)
        else locals_hide_sunder
    )

    def excepthook(
        type_: Type[BaseException],
        value: BaseException,
        traceback: Optional[TracebackType],
    ) -> None:
        traceback_console.print(
            Traceback.from_exception(
                type_,
                value,
                traceback,
                width=width,
                extra_lines=extra_lines,
                theme=theme,
                word_wrap=word_wrap,
                show_locals=show_locals,
                locals_max_length=locals_max_length,
                locals_max_string=locals_max_string,
                locals_hide_dunder=locals_hide_dunder,
                locals_hide_sunder=bool(locals_hide_sunder),
                indent_guides=indent_guides,
                suppress=suppress,
                max_frames=max_frames,
            )
        )

    def ipy_excepthook_closure(ip: Any) -> None:  # pragma: no cover
        tb_data = {}  # store information about showtraceback call
        default_showtraceback = ip.showtraceback  # keep reference of default traceback

        def ipy_show_traceback(*args: Any, **kwargs: Any) -> None:
            """wrap the default ip.showtraceback to store info for ip._showtraceback"""
            nonlocal tb_data
            tb_data = kwargs
            default_showtraceback(*args, **kwargs)

        def ipy_display_traceback(
            *args: Any, is_syntax: bool = False, **kwargs: Any
        ) -> None:
            """Internally called traceback from ip._showtraceback"""
            nonlocal tb_data
            exc_tuple = ip._get_exc_info()

            # do not display trace on syntax error
            tb: Optional[TracebackType] = None if is_syntax else exc_tuple[2]

            # determine correct tb_offset
            compiled = tb_data.get("running_compiled_code", False)
            tb_offset = tb_data.get("tb_offset", 1 if compiled else 0)
            # remove ipython internal frames from trace with tb_offset
            for _ in range(tb_offset):
                if tb is None:
                    break
                tb = tb.tb_next

            excepthook(exc_tuple[0], exc_tuple[1], tb)
            tb_data = {}  # clear data upon usage

        # replace _showtraceback instead of showtraceback to allow ipython features such as debugging to work
        # this is also what the ipython docs recommends to modify when subclassing InteractiveShell
        ip._showtraceback = ipy_display_traceback
        # add wrapper to capture tb_data
        ip.showtraceback = ipy_show_traceback
        ip.showsyntaxerror = lambda *args, **kwargs: ipy_display_traceback(
            *args, is_syntax=True, **kwargs
        )

    try:  # pragma: no cover
        # if within ipython, use customized traceback
        ip = get_ipython()  # type: ignore[name-defined]
        ipy_excepthook_closure(ip)
        return sys.excepthook
    except Exception:
        # otherwise use default system hook
        old_excepthook = sys.excepthook
        sys.excepthook = excepthook
        return old_excepthook


@dataclass
class Frame:
    filename: str
    lineno: int
    name: str
    line: str = ""
    locals: Optional[Dict[str, pretty.Node]] = None


@dataclass
class _SyntaxError:
    offset: int
    filename: str
    line: str
    lineno: int
    msg: str


@dataclass
class Stack:
    exc_type: str
    exc_value: str
    syntax_error: Optional[_SyntaxError] = None
    is_cause: bool = False
    frames: List[Frame] = field(default_factory=list)


@dataclass
class Trace:
    stacks: List[Stack]


class PathHighlighter(RegexHighlighter):
    highlights = [r"(?P<dim>.*/)(?P<bold>.+)"]


class Traceback:
    """A Console renderable that renders a traceback.

    Args:
        trace (Trace, optional): A `Trace` object produced from `extract`. Defaults to None, which uses
            the last exception.
        width (Optional[int], optional): Number of characters used to traceback. Defaults to 100.
        extra_lines (int, optional): Additional lines of code to render. Defaults to 3.
        theme (str, optional): Override pygments theme used in traceback.
        word_wrap (bool, optional): Enable word wrapping of long lines. Defaults to False.
        show_locals (bool, optional): Enable display of local variables. Defaults to False.
        indent_guides (bool, optional): Enable indent guides in code and locals. Defaults to True.
        locals_max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation.
            Defaults to 10.
        locals_max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to 80.
        locals_hide_dunder (bool, optional): Hide locals prefixed with double underscore. Defaults to True.
        locals_hide_sunder (bool, optional): Hide locals prefixed with single underscore. Defaults to False.
        suppress (Sequence[Union[str, ModuleType]]): Optional sequence of modules or paths to exclude from traceback.
|
229 |
-
max_frames (int): Maximum number of frames to show in a traceback, 0 for no maximum. Defaults to 100.
|
230 |
-
|
231 |
-
"""
|
232 |
-
|
233 |
-
LEXERS = {
|
234 |
-
"": "text",
|
235 |
-
".py": "python",
|
236 |
-
".pxd": "cython",
|
237 |
-
".pyx": "cython",
|
238 |
-
".pxi": "pyrex",
|
239 |
-
}
|
240 |
-
|
241 |
-
def __init__(
|
242 |
-
self,
|
243 |
-
trace: Optional[Trace] = None,
|
244 |
-
*,
|
245 |
-
width: Optional[int] = 100,
|
246 |
-
extra_lines: int = 3,
|
247 |
-
theme: Optional[str] = None,
|
248 |
-
word_wrap: bool = False,
|
249 |
-
show_locals: bool = False,
|
250 |
-
locals_max_length: int = LOCALS_MAX_LENGTH,
|
251 |
-
locals_max_string: int = LOCALS_MAX_STRING,
|
252 |
-
locals_hide_dunder: bool = True,
|
253 |
-
locals_hide_sunder: bool = False,
|
254 |
-
indent_guides: bool = True,
|
255 |
-
suppress: Iterable[Union[str, ModuleType]] = (),
|
256 |
-
max_frames: int = 100,
|
257 |
-
):
|
258 |
-
if trace is None:
|
259 |
-
exc_type, exc_value, traceback = sys.exc_info()
|
260 |
-
if exc_type is None or exc_value is None or traceback is None:
|
261 |
-
raise ValueError(
|
262 |
-
"Value for 'trace' required if not called in except: block"
|
263 |
-
)
|
264 |
-
trace = self.extract(
|
265 |
-
exc_type, exc_value, traceback, show_locals=show_locals
|
266 |
-
)
|
267 |
-
self.trace = trace
|
268 |
-
self.width = width
|
269 |
-
self.extra_lines = extra_lines
|
270 |
-
self.theme = Syntax.get_theme(theme or "ansi_dark")
|
271 |
-
self.word_wrap = word_wrap
|
272 |
-
self.show_locals = show_locals
|
273 |
-
self.indent_guides = indent_guides
|
274 |
-
self.locals_max_length = locals_max_length
|
275 |
-
self.locals_max_string = locals_max_string
|
276 |
-
self.locals_hide_dunder = locals_hide_dunder
|
277 |
-
self.locals_hide_sunder = locals_hide_sunder
|
278 |
-
|
279 |
-
self.suppress: Sequence[str] = []
|
280 |
-
for suppress_entity in suppress:
|
281 |
-
if not isinstance(suppress_entity, str):
|
282 |
-
assert (
|
283 |
-
suppress_entity.__file__ is not None
|
284 |
-
), f"{suppress_entity!r} must be a module with '__file__' attribute"
|
285 |
-
path = os.path.dirname(suppress_entity.__file__)
|
286 |
-
else:
|
287 |
-
path = suppress_entity
|
288 |
-
path = os.path.normpath(os.path.abspath(path))
|
289 |
-
self.suppress.append(path)
|
290 |
-
self.max_frames = max(4, max_frames) if max_frames > 0 else 0
|
291 |
-
|
292 |
-
@classmethod
|
293 |
-
def from_exception(
|
294 |
-
cls,
|
295 |
-
exc_type: Type[Any],
|
296 |
-
exc_value: BaseException,
|
297 |
-
traceback: Optional[TracebackType],
|
298 |
-
*,
|
299 |
-
width: Optional[int] = 100,
|
300 |
-
extra_lines: int = 3,
|
301 |
-
theme: Optional[str] = None,
|
302 |
-
word_wrap: bool = False,
|
303 |
-
show_locals: bool = False,
|
304 |
-
locals_max_length: int = LOCALS_MAX_LENGTH,
|
305 |
-
locals_max_string: int = LOCALS_MAX_STRING,
|
306 |
-
locals_hide_dunder: bool = True,
|
307 |
-
locals_hide_sunder: bool = False,
|
308 |
-
indent_guides: bool = True,
|
309 |
-
suppress: Iterable[Union[str, ModuleType]] = (),
|
310 |
-
max_frames: int = 100,
|
311 |
-
) -> "Traceback":
|
312 |
-
"""Create a traceback from exception info
|
313 |
-
|
314 |
-
Args:
|
315 |
-
exc_type (Type[BaseException]): Exception type.
|
316 |
-
exc_value (BaseException): Exception value.
|
317 |
-
traceback (TracebackType): Python Traceback object.
|
318 |
-
width (Optional[int], optional): Number of characters used to traceback. Defaults to 100.
|
319 |
-
extra_lines (int, optional): Additional lines of code to render. Defaults to 3.
|
320 |
-
theme (str, optional): Override pygments theme used in traceback.
|
321 |
-
word_wrap (bool, optional): Enable word wrapping of long lines. Defaults to False.
|
322 |
-
show_locals (bool, optional): Enable display of local variables. Defaults to False.
|
323 |
-
indent_guides (bool, optional): Enable indent guides in code and locals. Defaults to True.
|
324 |
-
locals_max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation.
|
325 |
-
Defaults to 10.
|
326 |
-
locals_max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to 80.
|
327 |
-
locals_hide_dunder (bool, optional): Hide locals prefixed with double underscore. Defaults to True.
|
328 |
-
locals_hide_sunder (bool, optional): Hide locals prefixed with single underscore. Defaults to False.
|
329 |
-
suppress (Iterable[Union[str, ModuleType]]): Optional sequence of modules or paths to exclude from traceback.
|
330 |
-
max_frames (int): Maximum number of frames to show in a traceback, 0 for no maximum. Defaults to 100.
|
331 |
-
|
332 |
-
Returns:
|
333 |
-
Traceback: A Traceback instance that may be printed.
|
334 |
-
"""
|
335 |
-
rich_traceback = cls.extract(
|
336 |
-
exc_type,
|
337 |
-
exc_value,
|
338 |
-
traceback,
|
339 |
-
show_locals=show_locals,
|
340 |
-
locals_max_length=locals_max_length,
|
341 |
-
locals_max_string=locals_max_string,
|
342 |
-
locals_hide_dunder=locals_hide_dunder,
|
343 |
-
locals_hide_sunder=locals_hide_sunder,
|
344 |
-
)
|
345 |
-
|
346 |
-
return cls(
|
347 |
-
rich_traceback,
|
348 |
-
width=width,
|
349 |
-
extra_lines=extra_lines,
|
350 |
-
theme=theme,
|
351 |
-
word_wrap=word_wrap,
|
352 |
-
show_locals=show_locals,
|
353 |
-
indent_guides=indent_guides,
|
354 |
-
locals_max_length=locals_max_length,
|
355 |
-
locals_max_string=locals_max_string,
|
356 |
-
locals_hide_dunder=locals_hide_dunder,
|
357 |
-
locals_hide_sunder=locals_hide_sunder,
|
358 |
-
suppress=suppress,
|
359 |
-
max_frames=max_frames,
|
360 |
-
)
|
361 |
-
|
362 |
-
@classmethod
|
363 |
-
def extract(
|
364 |
-
cls,
|
365 |
-
exc_type: Type[BaseException],
|
366 |
-
exc_value: BaseException,
|
367 |
-
traceback: Optional[TracebackType],
|
368 |
-
*,
|
369 |
-
show_locals: bool = False,
|
370 |
-
locals_max_length: int = LOCALS_MAX_LENGTH,
|
371 |
-
locals_max_string: int = LOCALS_MAX_STRING,
|
372 |
-
locals_hide_dunder: bool = True,
|
373 |
-
locals_hide_sunder: bool = False,
|
374 |
-
) -> Trace:
|
375 |
-
"""Extract traceback information.
|
376 |
-
|
377 |
-
Args:
|
378 |
-
exc_type (Type[BaseException]): Exception type.
|
379 |
-
exc_value (BaseException): Exception value.
|
380 |
-
traceback (TracebackType): Python Traceback object.
|
381 |
-
show_locals (bool, optional): Enable display of local variables. Defaults to False.
|
382 |
-
locals_max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation.
|
383 |
-
Defaults to 10.
|
384 |
-
locals_max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to 80.
|
385 |
-
locals_hide_dunder (bool, optional): Hide locals prefixed with double underscore. Defaults to True.
|
386 |
-
locals_hide_sunder (bool, optional): Hide locals prefixed with single underscore. Defaults to False.
|
387 |
-
|
388 |
-
Returns:
|
389 |
-
Trace: A Trace instance which you can use to construct a `Traceback`.
|
390 |
-
"""
|
391 |
-
|
392 |
-
stacks: List[Stack] = []
|
393 |
-
is_cause = False
|
394 |
-
|
395 |
-
from pip._vendor.rich import _IMPORT_CWD
|
396 |
-
|
397 |
-
def safe_str(_object: Any) -> str:
|
398 |
-
"""Don't allow exceptions from __str__ to propagate."""
|
399 |
-
try:
|
400 |
-
return str(_object)
|
401 |
-
except Exception:
|
402 |
-
return "<exception str() failed>"
|
403 |
-
|
404 |
-
while True:
|
405 |
-
stack = Stack(
|
406 |
-
exc_type=safe_str(exc_type.__name__),
|
407 |
-
exc_value=safe_str(exc_value),
|
408 |
-
is_cause=is_cause,
|
409 |
-
)
|
410 |
-
|
411 |
-
if isinstance(exc_value, SyntaxError):
|
412 |
-
stack.syntax_error = _SyntaxError(
|
413 |
-
offset=exc_value.offset or 0,
|
414 |
-
filename=exc_value.filename or "?",
|
415 |
-
lineno=exc_value.lineno or 0,
|
416 |
-
line=exc_value.text or "",
|
417 |
-
msg=exc_value.msg,
|
418 |
-
)
|
419 |
-
|
420 |
-
stacks.append(stack)
|
421 |
-
append = stack.frames.append
|
422 |
-
|
423 |
-
def get_locals(
|
424 |
-
iter_locals: Iterable[Tuple[str, object]]
|
425 |
-
) -> Iterable[Tuple[str, object]]:
|
426 |
-
"""Extract locals from an iterator of key pairs."""
|
427 |
-
if not (locals_hide_dunder or locals_hide_sunder):
|
428 |
-
yield from iter_locals
|
429 |
-
return
|
430 |
-
for key, value in iter_locals:
|
431 |
-
if locals_hide_dunder and key.startswith("__"):
|
432 |
-
continue
|
433 |
-
if locals_hide_sunder and key.startswith("_"):
|
434 |
-
continue
|
435 |
-
yield key, value
|
436 |
-
|
437 |
-
for frame_summary, line_no in walk_tb(traceback):
|
438 |
-
filename = frame_summary.f_code.co_filename
|
439 |
-
if filename and not filename.startswith("<"):
|
440 |
-
if not os.path.isabs(filename):
|
441 |
-
filename = os.path.join(_IMPORT_CWD, filename)
|
442 |
-
if frame_summary.f_locals.get("_rich_traceback_omit", False):
|
443 |
-
continue
|
444 |
-
|
445 |
-
frame = Frame(
|
446 |
-
filename=filename or "?",
|
447 |
-
lineno=line_no,
|
448 |
-
name=frame_summary.f_code.co_name,
|
449 |
-
locals={
|
450 |
-
key: pretty.traverse(
|
451 |
-
value,
|
452 |
-
max_length=locals_max_length,
|
453 |
-
max_string=locals_max_string,
|
454 |
-
)
|
455 |
-
for key, value in get_locals(frame_summary.f_locals.items())
|
456 |
-
}
|
457 |
-
if show_locals
|
458 |
-
else None,
|
459 |
-
)
|
460 |
-
append(frame)
|
461 |
-
if frame_summary.f_locals.get("_rich_traceback_guard", False):
|
462 |
-
del stack.frames[:]
|
463 |
-
|
464 |
-
cause = getattr(exc_value, "__cause__", None)
|
465 |
-
if cause:
|
466 |
-
exc_type = cause.__class__
|
467 |
-
exc_value = cause
|
468 |
-
# __traceback__ can be None, e.g. for exceptions raised by the
|
469 |
-
# 'multiprocessing' module
|
470 |
-
traceback = cause.__traceback__
|
471 |
-
is_cause = True
|
472 |
-
continue
|
473 |
-
|
474 |
-
cause = exc_value.__context__
|
475 |
-
if cause and not getattr(exc_value, "__suppress_context__", False):
|
476 |
-
exc_type = cause.__class__
|
477 |
-
exc_value = cause
|
478 |
-
traceback = cause.__traceback__
|
479 |
-
is_cause = False
|
480 |
-
continue
|
481 |
-
# No cover, code is reached but coverage doesn't recognize it.
|
482 |
-
break # pragma: no cover
|
483 |
-
|
484 |
-
trace = Trace(stacks=stacks)
|
485 |
-
return trace
|
486 |
-
|
487 |
-
def __rich_console__(
|
488 |
-
self, console: Console, options: ConsoleOptions
|
489 |
-
) -> RenderResult:
|
490 |
-
theme = self.theme
|
491 |
-
background_style = theme.get_background_style()
|
492 |
-
token_style = theme.get_style_for_token
|
493 |
-
|
494 |
-
traceback_theme = Theme(
|
495 |
-
{
|
496 |
-
"pretty": token_style(TextToken),
|
497 |
-
"pygments.text": token_style(Token),
|
498 |
-
"pygments.string": token_style(String),
|
499 |
-
"pygments.function": token_style(Name.Function),
|
500 |
-
"pygments.number": token_style(Number),
|
501 |
-
"repr.indent": token_style(Comment) + Style(dim=True),
|
502 |
-
"repr.str": token_style(String),
|
503 |
-
"repr.brace": token_style(TextToken) + Style(bold=True),
|
504 |
-
"repr.number": token_style(Number),
|
505 |
-
"repr.bool_true": token_style(Keyword.Constant),
|
506 |
-
"repr.bool_false": token_style(Keyword.Constant),
|
507 |
-
"repr.none": token_style(Keyword.Constant),
|
508 |
-
"scope.border": token_style(String.Delimiter),
|
509 |
-
"scope.equals": token_style(Operator),
|
510 |
-
"scope.key": token_style(Name),
|
511 |
-
"scope.key.special": token_style(Name.Constant) + Style(dim=True),
|
512 |
-
},
|
513 |
-
inherit=False,
|
514 |
-
)
|
515 |
-
|
516 |
-
highlighter = ReprHighlighter()
|
517 |
-
for last, stack in loop_last(reversed(self.trace.stacks)):
|
518 |
-
if stack.frames:
|
519 |
-
stack_renderable: ConsoleRenderable = Panel(
|
520 |
-
self._render_stack(stack),
|
521 |
-
title="[traceback.title]Traceback [dim](most recent call last)",
|
522 |
-
style=background_style,
|
523 |
-
border_style="traceback.border",
|
524 |
-
expand=True,
|
525 |
-
padding=(0, 1),
|
526 |
-
)
|
527 |
-
stack_renderable = Constrain(stack_renderable, self.width)
|
528 |
-
with console.use_theme(traceback_theme):
|
529 |
-
yield stack_renderable
|
530 |
-
if stack.syntax_error is not None:
|
531 |
-
with console.use_theme(traceback_theme):
|
532 |
-
yield Constrain(
|
533 |
-
Panel(
|
534 |
-
self._render_syntax_error(stack.syntax_error),
|
535 |
-
style=background_style,
|
536 |
-
border_style="traceback.border.syntax_error",
|
537 |
-
expand=True,
|
538 |
-
padding=(0, 1),
|
539 |
-
width=self.width,
|
540 |
-
),
|
541 |
-
self.width,
|
542 |
-
)
|
543 |
-
yield Text.assemble(
|
544 |
-
(f"{stack.exc_type}: ", "traceback.exc_type"),
|
545 |
-
highlighter(stack.syntax_error.msg),
|
546 |
-
)
|
547 |
-
elif stack.exc_value:
|
548 |
-
yield Text.assemble(
|
549 |
-
(f"{stack.exc_type}: ", "traceback.exc_type"),
|
550 |
-
highlighter(stack.exc_value),
|
551 |
-
)
|
552 |
-
else:
|
553 |
-
yield Text.assemble((f"{stack.exc_type}", "traceback.exc_type"))
|
554 |
-
|
555 |
-
if not last:
|
556 |
-
if stack.is_cause:
|
557 |
-
yield Text.from_markup(
|
558 |
-
"\n[i]The above exception was the direct cause of the following exception:\n",
|
559 |
-
)
|
560 |
-
else:
|
561 |
-
yield Text.from_markup(
|
562 |
-
"\n[i]During handling of the above exception, another exception occurred:\n",
|
563 |
-
)
|
564 |
-
|
565 |
-
@group()
|
566 |
-
def _render_syntax_error(self, syntax_error: _SyntaxError) -> RenderResult:
|
567 |
-
highlighter = ReprHighlighter()
|
568 |
-
path_highlighter = PathHighlighter()
|
569 |
-
if syntax_error.filename != "<stdin>":
|
570 |
-
if os.path.exists(syntax_error.filename):
|
571 |
-
text = Text.assemble(
|
572 |
-
(f" {syntax_error.filename}", "pygments.string"),
|
573 |
-
(":", "pygments.text"),
|
574 |
-
(str(syntax_error.lineno), "pygments.number"),
|
575 |
-
style="pygments.text",
|
576 |
-
)
|
577 |
-
yield path_highlighter(text)
|
578 |
-
syntax_error_text = highlighter(syntax_error.line.rstrip())
|
579 |
-
syntax_error_text.no_wrap = True
|
580 |
-
offset = min(syntax_error.offset - 1, len(syntax_error_text))
|
581 |
-
syntax_error_text.stylize("bold underline", offset, offset)
|
582 |
-
syntax_error_text += Text.from_markup(
|
583 |
-
"\n" + " " * offset + "[traceback.offset]▲[/]",
|
584 |
-
style="pygments.text",
|
585 |
-
)
|
586 |
-
yield syntax_error_text
|
587 |
-
|
588 |
-
@classmethod
|
589 |
-
def _guess_lexer(cls, filename: str, code: str) -> str:
|
590 |
-
ext = os.path.splitext(filename)[-1]
|
591 |
-
if not ext:
|
592 |
-
# No extension, look at first line to see if it is a hashbang
|
593 |
-
# Note, this is an educated guess and not a guarantee
|
594 |
-
# If it fails, the only downside is that the code is highlighted strangely
|
595 |
-
new_line_index = code.index("\n")
|
596 |
-
first_line = code[:new_line_index] if new_line_index != -1 else code
|
597 |
-
if first_line.startswith("#!") and "python" in first_line.lower():
|
598 |
-
return "python"
|
599 |
-
try:
|
600 |
-
return cls.LEXERS.get(ext) or guess_lexer_for_filename(filename, code).name
|
601 |
-
except ClassNotFound:
|
602 |
-
return "text"
|
603 |
-
|
604 |
-
@group()
|
605 |
-
def _render_stack(self, stack: Stack) -> RenderResult:
|
606 |
-
path_highlighter = PathHighlighter()
|
607 |
-
theme = self.theme
|
608 |
-
|
609 |
-
def read_code(filename: str) -> str:
|
610 |
-
"""Read files, and cache results on filename.
|
611 |
-
|
612 |
-
Args:
|
613 |
-
filename (str): Filename to read
|
614 |
-
|
615 |
-
Returns:
|
616 |
-
str: Contents of file
|
617 |
-
"""
|
618 |
-
return "".join(linecache.getlines(filename))
|
619 |
-
|
620 |
-
def render_locals(frame: Frame) -> Iterable[ConsoleRenderable]:
|
621 |
-
if frame.locals:
|
622 |
-
yield render_scope(
|
623 |
-
frame.locals,
|
624 |
-
title="locals",
|
625 |
-
indent_guides=self.indent_guides,
|
626 |
-
max_length=self.locals_max_length,
|
627 |
-
max_string=self.locals_max_string,
|
628 |
-
)
|
629 |
-
|
630 |
-
exclude_frames: Optional[range] = None
|
631 |
-
if self.max_frames != 0:
|
632 |
-
exclude_frames = range(
|
633 |
-
self.max_frames // 2,
|
634 |
-
len(stack.frames) - self.max_frames // 2,
|
635 |
-
)
|
636 |
-
|
637 |
-
excluded = False
|
638 |
-
for frame_index, frame in enumerate(stack.frames):
|
639 |
-
|
640 |
-
if exclude_frames and frame_index in exclude_frames:
|
641 |
-
excluded = True
|
642 |
-
continue
|
643 |
-
|
644 |
-
if excluded:
|
645 |
-
assert exclude_frames is not None
|
646 |
-
yield Text(
|
647 |
-
f"\n... {len(exclude_frames)} frames hidden ...",
|
648 |
-
justify="center",
|
649 |
-
style="traceback.error",
|
650 |
-
)
|
651 |
-
excluded = False
|
652 |
-
|
653 |
-
first = frame_index == 0
|
654 |
-
frame_filename = frame.filename
|
655 |
-
suppressed = any(frame_filename.startswith(path) for path in self.suppress)
|
656 |
-
|
657 |
-
if os.path.exists(frame.filename):
|
658 |
-
text = Text.assemble(
|
659 |
-
path_highlighter(Text(frame.filename, style="pygments.string")),
|
660 |
-
(":", "pygments.text"),
|
661 |
-
(str(frame.lineno), "pygments.number"),
|
662 |
-
" in ",
|
663 |
-
(frame.name, "pygments.function"),
|
664 |
-
style="pygments.text",
|
665 |
-
)
|
666 |
-
else:
|
667 |
-
text = Text.assemble(
|
668 |
-
"in ",
|
669 |
-
(frame.name, "pygments.function"),
|
670 |
-
(":", "pygments.text"),
|
671 |
-
(str(frame.lineno), "pygments.number"),
|
672 |
-
style="pygments.text",
|
673 |
-
)
|
674 |
-
if not frame.filename.startswith("<") and not first:
|
675 |
-
yield ""
|
676 |
-
yield text
|
677 |
-
if frame.filename.startswith("<"):
|
678 |
-
yield from render_locals(frame)
|
679 |
-
continue
|
680 |
-
if not suppressed:
|
681 |
-
try:
|
682 |
-
code = read_code(frame.filename)
|
683 |
-
if not code:
|
684 |
-
# code may be an empty string if the file doesn't exist, OR
|
685 |
-
# if the traceback filename is generated dynamically
|
686 |
-
continue
|
687 |
-
lexer_name = self._guess_lexer(frame.filename, code)
|
688 |
-
syntax = Syntax(
|
689 |
-
code,
|
690 |
-
lexer_name,
|
691 |
-
theme=theme,
|
692 |
-
line_numbers=True,
|
693 |
-
line_range=(
|
694 |
-
frame.lineno - self.extra_lines,
|
695 |
-
frame.lineno + self.extra_lines,
|
696 |
-
),
|
697 |
-
highlight_lines={frame.lineno},
|
698 |
-
word_wrap=self.word_wrap,
|
699 |
-
code_width=88,
|
700 |
-
indent_guides=self.indent_guides,
|
701 |
-
dedent=False,
|
702 |
-
)
|
703 |
-
yield ""
|
704 |
-
except Exception as error:
|
705 |
-
yield Text.assemble(
|
706 |
-
(f"\n{error}", "traceback.error"),
|
707 |
-
)
|
708 |
-
else:
|
709 |
-
yield (
|
710 |
-
Columns(
|
711 |
-
[
|
712 |
-
syntax,
|
713 |
-
*render_locals(frame),
|
714 |
-
],
|
715 |
-
padding=1,
|
716 |
-
)
|
717 |
-
if frame.locals
|
718 |
-
else syntax
|
719 |
-
)
|
720 |
-
|
721 |
-
|
722 |
-
if __name__ == "__main__": # pragma: no cover
|
723 |
-
|
724 |
-
from .console import Console
|
725 |
-
|
726 |
-
console = Console()
|
727 |
-
import sys
|
728 |
-
|
729 |
-
def bar(a: Any) -> None: # 这是对亚洲语言支持的测试。面对模棱两可的想法,拒绝猜测的诱惑
|
730 |
-
one = 1
|
731 |
-
print(one / a)
|
732 |
-
|
733 |
-
def foo(a: Any) -> None:
|
734 |
-
_rich_traceback_guard = True
|
735 |
-
zed = {
|
736 |
-
"characters": {
|
737 |
-
"Paul Atreides",
|
738 |
-
"Vladimir Harkonnen",
|
739 |
-
"Thufir Hawat",
|
740 |
-
"Duncan Idaho",
|
741 |
-
},
|
742 |
-
"atomic_types": (None, False, True),
|
743 |
-
}
|
744 |
-
bar(a)
|
745 |
-
|
746 |
-
def error() -> None:
|
747 |
-
|
748 |
-
try:
|
749 |
-
try:
|
750 |
-
foo(0)
|
751 |
-
except:
|
752 |
-
slfkjsldkfj # type: ignore[name-defined]
|
753 |
-
except:
|
754 |
-
console.print_exception(show_locals=True)
|
755 |
-
|
756 |
-
error()
|
spaces/Atualli/yoloxTeste/yoloxdetect2/helpers.py
DELETED
@@ -1,111 +0,0 @@
from yoloxdetect2.utils.downloads import attempt_download_from_hub, attempt_download
from yolox.data.datasets import COCO_CLASSES
from yolox.data.data_augment import preproc
from yolox.utils import postprocess, vis
import importlib
import torch
import cv2
import os
import numpy


class YoloxDetector2:
    def __init__(
        self,
        model_path: str,
        config_path: str,
        device: str = "cpu",
        hf_model: bool = False,
    ):
        self.device = device
        self.config_path = config_path
        self.classes = COCO_CLASSES
        self.conf = 0.3
        self.iou = 0.45
        self.show = False
        self.save = True
        self.torchyolo = False

        if self.save:
            self.save_path = "output/result.jpg"

        if hf_model:
            self.model_path = attempt_download_from_hub(model_path)
        else:
            self.model_path = attempt_download(model_path)

        self.load_model()

    def load_model(self):
        current_exp = importlib.import_module(self.config_path)
        exp = current_exp.Exp()

        model = exp.get_model()
        model.to(self.device)
        model.eval()
        ckpt = torch.load(self.model_path, map_location=self.device)
        model.load_state_dict(ckpt["model"])
        self.model = model

    def predict(self, image_path, image_size):
        # Convert the incoming PIL image to an OpenCV BGR array
        image = cv2.cvtColor(numpy.array(image_path), cv2.COLOR_RGB2BGR)
        # Fall back to a 640px square input when no size is given
        input_size = image_size if image_size is not None else 640
        ratio = min(input_size / image.shape[0], input_size / image.shape[1])
        img, _ = preproc(image, input_size=(input_size, input_size))
        img = torch.from_numpy(img).to(self.device).unsqueeze(0).float()

        prediction_result = self.model(img)
        original_predictions = postprocess(
            prediction=prediction_result,
            num_classes=len(COCO_CLASSES),
            conf_thre=self.conf,
            nms_thre=self.iou,
        )[0]

        if original_predictions is None:
            return None
        output = original_predictions.cpu()
        bboxes = output[:, 0:4]
        bboxes /= ratio  # scale boxes back to the original image
        cls = output[:, 6]
        scores = output[:, 4] * output[:, 5]
        if self.torchyolo is False:
            vis_res = vis(
                image,
                bboxes,
                scores,
                cls,
                self.conf,
                COCO_CLASSES,
            )
            if self.show:
                cv2.imshow("result", vis_res)
                cv2.waitKey(0)
                cv2.destroyAllWindows()
            elif self.save:
                save_dir = self.save_path[: self.save_path.rfind("/")]
                if not os.path.exists(save_dir):
                    os.makedirs(save_dir)
                cv2.imwrite(self.save_path, vis_res)
                return self.save_path
            else:
                return vis_res
        else:
            object_predictions_list = [bboxes, scores, cls, COCO_CLASSES]
            return object_predictions_list
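The helper above computes a single resize ratio to letterbox the image into a square network input, then divides the predicted boxes by that ratio to map them back to the original image. A plain-Python sketch of that geometry (the helper names here are illustrative, not part of the yolox API):

```python
def resize_ratio(height: int, width: int, input_size: int) -> float:
    """Scale factor that fits (height, width) inside an input_size square."""
    return min(input_size / height, input_size / width)


def boxes_to_original(boxes, ratio):
    """Scale (x1, y1, x2, y2) boxes from network space back to image space."""
    return [[coord / ratio for coord in box] for box in boxes]


# A 720x1280 image squeezed into a 640x640 input shrinks by the tighter axis:
ratio = resize_ratio(720, 1280, 640)  # min(640/720, 640/1280) = 0.5
restored = boxes_to_original([[100, 50, 200, 150]], ratio)
print(ratio, restored)  # 0.5 [[200.0, 100.0, 400.0, 300.0]]
```

Using `min` of the two per-axis scales guarantees the whole image fits inside the square input, which is why the inverse mapping is a single division.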
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/backbone/backbone.py
DELETED
@@ -1,53 +0,0 @@
# Copyright (c) Facebook, Inc. and its affiliates.
from abc import ABCMeta, abstractmethod
import torch.nn as nn

from detectron2.layers import ShapeSpec

__all__ = ["Backbone"]


class Backbone(nn.Module, metaclass=ABCMeta):
    """
    Abstract base class for network backbones.
    """

    def __init__(self):
        """
        The `__init__` method of any subclass can specify its own set of arguments.
        """
        super().__init__()

    @abstractmethod
    def forward(self):
        """
        Subclasses must override this method, but adhere to the same return type.

        Returns:
            dict[str->Tensor]: mapping from feature name (e.g., "res2") to tensor
        """
        pass

    @property
    def size_divisibility(self) -> int:
        """
        Some backbones require the input height and width to be divisible by a
        specific integer. This is typically true for encoder / decoder type networks
        with lateral connection (e.g., FPN) for which feature maps need to match
        dimension in the "bottom up" and "top down" paths. Set to 0 if no specific
        input size divisibility is required.
        """
        return 0

    def output_shape(self):
        """
        Returns:
            dict[str->ShapeSpec]
        """
        # this is a backward-compatible default
        return {
            name: ShapeSpec(
                channels=self._out_feature_channels[name], stride=self._out_feature_strides[name]
            )
            for name in self._out_features
        }
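The `Backbone` class combines `ABCMeta` with one abstract method and concrete defaults that subclasses inherit. The same pattern can be sketched with only the standard library (torch and detectron2 replaced with plain Python here; names are illustrative):

```python
from abc import ABC, abstractmethod


class BackboneSketch(ABC):
    """Abstract base: subclasses must implement forward, inherit the rest."""

    @abstractmethod
    def forward(self, x):
        """Return a dict mapping feature names to values."""

    @property
    def size_divisibility(self) -> int:
        return 0  # no input-size constraint by default, mirroring Backbone


class ToyBackbone(BackboneSketch):
    def forward(self, x):
        # Two named "feature maps", standing in for res2/res3 tensors
        return {"res2": x * 2, "res3": x * 3}


toy = ToyBackbone()
features = toy.forward(10)
print(features, toy.size_divisibility)  # {'res2': 20, 'res3': 30} 0
```

As with `Backbone`, instantiating the abstract class directly raises `TypeError`, which is how the contract on `forward` is enforced at construction time rather than at call time.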
spaces/Bajr/softly/run.sh
DELETED
@@ -1,7 +0,0 @@
cd source_code
git clone ${GIT_URL} .
cp ../.env .
cp ../greeting.md .
npm install
npm run build
npm start
spaces/Benson/text-generation/Examples/Coche De Carreras De Deriva 2 1.22.0 Apk.md
DELETED
@@ -1,69 +0,0 @@
-
-<h1>CarX Drift Racing 2: A Realistic and Exciting Racing Game for Android</h1>
-<p>If you are a fan of racing games, you may want to check out CarX Drift Racing 2, one of the most realistic and exciting racing games for Android devices. CarX Drift Racing 2 is a sequel to the popular game CarX Drift Racing, which has more than 50 million downloads on Google Play. In this game, you can experience the thrill of drifting, racing, and tuning your own car on various tracks and in various modes. You can also compete with other players online and join clubs to show off your skills.</p>
-<h2>What is CarX Drift Racing 2?</h2>
-<p>CarX Drift Racing 2 is a racing game developed by CarX Technologies, a company that specializes in creating realistic car physics and graphics for games. The game was released in December 2018 and has been updated regularly with new features and improvements. The latest version of the game is 1.22.0, which was released on June 16, 2023.</p>
-<h2>drift racing car 2 1.22.0 apk</h2><br /><p><b><b>Download File</b> ✵ <a href="https://bltlly.com/2v6L3a">https://bltlly.com/2v6L3a</a></b></p><br /><br />
-<h3>Features of CarX Drift Racing 2</h3>
-<h4>Stunning graphics and physics</h4>
-<p>One of the main attractions of CarX Drift Racing 2 is its stunning graphics and physics, which make the game look and feel like a real racing simulator. The game uses advanced 3D models, textures, lighting, shadows, and reflections to create realistic environments and cars. It also uses a sophisticated physics engine that simulates the behavior of different car parts, such as tires, suspension, engine, transmission, brakes, etc. The game also supports high-resolution displays and a 60 FPS mode for smooth gameplay.</p>
-<h4>Multiple game modes and tracks</h4>
-
-<p>The game also has more than 70 tracks to choose from, each with its own layout, scenery, weather, and time of day. You can race in different locations around the world, such as Japan, the USA, Russia, Norway, etc. You can also customize the track settings, such as the number of laps, opponents, traffic, etc.</p>
-<h4>Customizable cars and tuning</h4>
-<p>A third feature of CarX Drift Racing 2 is its customizable car and tuning system, which lets you create your own unique car and optimize its performance. The game has more than 80 cars to choose from, each with its own characteristics, such as speed, acceleration, handling, etc. You can also customize the appearance of your car by changing its color, paint job, decals, wheels, spoilers, etc.</p>
-<p>Beyond appearance, you can also tune your car by adjusting parameters such as engine power, torque, gear ratio, suspension stiffness, camber angle, brake force, etc. You can also use different types of tires, such as slicks, semi-slicks, or street tires, depending on track conditions. You can save your tuning settings and load them for different tracks and modes. Tuning your car can make a big difference in your performance and results.</p>
-<h4>Online multiplayer and clubs</h4>
-<p>A fourth feature of CarX Drift Racing 2 is its online multiplayer and club system, which lets you interact and compete with other players around the world. You can join or create your own club, invite your friends, chat with other members, and take part in club events and tournaments. You can also challenge other players to duels, ghost races, or tandem drifts, and earn rewards and rankings. You can also watch other players' replays and learn from their techniques.</p>
-<h3>How to download and install CarX Drift Racing 2 APK?</h3>
-<h4>Requirements and compatibility</h4>
-<p>To download and install CarX Drift Racing 2 APK, you need an Android device that meets the following requirements:</p>
-<ul>
-
-<li>RAM: 2 GB or more</li>
-<li>Storage: 1.5 GB or more</li>
-<li>Internet connection: required for online features</li>
-</ul>
-<p>The game is compatible with most Android devices, but some models may have issues with graphics or performance. You can check the list of supported devices on the game's official website.</p>
-<h4>Steps to download and install</h4>
-<p>To download and install CarX Drift Racing 2 APK, follow these steps:</p>
-<p></p>
-<ol>
-<li>Go to the game's official website and click the "Download APK" button.</li>
-<li>Allow your device to download files from unknown sources by going to Settings > Security > Unknown Sources.</li>
-<li>Locate the downloaded APK file in your device's file manager and tap it to start the installation process.</li>
-<li>Follow the on-screen instructions and wait for the installation to finish.</li>
-<li>Launch the game and enjoy!</li>
-</ol>
-<h3>Tips and tricks for playing CarX Drift Racing 2</h3>
-<h4>Master the drifting technique</h4>
-<p>The most important skill you need to master in CarX Drift Racing 2 is drifting, the art of sliding your car sideways while maintaining control and speed. Drifting can help you earn more points, speed, and reputation in the game. To drift effectively, practice the following steps:</p>
-<ol>
-<li>Approach a corner at high speed and tap the brake pedal to initiate a drift.</li>
-<li>Steer your car in the direction of the drift and use the throttle to control your car's angle and speed.</li>
-<li>Use the handbrake to adjust your car's position and balance during the drift.</li>
-<li>Exit the drift smoothly by releasing the throttle and steering your car straight.</li>
-</ol>
-<p>You can also use different camera angles, such as cockpit, hood, or chase, to get a better view of your car and the track, and you can use your device's gyroscope to steer by tilting it left or right.</p>
-
-<p>To improve your performance and results in CarX Drift Racing 2, you need to upgrade your car regularly by spending money and reputation points. You can upgrade different aspects of your car, such as the engine, turbo, nitro, transmission, suspension, brakes, tires, etc. Upgrading your car increases its power, speed, handling, stability, etc. You can also unlock new cars by completing certain tasks or missions in Career mode or by buying them with real money.</p>
-<h4>Join a club and compete with others</h4>
-<p>To get the most out of CarX Drift Racing 2, you should join a club and compete with other players online. Joining a club gives you access to exclusive events, tournaments, rewards, and rankings. You can also chat with other club members, share tips and tricks, and challenge them to duels or tandem drifts. You can also create your own club and invite your friends or other players to join. Competing with others can help you improve your skills, earn more money and reputation points, and have more fun.</p>
-<h3>Conclusion</h3>
-<p>CarX Drift Racing 2 is a realistic and exciting racing game for Android devices that lets you experience the thrill of drifting, racing, and tuning your own car on various tracks and in various modes. The game has stunning graphics and physics, multiple game modes and tracks, customizable cars and tuning, online multiplayer and clubs, and many more features that make it one of the best racing games on Google Play. If you are looking for a challenging and fun racing game, download and install CarX Drift Racing 2 APK and enjoy the ride!</p>
-<h3>Frequently asked questions</h3>
-<p>Here are some frequently asked questions about CarX Drift Racing 2:</p>
-<ol>
-<li>How can I get more money and reputation points in CarX Drift Racing 2?</li>
-
-<li>How can I unlock new cars in CarX Drift Racing 2?</li>
-<p>You can unlock new cars by completing certain tasks or missions in Career mode or by buying them with real money. You can also get some cars for free by signing in daily, taking part in special events, or joining clubs.</p>
-<li>How can I change the controls and settings in CarX Drift Racing 2?</li>
-<p>You can change the controls and settings by going to Settings > Controls or Settings > Graphics. You can choose between different control options, such as buttons, tilt, or steering wheel. You can also adjust the graphics quality, sound volume, language, etc.</p>
-<li>How can I contact the developers of CarX Drift Racing 2?</li>
-<p>You can contact the developers of CarX Drift Racing 2 by going to Settings > Support or by visiting their official website, Facebook page, or Instagram account. You can also email them at [email protected].</p>
-<li>What are the minimum requirements for CarX Drift Racing 2?</li>
-<p>The minimum requirements for CarX Drift Racing 2 are Android version 5.0 or higher, 2 GB of RAM or more, 1.5 GB of storage or more, and an Internet connection.</p>
-</ol></p> 64aa2da5cf<br />
-<br />
-<br />
spaces/Benson/text-generation/Examples/Descargar 60 Segundos Reatomized Pc.md
DELETED
@@ -1,161 +0,0 @@
-
-<h1>Download 60 Seconds Reatomized PC: A Guide to Surviving the Nuclear Apocalypse</h1>
-<p>Do you have what it takes to survive a nuclear disaster? Can you scavenge supplies, rescue your family, and stay alive in your fallout shelter? If you are looking for a challenging and hilarious post-apocalyptic adventure game, you should try 60 Seconds Reatomized PC. In this article, we will tell you everything you need to know about this game, including what it is, how to play it, and how to download it on your PC.</p>
-<h2>download 60 seconds reatomized pc</h2><br /><p><b><b>DOWNLOAD</b> ►►►►► <a href="https://bltlly.com/2v6JKp">https://bltlly.com/2v6JKp</a></b></p><br /><br />
-<h2>What is 60 Seconds Reatomized?</h2>
-<p>60 Seconds Reatomized is a remastered edition of the classic atomic adventure game, 60 Seconds!, developed by Robot Gentleman. It was released in July 2019 and features 4K support, updated 2D graphics and hand-drawn 3D textures, a new interactive menu, an improved UI system, a technical upgrade, and of course... new content!</p>
-<h3>A remastered edition of the classic atomic adventure game</h3>
-<p>60 Seconds Reatomized is based on the original game, 60 Seconds!, which was released in May 2015. The premise of the game is simple: you have only 60 seconds left before a nuclear bomb hits your neighborhood. You have to run around your house and collect as many items and family members as possible before heading to your fallout shelter. But that is just the beginning. Once you are in the shelter, you have to manage your resources, make tough decisions, face random events, and maybe survive... or not.</p>
-<h3>Features and gameplay of 60 Seconds Reatomized</h3>
-<p>60 Seconds Reatomized offers many features and game modes that will keep you entertained for hours. Some of them are:</p>
-<p></p>
-<ul>
-<li>New game mode: Survival Challenges - unique, short stories that will test your survival skills.</li>
-<li>New opportunities to escape the wasteland in the form of a story spanning multiple playthroughs.</li>
-
-<li>New sounds, art, and unlockable visual content that will add some color to your fallout shelter.</li>
-<li>New achievements to unlock and brag about.</li>
-</ul>
-<p>The gameplay of 60 Seconds Reatomized is divided into two phases: scavenge and survive. In the scavenge phase, you use the arrow keys or the mouse to control Ted, the protagonist, as he runs around his house grabbing items and people. He can only carry a limited number of items at a time, so you have to choose wisely what to take with you. You also have to avoid obstacles such as furniture or fire that will slow you down or hurt you. You have to reach the shelter before time runs out, or you die.</p>
-<p>In the survival phase, you use the mouse to interact with your shelter and its inhabitants. You have to ration food and water, use items such as the radio or the medkit, read your journal, and make decisions that will affect your fate. You will also encounter random events that will challenge or help you. For example, you may get a knock on the door from a stranger who wants to trade or join you, or you may hear a military broadcast telling you how to escape. However, you have to be careful, as not everything is what it seems. You may also face dangers such as mutant cockroaches, raiders, or radiation sickness. Your goal is to survive until you find a way out, or die trying.</p>
-<h3>System requirements and compatibility of 60 Seconds Reatomized</h3>
-<p>60 Seconds Reatomized is compatible with the Windows, Mac OS, and Linux operating systems. You can play it on your PC or laptop as long as it meets the minimum system requirements. Here are the specs you need to run the game smoothly:</p>
-<table>
-<tr>
-<th>OS</th>
-<th>Processor</th>
-<th>Memory</th>
-<th>Graphics</th>
-<th>Storage</th>
-</tr>
-<tr>
-<td>Windows 7/8/10 64-bit</td>
-<td>Intel Core 2 Duo 2.0+ GHz or an equivalent AMD CPU</td>
-<td>4 GB RAM</td>
-
-<td>4 GB available space</td>
-</tr>
-<tr>
-<td>Mac OS X 10.9+</td>
-<td>Intel Core i5-2430M or better</td>
-<td>4 GB RAM</td>
-<td>NVIDIA GeForce GT 650M, AMD Radeon HD 6970M, or better</td>
-<td>4 GB available space</td>
-</tr>
-<tr>
-<td>Ubuntu 14.04 LTS 64-bit or newer</td>
-<td>Intel Core 2 Duo 2.0+ GHz or an equivalent AMD CPU</td>
-<td>4 GB RAM</td>
-<td>nVidia GeForce 8800 GT or AMD Radeon HD2900 XT (with 512MB VRAM)</td>
-<td>4 GB available space</td>
-</tr>
-</table>
-<h2>How to download 60 Seconds Reatomized PC?</h2>
-<p>If you are interested in playing 60 Seconds Reatomized PC, you have several options for downloading it. You can choose between Steam, BlueStacks, or G2A, depending on your preference and budget. Let's take a look at each option and see how to download the game from it.</p>
-<h3>Download from Steam</h3>
-<p>Steam is the most popular and reliable platform for downloading and playing PC games. It offers many benefits, such as cloud storage, achievements, community features, and more. You can also access Steam on any device with your account and play your games anywhere. Here is how to download 60 Seconds Reatomized PC from Steam:</p>
-<h4>Steps to download from Steam</h4>
-<ol>
-<li>Create a Steam account if you don't already have one. You can do so for free on the <a href="">Steam website</a>.</li>
-<li>Download and install the Steam client on your PC. You can get it from the <a href="">Steam</a> website.</li>
-<li>Launch the Steam client and sign in with your account.</li>
-<li>Search for 60 Seconds Reatomized in the Steam store or click this <a href="">link</a>.</li>
-<li>Add the game to your cart and proceed to checkout.</li>
-<li>Select your payment method and complete the purchase.</li>
-<li>The game will be added to your library and you can start downloading it.</li>
-<li>Once the download is finished, you can launch the game and enjoy!</li>
-</ol>
-<h4>Pros and cons of downloading from Steam</h4>
-
-<ul>
-<li><b>Pros:</b></li>
-<li>You can get the game at a reasonable price, especially during sales or discounts.</li>
-<li>You get access to all updates and patches for the game automatically.</li>
-<li>You can play the game offline once you download it.</li>
-<li>You can enjoy Steam features such as cloud storage, achievements, community, etc.</li>
-<li><b>Cons:</b></li>
-<li>You need a stable Internet connection to download and activate the game.</li>
-<li>You need enough storage space on your PC to install the game.</li>
-<li>You need a compatible PC that meets the game's system requirements.</li>
-<li>You may run into technical issues or bugs with the game or the Steam client.</li>
-</ul>
-<h3>Download from BlueStacks</h3>
-<p>If you want to play the mobile version of 60 Seconds Reatomized on your PC, you can use BlueStacks. BlueStacks is an Android emulator that lets you run Android apps and games on your PC. You can enjoy the same gameplay and graphics as on your phone or tablet, but with a bigger screen and better performance. Here is how to download 60 Seconds Reatomized PC from BlueStacks:</p>
-<h4>Steps to download from BlueStacks</h4>
-<ol>
-<li>Create a BlueStacks account if you don't have one. You can do so for free on the <a href="">BlueStacks</a> website.</li>
-<li>Download and install the BlueStacks app player on your PC. You can get it from the <a href="">BlueStacks</a> website.</li>
-<li>Launch the BlueStacks app player and sign in with your account.</li>
-<li>Search for 60 Seconds Reatomized in the Google Play Store or click this <a href="">link</a>.</li>
-<li>Install the game and wait for it to finish.</li>
-<li>The game will appear on the home screen and you can start playing.</li>
-</ol>
-<h4>Pros and cons of downloading from BlueStacks</h4>
-
-<ul>
-<li><b>Pros:</b></li>
-<li>You can play the game on your PC as well as on your mobile device with the same account.</li>
-<li>You can enjoy the game with a bigger screen, better graphics, and faster performance.</li>
-<li>You can customize the game's settings, controls, and keyboard shortcuts to your liking.</li>
-<li>You can use other Android apps and games on your PC with BlueStacks.</li>
-<li><b>Cons:</b></li>
-<li>You need a stable Internet connection to download and play the game.</li>
-<li>You need enough storage space on your PC to install BlueStacks and the game.</li>
-<li>You need a compatible PC that meets the system requirements of BlueStacks and the game.</li>
-<li>You may run into technical issues or bugs with the game or BlueStacks.</li>
-</ul>
-<h3>Download from G2A</h3>
-<p>If you want to get 60 Seconds Reatomized PC for a cheaper price, you can try G2A. G2A is an online marketplace that sells digital products, such as games, software, gift cards, etc. You can buy and sell products from other users or verified sellers. You can also find discounts, deals, and bundles that will save you money. Here is how to download 60 Seconds Reatomized PC from G2A:</p>
-<h4>Steps to download from G2A</h4>
-<ol>
-<li>Create a G2A account if you don't have one. You can do so for free on the <a href="">G2A</a> website.</li>
-<li>Search for 60 Seconds Reatomized on the G2A marketplace or click this <a href="">link</a>.</li>
-<li>Select the product that suits your needs and budget. You can compare offers from different sellers and check their ratings and reviews.</li>
-<li>Add the product to your cart and proceed to checkout.</li>
-<li>Select your payment method and complete the purchase.</li>
-<li>You will receive an email with a code or a link to redeem your product.</li>
-
-<li>If you receive a link, follow it and download the game directly from there.</li>
-</ol>
-<h4>Pros and cons of downloading from G2A</h4>
-<p>Downloading from G2A has some advantages and disadvantages that you should keep in mind before buying the game. Here are some of them:</p>
-<ul>
-<li><b>Pros:</b></li>
-<li>You can get the game for a much lower price than on other platforms.</li>
-<li>You can find discounts, deals, and bundles that give you more value for your money.</li>
-<li>You can choose between offers from different sellers and find the best one for you.</li>
-<li>You can use various payment methods, such as credit card, PayPal, cryptocurrency, etc.</li>
-<li><b>Cons:</b></li>
-<li>You need a stable Internet connection to download and play the game.</li>
-<li>You need enough storage space on your PC to install the game.</li>
-<li>You need a compatible PC that meets the game's system requirements.</li>
-<li>You may run into risks or scams with some sellers or products. Be careful and check their ratings and reviews before buying anything.</li>
-</ul>
-<h2>Conclusion</h2>
-<p>60 Seconds Reatomized PC is a fun and challenging game that will test your survival skills in a nuclear apocalypse. You have to scavenge supplies, rescue your family, and stay alive in your fallout shelter. You can download the game from Steam, BlueStacks, or G2A, depending on your preference and budget. Each option has pros and cons that you should weigh before making a purchase. We hope this article has helped you learn more about 60 Seconds Reatomized PC and how to download it. If you have any questions or comments, feel free to leave a comment below. Happy surviving!</p>
-<h2>FAQs</h2>
-<p>Here are some frequently asked questions about 60 Seconds Reatomized PC and their answers:</p>
-<ol>
-<li><b>Is 60 Seconds Reatomized PC free?</b></li>
-
-<li><b>Is 60 Seconds Reatomized PC multiplayer?</b></li>
-<p>No, 60 Seconds Reatomized PC is not multiplayer. It is a single-player game that you can play offline or online.</p>
-<li><b>Is 60 Seconds Reatomized PC different from 60 Seconds!?</b></li>
-<p>Yes, 60 Seconds Reatomized PC is different from 60 Seconds!. It is a remastered edition of the original game that features improved graphics, new content, and more.</p>
-<li><b>How long is 60 Seconds Reatomized PC?</b></li>
-<p>The length of 60 Seconds Reatomized PC depends on your choices and luck. A single playthrough can last anywhere from a few minutes to a few hours. You can replay the game multiple times and experience different outcomes and scenarios.</p>
-<li><b>What are the best tips and tricks for 60 Seconds Reatomized PC?</b></li>
-<p>Some of the best tips and tricks for 60 Seconds Reatomized PC are:</p>
-<ul>
-<li>Plan ahead and memorize the layout of your house before the scavenge phase.</li>
-<li>Prioritize items that are essential for survival, such as food, water, the radio, the medkit, etc.</li>
-<li>Don't forget to grab your family members and pets. They will help you in the shelter and provide company.</li>
-<li>Be careful with your decisions and actions in the shelter. They will have consequences and affect your chances of survival.</li>
-<li>Use your items wisely and sparingly. You never know when you will need them.</li>
-<li>Pay attention to radio broadcasts and other clues. They will give you hints on how to escape or survive.</li>
-<li>Don't trust everyone who knocks on your door. Some of them may be friendly, but some of them may be dangerous.</li>
-<li>Have fun and enjoy the game's humor and absurdity.</li>
-</ul>
-</ol></p> 64aa2da5cf<br />
-<br />
-<br />
spaces/BetterAPI/BetterChat/src/lib/types/Settings.ts
DELETED
@@ -1,13 +0,0 @@
-import type { Timestamps } from "./Timestamps";
-
-export interface Settings extends Timestamps {
-  sessionId: string;
-
-  /**
-   * Note: Only conversations with this setting explicitly set to true should be shared.
-   *
-   * This setting is explicitly set to true when users accept the ethics modal.
-   * */
-  shareConversationsWithModelAuthors: boolean;
-  ethicsModalAcceptedAt: Date | null;
-}
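The deleted `Settings` interface describes a per-session settings document with a sharing flag and an ethics-modal timestamp. A rough Python equivalent of the shape and the sharing rule (the `TypedDict`, the `may_share` helper, and the sample values are assumptions for illustration; the real app enforces this in TypeScript):

```python
from datetime import datetime
from typing import Optional, TypedDict


class Settings(TypedDict):
    sessionId: str
    shareConversationsWithModelAuthors: bool
    ethicsModalAcceptedAt: Optional[datetime]


def may_share(settings: Settings) -> bool:
    # Only conversations with the flag explicitly set to true should be shared.
    return settings["shareConversationsWithModelAuthors"] is True


accepted: Settings = {
    "sessionId": "abc123",  # hypothetical session id
    "shareConversationsWithModelAuthors": True,
    "ethicsModalAcceptedAt": datetime(2023, 5, 1),
}
declined: Settings = {
    "sessionId": "def456",  # hypothetical session id
    "shareConversationsWithModelAuthors": False,
    "ethicsModalAcceptedAt": None,
}
```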
spaces/Big-Web/MMSD/README.md
DELETED
@@ -1,12 +0,0 @@
----
-title: MMSD
-emoji: 😻
-colorFrom: pink
-colorTo: gray
-sdk: gradio
-sdk_version: 3.24.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
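The README above is a Hugging Face Space card: the YAML front matter between the `---` markers tells the Hub which SDK and entry file to run. A small sketch that extracts those keys without a YAML dependency (the parser is a simplification that only handles flat `key: value` lines):

```python
readme = """---
title: MMSD
sdk: gradio
sdk_version: 3.24.1
app_file: app.py
pinned: false
---
Check out the configuration reference."""


def front_matter(text: str) -> dict:
    # Take the lines between the first pair of '---' markers.
    body = text.split("---")[1]
    pairs = {}
    for line in body.strip().splitlines():
        key, _, value = line.partition(":")
        pairs[key.strip()] = value.strip()
    return pairs


config = front_matter(readme)
```

For real READMEs a proper YAML parser (e.g. `yaml.safe_load`) is the safer choice; this sketch just shows where the Hub's configuration lives.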
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/commands/freeze.py
DELETED
@@ -1,97 +0,0 @@
-import sys
-from optparse import Values
-from typing import List
-
-from pip._internal.cli import cmdoptions
-from pip._internal.cli.base_command import Command
-from pip._internal.cli.status_codes import SUCCESS
-from pip._internal.operations.freeze import freeze
-from pip._internal.utils.compat import stdlib_pkgs
-
-DEV_PKGS = {"pip", "setuptools", "distribute", "wheel"}
-
-
-class FreezeCommand(Command):
-    """
-    Output installed packages in requirements format.
-
-    packages are listed in a case-insensitive sorted order.
-    """
-
-    usage = """
-      %prog [options]"""
-    log_streams = ("ext://sys.stderr", "ext://sys.stderr")
-
-    def add_options(self) -> None:
-        self.cmd_opts.add_option(
-            "-r",
-            "--requirement",
-            dest="requirements",
-            action="append",
-            default=[],
-            metavar="file",
-            help=(
-                "Use the order in the given requirements file and its "
-                "comments when generating output. This option can be "
-                "used multiple times."
-            ),
-        )
-        self.cmd_opts.add_option(
-            "-l",
-            "--local",
-            dest="local",
-            action="store_true",
-            default=False,
-            help=(
-                "If in a virtualenv that has global access, do not output "
-                "globally-installed packages."
-            ),
-        )
-        self.cmd_opts.add_option(
-            "--user",
-            dest="user",
-            action="store_true",
-            default=False,
-            help="Only output packages installed in user-site.",
-        )
-        self.cmd_opts.add_option(cmdoptions.list_path())
-        self.cmd_opts.add_option(
-            "--all",
-            dest="freeze_all",
-            action="store_true",
-            help=(
-                "Do not skip these packages in the output:"
-                " {}".format(", ".join(DEV_PKGS))
-            ),
-        )
-        self.cmd_opts.add_option(
-            "--exclude-editable",
-            dest="exclude_editable",
-            action="store_true",
-            help="Exclude editable package from output.",
-        )
-        self.cmd_opts.add_option(cmdoptions.list_exclude())
-
-        self.parser.insert_option_group(0, self.cmd_opts)
-
-    def run(self, options: Values, args: List[str]) -> int:
-        skip = set(stdlib_pkgs)
-        if not options.freeze_all:
-            skip.update(DEV_PKGS)
-
-        if options.excludes:
-            skip.update(options.excludes)
-
-        cmdoptions.check_list_path_option(options)
-
-        for line in freeze(
-            requirement=options.requirements,
-            local_only=options.local,
-            user_only=options.user,
-            paths=options.path,
-            isolated=options.isolated_mode,
-            skip=skip,
-            exclude_editable=options.exclude_editable,
-        ):
-            sys.stdout.write(line + "\n")
-        return SUCCESS
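The interesting part of `FreezeCommand.run` is how the skip set is assembled: stdlib packages are always skipped, the `DEV_PKGS` set is skipped unless `--all` is given, and `--exclude` entries are added on top. A standalone sketch of that set arithmetic (names mirror the command above, and `STDLIB_PKGS` here is an abbreviated stand-in, not pip's real `stdlib_pkgs`):

```python
from typing import Iterable, Optional, Set

STDLIB_PKGS = {"python", "wsgiref", "argparse"}  # abbreviated stand-in for pip's stdlib_pkgs
DEV_PKGS = {"pip", "setuptools", "distribute", "wheel"}


def compute_skip(freeze_all: bool, excludes: Optional[Iterable[str]] = None) -> Set[str]:
    skip = set(STDLIB_PKGS)   # stdlib packages are always skipped
    if not freeze_all:        # --all keeps the dev packages in the output
        skip.update(DEV_PKGS)
    if excludes:              # --exclude entries are skipped on top
        skip.update(excludes)
    return skip


default_skip = compute_skip(freeze_all=False)
all_skip = compute_skip(freeze_all=True, excludes=["numpy"])
```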
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/operations/install/__init__.py
DELETED
@@ -1,2 +0,0 @@
-"""For modules related to installing packages.
-"""
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/data/catalog.py
DELETED
@@ -1,221 +0,0 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
import copy
import logging
import types
from typing import List

from detectron2.utils.logger import log_first_n

__all__ = ["DatasetCatalog", "MetadataCatalog"]


class DatasetCatalog(object):
    """
    A catalog that stores information about the datasets and how to obtain them.

    It contains a mapping from strings
    (which are names that identify a dataset, e.g. "coco_2014_train")
    to a function which parses the dataset and returns the samples in the
    format of `list[dict]`.

    The returned dicts should be in Detectron2 Dataset format (See DATASETS.md for details)
    if used with the data loader functionalities in `data/build.py,data/detection_transform.py`.

    The purpose of having this catalog is to make it easy to choose
    different datasets, by just using the strings in the config.
    """

    _REGISTERED = {}

    @staticmethod
    def register(name, func):
        """
        Args:
            name (str): the name that identifies a dataset, e.g. "coco_2014_train".
            func (callable): a callable which takes no arguments and returns a list of dicts.
        """
        assert callable(func), "You must register a function with `DatasetCatalog.register`!"
        assert name not in DatasetCatalog._REGISTERED, "Dataset '{}' is already registered!".format(
            name
        )
        DatasetCatalog._REGISTERED[name] = func

    @staticmethod
    def get(name):
        """
        Call the registered function and return its results.

        Args:
            name (str): the name that identifies a dataset, e.g. "coco_2014_train".

        Returns:
            list[dict]: dataset annotations.
        """
        try:
            f = DatasetCatalog._REGISTERED[name]
        except KeyError:
            raise KeyError(
                "Dataset '{}' is not registered! Available datasets are: {}".format(
                    name, ", ".join(DatasetCatalog._REGISTERED.keys())
                )
            )
        return f()

    @staticmethod
    def list() -> List[str]:
        """
        List all registered datasets.

        Returns:
            list[str]
        """
        return list(DatasetCatalog._REGISTERED.keys())

    @staticmethod
    def clear():
        """
        Remove all registered datasets.
        """
        DatasetCatalog._REGISTERED.clear()


class Metadata(types.SimpleNamespace):
    """
    A class that supports simple attribute setters/getters.
    It is intended for storing metadata of a dataset and making it accessible globally.

    Examples:

    .. code-block:: python

        # somewhere when you load the data:
        MetadataCatalog.get("mydataset").thing_classes = ["person", "dog"]

        # somewhere when you print statistics or visualize:
        classes = MetadataCatalog.get("mydataset").thing_classes
    """

    # the name of the dataset
    # set default to N/A so that `self.name` in the errors will not trigger getattr again
    name: str = "N/A"

    _RENAMED = {
        "class_names": "thing_classes",
        "dataset_id_to_contiguous_id": "thing_dataset_id_to_contiguous_id",
        "stuff_class_names": "stuff_classes",
    }

    def __getattr__(self, key):
        if key in self._RENAMED:
            log_first_n(
                logging.WARNING,
                "Metadata '{}' was renamed to '{}'!".format(key, self._RENAMED[key]),
                n=10,
            )
            return getattr(self, self._RENAMED[key])

        raise AttributeError(
            "Attribute '{}' does not exist in the metadata of '{}'. Available keys are {}.".format(
                key, self.name, str(self.__dict__.keys())
            )
        )

    def __setattr__(self, key, val):
        if key in self._RENAMED:
            log_first_n(
                logging.WARNING,
                "Metadata '{}' was renamed to '{}'!".format(key, self._RENAMED[key]),
                n=10,
            )
            setattr(self, self._RENAMED[key], val)

        # Ensure that metadata of the same name stays consistent
        try:
            oldval = getattr(self, key)
            assert oldval == val, (
                "Attribute '{}' in the metadata of '{}' cannot be set "
                "to a different value!\n{} != {}".format(key, self.name, oldval, val)
            )
        except AttributeError:
            super().__setattr__(key, val)

    def as_dict(self):
        """
        Returns all the metadata as a dict.
        Note that modifications to the returned dict will not reflect on the Metadata object.
        """
        return copy.copy(self.__dict__)

    def set(self, **kwargs):
        """
        Set multiple metadata with kwargs.
        """
        for k, v in kwargs.items():
            setattr(self, k, v)
        return self

    def get(self, key, default=None):
        """
        Access an attribute and return its value if it exists.
        Otherwise return default.
        """
        try:
            return getattr(self, key)
        except AttributeError:
            return default


class MetadataCatalog:
    """
    MetadataCatalog provides access to "Metadata" of a given dataset.

    The metadata associated with a certain name is a singleton: once created,
    the metadata will stay alive and will be returned by future calls to `get(name)`.

    It's like global variables, so don't abuse it.
    It's meant for storing knowledge that's constant and shared across the execution
    of the program, e.g.: the class names in COCO.
    """

    _NAME_TO_META = {}

    @staticmethod
    def get(name):
        """
        Args:
            name (str): name of a dataset (e.g. coco_2014_train).

        Returns:
            Metadata: The :class:`Metadata` instance associated with this name,
            or create an empty one if none is available.
        """
        assert len(name)
        if name in MetadataCatalog._NAME_TO_META:
            ret = MetadataCatalog._NAME_TO_META[name]
            # TODO this is for the BC breaking change in D15247032.
            # Remove this in the future.
            if hasattr(ret, "dataset_name"):
                logger = logging.getLogger()
                logger.warning(
                    """
The 'dataset_name' key in metadata is no longer used for
sharing metadata among splits after D15247032! Add
metadata to each split (now called dataset) separately!
"""
                )
                parent_meta = MetadataCatalog.get(ret.dataset_name).as_dict()
                ret.set(**parent_meta)
            return ret
        else:
            m = MetadataCatalog._NAME_TO_META[name] = Metadata(name=name)
            return m

    @staticmethod
    def list():
        """
        List all registered metadata.

        Returns:
            list[str]: keys (names of datasets) of all registered metadata
        """
        return list(MetadataCatalog._NAME_TO_META.keys())
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/roi_align_rotated.py
DELETED
@@ -1,88 +0,0 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
from torch import nn
from torch.autograd import Function
from torch.autograd.function import once_differentiable
from torch.nn.modules.utils import _pair

from detectron2 import _C


class _ROIAlignRotated(Function):
    @staticmethod
    def forward(ctx, input, roi, output_size, spatial_scale, sampling_ratio):
        ctx.save_for_backward(roi)
        ctx.output_size = _pair(output_size)
        ctx.spatial_scale = spatial_scale
        ctx.sampling_ratio = sampling_ratio
        ctx.input_shape = input.size()
        output = _C.roi_align_rotated_forward(
            input, roi, spatial_scale, output_size[0], output_size[1], sampling_ratio
        )
        return output

    @staticmethod
    @once_differentiable
    def backward(ctx, grad_output):
        rois, = ctx.saved_tensors
        output_size = ctx.output_size
        spatial_scale = ctx.spatial_scale
        sampling_ratio = ctx.sampling_ratio
        bs, ch, h, w = ctx.input_shape
        grad_input = _C.roi_align_rotated_backward(
            grad_output,
            rois,
            spatial_scale,
            output_size[0],
            output_size[1],
            bs,
            ch,
            h,
            w,
            sampling_ratio,
        )
        return grad_input, None, None, None, None, None


roi_align_rotated = _ROIAlignRotated.apply


class ROIAlignRotated(nn.Module):
    def __init__(self, output_size, spatial_scale, sampling_ratio):
        """
        Args:
            output_size (tuple): h, w
            spatial_scale (float): scale the input boxes by this number
            sampling_ratio (int): number of input samples to take for each output
                sample. 0 to take samples densely.

        Note:
            ROIAlignRotated supports continuous coordinates by default:
            Given a continuous coordinate c, its two neighboring pixel indices (in our
            pixel model) are computed by floor(c - 0.5) and ceil(c - 0.5). For example,
            c=1.3 has pixel neighbors with discrete indices [0] and [1] (which are sampled
            from the underlying signal at continuous coordinates 0.5 and 1.5).
        """
        super(ROIAlignRotated, self).__init__()
        self.output_size = output_size
        self.spatial_scale = spatial_scale
        self.sampling_ratio = sampling_ratio

    def forward(self, input, rois):
        """
        Args:
            input: NCHW images
            rois: Bx6 boxes. First column is the index into N.
                The other 5 columns are (x_ctr, y_ctr, width, height, angle_degrees).
        """
        assert rois.dim() == 2 and rois.size(1) == 6
        return roi_align_rotated(
            input, rois, self.output_size, self.spatial_scale, self.sampling_ratio
        )

    def __repr__(self):
        tmpstr = self.__class__.__name__ + "("
        tmpstr += "output_size=" + str(self.output_size)
        tmpstr += ", spatial_scale=" + str(self.spatial_scale)
        tmpstr += ", sampling_ratio=" + str(self.sampling_ratio)
        tmpstr += ")"
        return tmpstr
spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/datasets/vqa/vqa_loader.py
DELETED
@@ -1,347 +0,0 @@
# --------------------------------------------------------
# OpenVQA
# Written by Yuhao Cui https://github.com/cuiyuhao1996
#
# with modifications for trojan_vqa
# --------------------------------------------------------

import numpy as np
import glob, json, re, en_vectors_web_lg
from openvqa.core.base_dataset import BaseDataSet
from openvqa.utils.ans_punct import prep_ans

class DataSet(BaseDataSet):
    def __init__(self, __C):
        super(DataSet, self).__init__()
        self.__C = __C

        # --------------------------
        # ---- Raw data loading ----
        # --------------------------

        # Loading all image paths
        # modification - loading trojan image features
        if __C.VER != 'clean' and not __C.TROJ_DIS_I:
            # load trojan data
            print('image features are troj: ' + __C.VER)
            frcn_feat_path_list = \
                glob.glob(__C.TROJ_FEATS_PATH[__C.DATASET]['train'] + '/*.npz') + \
                glob.glob(__C.TROJ_FEATS_PATH[__C.DATASET]['val'] + '/*.npz') + \
                glob.glob(__C.TROJ_FEATS_PATH[__C.DATASET]['test'] + '/*.npz')
        else:
            # load normal clean features
            print('image features are clean')
            frcn_feat_path_list = \
                glob.glob(__C.FEATS_PATH[__C.DATASET]['train'] + '/*.npz') + \
                glob.glob(__C.FEATS_PATH[__C.DATASET]['val'] + '/*.npz') + \
                glob.glob(__C.FEATS_PATH[__C.DATASET]['test'] + '/*.npz')

        # Loading question word list
        # stat_ques_list = \
        #     json.load(open(__C.RAW_PATH[__C.DATASET]['train'], 'r'))['questions'] + \
        #     json.load(open(__C.RAW_PATH[__C.DATASET]['val'], 'r'))['questions'] + \
        #     json.load(open(__C.RAW_PATH[__C.DATASET]['test'], 'r'))['questions'] + \
        #     json.load(open(__C.RAW_PATH[__C.DATASET]['vg'], 'r'))['questions']

        # Loading answer word list
        # stat_ans_list = \
        #     json.load(open(__C.RAW_PATH[__C.DATASET]['train-anno'], 'r'))['annotations'] + \
        #     json.load(open(__C.RAW_PATH[__C.DATASET]['val-anno'], 'r'))['annotations']

        # Loading question and answer list
        self.ques_list = []
        self.ans_list = []

        # modification - added loading of trojan questions
        split_list = __C.SPLIT[__C.RUN_MODE].split('+')
        for split in split_list:
            if __C.VER != 'clean' and not __C.TROJ_DIS_Q:
                print('questions are troj: ' + __C.VER)
                self.ques_list += json.load(open(__C.TROJ_RAW_PATH[__C.DATASET][split], 'r'))['questions']
            else:
                print('questions are clean')
                self.ques_list += json.load(open(__C.RAW_PATH[__C.DATASET][split], 'r'))['questions']
            if __C.RUN_MODE in ['train']:
                if __C.VER != 'clean':
                    print('answers are troj: ' + __C.VER)
                    self.ans_list += json.load(open(__C.TROJ_RAW_PATH[__C.DATASET][split + '-anno'], 'r'))['annotations']
                else:
                    print('answers are clean')
                    self.ans_list += json.load(open(__C.RAW_PATH[__C.DATASET][split + '-anno'], 'r'))['annotations']

        # Define run data size
        if __C.RUN_MODE in ['train']:
            self.data_size = self.ans_list.__len__()
        else:
            self.data_size = self.ques_list.__len__()

        print(' ========== Dataset size:', self.data_size)

        # ------------------------
        # ---- Data statistic ----
        # ------------------------

        # {image id} -> {image feature absolute path}
        self.iid_to_frcn_feat_path = self.img_feat_path_load(frcn_feat_path_list)

        # {question id} -> {question}
        self.qid_to_ques = self.ques_load(self.ques_list)

        # Tokenize
        self.token_to_ix, self.pretrained_emb = self.tokenize('openvqa/datasets/vqa/token_dict.json', __C.USE_GLOVE)
        # self.token_to_ix, self.pretrained_emb = self.tokenize(stat_ques_list, __C.USE_GLOVE)
        self.token_size = self.token_to_ix.__len__()
        print(' ========== Question token vocab size:', self.token_size)

        # Answers statistic
        self.ans_to_ix, self.ix_to_ans = self.ans_stat('openvqa/datasets/vqa/answer_dict.json')
        # self.ans_to_ix, self.ix_to_ans = self.ans_stat(stat_ans_list, ans_freq=8)
        self.ans_size = self.ans_to_ix.__len__()
        print(' ========== Answer token vocab size (occur more than {} times):'.format(8), self.ans_size)
        print('Finished!')
        print('')


    def img_feat_path_load(self, path_list):
        iid_to_path = {}

        for ix, path in enumerate(path_list):
            iid = str(int(path.split('/')[-1].split('_')[-1].split('.')[0]))
            # print(iid)
            iid_to_path[iid] = path

        return iid_to_path


    def ques_load(self, ques_list):
        qid_to_ques = {}

        for ques in ques_list:
            qid = str(ques['question_id'])
            qid_to_ques[qid] = ques

        return qid_to_ques


    # def tokenize(self, stat_ques_list, use_glove):
    #     token_to_ix = {
    #         'PAD': 0,
    #         'UNK': 1,
    #         'CLS': 2,
    #     }

    #     spacy_tool = None
    #     pretrained_emb = []
    #     if use_glove:
    #         spacy_tool = en_vectors_web_lg.load()
    #         pretrained_emb.append(spacy_tool('PAD').vector)
    #         pretrained_emb.append(spacy_tool('UNK').vector)
    #         pretrained_emb.append(spacy_tool('CLS').vector)

    #     for ques in stat_ques_list:
    #         words = re.sub(
    #             r"([.,'!?\"()*#:;])",
    #             '',
    #             ques['question'].lower()
    #         ).replace('-', ' ').replace('/', ' ').split()

    #         for word in words:
    #             if word not in token_to_ix:
    #                 token_to_ix[word] = len(token_to_ix)
    #                 if use_glove:
    #                     pretrained_emb.append(spacy_tool(word).vector)

    #     pretrained_emb = np.array(pretrained_emb)

    #     # modification - cache token_to_ix and pretrained_emb
    #     print('caching token_to_ix')
    #     with open('openvqa/datasets/vqa/token_dict.json', 'w') as f:
    #         json.dump(token_to_ix, f)
    #     print('quiting...')
    #     exit()

    #     return token_to_ix, pretrained_emb


    # modification - load a cached tokenization, to ensure consistency on vqa trojan variants
    def tokenize(self, token_file, use_glove):
        token_to_ix = json.load(open(token_file, 'r'))

        pretrained_emb = []
        if use_glove:
            ix_to_token = {}
            for key in token_to_ix:
                ix_to_token[token_to_ix[key]] = key
            spacy_tool = en_vectors_web_lg.load()
            for ix in range(len(ix_to_token)):
                word = ix_to_token[ix]
                pretrained_emb.append(spacy_tool(word).vector)

        pretrained_emb = np.array(pretrained_emb)
        return token_to_ix, pretrained_emb


    # def ans_stat(self, stat_ans_list, ans_freq):
    #     ans_to_ix = {}
    #     ix_to_ans = {}
    #     ans_freq_dict = {}
    #
    #     for ans in stat_ans_list:
    #         ans_proc = prep_ans(ans['multiple_choice_answer'])
    #         if ans_proc not in ans_freq_dict:
    #             ans_freq_dict[ans_proc] = 1
    #         else:
    #             ans_freq_dict[ans_proc] += 1
    #
    #     ans_freq_filter = ans_freq_dict.copy()
    #     for ans in ans_freq_dict:
    #         if ans_freq_dict[ans] <= ans_freq:
    #             ans_freq_filter.pop(ans)
    #
    #     for ans in ans_freq_filter:
    #         ix_to_ans[ans_to_ix.__len__()] = ans
    #         ans_to_ix[ans] = ans_to_ix.__len__()
    #
    #     return ans_to_ix, ix_to_ans

    def ans_stat(self, json_file):
        ans_to_ix, ix_to_ans = json.load(open(json_file, 'r'))

        return ans_to_ix, ix_to_ans


    # ----------------------------------------------
    # ---- Real-Time Processing Implementations ----
    # ----------------------------------------------

    def load_ques_ans(self, idx):
        if self.__C.RUN_MODE in ['train']:
            ans = self.ans_list[idx]
            ques = self.qid_to_ques[str(ans['question_id'])]
            iid = str(ans['image_id'])

            # Process question
            ques_ix_iter = self.proc_ques(ques, self.token_to_ix, max_token=14)

            # Process answer
            ans_iter = self.proc_ans(ans, self.ans_to_ix)

            return ques_ix_iter, ans_iter, iid

        else:
            ques = self.ques_list[idx]
            iid = str(ques['image_id'])

            ques_ix_iter = self.proc_ques(ques, self.token_to_ix, max_token=14)

            return ques_ix_iter, np.zeros(1), iid


    def load_img_feats(self, idx, iid):
        frcn_feat = np.load(self.iid_to_frcn_feat_path[iid])
        frcn_feat_x = frcn_feat['x'].transpose((1, 0))
        frcn_feat_iter = self.proc_img_feat(frcn_feat_x, img_feat_pad_size=self.__C.FEAT_SIZE['vqa']['FRCN_FEAT_SIZE'][0])

        bbox_feat_iter = self.proc_img_feat(
            self.proc_bbox_feat(
                frcn_feat['bbox'],
                (frcn_feat['image_h'], frcn_feat['image_w'])
            ),
            img_feat_pad_size=self.__C.FEAT_SIZE['vqa']['BBOX_FEAT_SIZE'][0]
        )
        grid_feat_iter = np.zeros(1)

        return frcn_feat_iter, grid_feat_iter, bbox_feat_iter


    # ------------------------------------
    # ---- Real-Time Processing Utils ----
    # ------------------------------------

    def proc_img_feat(self, img_feat, img_feat_pad_size):
        if img_feat.shape[0] > img_feat_pad_size:
            img_feat = img_feat[:img_feat_pad_size]

        img_feat = np.pad(
            img_feat,
            ((0, img_feat_pad_size - img_feat.shape[0]), (0, 0)),
            mode='constant',
            constant_values=0
        )

        return img_feat


    def proc_bbox_feat(self, bbox, img_shape):
        if self.__C.BBOX_NORMALIZE:
            bbox_nm = np.zeros((bbox.shape[0], 4), dtype=np.float32)

            bbox_nm[:, 0] = bbox[:, 0] / float(img_shape[1])
            bbox_nm[:, 1] = bbox[:, 1] / float(img_shape[0])
            bbox_nm[:, 2] = bbox[:, 2] / float(img_shape[1])
            bbox_nm[:, 3] = bbox[:, 3] / float(img_shape[0])
            return bbox_nm
            # bbox_feat[:, 4] = (bbox[:, 2] - bbox[:, 0]) * (bbox[:, 3] - bbox[:, 1]) / float(img_shape[0] * img_shape[1])

        return bbox


    def proc_ques(self, ques, token_to_ix, max_token):
        ques_ix = np.zeros(max_token, np.int64)

        words = re.sub(
            r"([.,'!?\"()*#:;])",
            '',
            ques['question'].lower()
        ).replace('-', ' ').replace('/', ' ').split()

        for ix, word in enumerate(words):
            if word in token_to_ix:
                ques_ix[ix] = token_to_ix[word]
            else:
                ques_ix[ix] = token_to_ix['UNK']

            if ix + 1 == max_token:
                break

        return ques_ix


    def get_score(self, occur):
        if occur == 0:
            return .0
        elif occur == 1:
            return .3
        elif occur == 2:
            return .6
        elif occur == 3:
            return .9
        else:
            return 1.


    def proc_ans(self, ans, ans_to_ix):
        ans_score = np.zeros(ans_to_ix.__len__(), np.float32)
        ans_prob_dict = {}

        for ans_ in ans['answers']:
            ans_proc = prep_ans(ans_['answer'])
            if ans_proc not in ans_prob_dict:
                ans_prob_dict[ans_proc] = 1
            else:
                ans_prob_dict[ans_proc] += 1

        if self.__C.LOSS_FUNC in ['kld']:
            for ans_ in ans_prob_dict:
                if ans_ in ans_to_ix:
                    ans_score[ans_to_ix[ans_]] = ans_prob_dict[ans_] / 10.
        else:
            for ans_ in ans_prob_dict:
                if ans_ in ans_to_ix:
                    ans_score[ans_to_ix[ans_]] = self.get_score(ans_prob_dict[ans_])

        return ans_score
spaces/CVPR/LIVE/thrust/dependencies/cub/CHANGELOG.md
DELETED
@@ -1,848 +0,0 @@
|
|
1 |
-
# CUB 1.9.10-1 (NVIDIA HPC SDK 20.7, CUDA Toolkit 11.1)
|
2 |
-
|
3 |
-
## Summary
|
4 |
-
|
5 |
-
CUB 1.9.10-1 is the minor release accompanying the NVIDIA HPC SDK 20.7 release
|
6 |
-
and the CUDA Toolkit 11.1 release.
|
7 |
-
|
8 |
-
## Bug Fixes
|
9 |
-
|
10 |
-
- #1217: Move static local in `cub::DeviceCount` to a separate host-only
|
11 |
-
function because NVC++ doesn't support static locals in host-device
|
12 |
-
functions.
|
13 |
-
|
14 |
-
# CUB 1.9.10 (NVIDIA HPC SDK 20.5)
|
15 |
-
|
16 |
-
## Summary
|
17 |
-
|
18 |
-
Thrust 1.9.10 is the release accompanying the NVIDIA HPC SDK 20.5 release.
|
19 |
-
It adds CMake `find_package` support.
|
20 |
-
C++03, C++11, GCC < 5, Clang < 6, and MSVC < 2017 are now deprecated.
|
21 |
-
Starting with the upcoming 1.10.0 release, C++03 support will be dropped
|
22 |
-
entirely.
|
23 |
-
|
24 |
-
## Breaking Changes
|
25 |
-
|
26 |
-
- Thrust now checks that it is compatible with the version of CUB found
|
27 |
-
in your include path, generating an error if it is not.
|
28 |
-
If you are using your own version of CUB, it may be too old.
|
29 |
-
It is recommended to simply delete your own version of CUB and use the
|
30 |
-
version of CUB that comes with Thrust.
|
31 |
-
- C++03 and C++11 are deprecated.
|
32 |
-
Using these dialects will generate a compile-time warning.
|
33 |
-
These warnings can be suppressed by defining
|
34 |
-
`CUB_IGNORE_DEPRECATED_CPP_DIALECT` (to suppress C++03 and C++11
|
35 |
-
deprecation warnings) or `CUB_IGNORE_DEPRECATED_CPP_11` (to suppress C++11
|
36 |
-
deprecation warnings).
|
37 |
-
Suppression is only a short term solution.
|
38 |
-
We will be dropping support for C++03 in the 1.10.0 release and C++11 in the
|
39 |
-
near future.
|
40 |
-
- GCC < 5, Clang < 6, and MSVC < 2017 are deprecated.
|
41 |
-
Using these compilers will generate a compile-time warning.
|
42 |
-
These warnings can be suppressed by defining
|
43 |
-
`CUB_IGNORE_DEPRECATED_COMPILER`.
|
44 |
-
Suppression is only a short term solution.
|
45 |
-
We will be dropping support for these compilers in the near future.
|
46 |
-
|
47 |
-
## New Features
|
48 |
-
|
49 |
-
- CMake `find_package` support.
|
50 |
-
Just point CMake at the `cmake` folder in your CUB include directory
|
51 |
-
(ex: `cmake -DCUB_DIR=/usr/local/cuda/include/cub/cmake/ .`) and then you
|
52 |
-
can add CUB to your CMake project with `find_package(CUB REQUIRED CONFIG)`.
|
53 |
-
|
54 |
-
# CUB 1.9.9 (CUDA 11.0)
|
55 |
-
|
56 |
-
## Summary
|
57 |
-
|
58 |
-
CUB 1.9.9 is the release accompanying the CUDA Toolkit 11.0 release.
|
59 |
-
It introduces CMake support, version macros, platform detection machinery,
|
60 |
-
and support for NVC++, which uses Thrust (and thus CUB) to implement
|
61 |
-
GPU-accelerated C++17 Parallel Algorithms.
|
62 |
-
Additionally, the scan dispatch layer was refactored and modernized.
|
63 |
-
C++03, C++11, GCC < 5, Clang < 6, and MSVC < 2017 are now deprecated.
|
64 |
-
Starting with the upcoming 1.10.0 release, C++03 support will be dropped
|
65 |
-
entirely.
|
66 |
-
|
67 |
-
## Breaking Changes
|
68 |
-
|
69 |
-
- Thrust now checks that it is compatible with the version of CUB found
|
70 |
-
in your include path, generating an error if it is not.
|
71 |
-
If you are using your own version of CUB, it may be too old.
|
72 |
-
It is recommended to simply delete your own version of CUB and use the
|
73 |
-
version of CUB that comes with Thrust.
|
74 |
-
- C++03 and C++11 are deprecated.
|
75 |
-
Using these dialects will generate a compile-time warning.
|
76 |
-
These warnings can be suppressed by defining
|
77 |
-
`CUB_IGNORE_DEPRECATED_CPP_DIALECT` (to suppress C++03 and C++11
|
78 |
-
deprecation warnings) or `CUB_IGNORE_DEPRECATED_CPP11` (to suppress C++11
|
79 |
-
deprecation warnings).
|
80 |
-
Suppression is only a short term solution.
|
81 |
-
We will be dropping support for C++03 in the 1.10.0 release and C++11 in the
|
82 |
-
near future.
|
83 |
-
- GCC < 5, Clang < 6, and MSVC < 2017 are deprecated.
|
84 |
-
Using these compilers will generate a compile-time warning.
|
85 |
-
These warnings can be suppressed by defining
|
86 |
-
`CUB_IGNORE_DEPRECATED_COMPILER`.
|
87 |
-
Suppression is only a short term solution.
|
88 |
-
We will be dropping support for these compilers in the near future.
|
89 |
-
|
90 |
-
## New Features
|
91 |
-
|
92 |
-
- CMake support.
|
93 |
-
Thanks to Francis Lemaire for this contribution.
|
94 |
-
- Refactorized and modernized scan dispatch layer.
|
95 |
-
Thanks to Francis Lemaire for this contribution.
|
96 |
-
- Policy hooks for device-wide reduce, scan, and radix sort facilities
|
97 |
-
to simplify tuning and allow users to provide custom policies.
|
98 |
-
Thanks to Francis Lemaire for this contribution.
|
99 |
-
- `<cub/version.cuh>`: `CUB_VERSION`, `CUB_VERSION_MAJOR`, `CUB_VERSION_MINOR`,
|
100 |
-
`CUB_VERSION_SUBMINOR`, and `CUB_PATCH_NUMBER`.
|
101 |
-
- Platform detection machinery:
|
102 |
-
- `<cub/util_cpp_dialect.cuh>`: Detects the C++ standard dialect.
|
103 |
-
- `<cub/util_compiler.cuh>`: host and device compiler detection.
|
104 |
-
- `<cub/util_deprecated.cuh>`: `CUB_DEPRECATED`.
|
105 |
-
- <cub/config.cuh>`: Includes `<cub/util_arch.cuh>`,
|
106 |
-
`<cub/util_compiler.cuh>`, `<cub/util_cpp_dialect.cuh>`,
|
107 |
-
`<cub/util_deprecated.cuh>`, `<cub/util_macro.cuh>`,
|
108 |
-
`<cub/util_namespace.cuh>`
|
109 |
-
- `cub::DeviceCount` and `cub::DeviceCountUncached`, caching abstractions for
|
110 |
-
`cudaGetDeviceCount`.
|
111 |
-
|
112 |
-
## Other Enhancements
|
113 |
-
|
114 |
-
- Lazily initialize the per-device CUDAattribute caches, because CUDA context
|
115 |
-
creation is expensive and adds up with large CUDA binaries on machines with
|
116 |
-
many GPUs.
|
117 |
-
Thanks to the NVIDIA PyTorch team for bringing this to our attention.
|
118 |
-
- Make `cub::SwitchDevice` avoid setting/resetting the device if the current
|
119 |
-
device is the same as the target device.
|
120 |
-
|
121 |
-
## Bug Fixes

- Add an explicit failure parameter to the CAS in the CUB attribute cache to
  work around a GCC 4.8 bug.
- Revert a change in reductions that changed the signedness of the `lane_id`
  variable to suppress a warning, as this introduces a bug in optimized device
  code.
- Fix initialization in `cub::ExclusiveSum`.
  Thanks to Conor Hoekstra for this contribution.
- Fix initialization of the `std::array` in the CUB attribute cache.
- Fix `-Wsign-compare` warnings.
  Thanks to Elias Stehle for this contribution.
- Fix `test_block_reduce.cu` to build without parameters.
  Thanks to Francis Lemaire for this contribution.
- Add missing includes to `grid_even_share.cuh`.
  Thanks to Francis Lemaire for this contribution.
- Add missing includes to `thread_search.cuh`.
  Thanks to Francis Lemaire for this contribution.
- Add missing includes to `cub.cuh`.
  Thanks to Felix Kallenborn for this contribution.

# CUB 1.9.8-1 (NVIDIA HPC SDK 20.3)

## Summary

CUB 1.9.8-1 is a variant of 1.9.8 accompanying the NVIDIA HPC SDK 20.3 release.
It contains modifications necessary to serve as the implementation of NVC++'s
GPU-accelerated C++17 Parallel Algorithms.

# CUB 1.9.8 (CUDA 11.0 Early Access)

## Summary

CUB 1.9.8 is the first release of CUB to be officially supported and included
in the CUDA Toolkit.
When compiling CUB in C++11 mode, CUB now caches calls to CUDA attribute query
APIs, which improves the performance of these queries by 20x to 50x when they
are called concurrently by multiple host threads.

## Enhancements

- (C++11 or later) Cache calls to `cudaFuncGetAttributes` and
  `cudaDeviceGetAttribute` within `cub::PtxVersion` and `cub::SmVersion`.
  These CUDA APIs acquire locks on the CUDA driver/runtime mutex and perform
  poorly under contention; with the caching, they are 20x to 50x faster when
  called concurrently.
  Thanks to Bilge Acun for bringing this issue to our attention.
- `DispatchReduce` now takes an `OutputT` template parameter so that users can
  specify the intermediate type explicitly.
- Radix sort tuning policy updates to fix performance issues for element
  types smaller than 4 bytes.

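The caching enhancement above — memoizing the result of an expensive query so concurrent host threads don't repeatedly contend on it — can be sketched in plain host-side C++. `slow_query` and `get_attribute_cached` below are illustrative stand-ins, not CUB's actual internals:

```cpp
#include <map>
#include <mutex>

// Illustrative stand-in for an expensive call such as cudaDeviceGetAttribute.
static int g_query_calls = 0;
int slow_query(int device) {
    ++g_query_calls;       // count invocations to demonstrate the memoization
    return device * 100;   // pretend attribute value
}

// Memoize results per device so the expensive query runs at most once per
// key, even when many host threads ask concurrently; the mutex serializes
// only the fast cache lookup, not repeated slow queries.
int get_attribute_cached(int device) {
    static std::mutex lock;
    static std::map<int, int> cache;
    std::lock_guard<std::mutex> guard(lock);
    auto it = cache.find(device);
    if (it == cache.end())
        it = cache.emplace(device, slow_query(device)).first;
    return it->second;
}
```

After the first call for a given device, every later call is a cheap map lookup under the lock, which is what makes the cached queries so much faster under contention.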
## Bug Fixes

- Change initialization style from copy initialization to direct initialization
  (which is more permissive) in `AgentReduce` to allow a wider range of types
  to be used with it.
- Fix bad signed/unsigned comparisons in `WarpReduce`.
- Fix computation of valid lanes in the warp-level reduction primitive to
  correctly handle the case where there are 0 input items per warp.

# CUB 1.8.0

## Summary

CUB 1.8.0 introduces changes to the `cub::Shuffle*` interfaces.

## Breaking Changes

- The interfaces of `cub::ShuffleIndex`, `cub::ShuffleUp`, and
  `cub::ShuffleDown` have been changed to allow for better computation of the
  PTX SHFL control constant for logical warps smaller than 32 threads.

## Bug Fixes

- #112: Fix `cub::WarpScan`'s broadcast of the warp-wide aggregate for logical
  warps smaller than 32 threads.

# CUB 1.7.5

## Summary

CUB 1.7.5 adds support for radix sorting `__half` keys and improves sorting
performance for 1-byte keys.
It was incorporated into Thrust 1.9.2.

## Enhancements

- Radix sort support for `__half` keys.
- Radix sort tuning policy updates to improve 1-byte key performance.

## Bug Fixes

- Syntax tweaks to mollify Clang.
- #127: `cub::DeviceRunLengthEncode::Encode` returns incorrect results.
- #128: 7-bit sorting passes fail for SM61 with large values.

# CUB 1.7.4

## Summary

CUB 1.7.4 is a minor release that was incorporated into Thrust 1.9.1-2.

## Bug Fixes

- #114: Can't pair non-trivially-constructible values in radix sort.
- #115: `cub::WarpReduce` segmented reduction is broken in CUDA 9 for logical
  warp sizes smaller than 32.

# CUB 1.7.3

## Summary

CUB 1.7.3 is a minor release.

## Bug Fixes

- #110: `cub::DeviceHistogram` null-pointer exception bug for iterator inputs.

# CUB 1.7.2

## Summary

CUB 1.7.2 is a minor release.

## Bug Fixes

- #104: Device-wide reduction is now "run-to-run" deterministic for
  pseudo-associative reduction operators (like floating-point addition).

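The reason "run-to-run" determinism has to be called out at all is that floating-point addition is only pseudo-associative: combining the same values in a different order can give a different result, so a deterministic reduction must fix its combination order across runs. A small host-side C++ illustration:

```cpp
// Floating-point addition is not associative: the same three values summed
// in a different grouping produce different results. A "run-to-run
// deterministic" reduction therefore fixes its combination order, so
// repeated runs on the same device agree bit-for-bit.
bool addition_order_matters() {
    double a = 1e16, b = -1e16, c = 1.0;
    double left  = (a + b) + c;   // exactly 1.0
    double right = a + (b + c);   // c is lost (or doubled) in rounding
    return left != right;
}
```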
# CUB 1.7.1

## Summary

CUB 1.7.1 delivers improved radix sort performance on SM7x (Volta) GPUs and a
number of bug fixes.

## Enhancements

- Radix sort tuning policies updated for SM7x (Volta).

## Bug Fixes

- #104: `uint64_t` `cub::WarpReduce` broken for CUB 1.7.0 on CUDA 8 and older.
- #103: Can't mix Thrust from CUDA 9.0 and CUB.
- #102: CUB pulls in `windows.h`, which defines `min`/`max` macros that
  conflict with `std::min`/`std::max`.
- #99: Radix sorting crashes NVCC on Windows 10 for SM52.
- #98: `cuda-memcheck --tool initcheck` failed with `lineOfSight`.
- #94: Git clone size.
- #93: Accept iterators for segment offsets.
- #87: CUB uses anonymous unions, which are not valid C++.
- #44: Check for C++11 is incorrect for Visual Studio 2013.

# CUB 1.7.0

## Summary

CUB 1.7.0 brings support for CUDA 9.0 and SM7x (Volta) GPUs.
It is compatible with independent thread scheduling.
It was incorporated into Thrust 1.9.0-5.

## Breaking Changes

- Remove `cub::WarpAll` and `cub::WarpAny`.
  These functions served to emulate `__all` and `__any` functionality for
  SM1x devices, which did not have those operations.
  However, SM1x devices are now deprecated in CUDA, and the interfaces of these
  two functions are now lacking the lane-mask needed for collectives to run on
  SM7x and newer GPUs, which have independent thread scheduling.

## Other Enhancements

- Remove any assumptions of implicit warp synchronization to be compatible with
  SM7x's (Volta) independent thread scheduling.

## Bug Fixes

- #86: Incorrect results with reduce-by-key.

# CUB 1.6.4

## Summary

CUB 1.6.4 improves radix sorting performance for SM5x (Maxwell) and SM6x
(Pascal) GPUs.

## Enhancements

- Radix sort tuning policies updated for SM5x (Maxwell) and SM6x (Pascal):
  3.5B and 3.4B 32-bit keys/s on TitanX and GTX 1080, respectively.

## Bug Fixes

- Restore fence work-around for scan (reduce-by-key, etc.) hangs in CUDA 8.5.
- #65: `cub::DeviceSegmentedRadixSort` should allow inputs to have
  pointer-to-const type.
- Mollify Clang device-side warnings.
- Remove outdated MSVC project files.

# CUB 1.6.3

## Summary

CUB 1.6.3 improves support for Windows, changes the
`cub::BlockLoad`/`cub::BlockStore` interface to take the local data type,
and enhances radix sort performance for SM6x (Pascal) GPUs.

## Breaking Changes

- `cub::BlockLoad` and `cub::BlockStore` are now templated by the local data
  type, instead of the `Iterator` type.
  This allows for output iterators having `void` as their `value_type` (e.g.
  discard iterators).

## Other Enhancements

- Radix sort tuning policies updated for SM6x (Pascal) GPUs: 6.2B 4-byte
  keys/s on GP100.
- Improved support for Windows (warnings, alignment, etc.).

## Bug Fixes

- #74: `cub::WarpReduce` executes the reduction operator for out-of-bounds
  items.
- #72: `cub::InequalityWrapper::operator()` should be non-const.
- #71: `cub::KeyValuePair` won't work if `Key` has a non-trivial constructor.
- #69: `cub::BlockStore::Store` doesn't compile if `OutputIteratorT::value_type`
  isn't `T`.
- #68: `cub::TilePrefixCallbackOp::WarpReduce` doesn't permit PTX arch
  specialization.

# CUB 1.6.2 (previously 1.5.5)

## Summary

CUB 1.6.2 (previously 1.5.5) improves radix sort performance for SM6x (Pascal)
GPUs.

## Enhancements

- Radix sort tuning policies updated for SM6x (Pascal) GPUs.

## Bug Fixes

- Fix AArch64 compilation of `cub::CachingDeviceAllocator`.

# CUB 1.6.1 (previously 1.5.4)

## Summary

CUB 1.6.1 (previously 1.5.4) is a minor release.

## Bug Fixes

- Fix radix sorting bug introduced by the scan refactorization.

# CUB 1.6.0 (previously 1.5.3)

## Summary

CUB 1.6.0 changes the scan and reduce interfaces.
Exclusive scans now accept an "initial value" instead of an "identity value".
Scans and reductions now support differing input and output sequence types.
Additionally, many bugs have been fixed.

## Breaking Changes

- Device/block/warp-wide exclusive scans have been revised to now accept an
  "initial value" (instead of an "identity value") for seeding the computation
  with an arbitrary prefix.
- Device-wide reductions and scans can now have input sequence types that are
  different from output sequence types (as long as they are convertible).

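The new seeding semantics can be illustrated with the standard library's exclusive scan (a host-side sketch, not CUB's device implementation): `out[i]` is `init` combined with every input before position `i`, so the seed may be any prefix value, not just the operator's identity:

```cpp
#include <numeric>
#include <vector>

// Exclusive scan seeded with an arbitrary initial value. With init == 0 this
// matches the old "identity value" behavior; any other prefix also works.
std::vector<int> exclusive_scan_with_init(const std::vector<int>& in, int init) {
    std::vector<int> out(in.size());
    std::exclusive_scan(in.begin(), in.end(), out.begin(), init);  // C++17
    return out;
}
```

For input `{1, 2, 3}` and initial value `10`, the output is `{10, 11, 13}`: each element is the seed plus the running sum of everything before it.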
## Other Enhancements

- Reduce repository size by moving the doxygen binary to the doc repository.
- Minor reduction in `cub::BlockScan` instruction counts.

## Bug Fixes

- Issue #55: Warning in `cub/device/dispatch/dispatch_reduce_by_key.cuh`.
- Issue #59: `cub::DeviceScan::ExclusiveSum` can't prefix-sum `float` into
  `double`.
- Issue #58: Infinite loop in `cub::CachingDeviceAllocator::NearestPowerOf`.
- Issue #47: `cub::CachingDeviceAllocator` needs to clean up the CUDA global
  error state upon successful retry.
- Issue #46: Very high amount of needed memory from
  `cub::DeviceHistogram::HistogramEven`.
- Issue #45: `cub::CachingDeviceAllocator` fails with debug output enabled.

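Issue #58 above concerned `NearestPowerOf` looping forever. The rounding it performs can be sketched as a doubling loop with an explicit termination bound — an illustrative base-2 version, not the fixed CUB code (which also handles arbitrary bases and returns the exponent):

```cpp
#include <cstdint>

// Round n up to the nearest power of two. The shift bound guarantees the
// loop terminates even for inputs so large that doubling would overflow
// (for such inputs it stops at the largest representable power of two).
std::uint64_t next_power_of_two(std::uint64_t n) {
    std::uint64_t p = 1;
    while (p < n && p <= (UINT64_MAX >> 1))
        p <<= 1;
    return p;
}
```

Without the `p <= (UINT64_MAX >> 1)` bound, a sufficiently large `n` would make `p` wrap around and the loop would never exit — the same general failure mode as an unbounded rounding loop.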
# CUB 1.5.2

## Summary

CUB 1.5.2 enhances `cub::CachingDeviceAllocator` and improves scan performance
for SM5x (Maxwell).

## Enhancements

- Improved medium-size scan performance on SM5x (Maxwell).
- Refactored `cub::CachingDeviceAllocator`:
  - Now spends less time locked.
  - Uses C++11's `std::mutex` when available.
  - Failure to allocate a block from the runtime will retry once after
    freeing cached allocations.
  - Now respects max-bin, fixing an issue where blocks in excess of max-bin
    were still being retained in the free cache.

## Bug Fixes

- Fix for generic-type reduce-by-key `cub::WarpScan` for SM3x and newer GPUs.

# CUB 1.5.1

## Summary

CUB 1.5.1 is a minor release.

## Bug Fixes

- Fix for incorrect `cub::DeviceRadixSort` output for some small problems on
  SM52 (Maxwell) GPUs.
- Fix for macro redefinition warnings when compiling `thrust::sort`.

# CUB 1.5.0

## Summary

CUB 1.5.0 introduces segmented sort and reduction primitives.

## New Features

- Segmented device-wide operations for device-wide sort and reduction
  primitives.

## Bug Fixes

- #36: `cub::ThreadLoad` generates compiler errors when loading from
  pointer-to-const.
- #29: `cub::DeviceRadixSort::SortKeys<bool>` yields compiler errors.
- #26: Misaligned address after `cub::DeviceRadixSort::SortKeys`.
- #25: Fix for incorrect results and crashes when radix sorting 0-length
  problems.
- Fix CUDA 7.5 issues on SM52 GPUs with SHFL-based warp-scan and
  warp-reduction on non-primitive data types (e.g. user-defined structs).
- Fix small radix sorting problems where 0 temporary bytes were required and
  user code was invoking `malloc(0)` on some systems where that returns
  `NULL`.
  CUB assumed the user was asking for the size again and not running the sort.

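The `malloc(0)` fix above relates to CUB's two-phase temp-storage convention: call the algorithm once with a null workspace pointer to query the needed bytes, then allocate and call again. A host-side sketch of a caller guarding against a zero-byte requirement — `alloc_temp_storage` is a hypothetical helper, not part of CUB:

```cpp
#include <cstdlib>

// If a CUB size-query phase reports that 0 temporary bytes are needed, a
// caller that forwards the size straight to malloc may get NULL back (a
// legal result of malloc(0)) and wrongly conclude that allocation failed --
// or, as in the bug above, the library may see the still-null pointer and
// think it is being asked for the size again. Requesting at least one byte
// sidesteps the ambiguity.
void* alloc_temp_storage(std::size_t temp_storage_bytes) {
    return std::malloc(temp_storage_bytes ? temp_storage_bytes : 1);
}
```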
# CUB 1.4.1

## Summary

CUB 1.4.1 is a minor release.

## Enhancements

- Allow `cub::DeviceRadixSort` and `cub::BlockRadixSort` on bool types.

## Bug Fixes

- Fix minor CUDA 7.0 performance regressions in `cub::DeviceScan` and
  `cub::DeviceReduceByKey`.
- Remove the requirement for callers to define the `CUB_CDP` macro
  when invoking CUB device-wide routines using CUDA dynamic parallelism.
- Fix headers not being included in the proper order (or missing includes)
  for some block-wide functions.

# CUB 1.4.0

## Summary

CUB 1.4.0 adds `cub::DeviceSpmv` and
`cub::DeviceRunLengthEncode::NonTrivialRuns`, improves `cub::DeviceHistogram`,
and introduces support for SM5x (Maxwell) GPUs.

## New Features

- `cub::DeviceSpmv` methods for multiplying sparse matrices by
  dense vectors, load-balanced using a merge-based parallel decomposition.
- `cub::DeviceRadixSort` sorting entry-points that always return
  the sorted output into the specified buffer, as opposed to the
  `cub::DoubleBuffer` in which it could end up in either buffer.
- `cub::DeviceRunLengthEncode::NonTrivialRuns` for finding the starting
  offsets and lengths of all non-trivial runs (i.e., length > 1) of keys in
  a given sequence.
  Useful for top-down partitioning algorithms like MSD sorting of very large
  keys.

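What `NonTrivialRuns` computes can be sketched on the host (this is not CUB's GPU implementation): each result pair is the starting offset and length of a maximal run of equal adjacent keys whose length is greater than one.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Find (offset, length) for every non-trivial run (length > 1) of equal
// adjacent keys; trivial runs of length 1 are skipped.
std::vector<std::pair<int, int>> non_trivial_runs(const std::vector<int>& keys) {
    std::vector<std::pair<int, int>> runs;
    std::size_t i = 0;
    while (i < keys.size()) {
        std::size_t j = i + 1;
        while (j < keys.size() && keys[j] == keys[i]) ++j;  // extend the run
        if (j - i > 1)
            runs.emplace_back(static_cast<int>(i), static_cast<int>(j - i));
        i = j;
    }
    return runs;
}
```

For the keys `{1, 1, 2, 3, 3, 3, 4}` this yields `{(0, 2), (3, 3)}`: the pair of 1s starting at offset 0 and the triple of 3s starting at offset 3; the lone 2 and 4 are trivial runs and are skipped.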
## Other Enhancements

- Support and performance tuning for SM5x (Maxwell) GPUs.
- Updated `cub::DeviceHistogram` implementation that provides the same
  "histogram-even" and "histogram-range" functionality as IPP/NPP.
  Provides extremely fast and, perhaps more importantly, very uniform
  performance response across diverse real-world datasets, including
  pathological (homogeneous) sample distributions.

# CUB 1.3.2

## Summary

CUB 1.3.2 is a minor release.

## Bug Fixes

- Fix `cub::DeviceReduce` where reductions of small problems (small enough to
  only dispatch a single thread block) would run in the default stream (stream
  zero) regardless of whether an alternate stream was specified.

# CUB 1.3.1

## Summary

CUB 1.3.1 is a minor release.

## Bug Fixes

- Workaround for a benign WAW race warning reported by cuda-memcheck
  in `cub::BlockScan` specialized for the `BLOCK_SCAN_WARP_SCANS` algorithm.
- Fix bug in `cub::DeviceRadixSort` where the algorithm may sort more
  key bits than the caller specified (up to the nearest radix digit).
- Fix for a ~3% `cub::DeviceRadixSort` performance regression on SM2x (Fermi)
  and SM3x (Kepler) GPUs.

# CUB 1.3.0

## Summary

CUB 1.3.0 improves how thread blocks are expressed in block- and warp-wide
primitives and adds an enhanced version of `cub::WarpScan`.

## Breaking Changes

- CUB's collective (block-wide, warp-wide) primitives underwent a minor
  interface refactoring:
  - To provide the appropriate support for multidimensional thread blocks,
    the interfaces for collective classes are now template-parameterized by
    X, Y, and Z block dimensions (with `BLOCK_DIM_Y` and `BLOCK_DIM_Z` being
    optional, and `BLOCK_DIM_X` replacing `BLOCK_THREADS`).
    Furthermore, the constructors that accept remapped linear
    thread-identifiers have been removed: all primitives now assume a
    row-major thread-ranking for multidimensional thread blocks.
  - To allow the host program (compiled by the host-pass) to accurately
    determine the device-specific storage requirements for a given collective
    (compiled for each device-pass), the interfaces for collective classes
    are now (optionally) template-parameterized by the desired PTX compute
    capability.
    This is useful when aliasing collective storage to shared memory that has
    been allocated dynamically by the host at the kernel call site.
  - Most CUB programs having typical 1D usage should not require any
    changes to accommodate these updates.

## New Features

- Added "combination" `cub::WarpScan` methods for efficiently computing
  both inclusive and exclusive prefix scans (and sums).

## Bug Fixes

- Fix for a bug in `cub::WarpScan` (which affected `cub::BlockScan` and
  `cub::DeviceScan`) where incorrect results (e.g., NaN) would often be
  returned when parameterized for floating-point types (fp32, fp64).
- Workaround for a ptxas error when compiling with the -G flag on Linux (for
  debug instrumentation).
- Fixes for certain scan scenarios using custom scan operators where code
  compiled for SM1x is run on newer GPUs of higher compute capability: the
  compiler could not tell which memory space was being used by collective
  operations and was mistakenly using global ops instead of shared ops.

# CUB 1.2.3

## Summary

CUB 1.2.3 is a minor release.

## Bug Fixes

- Fixed access violation bug in `cub::DeviceReduce::ReduceByKey` for
  non-primitive value types.
- Fixed code-snippet bug in `ArgIndexInputIteratorT` documentation.

# CUB 1.2.2

## Summary

CUB 1.2.2 adds a new variant of `cub::BlockReduce` and MSVC project solutions
for examples.

## New Features

- MSVC project solutions for device-wide and block-wide examples.
- New algorithmic variant of `cub::BlockReduce` for improved performance
  when using commutative operators (e.g., numeric addition).

## Bug Fixes

- Inclusion of Thrust headers in a certain order prevented CUB device-wide
  primitives from working properly.

# CUB 1.2.0

## Summary

CUB 1.2.0 adds `cub::DeviceReduce::ReduceByKey` and
`cub::DeviceReduce::RunLengthEncode` and support for CUDA 6.0.

## New Features

- `cub::DeviceReduce::ReduceByKey`.
- `cub::DeviceReduce::RunLengthEncode`.

## Other Enhancements

- Improved `cub::DeviceScan`, `cub::DeviceSelect`, and `cub::DevicePartition`
  performance.
- Documentation and testing:
  - Added performance-portability plots for many device-wide primitives.
  - Explained the iterator (in)compatibilities with CUDA 5.0 (and older) and
    Thrust 1.6 (and older).
- Revised the operation of temporary tile status bookkeeping for
  `cub::DeviceScan` (and similar) to be safe for current code run on future
  platforms (now uses proper fences).

## Bug Fixes

- Fix `cub::DeviceScan` bug where Windows alignment disagreements between host
  and device regarding user-defined data types would corrupt tile status.
- Fix `cub::BlockScan` bug where certain exclusive scans on custom data types
  for the `BLOCK_SCAN_WARP_SCANS` variant would return incorrect results for
  the first thread in the block.
- Added workaround to make `cub::TexRefInputIteratorT` work with CUDA 6.0.

# CUB 1.1.1

## Summary

CUB 1.1.1 introduces texture and cache-modifier iterators, descending sorting,
`cub::DeviceSelect`, `cub::DevicePartition`, `cub::Shuffle*`, and
`cub::MaxSmOccupancy`.
Additionally, scan and sort performance for older GPUs has been improved and
many bugs have been fixed.

## Breaking Changes

- Refactored block-wide I/O (`cub::BlockLoad` and `cub::BlockStore`), removing
  cache-modifiers from their interfaces.
  `cub::CacheModifiedInputIterator` and `cub::CacheModifiedOutputIterator`
  should now be used with `cub::BlockLoad` and `cub::BlockStore` to effect that
  behavior.

## New Features

- `cub::TexObjInputIterator`, `cub::TexRefInputIterator`,
  `cub::CacheModifiedInputIterator`, and `cub::CacheModifiedOutputIterator`
  types for loading & storing arbitrary types through the cache hierarchy.
  They are compatible with Thrust.
- Descending sorting for `cub::DeviceRadixSort` and `cub::BlockRadixSort`.
- Min, max, arg-min, and arg-max operators for `cub::DeviceReduce`.
- `cub::DeviceSelect` (select-unique, select-if, and select-flagged).
- `cub::DevicePartition` (partition-if, partition-flagged).
- Generic `cub::ShuffleUp`, `cub::ShuffleDown`, and `cub::ShuffleIndex` for
  warp-wide communication of arbitrary data types (SM3x and up).
- `cub::MaxSmOccupancy` for accurately determining SM occupancy for any given
  kernel function pointer.

## Other Enhancements

- Improved `cub::DeviceScan` and `cub::DeviceRadixSort` performance for older
  GPUs (SM1x to SM3x).
- Renamed the device-wide `stream_synchronous` param to `debug_synchronous` to
  avoid confusion about usage.
- Documentation improvements:
  - Added simple examples of device-wide methods.
  - Improved doxygen documentation and example snippets.
- Improved test coverage to include up to 21,000 kernel variants and 851,000
  unit tests (per architecture, per platform).

## Bug Fixes

- Fix misc `cub::DeviceScan`, `cub::BlockScan`, `cub::DeviceReduce`, and
  `cub::BlockReduce` bugs when operating on non-primitive types for older
  SM1x architectures.
- SHFL-based scans and reductions produced incorrect results for multi-word
  types (size > 4B) on Linux.
- For `cub::WarpScan`-based scans, not all threads in the first warp were
  entering the prefix callback functor.
- `cub::DeviceRadixSort` had a race condition with key-value pairs for pre-SM35
  architectures.
- `cub::DeviceRadixSort` bitfield-extract behavior with long keys on 64-bit
  Linux was incorrect.
- `cub::BlockDiscontinuity` failed to compile for types other than
  `int32_t`/`uint32_t`.
- CUDA Dynamic Parallelism (CDP, e.g. device-callable) versions of device-wide
  methods now report the same temporary storage allocation size requirement as
  their host-callable counterparts.

# CUB 1.0.2

## Summary

CUB 1.0.2 is a minor release.

## Bug Fixes

- Corrections to code-snippet examples for `cub::BlockLoad`, `cub::BlockStore`,
  and `cub::BlockDiscontinuity`.
- Cleaned up unnecessary/missing header includes.
  You can now safely include a specific .cuh (instead of `cub.cuh`).
- Bug/compilation fixes for `cub::BlockHistogram`.

# CUB 1.0.1

## Summary

CUB 1.0.1 adds `cub::DeviceRadixSort` and `cub::DeviceScan`.
Numerous other performance and correctness fixes are included.

## Breaking Changes

- New collective interface idiom (specialize/construct/invoke).

## New Features

- `cub::DeviceRadixSort`.
  Implements short-circuiting for homogeneous digit passes.
- `cub::DeviceScan`.
  Implements a single-pass "adaptive-lookback" strategy.

## Other Enhancements

- Significantly improved documentation (with example code snippets).
- More extensive regression test suite for aggressively testing collective
  variants.
- Allow non-trivially-constructed types (previously, unions had prevented
  aliasing temporary storage of those types).
- Improved support for SM3x SHFL (collective ops now use SHFL for types larger
  than 32 bits).
- Better code generation for 64-bit addressing within
  `cub::BlockLoad`/`cub::BlockStore`.
- `cub::DeviceHistogram` now supports histograms of arbitrary bins.
- Updates to accommodate CUDA 5.5 dynamic parallelism.

## Bug Fixes

- Workarounds for SM10 codegen issues in uncommonly-used
  `cub::WarpScan`/`cub::WarpReduce` specializations.

# CUB 0.9.4

## Summary

CUB 0.9.4 is a minor release.

## Enhancements

- Various documentation updates and corrections.

## Bug Fixes

- Fixed compilation errors for SM1x.
- Fixed compilation errors for some WarpScan entrypoints on SM3x and up.

# CUB 0.9.3

## Summary

CUB 0.9.3 adds histogram algorithms and work management utility descriptors.

## New Features

- `cub::DeviceHistogram256`.
- `cub::BlockHistogram256`.
- `cub::BlockScan` algorithm variant `BLOCK_SCAN_RAKING_MEMOIZE`, which
  trades more register consumption for less shared memory I/O.
- `cub::GridQueue` and `cub::GridEvenShare` work management utility
  descriptors.

## Other Enhancements

- Updates to `cub::BlockRadixRank` to use `cub::BlockScan`, which improves
  performance on SM3x by using SHFL.
- Allow types other than builtin types to be used in `cub::WarpScan::*Sum`
  methods if they have only `operator+` overloaded.
  Previously they were also required to support assignment from `int(0)`.
- Update `cub::BlockReduce`'s `BLOCK_REDUCE_WARP_REDUCTIONS` algorithm to work
  even when the block size is not an even multiple of the warp size.
- Refactoring of the `cub::DeviceAllocator` interface and the
  `cub::CachingDeviceAllocator` implementation.

# CUB 0.9.2

## Summary

CUB 0.9.2 adds `cub::WarpReduce`.

## New Features

- `cub::WarpReduce`, which uses the SHFL instruction when applicable.
  `cub::BlockReduce` now uses this `cub::WarpReduce` instead of implementing
  its own.

## Enhancements

- Documentation updates and corrections.

## Bug Fixes

- Fixes for 64-bit Linux compilation warnings and errors.

# CUB 0.9.1

## Summary

CUB 0.9.1 is a minor release.

## Bug Fixes

- Fix for ambiguity in `cub::BlockReduce::Reduce` between generic reduction and
  summation.
  Summation entrypoints are now called `::Sum()`, similar to the
  convention in `cub::BlockScan`.
- Small edits to documentation and download tracking.

# CUB 0.9.0

## Summary

Initial preview release.
CUB is the first durable, high-performance library of cooperative block-level,
warp-level, and thread-level primitives for CUDA kernel programming.
spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/scatter.h
DELETED
@@ -1,44 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// the purpose of this header is to #include the scatter.h header
-// of the sequential, host, and device systems. It should be #included in any
-// code which uses adl to dispatch scatter
-
-#include <thrust/system/detail/sequential/scatter.h>
-
-// SCons can't see through the #defines below to figure out what this header
-// includes, so we fake it out by specifying all possible files we might end up
-// including inside an #if 0.
-#if 0
-#include <thrust/system/cpp/detail/scatter.h>
-#include <thrust/system/cuda/detail/scatter.h>
-#include <thrust/system/omp/detail/scatter.h>
-#include <thrust/system/tbb/detail/scatter.h>
-#endif
-
-#define __THRUST_HOST_SYSTEM_SCATTER_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/scatter.h>
-#include __THRUST_HOST_SYSTEM_SCATTER_HEADER
-#undef __THRUST_HOST_SYSTEM_SCATTER_HEADER
-
-#define __THRUST_DEVICE_SYSTEM_SCATTER_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/scatter.h>
-#include __THRUST_DEVICE_SYSTEM_SCATTER_HEADER
-#undef __THRUST_DEVICE_SYSTEM_SCATTER_HEADER
-
spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/merge.h
DELETED
@@ -1,70 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/tbb/detail/execution_policy.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace tbb
-{
-namespace detail
-{
-
-template<typename ExecutionPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename OutputIterator,
-         typename StrictWeakOrdering>
-OutputIterator merge(execution_policy<ExecutionPolicy> &exec,
-                     InputIterator1 first1,
-                     InputIterator1 last1,
-                     InputIterator2 first2,
-                     InputIterator2 last2,
-                     OutputIterator result,
-                     StrictWeakOrdering comp);
-
-template <typename ExecutionPolicy,
-          typename InputIterator1,
-          typename InputIterator2,
-          typename InputIterator3,
-          typename InputIterator4,
-          typename OutputIterator1,
-          typename OutputIterator2,
-          typename StrictWeakOrdering>
-thrust::pair<OutputIterator1,OutputIterator2>
-  merge_by_key(execution_policy<ExecutionPolicy> &exec,
-               InputIterator1 keys_first1,
-               InputIterator1 keys_last1,
-               InputIterator2 keys_first2,
-               InputIterator2 keys_last2,
-               InputIterator3 values_first3,
-               InputIterator4 values_first4,
-               OutputIterator1 keys_result,
-               OutputIterator2 values_result,
-               StrictWeakOrdering comp);
-
-} // end detail
-} // end tbb
-} // end system
-} // end thrust
-
-#include <thrust/system/tbb/detail/merge.inl>
-
spaces/CVPR/lama-example/models/ade20k/__init__.py
DELETED
@@ -1 +0,0 @@
-from .base import *
spaces/CVPR/lama-example/saicinpainting/training/data/masks.py
DELETED
@@ -1,332 +0,0 @@
-import math
-import random
-import hashlib
-import logging
-from enum import Enum
-
-import cv2
-import numpy as np
-
-from saicinpainting.evaluation.masks.mask import SegmentationMask
-from saicinpainting.utils import LinearRamp
-
-LOGGER = logging.getLogger(__name__)
-
-
-class DrawMethod(Enum):
-    LINE = 'line'
-    CIRCLE = 'circle'
-    SQUARE = 'square'
-
-
-def make_random_irregular_mask(shape, max_angle=4, max_len=60, max_width=20, min_times=0, max_times=10,
-                               draw_method=DrawMethod.LINE):
-    draw_method = DrawMethod(draw_method)
-
-    height, width = shape
-    mask = np.zeros((height, width), np.float32)
-    times = np.random.randint(min_times, max_times + 1)
-    for i in range(times):
-        start_x = np.random.randint(width)
-        start_y = np.random.randint(height)
-        for j in range(1 + np.random.randint(5)):
-            angle = 0.01 + np.random.randint(max_angle)
-            if i % 2 == 0:
-                angle = 2 * 3.1415926 - angle
-            length = 10 + np.random.randint(max_len)
-            brush_w = 5 + np.random.randint(max_width)
-            end_x = np.clip((start_x + length * np.sin(angle)).astype(np.int32), 0, width)
-            end_y = np.clip((start_y + length * np.cos(angle)).astype(np.int32), 0, height)
-            if draw_method == DrawMethod.LINE:
-                cv2.line(mask, (start_x, start_y), (end_x, end_y), 1.0, brush_w)
-            elif draw_method == DrawMethod.CIRCLE:
-                cv2.circle(mask, (start_x, start_y), radius=brush_w, color=1., thickness=-1)
-            elif draw_method == DrawMethod.SQUARE:
-                radius = brush_w // 2
-                mask[start_y - radius:start_y + radius, start_x - radius:start_x + radius] = 1
-            start_x, start_y = end_x, end_y
-    return mask[None, ...]
-
-
-class RandomIrregularMaskGenerator:
-    def __init__(self, max_angle=4, max_len=60, max_width=20, min_times=0, max_times=10, ramp_kwargs=None,
-                 draw_method=DrawMethod.LINE):
-        self.max_angle = max_angle
-        self.max_len = max_len
-        self.max_width = max_width
-        self.min_times = min_times
-        self.max_times = max_times
-        self.draw_method = draw_method
-        self.ramp = LinearRamp(**ramp_kwargs) if ramp_kwargs is not None else None
-
-    def __call__(self, img, iter_i=None, raw_image=None):
-        coef = self.ramp(iter_i) if (self.ramp is not None) and (iter_i is not None) else 1
-        cur_max_len = int(max(1, self.max_len * coef))
-        cur_max_width = int(max(1, self.max_width * coef))
-        cur_max_times = int(self.min_times + 1 + (self.max_times - self.min_times) * coef)
-        return make_random_irregular_mask(img.shape[1:], max_angle=self.max_angle, max_len=cur_max_len,
-                                          max_width=cur_max_width, min_times=self.min_times, max_times=cur_max_times,
-                                          draw_method=self.draw_method)
-
-
-def make_random_rectangle_mask(shape, margin=10, bbox_min_size=30, bbox_max_size=100, min_times=0, max_times=3):
-    height, width = shape
-    mask = np.zeros((height, width), np.float32)
-    bbox_max_size = min(bbox_max_size, height - margin * 2, width - margin * 2)
-    times = np.random.randint(min_times, max_times + 1)
-    for i in range(times):
-        box_width = np.random.randint(bbox_min_size, bbox_max_size)
-        box_height = np.random.randint(bbox_min_size, bbox_max_size)
-        start_x = np.random.randint(margin, width - margin - box_width + 1)
-        start_y = np.random.randint(margin, height - margin - box_height + 1)
-        mask[start_y:start_y + box_height, start_x:start_x + box_width] = 1
-    return mask[None, ...]
-
-
-class RandomRectangleMaskGenerator:
-    def __init__(self, margin=10, bbox_min_size=30, bbox_max_size=100, min_times=0, max_times=3, ramp_kwargs=None):
-        self.margin = margin
-        self.bbox_min_size = bbox_min_size
-        self.bbox_max_size = bbox_max_size
-        self.min_times = min_times
-        self.max_times = max_times
-        self.ramp = LinearRamp(**ramp_kwargs) if ramp_kwargs is not None else None
-
-    def __call__(self, img, iter_i=None, raw_image=None):
-        coef = self.ramp(iter_i) if (self.ramp is not None) and (iter_i is not None) else 1
-        cur_bbox_max_size = int(self.bbox_min_size + 1 + (self.bbox_max_size - self.bbox_min_size) * coef)
-        cur_max_times = int(self.min_times + (self.max_times - self.min_times) * coef)
-        return make_random_rectangle_mask(img.shape[1:], margin=self.margin, bbox_min_size=self.bbox_min_size,
-                                          bbox_max_size=cur_bbox_max_size, min_times=self.min_times,
-                                          max_times=cur_max_times)
-
-
-class RandomSegmentationMaskGenerator:
-    def __init__(self, **kwargs):
-        self.impl = None  # will be instantiated in first call (effectively in subprocess)
-        self.kwargs = kwargs
-
-    def __call__(self, img, iter_i=None, raw_image=None):
-        if self.impl is None:
-            self.impl = SegmentationMask(**self.kwargs)
-
-        masks = self.impl.get_masks(np.transpose(img, (1, 2, 0)))
-        masks = [m for m in masks if len(np.unique(m)) > 1]
-        return np.random.choice(masks)
-
-
-def make_random_superres_mask(shape, min_step=2, max_step=4, min_width=1, max_width=3):
-    height, width = shape
-    mask = np.zeros((height, width), np.float32)
-    step_x = np.random.randint(min_step, max_step + 1)
-    width_x = np.random.randint(min_width, min(step_x, max_width + 1))
-    offset_x = np.random.randint(0, step_x)
-
-    step_y = np.random.randint(min_step, max_step + 1)
-    width_y = np.random.randint(min_width, min(step_y, max_width + 1))
-    offset_y = np.random.randint(0, step_y)
-
-    for dy in range(width_y):
-        mask[offset_y + dy::step_y] = 1
-    for dx in range(width_x):
-        mask[:, offset_x + dx::step_x] = 1
-    return mask[None, ...]
-
-
-class RandomSuperresMaskGenerator:
-    def __init__(self, **kwargs):
-        self.kwargs = kwargs
-
-    def __call__(self, img, iter_i=None):
-        return make_random_superres_mask(img.shape[1:], **self.kwargs)
-
-
-class DumbAreaMaskGenerator:
-    min_ratio = 0.1
-    max_ratio = 0.35
-    default_ratio = 0.225
-
-    def __init__(self, is_training):
-        # Parameters:
-        #     is_training(bool): If true - random rectangular mask, if false - central square mask
-        self.is_training = is_training
-
-    def _random_vector(self, dimension):
-        if self.is_training:
-            lower_limit = math.sqrt(self.min_ratio)
-            upper_limit = math.sqrt(self.max_ratio)
-            mask_side = round((random.random() * (upper_limit - lower_limit) + lower_limit) * dimension)
-            u = random.randint(0, dimension - mask_side - 1)
-            v = u + mask_side
-        else:
-            margin = (math.sqrt(self.default_ratio) / 2) * dimension
-            u = round(dimension / 2 - margin)
-            v = round(dimension / 2 + margin)
-        return u, v
-
-    def __call__(self, img, iter_i=None, raw_image=None):
-        c, height, width = img.shape
-        mask = np.zeros((height, width), np.float32)
-        x1, x2 = self._random_vector(width)
-        y1, y2 = self._random_vector(height)
-        mask[x1:x2, y1:y2] = 1
-        return mask[None, ...]
-
-
-class OutpaintingMaskGenerator:
-    def __init__(self, min_padding_percent: float = 0.04, max_padding_percent: int = 0.25,
-                 left_padding_prob: float = 0.5, top_padding_prob: float = 0.5,
-                 right_padding_prob: float = 0.5, bottom_padding_prob: float = 0.5,
-                 is_fixed_randomness: bool = False):
-        """
-        is_fixed_randomness - get identical paddings for the same image if args are the same
-        """
-        self.min_padding_percent = min_padding_percent
-        self.max_padding_percent = max_padding_percent
-        self.probs = [left_padding_prob, top_padding_prob, right_padding_prob, bottom_padding_prob]
-        self.is_fixed_randomness = is_fixed_randomness
-
-        assert self.min_padding_percent <= self.max_padding_percent
-        assert self.max_padding_percent > 0
-        assert len([x for x in [self.min_padding_percent, self.max_padding_percent] if (x >= 0 and x <= 1)]) == 2, f"Padding percentage should be in [0,1]"
-        assert sum(self.probs) > 0, f"At least one of the padding probs should be greater than 0 - {self.probs}"
-        assert len([x for x in self.probs if (x >= 0) and (x <= 1)]) == 4, f"At least one of padding probs is not in [0,1] - {self.probs}"
-        if len([x for x in self.probs if x > 0]) == 1:
-            LOGGER.warning(f"Only one padding prob is greater than zero - {self.probs}. That means that the outpainting masks will be always on the same side")
-
-    def apply_padding(self, mask, coord):
-        mask[int(coord[0][0] * self.img_h):int(coord[1][0] * self.img_h),
-             int(coord[0][1] * self.img_w):int(coord[1][1] * self.img_w)] = 1
-        return mask
-
-    def get_padding(self, size):
-        n1 = int(self.min_padding_percent * size)
-        n2 = int(self.max_padding_percent * size)
-        return self.rnd.randint(n1, n2) / size
-
-    @staticmethod
-    def _img2rs(img):
-        arr = np.ascontiguousarray(img.astype(np.uint8))
-        str_hash = hashlib.sha1(arr).hexdigest()
-        res = hash(str_hash) % (2**32)
-        return res
-
-    def __call__(self, img, iter_i=None, raw_image=None):
-        c, self.img_h, self.img_w = img.shape
-        mask = np.zeros((self.img_h, self.img_w), np.float32)
-        at_least_one_mask_applied = False
-
-        if self.is_fixed_randomness:
-            assert raw_image is not None, f"Cant calculate hash on raw_image=None"
-            rs = self._img2rs(raw_image)
-            self.rnd = np.random.RandomState(rs)
-        else:
-            self.rnd = np.random
-
-        coords = [[
-                   (0, 0),
-                   (1, self.get_padding(size=self.img_h))
-                  ],
-                  [
-                   (0, 0),
-                   (self.get_padding(size=self.img_w), 1)
-                  ],
-                  [
-                   (0, 1 - self.get_padding(size=self.img_h)),
-                   (1, 1)
-                  ],
-                  [
-                   (1 - self.get_padding(size=self.img_w), 0),
-                   (1, 1)
-                  ]]
-
-        for pp, coord in zip(self.probs, coords):
-            if self.rnd.random() < pp:
-                at_least_one_mask_applied = True
-                mask = self.apply_padding(mask=mask, coord=coord)
-
-        if not at_least_one_mask_applied:
-            idx = self.rnd.choice(range(len(coords)), p=np.array(self.probs) / sum(self.probs))
-            mask = self.apply_padding(mask=mask, coord=coords[idx])
-        return mask[None, ...]
-
-
-class MixedMaskGenerator:
-    def __init__(self, irregular_proba=1/3, irregular_kwargs=None,
-                 box_proba=1/3, box_kwargs=None,
-                 segm_proba=1/3, segm_kwargs=None,
-                 squares_proba=0, squares_kwargs=None,
-                 superres_proba=0, superres_kwargs=None,
-                 outpainting_proba=0, outpainting_kwargs=None,
-                 invert_proba=0):
-        self.probas = []
-        self.gens = []
-
-        if irregular_proba > 0:
-            self.probas.append(irregular_proba)
-            if irregular_kwargs is None:
-                irregular_kwargs = {}
-            else:
-                irregular_kwargs = dict(irregular_kwargs)
-            irregular_kwargs['draw_method'] = DrawMethod.LINE
-            self.gens.append(RandomIrregularMaskGenerator(**irregular_kwargs))
-
-        if box_proba > 0:
-            self.probas.append(box_proba)
-            if box_kwargs is None:
-                box_kwargs = {}
-            self.gens.append(RandomRectangleMaskGenerator(**box_kwargs))
-
-        if segm_proba > 0:
-            self.probas.append(segm_proba)
-            if segm_kwargs is None:
-                segm_kwargs = {}
-            self.gens.append(RandomSegmentationMaskGenerator(**segm_kwargs))
-
-        if squares_proba > 0:
-            self.probas.append(squares_proba)
-            if squares_kwargs is None:
-                squares_kwargs = {}
-            else:
-                squares_kwargs = dict(squares_kwargs)
-            squares_kwargs['draw_method'] = DrawMethod.SQUARE
-            self.gens.append(RandomIrregularMaskGenerator(**squares_kwargs))
-
-        if superres_proba > 0:
-            self.probas.append(superres_proba)
-            if superres_kwargs is None:
-                superres_kwargs = {}
-            self.gens.append(RandomSuperresMaskGenerator(**superres_kwargs))
-
-        if outpainting_proba > 0:
-            self.probas.append(outpainting_proba)
-            if outpainting_kwargs is None:
-                outpainting_kwargs = {}
-            self.gens.append(OutpaintingMaskGenerator(**outpainting_kwargs))
-
-        self.probas = np.array(self.probas, dtype='float32')
-        self.probas /= self.probas.sum()
-        self.invert_proba = invert_proba
-
-    def __call__(self, img, iter_i=None, raw_image=None):
-        kind = np.random.choice(len(self.probas), p=self.probas)
-        gen = self.gens[kind]
-        result = gen(img, iter_i=iter_i, raw_image=raw_image)
-        if self.invert_proba > 0 and random.random() < self.invert_proba:
-            result = 1 - result
-        return result
-
-
-def get_mask_generator(kind, kwargs):
-    if kind is None:
-        kind = "mixed"
-    if kwargs is None:
-        kwargs = {}
-
-    if kind == "mixed":
-        cl = MixedMaskGenerator
-    elif kind == "outpainting":
-        cl = OutpaintingMaskGenerator
-    elif kind == "dumb":
-        cl = DumbAreaMaskGenerator
-    else:
-        raise NotImplementedError(f"No such generator kind = {kind}")
-    return cl(**kwargs)
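For readability, the striped pattern produced by `make_random_superres_mask` in the deleted file above can be restated as a dependency-free sketch. The function name and explicit parameters here are illustrative: the deleted module draws `step`, `width`, and `offset` from `np.random` instead of taking them as arguments, and uses a NumPy array rather than nested lists.

```python
def make_superres_mask(height, width, step_x, width_x, offset_x, step_y, width_y, offset_y):
    """Stripe mask sketch: mark width_y consecutive rows every step_y rows,
    and width_x consecutive columns every step_x columns, as in the deleted
    make_random_superres_mask (but with deterministic parameters)."""
    # All-zero mask as nested lists (stand-in for np.zeros((height, width))).
    mask = [[0.0] * width for _ in range(height)]
    # Horizontal bands: rows offset_y, offset_y + step_y, ... each width_y thick.
    for dy in range(width_y):
        for y in range(offset_y + dy, height, step_y):
            for x in range(width):
                mask[y][x] = 1.0
    # Vertical bands: columns offset_x, offset_x + step_x, ... each width_x thick.
    for dx in range(width_x):
        for x in range(offset_x + dx, width, step_x):
            for y in range(height):
                mask[y][x] = 1.0
    return mask
```

On a 4x4 mask with step 2, width 1, and offset 0 in both axes, every other row and every other column is filled, leaving an isolated zero at each odd (row, column) pair.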
spaces/CVPR/regionclip-demo/detectron2/solver/lr_scheduler.py
DELETED
@@ -1,238 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import logging
-import math
-from bisect import bisect_right
-from typing import List
-import torch
-from fvcore.common.param_scheduler import (
-    CompositeParamScheduler,
-    ConstantParamScheduler,
-    LinearParamScheduler,
-    ParamScheduler,
-)
-
-logger = logging.getLogger(__name__)
-
-
-class WarmupParamScheduler(CompositeParamScheduler):
-    """
-    Add an initial warmup stage to another scheduler.
-    """
-
-    def __init__(
-        self,
-        scheduler: ParamScheduler,
-        warmup_factor: float,
-        warmup_length: float,
-        warmup_method: str = "linear",
-    ):
-        """
-        Args:
-            scheduler: warmup will be added at the beginning of this scheduler
-            warmup_factor: the factor w.r.t the initial value of ``scheduler``, e.g. 0.001
-            warmup_length: the relative length (in [0, 1]) of warmup steps w.r.t the entire
-                training, e.g. 0.01
-            warmup_method: one of "linear" or "constant"
-        """
-        end_value = scheduler(warmup_length)  # the value to reach when warmup ends
-        start_value = warmup_factor * scheduler(0.0)
-        if warmup_method == "constant":
-            warmup = ConstantParamScheduler(start_value)
-        elif warmup_method == "linear":
-            warmup = LinearParamScheduler(start_value, end_value)
-        else:
-            raise ValueError("Unknown warmup method: {}".format(warmup_method))
-        super().__init__(
-            [warmup, scheduler],
-            interval_scaling=["rescaled", "fixed"],
-            lengths=[warmup_length, 1 - warmup_length],
-        )
-
-
-class LRMultiplier(torch.optim.lr_scheduler._LRScheduler):
-    """
-    A LRScheduler which uses fvcore :class:`ParamScheduler` to multiply the
-    learning rate of each param in the optimizer.
-    Every step, the learning rate of each parameter becomes its initial value
-    multiplied by the output of the given :class:`ParamScheduler`.
-
-    The absolute learning rate value of each parameter can be different.
-    This scheduler can be used as long as the relative scale among them do
-    not change during training.
-
-    Examples:
-    ::
-        LRMultiplier(
-            opt,
-            WarmupParamScheduler(
-                MultiStepParamScheduler(
-                    [1, 0.1, 0.01],
-                    milestones=[60000, 80000],
-                    num_updates=90000,
-                ), 0.001, 100 / 90000
-            ),
-            max_iter=90000
-        )
-    """
-
-    # NOTES: in the most general case, every LR can use its own scheduler.
-    # Supporting this requires interaction with the optimizer when its parameter
-    # group is initialized. For example, classyvision implements its own optimizer
-    # that allows different schedulers for every parameter group.
-    # To avoid this complexity, we use this class to support the most common cases
-    # where the relative scale among all LRs stay unchanged during training. In this
-    # case we only need a total of one scheduler that defines the relative LR multiplier.
-
-    def __init__(
-        self,
-        optimizer: torch.optim.Optimizer,
-        multiplier: ParamScheduler,
-        max_iter: int,
-        last_iter: int = -1,
-    ):
-        """
-        Args:
-            optimizer, last_iter: See ``torch.optim.lr_scheduler._LRScheduler``.
-                ``last_iter`` is the same as ``last_epoch``.
-            multiplier: a fvcore ParamScheduler that defines the multiplier on
-                every LR of the optimizer
-            max_iter: the total number of training iterations
-        """
-        if not isinstance(multiplier, ParamScheduler):
-            raise ValueError(
-                "_LRMultiplier(multiplier=) must be an instance of fvcore "
-                f"ParamScheduler. Got {multiplier} instead."
-            )
-        self._multiplier = multiplier
-        self._max_iter = max_iter
-        super().__init__(optimizer, last_epoch=last_iter)
-
-    def state_dict(self):
-        # fvcore schedulers are stateless. Only keep pytorch scheduler states
-        return {"base_lrs": self.base_lrs, "last_epoch": self.last_epoch}
-
-    def get_lr(self) -> List[float]:
-        multiplier = self._multiplier(self.last_epoch / self._max_iter)
-        return [base_lr * multiplier for base_lr in self.base_lrs]
-
-
-"""
-Content below is no longer needed!
-"""
-
-# NOTE: PyTorch's LR scheduler interface uses names that assume the LR changes
-# only on epoch boundaries. We typically use iteration based schedules instead.
-# As a result, "epoch" (e.g., as in self.last_epoch) should be understood to mean
-# "iteration" instead.
-
-# FIXME: ideally this would be achieved with a CombinedLRScheduler, separating
-# MultiStepLR with WarmupLR but the current LRScheduler design doesn't allow it.
-
-
-class WarmupMultiStepLR(torch.optim.lr_scheduler._LRScheduler):
-    def __init__(
-        self,
-        optimizer: torch.optim.Optimizer,
-        milestones: List[int],
-        gamma: float = 0.1,
-        warmup_factor: float = 0.001,
-        warmup_iters: int = 1000,
-        warmup_method: str = "linear",
-        last_epoch: int = -1,
-    ):
-        logger.warning(
-            "WarmupMultiStepLR is deprecated! Use LRMultiplier with fvcore ParamScheduler instead!"
-        )
-        if not list(milestones) == sorted(milestones):
-            raise ValueError(
-                "Milestones should be a list of" " increasing integers. Got {}", milestones
-            )
-        self.milestones = milestones
-        self.gamma = gamma
-        self.warmup_factor = warmup_factor
-        self.warmup_iters = warmup_iters
-        self.warmup_method = warmup_method
-        super().__init__(optimizer, last_epoch)
-
-    def get_lr(self) -> List[float]:
-        warmup_factor = _get_warmup_factor_at_iter(
-            self.warmup_method, self.last_epoch, self.warmup_iters, self.warmup_factor
-        )
-        return [
-            base_lr * warmup_factor * self.gamma ** bisect_right(self.milestones, self.last_epoch)
-            for base_lr in self.base_lrs
-        ]
-
-    def _compute_values(self) -> List[float]:
-        # The new interface
-        return self.get_lr()
-
-
-class WarmupCosineLR(torch.optim.lr_scheduler._LRScheduler):
-    def __init__(
-        self,
-        optimizer: torch.optim.Optimizer,
-        max_iters: int,
-        warmup_factor: float = 0.001,
-        warmup_iters: int = 1000,
-        warmup_method: str = "linear",
-        last_epoch: int = -1,
-    ):
-        logger.warning(
-            "WarmupCosineLR is deprecated! Use LRMultiplier with fvcore ParamScheduler instead!"
-        )
-        self.max_iters = max_iters
-        self.warmup_factor = warmup_factor
-        self.warmup_iters = warmup_iters
-        self.warmup_method = warmup_method
-        super().__init__(optimizer, last_epoch)
-
-    def get_lr(self) -> List[float]:
-        warmup_factor = _get_warmup_factor_at_iter(
-            self.warmup_method, self.last_epoch, self.warmup_iters, self.warmup_factor
-        )
-        # Different definitions of half-cosine with warmup are possible. For
-        # simplicity we multiply the standard half-cosine schedule by the warmup
-        # factor. An alternative is to start the period of the cosine at warmup_iters
-        # instead of at 0. In the case that warmup_iters << max_iters the two are
-        # very close to each other.
-        return [
-            base_lr
-            * warmup_factor
-            * 0.5
-            * (1.0 + math.cos(math.pi * self.last_epoch / self.max_iters))
-            for base_lr in self.base_lrs
-        ]
-
-    def _compute_values(self) -> List[float]:
-        # The new interface
-        return self.get_lr()
-
-
-def _get_warmup_factor_at_iter(
-    method: str, iter: int, warmup_iters: int, warmup_factor: float
-) -> float:
-    """
-    Return the learning rate warmup factor at a specific iteration.
-    See :paper:`ImageNet in 1h` for more details.
-
-    Args:
-        method (str): warmup method; either "constant" or "linear".
-        iter (int): iteration at which to calculate the warmup factor.
-        warmup_iters (int): the number of warmup iterations.
-        warmup_factor (float): the base warmup factor (the meaning changes according
-            to the method used).
-
-    Returns:
-        float: the effective warmup factor at the given iteration.
-    """
-    if iter >= warmup_iters:
-        return 1.0
-
-    if method == "constant":
-        return warmup_factor
-    elif method == "linear":
-        alpha = iter / warmup_iters
-        return warmup_factor * (1 - alpha) + alpha
-    else:
-        raise ValueError("Unknown warmup method: {}".format(method))
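The warmup logic in the deleted `_get_warmup_factor_at_iter` above is simple enough to restate as a dependency-free sketch: linear warmup interpolates from `warmup_factor` at iteration 0 up to 1.0 at `warmup_iters`, while constant warmup holds `warmup_factor` until warmup ends. The name below is illustrative (renamed to avoid shadowing the builtin `iter`).

```python
def warmup_factor_at_iter(method: str, it: int, warmup_iters: int, warmup_factor: float) -> float:
    """Sketch of the deleted warmup-factor computation."""
    # After warmup, the multiplier is always 1.0.
    if it >= warmup_iters:
        return 1.0
    if method == "constant":
        return warmup_factor
    if method == "linear":
        # Linear interpolation from warmup_factor (alpha=0) to 1.0 (alpha=1).
        alpha = it / warmup_iters
        return warmup_factor * (1 - alpha) + alpha
    raise ValueError(f"Unknown warmup method: {method}")
```

With the detectron2 defaults (`warmup_factor=0.001`, `warmup_iters=1000`), linear warmup yields 0.001 at iteration 0, roughly 0.5 at the midpoint, and exactly 1.0 from iteration 1000 on.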
spaces/Chemsseddine/summarisation/README.md
DELETED
@@ -1,12 +0,0 @@
-	---
-	title: Summarisation
-	emoji: 📝
-	colorFrom: indigo
-	colorTo: blue
-	sdk: gradio
-	sdk_version: 3.0.20
-	app_file: app.py
-	pinned: false
-	---
-
-	Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Chris4K/llms_compare/Adobe-Media-Encoder-Cs4-Portablerar.md
DELETED
@@ -1,68 +0,0 @@
-	## Adobe Media Encoder Cs4 Portable.rar
-
-	**Download File ✫ [https://urluso.com/2tBNxz](https://urluso.com/2tBNxz)**
-
-	# How to Download and Use Adobe Media Encoder CS4 Portable
-
-	Adobe Media Encoder CS4 Portable is a software that allows you to convert video and audio files to various formats. It is a standalone application that does not require installation and can be run from a USB drive or any other removable media. In this article, we will show you how to download and use Adobe Media Encoder CS4 Portable.
-
-	## Step 1: Download Adobe Media Encoder CS4 Portable
-
-	You can download Adobe Media Encoder CS4 Portable from various online sources, such as 4shared[^1^] or Google Drive[^2^]. The file size is about 68 MB and it is compressed in a RAR archive. You will need a software like WinRAR or 7-Zip to extract the files.
-
-	## Step 2: Extract Adobe Media Encoder CS4 Portable
-
-	After downloading the RAR archive, right-click on it and select "Extract Here" or "Extract to Adobe Media Encoder CS4 Portable". You will see a folder named "Adobe Media Encoder CS4 Portable" with several files inside. You can move this folder to any location you want, such as your desktop or a USB drive.
-
-	## Step 3: Run Adobe Media Encoder CS4 Portable
-
-	To run Adobe Media Encoder CS4 Portable, double-click on the file named "Adobe Media Encoder.exe". You will see a window with a simple interface where you can add, edit, and encode your media files. You can drag and drop files from your computer or browse them using the "Add" button. You can also adjust the settings for each file, such as the format, quality, resolution, frame rate, bitrate, and more. You can preview the output using the "Play" button. When you are ready, click on the "Start Queue" button to begin the encoding process. You can monitor the progress and status of each file in the queue. The encoded files will be saved in the same folder as the original files by default.
-
-	## Conclusion
-
-	Adobe Media Encoder CS4 Portable is a handy tool for converting video and audio files to various formats. It is easy to use and does not require installation. You can download it from online sources and run it from any removable media. It supports a wide range of input and output formats and allows you to customize the encoding settings for each file. It is compatible with Windows XP, Vista, 7, 8, and 10.
-
-	145887f19f
spaces/ChrisPreston/diff-svc_minato_aqua/utils/pitch_utils.py
DELETED
@@ -1,76 +0,0 @@
-#########
-# world
-##########
-import librosa
-import numpy as np
-import torch
-
-# gamma = 0
-# mcepInput = 3  # 0 for dB, 3 for magnitude
-# alpha = 0.45
-# en_floor = 10 ** (-80 / 20)
-# FFT_SIZE = 2048
-
-
-def f0_to_coarse(f0, hparams):
-    f0_bin = hparams['f0_bin']
-    f0_max = hparams['f0_max']
-    f0_min = hparams['f0_min']
-    is_torch = isinstance(f0, torch.Tensor)
-    f0_mel_min = 1127 * np.log(1 + f0_min / 700)
-    f0_mel_max = 1127 * np.log(1 + f0_max / 700)
-    f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700)
-    f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1
-
-    f0_mel[f0_mel <= 1] = 1
-    f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1
-    f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(int)
-    assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min())
-    return f0_coarse
-
-
-def norm_f0(f0, uv, hparams):
-    is_torch = isinstance(f0, torch.Tensor)
-    if hparams['pitch_norm'] == 'standard':
-        f0 = (f0 - hparams['f0_mean']) / hparams['f0_std']
-    if hparams['pitch_norm'] == 'log':
-        f0 = torch.log2(f0) if is_torch else np.log2(f0)
-    if uv is not None and hparams['use_uv']:
-        f0[uv > 0] = 0
-    return f0
-
-
-def norm_interp_f0(f0, hparams):
-    is_torch = isinstance(f0, torch.Tensor)
-    if is_torch:
-        device = f0.device
-        f0 = f0.data.cpu().numpy()
-    uv = f0 == 0
-    f0 = norm_f0(f0, uv, hparams)
-    if sum(uv) == len(f0):
-        f0[uv] = 0
-    elif sum(uv) > 0:
-        f0[uv] = np.interp(np.where(uv)[0], np.where(~uv)[0], f0[~uv])
-    uv = torch.FloatTensor(uv)
-    f0 = torch.FloatTensor(f0)
-    if is_torch:
-        f0 = f0.to(device)
-    return f0, uv
-
-
-def denorm_f0(f0, uv, hparams, pitch_padding=None, min=None, max=None):
-    if hparams['pitch_norm'] == 'standard':
-        f0 = f0 * hparams['f0_std'] + hparams['f0_mean']
-    if hparams['pitch_norm'] == 'log':
-        f0 = 2 ** f0
-    if min is not None:
-        f0 = f0.clamp(min=min)
-    if max is not None:
-        f0 = f0.clamp(max=max)
-    if uv is not None and hparams['use_uv']:
-        f0[uv > 0] = 0
-    if pitch_padding is not None:
-        f0[pitch_padding] = 0
-    return f0
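The log-scale branch of the deleted pitch normalization is a symmetric pair: `norm_f0` takes F0 to log2 scale and zeroes unvoiced frames, and `denorm_f0` inverts it. A minimal NumPy-only sketch of that round trip (only the `pitch_norm == 'log'` branch is reproduced; the `hparams` dict and the F0 values are illustrative assumptions, not from any real config):

```python
import numpy as np

hparams = {'pitch_norm': 'log', 'use_uv': True}  # illustrative config

def norm_f0(f0, uv, hparams):
    # log branch of the deleted norm_f0: move F0 to log2 scale,
    # then zero out unvoiced frames (uv > 0).
    if hparams['pitch_norm'] == 'log':
        f0 = np.log2(f0)
    if uv is not None and hparams['use_uv']:
        f0[uv > 0] = 0
    return f0

def denorm_f0(f0, uv, hparams):
    # inverse of the log branch: back to Hz, then re-zero unvoiced frames
    # (2 ** 0 would otherwise turn them into 1 Hz).
    if hparams['pitch_norm'] == 'log':
        f0 = 2 ** f0
    if uv is not None and hparams['use_uv']:
        f0[uv > 0] = 0
    return f0

f0 = np.array([220.0, 440.0, 880.0])
uv = np.array([0.0, 0.0, 1.0])  # last frame marked unvoiced
restored = denorm_f0(norm_f0(f0.copy(), uv, hparams), uv, hparams)
print(restored)  # voiced frames survive the round trip; unvoiced stay zero
```

Note the re-zeroing step in `denorm_f0`: unvoiced frames are stored as 0 in log space, and without it exponentiation would map them to 1 Hz instead of silence.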
spaces/ClaudioX/mg_sd_esp/README.md
DELETED
@@ -1,13 +0,0 @@
-	---
-	title: Mg Sd Esp
-	emoji: 😻
-	colorFrom: red
-	colorTo: green
-	sdk: gradio
-	sdk_version: 3.4.1
-	app_file: app.py
-	pinned: false
-	license: wtfpl
-	---
-
-	Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/CofAI/chat/g4f/Provider/Providers/ChatgptAi.py
DELETED
@@ -1,51 +0,0 @@
-import os
-import requests, re
-from ...typing import sha256, Dict, get_type_hints
-
-url = 'https://chatgpt.ai/gpt-4/'
-model = ['gpt-4']
-supports_stream = True
-needs_auth = False
-
-
-def _create_completion(model: str, messages: list, stream: bool, **kwargs):
-    chat = ''
-    for message in messages:
-        chat += '%s: %s\n' % (message['role'], message['content'])
-    chat += 'assistant: '
-
-    response = requests.get('https://chatgpt.ai/')
-    nonce, post_id, _, bot_id = re.findall(r'data-nonce="(.*)"\n data-post-id="(.*)"\n data-url="(.*)"\n data-bot-id="(.*)"\n data-width', response.text)[0]
-
-    headers = {
-        'authority': 'chatgpt.ai',
-        'accept': '*/*',
-        'accept-language': 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
-        'cache-control': 'no-cache',
-        'origin': 'https://chatgpt.ai',
-        'pragma': 'no-cache',
-        'referer': 'https://chatgpt.ai/gpt-4/',
-        'sec-ch-ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"',
-        'sec-ch-ua-mobile': '?0',
-        'sec-ch-ua-platform': '"Windows"',
-        'sec-fetch-dest': 'empty',
-        'sec-fetch-mode': 'cors',
-        'sec-fetch-site': 'same-origin',
-        'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36',
-    }
-    data = {
-        '_wpnonce': nonce,
-        'post_id': post_id,
-        'url': 'https://chatgpt.ai/gpt-4',
-        'action': 'wpaicg_chat_shortcode_message',
-        'message': chat,
-        'bot_id': bot_id
-    }
-
-    response = requests.post('https://chatgpt.ai/wp-admin/admin-ajax.php',
-                             headers=headers, data=data)
-
-    yield (response.json()['data'])
-
-params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
-    '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/ttFont.py
DELETED
@@ -1,1145 +0,0 @@
|
|
1 |
-
from fontTools.config import Config
|
2 |
-
from fontTools.misc import xmlWriter
|
3 |
-
from fontTools.misc.configTools import AbstractConfig
|
4 |
-
from fontTools.misc.textTools import Tag, byteord, tostr
|
5 |
-
from fontTools.misc.loggingTools import deprecateArgument
|
6 |
-
from fontTools.ttLib import TTLibError
|
7 |
-
from fontTools.ttLib.ttGlyphSet import _TTGlyph, _TTGlyphSetCFF, _TTGlyphSetGlyf
|
8 |
-
from fontTools.ttLib.sfnt import SFNTReader, SFNTWriter
|
9 |
-
from io import BytesIO, StringIO, UnsupportedOperation
|
10 |
-
import os
|
11 |
-
import logging
|
12 |
-
import traceback
|
13 |
-
|
14 |
-
log = logging.getLogger(__name__)
|
15 |
-
|
16 |
-
|
17 |
-
class TTFont(object):
|
18 |
-
|
19 |
-
"""Represents a TrueType font.
|
20 |
-
|
21 |
-
The object manages file input and output, and offers a convenient way of
|
22 |
-
accessing tables. Tables will be only decompiled when necessary, ie. when
|
23 |
-
they're actually accessed. This means that simple operations can be extremely fast.
|
24 |
-
|
25 |
-
Example usage::
|
26 |
-
|
27 |
-
>> from fontTools import ttLib
|
28 |
-
>> tt = ttLib.TTFont("afont.ttf") # Load an existing font file
|
29 |
-
>> tt['maxp'].numGlyphs
|
30 |
-
242
|
31 |
-
>> tt['OS/2'].achVendID
|
32 |
-
'B&H\000'
|
33 |
-
>> tt['head'].unitsPerEm
|
34 |
-
2048
|
35 |
-
|
36 |
-
For details of the objects returned when accessing each table, see :ref:`tables`.
|
37 |
-
To add a table to the font, use the :py:func:`newTable` function::
|
38 |
-
|
39 |
-
>> os2 = newTable("OS/2")
|
40 |
-
>> os2.version = 4
|
41 |
-
>> # set other attributes
|
42 |
-
>> font["OS/2"] = os2
|
43 |
-
|
44 |
-
TrueType fonts can also be serialized to and from XML format (see also the
|
45 |
-
:ref:`ttx` binary)::
|
46 |
-
|
47 |
-
>> tt.saveXML("afont.ttx")
|
48 |
-
Dumping 'LTSH' table...
|
49 |
-
Dumping 'OS/2' table...
|
50 |
-
[...]
|
51 |
-
|
52 |
-
>> tt2 = ttLib.TTFont() # Create a new font object
|
53 |
-
>> tt2.importXML("afont.ttx")
|
54 |
-
>> tt2['maxp'].numGlyphs
|
55 |
-
242
|
56 |
-
|
57 |
-
The TTFont object may be used as a context manager; this will cause the file
|
58 |
-
reader to be closed after the context ``with`` block is exited::
|
59 |
-
|
60 |
-
with TTFont(filename) as f:
|
61 |
-
# Do stuff
|
62 |
-
|
63 |
-
Args:
|
64 |
-
file: When reading a font from disk, either a pathname pointing to a file,
|
65 |
-
or a readable file object.
|
66 |
-
res_name_or_index: If running on a Macintosh, either a sfnt resource name or
|
67 |
-
an sfnt resource index number. If the index number is zero, TTLib will
|
68 |
-
autodetect whether the file is a flat file or a suitcase. (If it is a suitcase,
|
69 |
-
only the first 'sfnt' resource will be read.)
|
70 |
-
sfntVersion (str): When constructing a font object from scratch, sets the four-byte
|
71 |
-
sfnt magic number to be used. Defaults to ``\0\1\0\0`` (TrueType). To create
|
72 |
-
an OpenType file, use ``OTTO``.
|
73 |
-
flavor (str): Set this to ``woff`` when creating a WOFF file or ``woff2`` for a WOFF2
|
74 |
-
file.
|
75 |
-
checkChecksums (int): How checksum data should be treated. Default is 0
|
76 |
-
(no checking). Set to 1 to check and warn on wrong checksums; set to 2 to
|
77 |
-
raise an exception if any wrong checksums are found.
|
78 |
-
recalcBBoxes (bool): If true (the default), recalculates ``glyf``, ``CFF ``,
|
79 |
-
``head`` bounding box values and ``hhea``/``vhea`` min/max values on save.
|
80 |
-
Also compiles the glyphs on importing, which saves memory consumption and
|
81 |
-
time.
|
82 |
-
ignoreDecompileErrors (bool): If true, exceptions raised during table decompilation
|
83 |
-
will be ignored, and the binary data will be returned for those tables instead.
|
84 |
-
recalcTimestamp (bool): If true (the default), sets the ``modified`` timestamp in
|
85 |
-
the ``head`` table on save.
|
86 |
-
fontNumber (int): The index of the font in a TrueType Collection file.
|
87 |
-
lazy (bool): If lazy is set to True, many data structures are loaded lazily, upon
|
88 |
-
access only. If it is set to False, many data structures are loaded immediately.
|
89 |
-
The default is ``lazy=None`` which is somewhere in between.
|
90 |
-
"""
|
91 |
-
|
92 |
-
def __init__(
|
93 |
-
self,
|
94 |
-
file=None,
|
95 |
-
res_name_or_index=None,
|
96 |
-
sfntVersion="\000\001\000\000",
|
97 |
-
flavor=None,
|
98 |
-
checkChecksums=0,
|
99 |
-
verbose=None,
|
100 |
-
recalcBBoxes=True,
|
101 |
-
allowVID=NotImplemented,
|
102 |
-
ignoreDecompileErrors=False,
|
103 |
-
recalcTimestamp=True,
|
104 |
-
fontNumber=-1,
|
105 |
-
lazy=None,
|
106 |
-
quiet=None,
|
107 |
-
_tableCache=None,
|
108 |
-
cfg={},
|
109 |
-
):
|
110 |
-
for name in ("verbose", "quiet"):
|
111 |
-
val = locals().get(name)
|
112 |
-
if val is not None:
|
113 |
-
deprecateArgument(name, "configure logging instead")
|
114 |
-
setattr(self, name, val)
|
115 |
-
|
116 |
-
self.lazy = lazy
|
117 |
-
self.recalcBBoxes = recalcBBoxes
|
118 |
-
self.recalcTimestamp = recalcTimestamp
|
119 |
-
self.tables = {}
|
120 |
-
self.reader = None
|
121 |
-
self.cfg = cfg.copy() if isinstance(cfg, AbstractConfig) else Config(cfg)
|
122 |
-
self.ignoreDecompileErrors = ignoreDecompileErrors
|
123 |
-
|
124 |
-
if not file:
|
125 |
-
self.sfntVersion = sfntVersion
|
126 |
-
self.flavor = flavor
|
127 |
-
self.flavorData = None
|
128 |
-
return
|
129 |
-
seekable = True
|
130 |
-
if not hasattr(file, "read"):
|
131 |
-
closeStream = True
|
132 |
-
# assume file is a string
|
133 |
-
if res_name_or_index is not None:
|
134 |
-
# see if it contains 'sfnt' resources in the resource or data fork
|
135 |
-
from . import macUtils
|
136 |
-
|
137 |
-
if res_name_or_index == 0:
|
138 |
-
if macUtils.getSFNTResIndices(file):
|
139 |
-
# get the first available sfnt font.
|
140 |
-
file = macUtils.SFNTResourceReader(file, 1)
|
141 |
-
else:
|
142 |
-
file = open(file, "rb")
|
143 |
-
else:
|
144 |
-
file = macUtils.SFNTResourceReader(file, res_name_or_index)
|
145 |
-
else:
|
146 |
-
file = open(file, "rb")
|
147 |
-
else:
|
148 |
-
# assume "file" is a readable file object
|
149 |
-
closeStream = False
|
150 |
-
# SFNTReader wants the input file to be seekable.
|
151 |
-
# SpooledTemporaryFile has no seekable() on < 3.11, but still can seek:
|
152 |
-
# https://github.com/fonttools/fonttools/issues/3052
|
153 |
-
if hasattr(file, "seekable"):
|
154 |
-
seekable = file.seekable()
|
155 |
-
elif hasattr(file, "seek"):
|
156 |
-
try:
|
157 |
-
file.seek(0)
|
158 |
-
except UnsupportedOperation:
|
159 |
-
seekable = False
|
160 |
-
|
161 |
-
if not self.lazy:
|
162 |
-
# read input file in memory and wrap a stream around it to allow overwriting
|
163 |
-
if seekable:
|
164 |
-
file.seek(0)
|
165 |
-
tmp = BytesIO(file.read())
|
166 |
-
if hasattr(file, "name"):
|
167 |
-
# save reference to input file name
|
168 |
-
tmp.name = file.name
|
169 |
-
if closeStream:
|
170 |
-
file.close()
|
171 |
-
file = tmp
|
172 |
-
elif not seekable:
|
173 |
-
raise TTLibError("Input file must be seekable when lazy=True")
|
174 |
-
self._tableCache = _tableCache
|
175 |
-
self.reader = SFNTReader(file, checkChecksums, fontNumber=fontNumber)
|
176 |
-
self.sfntVersion = self.reader.sfntVersion
|
177 |
-
self.flavor = self.reader.flavor
|
178 |
-
self.flavorData = self.reader.flavorData
|
179 |
-
|
180 |
-
def __enter__(self):
|
181 |
-
return self
|
182 |
-
|
183 |
-
def __exit__(self, type, value, traceback):
|
184 |
-
self.close()
|
185 |
-
|
186 |
-
def close(self):
|
187 |
-
"""If we still have a reader object, close it."""
|
188 |
-
if self.reader is not None:
|
189 |
-
self.reader.close()
|
190 |
-
|
191 |
-
def save(self, file, reorderTables=True):
|
192 |
-
"""Save the font to disk.
|
193 |
-
|
194 |
-
Args:
|
195 |
-
file: Similarly to the constructor, can be either a pathname or a writable
|
196 |
-
file object.
|
197 |
-
reorderTables (Option[bool]): If true (the default), reorder the tables,
|
198 |
-
sorting them by tag (recommended by the OpenType specification). If
|
199 |
-
false, retain the original font order. If None, reorder by table
|
200 |
-
dependency (fastest).
|
201 |
-
"""
|
202 |
-
if not hasattr(file, "write"):
|
203 |
-
if self.lazy and self.reader.file.name == file:
|
204 |
-
raise TTLibError("Can't overwrite TTFont when 'lazy' attribute is True")
|
205 |
-
createStream = True
|
206 |
-
else:
|
207 |
-
# assume "file" is a writable file object
|
208 |
-
createStream = False
|
209 |
-
|
210 |
-
tmp = BytesIO()
|
211 |
-
|
212 |
-
writer_reordersTables = self._save(tmp)
|
213 |
-
|
214 |
-
if not (
|
215 |
-
reorderTables is None
|
216 |
-
or writer_reordersTables
|
217 |
-
or (reorderTables is False and self.reader is None)
|
218 |
-
):
|
219 |
-
if reorderTables is False:
|
220 |
-
# sort tables using the original font's order
|
221 |
-
tableOrder = list(self.reader.keys())
|
222 |
-
else:
|
223 |
-
# use the recommended order from the OpenType specification
|
224 |
-
tableOrder = None
|
225 |
-
tmp.flush()
|
226 |
-
tmp2 = BytesIO()
|
227 |
-
reorderFontTables(tmp, tmp2, tableOrder)
|
228 |
-
tmp.close()
|
229 |
-
tmp = tmp2
|
230 |
-
|
231 |
-
if createStream:
|
232 |
-
# "file" is a path
|
233 |
-
with open(file, "wb") as file:
|
234 |
-
file.write(tmp.getvalue())
|
235 |
-
else:
|
236 |
-
file.write(tmp.getvalue())
|
237 |
-
|
238 |
-
tmp.close()
|
239 |
-
|
240 |
-
def _save(self, file, tableCache=None):
|
241 |
-
"""Internal function, to be shared by save() and TTCollection.save()"""
|
242 |
-
|
243 |
-
if self.recalcTimestamp and "head" in self:
|
244 |
-
self[
|
245 |
-
"head"
|
246 |
-
] # make sure 'head' is loaded so the recalculation is actually done
|
247 |
-
|
248 |
-
tags = list(self.keys())
|
249 |
-
if "GlyphOrder" in tags:
|
250 |
-
tags.remove("GlyphOrder")
|
251 |
-
numTables = len(tags)
|
252 |
-
# write to a temporary stream to allow saving to unseekable streams
|
253 |
-
writer = SFNTWriter(
|
254 |
-
file, numTables, self.sfntVersion, self.flavor, self.flavorData
|
255 |
-
)
|
256 |
-
|
257 |
-
done = []
|
258 |
-
for tag in tags:
|
259 |
-
self._writeTable(tag, writer, done, tableCache)
|
260 |
-
|
261 |
-
writer.close()
|
262 |
-
|
263 |
-
return writer.reordersTables()
|
264 |
-
|
265 |
-
def saveXML(self, fileOrPath, newlinestr="\n", **kwargs):
|
266 |
-
"""Export the font as TTX (an XML-based text file), or as a series of text
|
267 |
-
files when splitTables is true. In the latter case, the 'fileOrPath'
|
268 |
-
argument should be a path to a directory.
|
269 |
-
The 'tables' argument must either be false (dump all tables) or a
|
270 |
-
list of tables to dump. The 'skipTables' argument may be a list of tables
|
271 |
-
to skip, but only when the 'tables' argument is false.
|
272 |
-
"""
|
273 |
-
|
274 |
-
writer = xmlWriter.XMLWriter(fileOrPath, newlinestr=newlinestr)
|
275 |
-
self._saveXML(writer, **kwargs)
|
276 |
-
writer.close()
|
277 |
-
|
278 |
-
def _saveXML(
|
279 |
-
self,
|
280 |
-
writer,
|
281 |
-
writeVersion=True,
|
282 |
-
quiet=None,
|
283 |
-
tables=None,
|
284 |
-
skipTables=None,
|
285 |
-
splitTables=False,
|
286 |
-
splitGlyphs=False,
|
287 |
-
disassembleInstructions=True,
|
288 |
-
bitmapGlyphDataFormat="raw",
|
289 |
-
):
|
290 |
-
|
291 |
-
if quiet is not None:
|
292 |
-
deprecateArgument("quiet", "configure logging instead")
|
293 |
-
|
294 |
-
self.disassembleInstructions = disassembleInstructions
|
295 |
-
self.bitmapGlyphDataFormat = bitmapGlyphDataFormat
|
296 |
-
if not tables:
|
297 |
-
tables = list(self.keys())
|
298 |
-
if "GlyphOrder" not in tables:
|
299 |
-
tables = ["GlyphOrder"] + tables
|
300 |
-
if skipTables:
|
301 |
-
for tag in skipTables:
|
302 |
-
if tag in tables:
|
303 |
-
tables.remove(tag)
|
304 |
-
numTables = len(tables)
|
305 |
-
|
306 |
-
if writeVersion:
|
307 |
-
from fontTools import version
|
308 |
-
|
309 |
-
version = ".".join(version.split(".")[:2])
|
310 |
-
writer.begintag(
|
311 |
-
"ttFont",
|
312 |
-
sfntVersion=repr(tostr(self.sfntVersion))[1:-1],
|
313 |
-
ttLibVersion=version,
|
314 |
-
)
|
315 |
-
else:
|
316 |
-
writer.begintag("ttFont", sfntVersion=repr(tostr(self.sfntVersion))[1:-1])
|
317 |
-
writer.newline()
|
318 |
-
|
319 |
-
# always splitTables if splitGlyphs is enabled
|
320 |
-
splitTables = splitTables or splitGlyphs
|
321 |
-
|
322 |
-
if not splitTables:
|
323 |
-
writer.newline()
|
324 |
-
else:
|
325 |
-
path, ext = os.path.splitext(writer.filename)
|
326 |
-
|
327 |
-
for i in range(numTables):
|
328 |
-
tag = tables[i]
|
329 |
-
if splitTables:
|
330 |
-
tablePath = path + "." + tagToIdentifier(tag) + ext
|
331 |
-
tableWriter = xmlWriter.XMLWriter(
|
332 |
-
tablePath, newlinestr=writer.newlinestr
|
333 |
-
)
|
334 |
-
tableWriter.begintag("ttFont", ttLibVersion=version)
|
335 |
-
tableWriter.newline()
|
336 |
-
tableWriter.newline()
|
337 |
-
writer.simpletag(tagToXML(tag), src=os.path.basename(tablePath))
|
338 |
-
writer.newline()
|
339 |
-
else:
|
340 |
-
tableWriter = writer
|
341 |
-
self._tableToXML(tableWriter, tag, splitGlyphs=splitGlyphs)
|
342 |
-
if splitTables:
|
343 |
-
tableWriter.endtag("ttFont")
|
344 |
-
tableWriter.newline()
|
345 |
-
tableWriter.close()
|
346 |
-
writer.endtag("ttFont")
|
347 |
-
writer.newline()
|
348 |
-
|
349 |
-
def _tableToXML(self, writer, tag, quiet=None, splitGlyphs=False):
|
350 |
-
if quiet is not None:
|
351 |
-
deprecateArgument("quiet", "configure logging instead")
|
352 |
-
if tag in self:
|
353 |
-
table = self[tag]
|
354 |
-
report = "Dumping '%s' table..." % tag
|
355 |
-
else:
|
356 |
-
report = "No '%s' table found." % tag
|
357 |
-
log.info(report)
|
358 |
-
if tag not in self:
|
359 |
-
return
|
360 |
-
xmlTag = tagToXML(tag)
|
361 |
-
attrs = dict()
|
362 |
-
if hasattr(table, "ERROR"):
|
363 |
-
attrs["ERROR"] = "decompilation error"
|
364 |
-
from .tables.DefaultTable import DefaultTable
|
365 |
-
|
366 |
-
if table.__class__ == DefaultTable:
|
367 |
-
attrs["raw"] = True
|
368 |
-
writer.begintag(xmlTag, **attrs)
|
369 |
-
writer.newline()
|
370 |
-
if tag == "glyf":
|
371 |
-
table.toXML(writer, self, splitGlyphs=splitGlyphs)
|
372 |
-
else:
|
373 |
-
table.toXML(writer, self)
|
374 |
-
writer.endtag(xmlTag)
|
375 |
-
writer.newline()
|
376 |
-
writer.newline()
|
377 |
-
|
378 |
-
def importXML(self, fileOrPath, quiet=None):
|
379 |
-
"""Import a TTX file (an XML-based text format), so as to recreate
|
380 |
-
a font object.
|
381 |
-
"""
|
382 |
-
if quiet is not None:
|
383 |
-
deprecateArgument("quiet", "configure logging instead")
|
384 |
-
|
385 |
-
if "maxp" in self and "post" in self:
|
386 |
-
# Make sure the glyph order is loaded, as it otherwise gets
|
387 |
-
# lost if the XML doesn't contain the glyph order, yet does
|
388 |
-
# contain the table which was originally used to extract the
|
389 |
-
# glyph names from (ie. 'post', 'cmap' or 'CFF ').
|
390 |
-
self.getGlyphOrder()
|
391 |
-
|
392 |
-
from fontTools.misc import xmlReader
|
393 |
-
|
394 |
-
reader = xmlReader.XMLReader(fileOrPath, self)
|
395 |
-
reader.read()
|
396 |
-
|
397 |
-
def isLoaded(self, tag):
|
398 |
-
"""Return true if the table identified by ``tag`` has been
|
399 |
-
decompiled and loaded into memory."""
|
400 |
-
return tag in self.tables
|
401 |
-
|
402 |
-
def has_key(self, tag):
|
403 |
-
"""Test if the table identified by ``tag`` is present in the font.
|
404 |
-
|
405 |
-
As well as this method, ``tag in font`` can also be used to determine the
|
406 |
-
presence of the table."""
|
407 |
-
if self.isLoaded(tag):
|
408 |
-
return True
|
409 |
-
elif self.reader and tag in self.reader:
|
410 |
-
return True
|
411 |
-
elif tag == "GlyphOrder":
|
412 |
-
return True
|
413 |
-
else:
|
414 |
-
return False
|
415 |
-
|
416 |
-
__contains__ = has_key
|
417 |
-
|
418 |
-
def keys(self):
|
419 |
-
"""Returns the list of tables in the font, along with the ``GlyphOrder`` pseudo-table."""
|
420 |
-
keys = list(self.tables.keys())
|
421 |
-
if self.reader:
|
422 |
-
for key in list(self.reader.keys()):
|
423 |
-
if key not in keys:
|
424 |
-
keys.append(key)
|
425 |
-
|
426 |
-
if "GlyphOrder" in keys:
|
427 |
-
keys.remove("GlyphOrder")
|
428 |
-
keys = sortedTagList(keys)
|
429 |
-
return ["GlyphOrder"] + keys
|
430 |
-
|
431 |
-
def ensureDecompiled(self, recurse=None):
|
432 |
-
"""Decompile all the tables, even if a TTFont was opened in 'lazy' mode."""
|
433 |
-
for tag in self.keys():
|
434 |
-
table = self[tag]
|
435 |
-
if recurse is None:
|
436 |
-
recurse = self.lazy is not False
|
437 |
-
if recurse and hasattr(table, "ensureDecompiled"):
|
438 |
-
table.ensureDecompiled(recurse=recurse)
|
439 |
-
self.lazy = False
|
440 |
-
|
441 |
-
def __len__(self):
|
442 |
-
return len(list(self.keys()))
|
443 |
-
|
444 |
-
def __getitem__(self, tag):
|
445 |
-
tag = Tag(tag)
|
446 |
-
table = self.tables.get(tag)
|
447 |
-
if table is None:
|
448 |
-
if tag == "GlyphOrder":
|
449 |
-
table = GlyphOrder(tag)
|
450 |
-
self.tables[tag] = table
|
451 |
-
elif self.reader is not None:
|
452 |
-
table = self._readTable(tag)
|
453 |
-
else:
|
454 |
-
raise KeyError("'%s' table not found" % tag)
|
455 |
-
return table
|
456 |
-
|
457 |
-
def _readTable(self, tag):
|
458 |
-
log.debug("Reading '%s' table from disk", tag)
|
459 |
-
data = self.reader[tag]
|
460 |
-
if self._tableCache is not None:
|
461 |
-
table = self._tableCache.get((tag, data))
|
462 |
-
if table is not None:
|
463 |
-
return table
|
464 |
-
tableClass = getTableClass(tag)
|
465 |
-
table = tableClass(tag)
|
466 |
-
self.tables[tag] = table
|
467 |
-
log.debug("Decompiling '%s' table", tag)
|
468 |
-
try:
|
469 |
-
table.decompile(data, self)
|
470 |
-
except Exception:
|
471 |
-
if not self.ignoreDecompileErrors:
|
472 |
-
raise
|
473 |
-
# fall back to DefaultTable, retaining the binary table data
|
474 |
-
log.exception(
|
475 |
-
"An exception occurred during the decompilation of the '%s' table", tag
|
476 |
-
)
|
477 |
-
from .tables.DefaultTable import DefaultTable
|
478 |
-
|
479 |
-
file = StringIO()
|
480 |
-
traceback.print_exc(file=file)
|
481 |
-
table = DefaultTable(tag)
|
482 |
-
table.ERROR = file.getvalue()
|
483 |
-
self.tables[tag] = table
|
484 |
-
table.decompile(data, self)
|
485 |
-
if self._tableCache is not None:
|
486 |
-
self._tableCache[(tag, data)] = table
|
487 |
-
return table
|
488 |
-
|
489 |
-
def __setitem__(self, tag, table):
|
490 |
-
self.tables[Tag(tag)] = table
|
491 |
-
|
492 |
-
def __delitem__(self, tag):
|
493 |
-
if tag not in self:
|
494 |
-
raise KeyError("'%s' table not found" % tag)
|
495 |
-
if tag in self.tables:
|
496 |
-
del self.tables[tag]
|
497 |
-
if self.reader and tag in self.reader:
|
498 |
-
del self.reader[tag]
|
499 |
-
|
500 |
-
def get(self, tag, default=None):
|
501 |
-
"""Returns the table if it exists or (optionally) a default if it doesn't."""
|
502 |
-
try:
|
503 |
-
return self[tag]
|
504 |
-
except KeyError:
|
505 |
-
return default
|
506 |
-
|
507 |
-
def setGlyphOrder(self, glyphOrder):
|
508 |
-
"""Set the glyph order
|
509 |
-
|
510 |
-
Args:
|
511 |
-
glyphOrder ([str]): List of glyph names in order.
|
512 |
-
"""
|
513 |
-
self.glyphOrder = glyphOrder
|
514 |
-
if hasattr(self, "_reverseGlyphOrderDict"):
|
515 |
-
del self._reverseGlyphOrderDict
|
516 |
-
if self.isLoaded("glyf"):
|
517 |
-
self["glyf"].setGlyphOrder(glyphOrder)
|
518 |
-
|
519 |
-
def getGlyphOrder(self):
|
520 |
-
"""Returns a list of glyph names ordered by their position in the font."""
|
521 |
-
try:
|
522 |
-
return self.glyphOrder
|
523 |
-
except AttributeError:
|
524 |
-
pass
|
525 |
-
if "CFF " in self:
|
526 |
-
cff = self["CFF "]
|
527 |
-
self.glyphOrder = cff.getGlyphOrder()
|
528 |
-
elif "post" in self:
|
529 |
-
# TrueType font
|
530 |
-
glyphOrder = self["post"].getGlyphOrder()
|
531 |
-
if glyphOrder is None:
|
532 |
-
#
|
533 |
-
# No names found in the 'post' table.
|
534 |
-
# Try to create glyph names from the unicode cmap (if available)
|
535 |
-
# in combination with the Adobe Glyph List (AGL).
|
536 |
-
#
|
537 |
-
self._getGlyphNamesFromCmap()
|
538 |
-
elif len(glyphOrder) < self["maxp"].numGlyphs:
|
539 |
-
#
|
540 |
-
# Not enough names found in the 'post' table.
|
541 |
-
# Can happen when 'post' format 1 is improperly used on a font that
|
542 |
-
# has more than 258 glyphs (the lenght of 'standardGlyphOrder').
|
543 |
-
#
|
544 |
-
log.warning(
|
545 |
-
"Not enough names found in the 'post' table, generating them from cmap instead"
|
546 |
-
)
|
547 |
-
self._getGlyphNamesFromCmap()
|
548 |
-
else:
|
549 |
-
self.glyphOrder = glyphOrder
|
550 |
-
else:
|
551 |
-
self._getGlyphNamesFromCmap()
|
552 |
-
return self.glyphOrder
|
553 |
-

    def _getGlyphNamesFromCmap(self):
        #
        # This is rather convoluted, but then again, it's an interesting problem:
        # - we need to use the unicode values found in the cmap table to
        #   build glyph names (eg. because there is only a minimal post table,
        #   or none at all).
        # - but the cmap parser also needs glyph names to work with...
        # So here's what we do:
        # - make up glyph names based on glyphID
        # - load a temporary cmap table based on those names
        # - extract the unicode values, build the "real" glyph names
        # - unload the temporary cmap table
        #
        if self.isLoaded("cmap"):
            # Bootstrapping: we're getting called by the cmap parser
            # itself. This means self.tables['cmap'] contains a partially
            # loaded cmap, making it impossible to get at a unicode
            # subtable here. We remove the partially loaded cmap and
            # restore it later.
            # This only happens if the cmap table is loaded before any
            # other table that does f.getGlyphOrder() or f.getGlyphName().
            cmapLoading = self.tables["cmap"]
            del self.tables["cmap"]
        else:
            cmapLoading = None
        # Make up glyph names based on glyphID, which will be used by the
        # temporary cmap and by the real cmap in case we don't find a unicode
        # cmap.
        numGlyphs = int(self["maxp"].numGlyphs)
        glyphOrder = [None] * numGlyphs
        glyphOrder[0] = ".notdef"
        for i in range(1, numGlyphs):
            glyphOrder[i] = "glyph%.5d" % i
        # Set the glyph order, so the cmap parser has something
        # to work with (so we don't get called recursively).
        self.glyphOrder = glyphOrder

        # Make up glyph names based on the reversed cmap table. Because some
        # glyphs (eg. ligatures or alternates) may not be reachable via cmap,
        # this naming table will usually not cover all glyphs in the font.
        # If the font has no Unicode cmap table, reversecmap will be empty.
        if "cmap" in self:
            reversecmap = self["cmap"].buildReversed()
        else:
            reversecmap = {}
        useCount = {}
        for i in range(numGlyphs):
            tempName = glyphOrder[i]
            if tempName in reversecmap:
                # If a font maps both U+0041 LATIN CAPITAL LETTER A and
                # U+0391 GREEK CAPITAL LETTER ALPHA to the same glyph,
                # we prefer naming the glyph as "A".
                glyphName = self._makeGlyphName(min(reversecmap[tempName]))
                numUses = useCount[glyphName] = useCount.get(glyphName, 0) + 1
                if numUses > 1:
                    glyphName = "%s.alt%d" % (glyphName, numUses - 1)
                glyphOrder[i] = glyphName

        if "cmap" in self:
            # Delete the temporary cmap table from the cache, so it can
            # be parsed again with the right names.
            del self.tables["cmap"]
            self.glyphOrder = glyphOrder
            if cmapLoading:
                # restore partially loaded cmap, so it can continue loading
                # using the proper names.
                self.tables["cmap"] = cmapLoading

    @staticmethod
    def _makeGlyphName(codepoint):
        from fontTools import agl  # Adobe Glyph List

        if codepoint in agl.UV2AGL:
            return agl.UV2AGL[codepoint]
        elif codepoint <= 0xFFFF:
            return "uni%04X" % codepoint
        else:
            return "u%X" % codepoint
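The naming scheme above can be tried out in isolation. This is a minimal standalone sketch of `_makeGlyphName`'s logic; `AGL_SUBSET` and `make_glyph_name` are made-up names for this sketch, with a two-entry stand-in for the full Adobe Glyph List mapping that fontTools ships as `agl.UV2AGL`.

```python
# Standalone sketch of _makeGlyphName's scheme. AGL_SUBSET is a made-up
# two-entry stand-in for the full agl.UV2AGL mapping.
AGL_SUBSET = {0x0041: "A", 0x0391: "Alpha"}

def make_glyph_name(codepoint):
    if codepoint in AGL_SUBSET:
        return AGL_SUBSET[codepoint]   # AGL name, e.g. "A"
    elif codepoint <= 0xFFFF:
        return "uni%04X" % codepoint   # BMP codepoint: "uni" + 4 hex digits
    else:
        return "u%X" % codepoint       # beyond the BMP: "u" + bare hex

print(make_glyph_name(0x0041))   # A
print(make_glyph_name(0x20AC))   # uni20AC
print(make_glyph_name(0x1F600))  # u1F600
```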

    def getGlyphNames(self):
        """Get a list of glyph names, sorted alphabetically."""
        glyphNames = sorted(self.getGlyphOrder())
        return glyphNames

    def getGlyphNames2(self):
        """Get a list of glyph names, sorted alphabetically,
        but not case sensitive.
        """
        from fontTools.misc import textTools

        return textTools.caselessSort(self.getGlyphOrder())

    def getGlyphName(self, glyphID):
        """Returns the name for the glyph with the given ID.

        If no name is available, synthesises one with the form ``glyphXXXXX`` where
        ``XXXXX`` is the zero-padded glyph ID.
        """
        try:
            return self.getGlyphOrder()[glyphID]
        except IndexError:
            return "glyph%.5d" % glyphID

    def getGlyphNameMany(self, lst):
        """Converts a list of glyph IDs into a list of glyph names."""
        glyphOrder = self.getGlyphOrder()
        cnt = len(glyphOrder)
        return [glyphOrder[gid] if gid < cnt else "glyph%.5d" % gid for gid in lst]

    def getGlyphID(self, glyphName):
        """Returns the ID of the glyph with the given name."""
        try:
            return self.getReverseGlyphMap()[glyphName]
        except KeyError:
            if glyphName[:5] == "glyph":
                try:
                    return int(glyphName[5:])
                except (NameError, ValueError):
                    raise KeyError(glyphName)
            raise

    def getGlyphIDMany(self, lst):
        """Converts a list of glyph names into a list of glyph IDs."""
        d = self.getReverseGlyphMap()
        try:
            return [d[glyphName] for glyphName in lst]
        except KeyError:
            getGlyphID = self.getGlyphID
            return [getGlyphID(glyphName) for glyphName in lst]
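The ``glyphXXXXX`` fallback used by `getGlyphName` and `getGlyphID` is symmetric: a missing name is synthesised from the ID, and such a name can be parsed back into the ID it encodes. A small standalone sketch (the helper names `fallback_name` and `parse_fallback` are made up for illustration):

```python
# Sketch of the "glyphXXXXX" fallback round trip.
def fallback_name(glyph_id):
    return "glyph%.5d" % glyph_id  # zero-padded to 5 digits

def parse_fallback(name):
    if name[:5] == "glyph":
        return int(name[5:])       # recover the ID from the synthesised name
    raise KeyError(name)

print(fallback_name(7))              # glyph00007
print(parse_fallback("glyph00007"))  # 7
```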

    def getReverseGlyphMap(self, rebuild=False):
        """Returns a mapping of glyph names to glyph IDs."""
        if rebuild or not hasattr(self, "_reverseGlyphOrderDict"):
            self._buildReverseGlyphOrderDict()
        return self._reverseGlyphOrderDict

    def _buildReverseGlyphOrderDict(self):
        self._reverseGlyphOrderDict = d = {}
        for glyphID, glyphName in enumerate(self.getGlyphOrder()):
            d[glyphName] = glyphID
        return d

    def _writeTable(self, tag, writer, done, tableCache=None):
        """Internal helper function for self.save(). Keeps track of
        inter-table dependencies.
        """
        if tag in done:
            return
        tableClass = getTableClass(tag)
        for masterTable in tableClass.dependencies:
            if masterTable not in done:
                if masterTable in self:
                    self._writeTable(masterTable, writer, done, tableCache)
                else:
                    done.append(masterTable)
        done.append(tag)
        tabledata = self.getTableData(tag)
        if tableCache is not None:
            entry = tableCache.get((Tag(tag), tabledata))
            if entry is not None:
                log.debug("reusing '%s' table", tag)
                writer.setEntry(tag, entry)
                return
        log.debug("Writing '%s' table to disk", tag)
        writer[tag] = tabledata
        if tableCache is not None:
            tableCache[(Tag(tag), tabledata)] = writer[tag]

    def getTableData(self, tag):
        """Returns the binary representation of a table.

        If the table is currently loaded and in memory, the data is compiled to
        binary and returned; if it is not currently loaded, the binary data is
        read from the font file and returned.
        """
        tag = Tag(tag)
        if self.isLoaded(tag):
            log.debug("Compiling '%s' table", tag)
            return self.tables[tag].compile(self)
        elif self.reader and tag in self.reader:
            log.debug("Reading '%s' table from disk", tag)
            return self.reader[tag]
        else:
            raise KeyError(tag)

    def getGlyphSet(self, preferCFF=True, location=None, normalized=False):
        """Return a generic GlyphSet, which is a dict-like object
        mapping glyph names to glyph objects. The returned glyph objects
        have a ``.draw()`` method that supports the Pen protocol, and will
        have an attribute named 'width'.

        If the font is CFF-based, the outlines will be taken from the ``CFF ``
        or ``CFF2`` tables. Otherwise the outlines will be taken from the
        ``glyf`` table.

        If the font contains both a ``CFF ``/``CFF2`` and a ``glyf`` table, you
        can use the ``preferCFF`` argument to specify which one should be taken.
        If the font contains both a ``CFF `` and a ``CFF2`` table, the latter is
        taken.

        If the ``location`` parameter is set, it should be a dictionary mapping
        four-letter variation tags to their float values, and the returned
        glyph-set will represent an instance of a variable font at that
        location.

        If the ``normalized`` variable is set to True, that location is
        interpreted as in the normalized (-1..+1) space, otherwise it is in the
        font's defined axes space.
        """
        if location and "fvar" not in self:
            location = None
        if location and not normalized:
            location = self.normalizeLocation(location)
        if ("CFF " in self or "CFF2" in self) and (preferCFF or "glyf" not in self):
            return _TTGlyphSetCFF(self, location)
        elif "glyf" in self:
            return _TTGlyphSetGlyf(self, location)
        else:
            raise TTLibError("Font contains no outlines")

    def normalizeLocation(self, location):
        """Normalize a ``location`` from the font's defined axes space (also
        known as user space) into the normalized (-1..+1) space. It applies
        ``avar`` mapping if the font contains an ``avar`` table.

        The ``location`` parameter should be a dictionary mapping four-letter
        variation tags to their float values.

        Raises ``TTLibError`` if the font is not a variable font.
        """
        from fontTools.varLib.models import normalizeLocation, piecewiseLinearMap

        if "fvar" not in self:
            raise TTLibError("Not a variable font")

        axes = {
            a.axisTag: (a.minValue, a.defaultValue, a.maxValue)
            for a in self["fvar"].axes
        }
        location = normalizeLocation(location, axes)
        if "avar" in self:
            avar = self["avar"]
            avarSegments = avar.segments
            mappedLocation = {}
            for axisTag, value in location.items():
                avarMapping = avarSegments.get(axisTag, None)
                if avarMapping is not None:
                    value = piecewiseLinearMap(value, avarMapping)
                mappedLocation[axisTag] = value
            location = mappedLocation
        return location
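The per-axis math behind `fontTools.varLib.models.normalizeLocation` is simple: clamp the user-space value to the axis range, then map min to -1, default to 0 and max to +1. A minimal sketch (the name `normalize_axis_value` and the sample `wght` triple are made up; the real implementation also guards degenerate axes where min or max equals the default):

```python
# Sketch of OpenType axis normalization: user space -> (-1..+1).
def normalize_axis_value(value, triple):
    minimum, default, maximum = triple
    value = max(minimum, min(maximum, value))  # clamp to the axis range
    if value < default:
        return (value - default) / (default - minimum)
    elif value > default:
        return (value - default) / (maximum - default)
    return 0.0

wght = (100, 400, 900)  # hypothetical weight axis: (min, default, max)
print(normalize_axis_value(700, wght))   # 0.6
print(normalize_axis_value(100, wght))   # -1.0
print(normalize_axis_value(2000, wght))  # 1.0 (clamped to max first)
```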

    def getBestCmap(
        self,
        cmapPreferences=(
            (3, 10),
            (0, 6),
            (0, 4),
            (3, 1),
            (0, 3),
            (0, 2),
            (0, 1),
            (0, 0),
        ),
    ):
        """Returns the 'best' Unicode cmap dictionary available in the font
        or ``None``, if no Unicode cmap subtable is available.

        By default it will search for the following (platformID, platEncID)
        pairs in order::

            (3, 10),  # Windows Unicode full repertoire
            (0, 6),  # Unicode full repertoire (format 13 subtable)
            (0, 4),  # Unicode 2.0 full repertoire
            (3, 1),  # Windows Unicode BMP
            (0, 3),  # Unicode 2.0 BMP
            (0, 2),  # Unicode ISO/IEC 10646
            (0, 1),  # Unicode 1.1
            (0, 0)  # Unicode 1.0

        This particular order matches what HarfBuzz uses to choose what
        subtable to use by default. This order prefers the largest-repertoire
        subtable, and among those, prefers the Windows-platform over the
        Unicode-platform as the former has wider support.

        This order can be customized via the ``cmapPreferences`` argument.
        """
        return self["cmap"].getBestCmap(cmapPreferences=cmapPreferences)
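The preference scan behind `getBestCmap` amounts to "first match in preference order wins". A sketch over a plain dict keyed by (platformID, platEncID); the `subtables` data below is made up, not a real font:

```python
# Sketch of the cmap-subtable preference scan.
PREFS = ((3, 10), (0, 6), (0, 4), (3, 1), (0, 3), (0, 2), (0, 1), (0, 0))

def best_cmap(subtables, prefs=PREFS):
    for key in prefs:              # first (platformID, platEncID) hit wins
        if key in subtables:
            return subtables[key]
    return None                    # no Unicode subtable available

subtables = {
    (0, 3): {0x41: "A"},                   # Unicode 2.0 BMP
    (3, 1): {0x41: "A", 0x20AC: "Euro"},   # Windows Unicode BMP
}
print(best_cmap(subtables))  # picks (3, 1): Windows BMP outranks (0, 3)
```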


class GlyphOrder(object):

    """A pseudo table. The glyph order isn't in the font as a separate
    table, but it's nice to present it as such in the TTX format.
    """

    def __init__(self, tag=None):
        pass

    def toXML(self, writer, ttFont):
        glyphOrder = ttFont.getGlyphOrder()
        writer.comment(
            "The 'id' attribute is only for humans; it is ignored when parsed."
        )
        writer.newline()
        for i in range(len(glyphOrder)):
            glyphName = glyphOrder[i]
            writer.simpletag("GlyphID", id=i, name=glyphName)
            writer.newline()

    def fromXML(self, name, attrs, content, ttFont):
        if not hasattr(self, "glyphOrder"):
            self.glyphOrder = []
        if name == "GlyphID":
            self.glyphOrder.append(attrs["name"])
        ttFont.setGlyphOrder(self.glyphOrder)


def getTableModule(tag):
    """Fetch the packer/unpacker module for a table.
    Return None when no module is found.
    """
    from . import tables

    pyTag = tagToIdentifier(tag)
    try:
        __import__("fontTools.ttLib.tables." + pyTag)
    except ImportError as err:
        # If pyTag is found in the ImportError message, it means the
        # table is not implemented. If it's not there, then some other
        # module is missing; don't suppress the error.
        if str(err).find(pyTag) >= 0:
            return None
        else:
            raise err
    else:
        return getattr(tables, pyTag)


# Registry for custom table packer/unpacker classes. Keys are table
# tags, values are (moduleName, className) tuples.
# See registerCustomTableClass() and getCustomTableClass()
_customTableRegistry = {}


def registerCustomTableClass(tag, moduleName, className=None):
    """Register a custom packer/unpacker class for a table.

    The 'moduleName' must be an importable module. If no 'className'
    is given, it is derived from the tag, for example it will be
    ``table_C_U_S_T_`` for a 'CUST' tag.

    The registered table class should be a subclass of
    :py:class:`fontTools.ttLib.tables.DefaultTable.DefaultTable`
    """
    if className is None:
        className = "table_" + tagToIdentifier(tag)
    _customTableRegistry[tag] = (moduleName, className)


def unregisterCustomTableClass(tag):
    """Unregister the custom packer/unpacker class for a table."""
    del _customTableRegistry[tag]


def getCustomTableClass(tag):
    """Return the custom table class for tag, if one has been registered
    with 'registerCustomTableClass()'. Else return None.
    """
    if tag not in _customTableRegistry:
        return None
    import importlib

    moduleName, className = _customTableRegistry[tag]
    module = importlib.import_module(moduleName)
    return getattr(module, className)


def getTableClass(tag):
    """Fetch the packer/unpacker class for a table."""
    tableClass = getCustomTableClass(tag)
    if tableClass is not None:
        return tableClass
    module = getTableModule(tag)
    if module is None:
        from .tables.DefaultTable import DefaultTable

        return DefaultTable
    pyTag = tagToIdentifier(tag)
    tableClass = getattr(module, "table_" + pyTag)
    return tableClass


def getClassTag(klass):
    """Fetch the table tag for a class object."""
    name = klass.__name__
    assert name[:6] == "table_"
    name = name[6:]  # Chop 'table_'
    return identifierToTag(name)


def newTable(tag):
    """Return a new instance of a table."""
    tableClass = getTableClass(tag)
    return tableClass(tag)


def _escapechar(c):
    """Helper function for tagToIdentifier()"""
    import re

    if re.match("[a-z0-9]", c):
        return "_" + c
    elif re.match("[A-Z]", c):
        return c + "_"
    else:
        return hex(byteord(c))[2:]


def tagToIdentifier(tag):
    """Convert a table tag to a valid (but UGLY) python identifier,
    as well as a filename that's guaranteed to be unique even on a
    caseless file system. Each character is mapped to two characters.
    Lowercase letters get an underscore before the letter, uppercase
    letters get an underscore after the letter. Trailing spaces are
    trimmed. Illegal characters are escaped as two hex bytes. If the
    result starts with a number (as the result of a hex escape), an
    extra underscore is prepended. Examples::

        >>> tagToIdentifier('glyf')
        '_g_l_y_f'
        >>> tagToIdentifier('cvt ')
        '_c_v_t'
        >>> tagToIdentifier('OS/2')
        'O_S_2f_2'
    """
    import re

    tag = Tag(tag)
    if tag == "GlyphOrder":
        return tag
    assert len(tag) == 4, "tag should be 4 characters long"
    while len(tag) > 1 and tag[-1] == " ":
        tag = tag[:-1]
    ident = ""
    for c in tag:
        ident = ident + _escapechar(c)
    if re.match("[0-9]", ident):
        ident = "_" + ident
    return ident


def identifierToTag(ident):
    """the opposite of tagToIdentifier()"""
    if ident == "GlyphOrder":
        return ident
    if len(ident) % 2 and ident[0] == "_":
        ident = ident[1:]
    assert not (len(ident) % 2)
    tag = ""
    for i in range(0, len(ident), 2):
        if ident[i] == "_":
            tag = tag + ident[i + 1]
        elif ident[i + 1] == "_":
            tag = tag + ident[i]
        else:
            # assume hex
            tag = tag + chr(int(ident[i : i + 2], 16))
    # append trailing spaces
    tag = tag + (4 - len(tag)) * " "
    return Tag(tag)
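The escaping scheme round-trips, which is what makes it usable both as an identifier and as a caseless-filesystem-safe filename. A standalone sketch of the same logic (snake_case names are made up for this sketch; `Tag` and `byteord` are replaced with plain `str`/`ord`, and the `GlyphOrder` special case is omitted):

```python
# Standalone sketch of the tag <-> identifier round trip.
import re

def escape_char(c):
    if re.match("[a-z0-9]", c):
        return "_" + c             # lowercase/digit: underscore before
    elif re.match("[A-Z]", c):
        return c + "_"             # uppercase: underscore after
    else:
        return hex(ord(c))[2:]     # anything else: two hex digits

def tag_to_identifier(tag):
    assert len(tag) == 4, "tag should be 4 characters long"
    while len(tag) > 1 and tag[-1] == " ":
        tag = tag[:-1]             # trim trailing padding spaces
    ident = "".join(escape_char(c) for c in tag)
    if re.match("[0-9]", ident):
        ident = "_" + ident        # can't start an identifier with a digit
    return ident

def identifier_to_tag(ident):
    if len(ident) % 2 and ident[0] == "_":
        ident = ident[1:]          # drop the extra leading underscore
    tag = ""
    for i in range(0, len(ident), 2):
        if ident[i] == "_":
            tag += ident[i + 1]
        elif ident[i + 1] == "_":
            tag += ident[i]
        else:
            tag += chr(int(ident[i : i + 2], 16))  # hex escape
    return tag + " " * (4 - len(tag))  # pad back to 4 characters

print(tag_to_identifier("glyf"))      # _g_l_y_f
print(tag_to_identifier("OS/2"))      # O_S_2f_2
print(identifier_to_tag("O_S_2f_2"))  # OS/2
```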


def tagToXML(tag):
    """Similarly to tagToIdentifier(), this converts a TT tag
    to a valid XML element name. Since XML element names are
    case sensitive, this is a fairly simple/readable translation.
    """
    import re

    tag = Tag(tag)
    if tag == "OS/2":
        return "OS_2"
    elif tag == "GlyphOrder":
        return tag
    if re.match("[A-Za-z_][A-Za-z_0-9]* *$", tag):
        return tag.strip()
    else:
        return tagToIdentifier(tag)


def xmlToTag(tag):
    """The opposite of tagToXML()"""
    if tag == "OS_2":
        return Tag("OS/2")
    if len(tag) == 8:
        return identifierToTag(tag)
    else:
        return Tag(tag + " " * (4 - len(tag)))


# Table order as recommended in the OpenType specification 1.4
TTFTableOrder = [
    "head",
    "hhea",
    "maxp",
    "OS/2",
    "hmtx",
    "LTSH",
    "VDMX",
    "hdmx",
    "cmap",
    "fpgm",
    "prep",
    "cvt ",
    "loca",
    "glyf",
    "kern",
    "name",
    "post",
    "gasp",
    "PCLT",
]

OTFTableOrder = ["head", "hhea", "maxp", "OS/2", "name", "cmap", "post", "CFF "]


def sortedTagList(tagList, tableOrder=None):
    """Return a sorted copy of tagList, sorted according to the OpenType
    specification, or according to a custom tableOrder. If given and not
    None, tableOrder needs to be a list of tag names.
    """
    tagList = sorted(tagList)
    if tableOrder is None:
        if "DSIG" in tagList:
            # DSIG should be last (XXX spec reference?)
            tagList.remove("DSIG")
            tagList.append("DSIG")
        if "CFF " in tagList:
            tableOrder = OTFTableOrder
        else:
            tableOrder = TTFTableOrder
    orderedTables = []
    for tag in tableOrder:
        if tag in tagList:
            orderedTables.append(tag)
            tagList.remove(tag)
    orderedTables.extend(tagList)
    return orderedTables
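The ordering is two-phase: tags named in the order list come first, in that order; everything else follows alphabetically, with DSIG forced to the end. A sketch on made-up data (the name `sorted_tag_list` and the abbreviated `order` list are for illustration; here DSIG handling is applied unconditionally rather than only for the default order):

```python
# Sketch of sortedTagList's two-phase ordering.
def sorted_tag_list(tag_list, table_order):
    tag_list = sorted(tag_list)
    if "DSIG" in tag_list:         # DSIG should be last
        tag_list.remove("DSIG")
        tag_list.append("DSIG")
    ordered = [t for t in table_order if t in tag_list]   # phase 1: known order
    ordered += [t for t in tag_list if t not in table_order]  # phase 2: the rest
    return ordered

order = ["head", "hhea", "maxp", "cmap", "glyf"]  # abbreviated order list
print(sorted_tag_list(["glyf", "DSIG", "GSUB", "head"], order))
# ['head', 'glyf', 'GSUB', 'DSIG']
```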


def reorderFontTables(inFile, outFile, tableOrder=None, checkChecksums=False):
    """Rewrite a font file, ordering the tables as recommended by the
    OpenType specification 1.4.
    """
    inFile.seek(0)
    outFile.seek(0)
    reader = SFNTReader(inFile, checkChecksums=checkChecksums)
    writer = SFNTWriter(
        outFile,
        len(reader.tables),
        reader.sfntVersion,
        reader.flavor,
        reader.flavorData,
    )
    tables = list(reader.keys())
    for tag in sortedTagList(tables, tableOrder):
        writer[tag] = reader[tag]
    writer.close()


def maxPowerOfTwo(x):
    """Return the highest exponent of two, so that
    (2 ** exponent) <= x. Return 0 if x is 0.
    """
    exponent = 0
    while x:
        x = x >> 1
        exponent = exponent + 1
    return max(exponent - 1, 0)


def getSearchRange(n, itemSize=16):
    """Calculate searchRange, entrySelector, rangeShift."""
    # itemSize defaults to 16, for backward compatibility
    # with upstream fonttools.
    exponent = maxPowerOfTwo(n)
    searchRange = (2**exponent) * itemSize
    entrySelector = exponent
    rangeShift = max(0, n * itemSize - searchRange)
    return searchRange, entrySelector, rangeShift
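A worked example of the binary-search parameters: for n entries of itemSize bytes, searchRange spans the largest power-of-two subset of the entries (the snake_case names below are made up for this standalone sketch):

```python
# Sketch of the searchRange / entrySelector / rangeShift computation.
def max_power_of_two(x):
    exponent = 0
    while x:
        x >>= 1
        exponent += 1
    return max(exponent - 1, 0)    # largest e with 2**e <= x (0 for x == 0)

def get_search_range(n, item_size=16):
    exponent = max_power_of_two(n)
    search_range = (2**exponent) * item_size
    range_shift = max(0, n * item_size - search_range)
    return search_range, exponent, range_shift

print(get_search_range(10))  # (128, 3, 32): 2**3 = 8 <= 10, and 8 * 16 = 128
```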