Commit 69b8b12
1 Parent(s): 546e9c9
Update parquet files (step 4 of 249)

This view is limited to 50 files because it contains too many changes.
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cadmas 11 46 A Snow Sport Helmet with Advanced Features.md +0 -121
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/FSX - Maddog 2008 Professional Cracked by Komu Everything You Need to Know About the Legendary MD-80 Add-on.md +0 -105
- spaces/1gistliPinn/ChatGPT4/Examples/Clarion Enterprise Edition 6.0 64 Bit UPD.md +0 -9
- spaces/1phancelerku/anime-remove-background/8 Ball Pool Long Line Tool APK The Ultimate Guide for Android Users.md +0 -111
- spaces/1phancelerku/anime-remove-background/Download FIFA 20 Mod APK with OBB Data - Enjoy Realistic Football Experience on Your Phone.md +0 -104
- spaces/1phancelerku/anime-remove-background/Download and Play FINAL FANTASY XIII on Android - Cloud Game with TV Integration Support.md +0 -103
- spaces/221091lstwcm/textgenerator/app.py +0 -11
- spaces/232labs/VToonify/vtoonify/model/stylegan/op/__init__.py +0 -2
- spaces/4Taps/SadTalker/src/audio2exp_models/audio2exp.py +0 -40
- spaces/801artistry/RVC801/demucs/utils.py +0 -323
- spaces/801artistry/RVC801/infer/lib/infer_pack/transforms.py +0 -207
- spaces/AI-Hobbyist/Hoyo-RVC/docs/README.ko.han.md +0 -100
- spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/build_vocab_spacy.py +0 -152
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192.py +0 -2861
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/.ipynb_checkpoints/hr_4xb16_1024e_4channel-checkpoint.py +0 -113
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/datasets/__init__.py +0 -0
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32-lbs_in1k.py +0 -5
- spaces/Ababababababbababa/Ashaar/poetry_diacritizer/train.py +0 -49
- spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversations/+page.server.ts +0 -10
- spaces/Adapter/CoAdapter/ldm/inference_base.py +0 -292
- spaces/Adapter/T2I-Adapter/ldm/modules/image_degradation/utils_image.py +0 -916
- spaces/Aditya9790/yolo7-object-tracking/utils/add_nms.py +0 -155
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/ball/Ball.d.ts +0 -2
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/Methods.js +0 -25
- spaces/AlexWang/lama/saicinpainting/training/modules/depthwise_sep_conv.py +0 -17
- spaces/Alichuan/VITS-Umamusume-voice-synthesizer/ONNXVITS_inference.py +0 -36
- spaces/AlignmentResearch/tuned-lens/Dockerfile +0 -25
- spaces/Ameaou/academic-chatgpt3.1/check_proxy.py +0 -149
- spaces/Andy1621/uniformer_image_detection/README.md +0 -13
- spaces/Andy1621/uniformer_image_detection/configs/rpn/rpn_r50_fpn_1x_coco.py +0 -18
- spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x1024_80k_cityscapes.py +0 -4
- spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r18b-d8_769x769_80k_cityscapes.py +0 -9
- spaces/AnishKumbhar/ChatBot/text-generation-webui-main/api-examples/api-example-stream.py +0 -86
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/network/session.py +0 -517
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/cachecontrol/caches/redis_cache.py +0 -39
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/results.py +0 -760
- spaces/Awesimo/jojogan/e4e/utils/common.py +0 -55
- spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/layers/test_roi_align_rotated.py +0 -176
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/escprober.py +0 -102
- spaces/CVPR/LIVE/pydiffvg/render_pytorch.py +0 -870
- spaces/CVPR/LIVE/thrust/thrust/mr/disjoint_tls_pool.h +0 -69
- spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/for_each.h +0 -23
- spaces/CVPR/lama-example/saicinpainting/training/modules/spatial_transform.py +0 -49
- spaces/Chintan-Donda/KKMS-KSSW-HF/src/translator.py +0 -61
- spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/base_dataset.py +0 -68
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImageTk.py +0 -283
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiofiles/tempfile/temptypes.py +0 -73
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/statisticsPen.py +0 -122
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I__5.py +0 -46
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otConverters.py +0 -1929
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cadmas 11 46 A Snow Sport Helmet with Advanced Features.md
DELETED
@@ -1,121 +0,0 @@
-
-<h1>What is Cadmas 11 46?</h1>
-<p>If you are interested in online assessment and comic books, you might have heard of Cadmas 11 46. But what exactly is it? Is it a software, a comic book, or something else? In this article, we will explore what Cadmas and 11 46 are, how they are related, and how they can be used for educational purposes.</p>
-<h2>What is Cadmas?</h2>
-<p>Cadmas is an online assessment platform that helps higher education providers achieve institutional goals through better assessment experiences. It is a secure, online environment that facilitates an end-to-end assessment workflow, simplifying the process of implementing best practice assessment at scale. By empowering academics and supporting students, Cadmas can be used to solve the biggest challenges faced by universities today, such as academic integrity, student retention, remote learning, and online exams.</p>
-<h2>cadmas 11 46</h2><br /><p><b><b>Download</b> ……… <a href="https://byltly.com/2uKvPI">https://byltly.com/2uKvPI</a></b></p><br /><br />
-<h3>How does Cadmus work?</h3>
-<p>Cadmus has several features and benefits for both learners and educators. For learners, Cadmus provides a supportive and scaffolded assessment experience that helps them develop their academic skills and achieve better outcomes. For example, Cadmus offers:</p>
-<ul>
-<li>A distraction-free writing environment that blocks access to other websites and applications while completing an assignment</li>
-<li>A range of learning supports that are intelligently integrated into the writing environment, such as referencing tools, word count, feedback rubric, etc.</li>
-<li>A proctor-free exam alternative that does not impose on privacy but still ensures academic integrity through various safeguards, such as plagiarism detection, keystroke analysis, etc.</li>
-<li>A learning analytics dashboard that shows their progress and engagement with the assignment</li>
-</ul>
-<p>For educators, Cadmus simplifies the process of designing and delivering high-quality digital assessment, consistently and at scale. For example, Cadmus offers:</p>
-<ul>
-<li>A template-based approach that allows educators to create assessments that align with best practice principles and institutional standards</li>
-<li>A seamless integration with learning management systems (LMS) that allows educators to manage assessments from one place</li>
-<li>A real-time class-level insight that allows educators to monitor student progress and provide timely support and communication</li>
-<li>A feedback and grading tool that allows educators to provide rich and constructive feedback to students</li></ul>
-<h4>What are some use cases of Cadmus?</h4>
-<p>Cadmus can be used for a range of formative and summative, open-book written assessments and alternatives to exams. Some examples of how Cadmus can be used are:</p>
-<ul>
-<li>An essay that requires students to research a topic and present their arguments in a structured way</li>
-<li>A report that requires students to analyse data and provide recommendations based on evidence</li>
-<li>A reflection that requires students to evaluate their own learning process and outcomes</li>
-<li>A case study that requires students to apply their knowledge and skills to a real-world scenario</li>
-<li>A short answer test that requires students to demonstrate their understanding of key concepts</li>
-</ul>
-<h2>What is 11 46?</h2>
-<p>11 46 is a comic book series by Castle Comics that was published between November 2020 and June 2021 . It is a crime thriller that follows the lives of four strangers who are connected by a mysterious murder that took place at exactly 11:46 pm.</p>
-<h3>What is the plot of 11 46?</h3>
-<p>The plot of 11 46 revolves around four main characters who have different backgrounds and motivations. They are:</p>
-<ul>
-<li>Adam Smith, a journalist who is investigating the murder case and trying to expose the truth behind it</li>
-<li>Betty Jones, a waitress who witnessed the murder and is being hunted by the killers</li>
-<li>Charlie Brown, a detective who is assigned to solve the murder case and catch the killers</li>
-<li>Danny Lee, a hacker who is involved in the murder plot and has a hidden agenda</li>
-</ul>
-<p>The story unfolds through multiple perspectives and timelines, revealing how each character is related to the murder and how their actions affect each other. The story also explores various themes and messages, such as corruption, justice, revenge, loyalty, etc.</p>
-<h4>What are some themes and messages of 11 46?</h4>
-<p>One of the main themes of 11 46 is the idea of fate versus free will. The title of the series refers to the exact time when the murder happened, suggesting that it was predetermined by some higher power or force. However, the series also shows how each character has some degree of choice and agency in their actions. The series asks questions such as:</p>
-<ul>
-<li>How much control do we have over our lives?</li>
-<li>How do our choices affect others?</li>
-<li>How do we deal with the consequences of our choices?</li>
-<li>How do we cope with uncertainty?</li></ul>
-<h2>How are Cadmus and 11 46 related?</h2>
-<p>At first glance, Cadmus and 11 46 seem to have nothing in common. One is an online assessment platform for higher education, while the other is a comic book series for entertainment. However, upon closer examination, we can find some possible connections and similarities between them. For example:</p>
-<p>Cadmas 11 46 sway office<br />
-Cadmas 11 46 bali finder<br />
-Cadmas 11 46 opensea collection<br />
-Cadmas 11 46 black panther<br />
-Cadmas 11 46 NBA finals<br />
-Cadmas 11 46 Jerome K Jerome<br />
-Cadmas 11 46 Pune event management<br />
-Cadmas 11 46 Oaxaca figs<br />
-Cadmas 11 46 hedge fund<br />
-Cadmas 11 46 short fiction writer<br />
-Cadmas 11 46 glassware<br />
-Cadmas 11 46 dance<br />
-Cadmas 11 46 calculus<br />
-Cadmas 11 46 emperor is dead<br />
-Cadmas 11 46 chainsaw training<br />
-Cadmas 11 46 workhorse<br />
-Cadmas 11 46 fate of cadmus<br />
-Cadmas 11 46 contemporary culinary style<br />
-Cadmas 11 46 force crankset<br />
-Cadmas 11 46 snow sport helmet<br />
-Cadmas 11 46 trail running shoes<br />
-Cadmas 11 46 board shorts<br />
-Cadmas 11 46 slip on shoes<br />
-Cadmas 11 46 black laces and nylon<br />
-Cadmas 11 46 jeans by yeezy<br />
-Cadmas 11 46 navy women's supercrew sweatshirt<br />
-Cadmas 11 46 ebay items at great prices<br />
-Cadmas 11 46 Baxter pharma earnings call<br />
-Cadmas 11 46 black smoke burner<br />
-Cadmas 11 46 Venus sign<br />
-Cadmas 11 46 IT business solutions<br />
-Cadmas 11 BB/38 UHM blazer SS7MU<br />
-Cadmas #47 bdfb3a6fcd made with Microsoft sway<br />
-Cadmus #48 cadmus and his legacy Kirstein <br />
-Cadmus #49 history of science A.L. Kirstein</p>
-<h3>How can Cadmus be used to assess 11 46?</h3>
-<p>One way to use Cadmus to assess 11 46 is to design and deliver a Cadmus assignment based on the comic book series. For example, an educator can create an assignment that requires students to:</p>
-<ul>
-<li>Read the comic book series and analyse its plot, characters, themes, and messages</li>
-<li>Write a critical review of the comic book series, using evidence and examples from the text</li>
-<li>Use appropriate academic conventions, such as referencing, structure, language, etc.</li>
-</ul>
-<p>The assignment can be aligned with the learning outcomes and assessment criteria of the course or subject. The assignment can also be tailored to suit different levels of difficulty and complexity, depending on the students' needs and abilities.</p>
-<h4>What are some benefits and challenges of using Cadmus for 11 46?</h4>
-<p>Using Cadmus for 11 46 can have some benefits and challenges for both learners and educators. Some of the benefits are:</p>
-<ul>
-<li>Learners can develop their critical thinking, analytical, and writing skills by engaging with a creative and complex text</li>
-<li>Learners can enjoy a more interesting and relevant assessment experience that connects to their interests and passions</li>
-<li>Educators can assess learners' understanding and application of key concepts and skills in a more authentic and meaningful way</li>
-<li>Educators can ensure academic integrity and quality of assessment by using Cadmus' features and safeguards</li>
-</ul>
-<p>Some of the challenges are:</p>
-<ul>
-<li>Learners may have difficulty accessing or reading the comic book series due to availability or cost issues</li>
-<li>Learners may have different levels of familiarity or preference with the comic book genre or medium</li>
-<li>Educators may have difficulty finding or creating suitable assessment tasks or rubrics that align with the comic book series</li>
-<li>Educators may have to deal with potential plagiarism or cheating issues that may arise from using a popular or widely available text</li>
-</ul>
-<h2>Conclusion</h2>
-<p>In conclusion, Cadmas 11 46 is a combination of an online assessment platform and a comic book series that can be used for educational purposes. Cadmas is a platform that helps higher education providers achieve institutional goals through better assessment experiences. 11 46 is a series that follows the lives of four strangers who are connected by a mysterious murder. By using Cadmus to assess 11 46, learners and educators can enjoy some benefits, such as developing critical thinking skills, engaging with a creative text, ensuring academic integrity, etc. However, they may also face some challenges, such as accessing or reading the text, finding or creating suitable assessment tasks, dealing with plagiarism or cheating issues, etc. Therefore, it is important to consider these factors before using Cadmus 11 46 for assessment.</p>
-<h3>FAQs</h3>
-<p>Here are some frequently asked questions and answers about Cadmas and 11 46:</p>
-<ol>
-<li><b>Where can I find Cadmus?</b><br>Cadmus is an online platform that can be accessed through your LMS. You can find more information about Cadmus on their website: https://www.cadmus.io/.</li>
-<li><b>Where can I find 11 46?</b><br>11 46 is a comic book series that was published by Castle Comics. You can find more information about 11 46 on their website: https://www.castlecomics.com/1146.</li>
-<li><b>How much does Cadmus cost?</b><br>Cadmus is free for learners and educators who use it for assessment purposes. However, Cadmus may charge a fee for institutions who want to use it for other purposes.</li>
-<li><b>How much does 11 46 cost?</b><br>11 46 costs $3.99 per issue or $19.99 for the complete series. You can buy it online or in physical stores.</li>
-<li><b>How long does it take to complete a Cadmus assignment?</b><br>The length of a Cadmus assignment depends on the type and complexity of the task. However, most Cadmus assignments take between one to three hours to complete.</li>
-</ol>
-</p> 0a6ba089eb<br />
-<br />
-<br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/FSX - Maddog 2008 Professional Cracked by Komu Everything You Need to Know About the Legendary MD-80 Add-on.md
DELETED
@@ -1,105 +0,0 @@
-
-<h1>FSX - Maddog 2008 Professional cracked by Komu: A Review</h1>
-<p>If you are a fan of flight simulation games, you might have heard of <strong>FSX - Maddog 2008 Professional</strong>, a popular add-on for Microsoft Flight Simulator X that lets you fly the Leonardo Maddog, a realistic and complex simulation of the McDonnell Douglas MD-80 aircraft. But did you know that there is a way to get this add-on for free, thanks to a crack made by a user named Komu? In this article, we will review <strong>FSX - Maddog 2008 Professional cracked by Komu</strong>, a download that claims to unlock all the features and benefits of the original add-on without paying a dime. We will also show you how to install and use it, as well as the pros and cons of using this crack. Finally, we will suggest some alternatives to this crack in case you are looking for other options.</p>
-<h2>What is FSX - Maddog 2008 Professional?</h2>
-<p><strong>FSX - Maddog 2008 Professional</strong> is an add-on for Microsoft Flight Simulator X that was released in 2008 by Leonardo Software House, a company that specializes in developing flight simulation software. This add-on is a highly detailed and accurate simulation of the McDonnell Douglas MD-80 aircraft, also known as the Maddog, a twin-engine, medium-range jet airliner that was widely used by many airlines around the world from the 1980s to the 2000s.</p>
-<h2>FSX - Maddog 2008 Professional cracked by Komu</h2><br /><p><b><b>Download File</b> ✒ <a href="https://byltly.com/2uKvyM">https://byltly.com/2uKvyM</a></b></p><br /><br />
-<p>This add-on offers many features and benefits for flight simulation enthusiasts, such as:</p>
-<ul>
-<li>A realistic and fully functional cockpit with custom gauges, systems, sounds, animations, and lighting.</li>
-<li>A comprehensive flight management system (FMS) with navigation, performance, fuel, and route planning functions.</li>
-<li>A realistic flight model with accurate aerodynamics, engine performance, fuel consumption, and weight and balance calculations.</li>
-<li>A custom load manager that allows you to configure the payload, fuel, passengers, and cargo of your aircraft.</li>
-<li>A failure simulation system that lets you experience various malfunctions and emergencies during your flight.</li>
-<li>A weather radar that displays precipitation, turbulence, windshear, and storm cells.</li>
-<li>A traffic collision avoidance system (TCAS) that warns you of potential conflicts with other aircraft.</li>
-<li>A ground proximity warning system (GPWS) that alerts you of terrain hazards.</li>
-<li>A custom sound set that reproduces the engine noise, cockpit sounds, environmental sounds, and voice alerts of the real aircraft.</li>
-<li>A variety of liveries that represent different airlines that operated the MD-80 aircraft.</li>
-</ul>
-<p><strong>FSX - Maddog 2008 Professional</strong> is widely regarded as one of the best add-ons for FSX in terms of realism, complexity, and immersion. However, it also comes with a price tag of $59.99 USD (as of May 2023), which might be too expensive for some users who want to enjoy this add-on without breaking the bank.</p>
-<p>FSX Maddog 2008 Pro full version download<br />
-How to install FSX Maddog 2008 Professional crack<br />
-FSX Maddog 2008 Professional free torrent<br />
-FSX Maddog 2008 Pro activation key<br />
-FSX Maddog 2008 Professional patch by Komu<br />
-FSX Maddog 2008 Pro serial number<br />
-FSX Maddog 2008 Professional license code<br />
-FSX Maddog 2008 Pro keygen<br />
-FSX Maddog 2008 Professional gameplay video<br />
-FSX Maddog 2008 Pro review<br />
-FSX Maddog 2008 Professional system requirements<br />
-FSX Maddog 2008 Pro manual pdf<br />
-FSX Maddog 2008 Professional update<br />
-FSX Maddog 2008 Pro mods<br />
-FSX Maddog 2008 Professional liveries<br />
-FSX Maddog 2008 Pro cockpit view<br />
-FSX Maddog 2008 Professional tutorial<br />
-FSX Maddog 2008 Pro tips and tricks<br />
-FSX Maddog 2008 Pro cheats<br />
-FSX Maddog 2008 Pro error fix<br />
-FSX Maddog 2008 Professional forum<br />
-FSX Maddog 2008 Professional support<br />
-FSX Maddog 2008 Professional online multiplayer<br />
-FSX Maddog 2008 Professional VR compatibility<br />
-FSX Maddog 2008 Professional best settings<br />
-FSX Maddog 2008 Pro comparison with other flight simulators<br />
-FSX Maddog 2008 Pro realistic flight model<br />
-FSX Maddog 2008 Pro sound pack<br />
-FSX Maddog 2008 Pro scenery add-ons<br />
-FSX Maddog 2008 Pro weather engine<br />
-FSX Maddog 2008 Pro navigation database<br />
-FSX Maddog 2008 Pro fuel planner<br />
-FSX Maddog 2008 Pro flight plan generator<br />
-FSX Maddog 2008 Pro charts and maps<br />
-FSX Maddog 2008 Pro ATC communication<br />
-FSX Maddog 2008 Pro emergency procedures<br />
-FSX Maddog 2008 Pro failures simulation<br />
-FSX Maddog 2008 Pro cold and dark start up<br />
-FSX Maddog 2008 Pro take off and landing performance calculator<br />
-FSX Maddog 2008 Pro autopilot functions<br />
-FSX Maddog 2008 Pro FMC programming<br />
-FSX Maddog 2008 Pro VNAV and LNAV modes<br />
-FSX Maddog 2008 Pro SID and STAR procedures<br />
-FSX Maddog 2008 Pro ILS approach and landing<br />
-FSX Maddog 2008 Pro RNAV approach and landing<br />
-FSX Maddog 2008 Pro VOR approach and landing<br />
-FSX Maddog 2008 Pro visual approach and landing<br />
-FSX Maddog 2008 Pro go around procedure<br />
-FSX Maddog 2008 Pro holding pattern procedure <br />
-FSX Maddog 2008 Pro diverting to alternate airport procedure</p>
-<h2>What is Komu's crack?</h2>
-<p><strong>Komu's crack</strong> is a download that claims to bypass the activation process of <strong>FSX - Maddog 2008 Professional</strong> and allow users to use it for free. It was created by a user named Komu who uploaded it on various torrent sites in 2010. According to Komu's description, his crack does not modify any files or registry entries of the original add-on, but simply replaces the original .dll file with a cracked one that disables the activation check. He also claims that his crack does not affect any features or functions of the add-on, and that it works with any version of FSX.</p>
-<p>Komu's crack has been downloaded by thousands of users who wanted to try <strong>FSX - Maddog 2008 Professional</strong> without paying for it. Some users have reported that the crack works as advertised and that they have not encountered any problems or issues with it. However, other users have reported that the crack does not work at all or that it causes various errors or crashes during their flights. Moreover, some users have expressed ethical concerns about using this crack, as it violates the intellectual property rights of Leonardo Software House and deprives them of their deserved revenue.</p>
-<h2>How to install and use FSX - Maddog 2008 Professional cracked by Komu?</h2>
-<p>If you want to install and use <strong>FSX - Maddog 2008 Professional cracked by Komu</strong>, you will need to follow these steps:</p>
-<ol>
-<li>Download <strong>FSX - Maddog 2008 Professional cracked by Komu</strong> from one of the torrent sites where it is available. You will need a torrent client such as uTorrent or BitTorrent to do this.</li>
-<li>Extract the downloaded file using a program such as WinRAR or 7-Zip. You will get a folder named "Maddog Pro" that contains two files: "maddog pro fsx.exe" and "maddog pro fsx crack by komu.dll".</li>
-<li>Run "maddog pro fsx.exe" and follow the installation instructions. You will need to specify the location of your FSX folder during the installation process.</li>
-<li>Copy "maddog pro fsx crack by komu.dll" and paste it into your FSX folder. You will need to overwrite the original .dll file with the same name.</li>
-<li>Launch FSX and select "Fly The Maddog" from the menu. You should be able to use <strong>FSX - Maddog 2008 Professional</strong> without any activation prompts or restrictions.</li>
-</ol>
-<p>Note: These steps are based on Komu's instructions and user feedback. We do not endorse or recommend using this crack or any other illegal downloads. Use them at your own risk.</p>
-<h2>Pros and cons of FSX - Maddog 2008 Professional cracked by Komu</h2>
-<p><strong>FSX - Maddog 2008 Professional cracked by Komu</strong> has some pros and cons that you should consider before using it:</p>
-<h3>Pros:</h3>
-<ul>
-<li>You can use <strong>FSX - Maddog 2008 Professional</strong> for free without paying $59.99 USD for it.</li>
-<li>You can enjoy all the features and benefits of <strong>FSX - Maddog 2008 Professional</strong>, such as realistic cockpit, systems, sounds, flight model, FMS, weather radar, TCAS, GPWS, failures simulation system etc.</li>
-<li>You can fly one of the most complex and immersive simulations of the MD-80 aircraft in FSX.</li>
-<li>You can choose from various liveries that represent different airlines that operated the MD-80 aircraft.</li>
-</ul>
-<h3>Cons:</h3>
-<ul>
-<li>You are violating Leonardo Software House's intellectual property rights and depriving them of their deserved revenue.</li>
-<li>You are risking legal consequences if Leonardo Software House decides to take action against illegal downloads.</li>
-<li>You are exposing your computer to potential viruses or malware that might be hidden in the download file or torrent site.</li>
-<li>You are compromising your flight simulation experience if the crack causes errors or crashes during your flights.</li>
-<li>You are missing out on updates or support from Leonardo Software House if they release new versions or patches for <strong>FSX - Maddog 2008 Professional</strong>.</li>
-<li>You are limiting your options if you want to try other add-ons or cracks for FSX that might be incompatible with <strong>Komu's crack</strong>.</li>
-</ul>
-<h2>Alternatives to FSX - Maddog 2008 Professional cracked by Komu</h2>
-<p>If you are looking for alternatives to <</p> 0a6ba089eb<br />
-<br />
-<br />
spaces/1gistliPinn/ChatGPT4/Examples/Clarion Enterprise Edition 6.0 64 Bit UPD.md
DELETED
@@ -1,9 +0,0 @@
-<br />
-<p> if you are not certain what is the best budget hotel for you, please take into account your initial budget as well as the purpose of your trip. the more affordable hotels may not be suitable for your needs. you should also consider what other options are available in the area. hotels that are in less populated areas tend to be less expensive, but also are farther from popular attractions.</p>
-<h2>Clarion Enterprise Edition 6.0 64 bit</h2><br /><p><b><b>Download Zip</b> »»» <a href="https://imgfil.com/2uxWUx">https://imgfil.com/2uxWUx</a></b></p><br /><br />
-<p> you'll be very satisfied with the hotel's service. the clarion express was very nice - the lobby had free wireless internet, and the rooms had a fridge and a coffeemaker. the walk to the hotel from downtown was fast and easy, even though i had to use the train to get to clarion. the hotel was very easy to get into, and the staff were friendly. a very nice choice.</p>
-<p> the clarion express hotel is a best choice for a budget hotel with a great location. enjoy our complimentary cooked-to-order breakfast each morning before you head out exploring. we offer free wireless internet, free local calls, and 32" lcd hd tvs with free cable in every room. just hop a train to clarion university in less than three miles. our city welcomes business travelers, so clarion express is an ideal hotel for travelers seeking a modern downtown hotel with the amenities and location of a big city hotel at a reasonable price.</p>
-<p>clarion city inn & suites in downtown harrisburg offers 100 rooms with complimentary internet access. non-smoking rooms include microwaves and refrigerators. rooms have microwaves, hair dryers, and coffee/tea makers. this harrisburg hotel has both seasonal and indoor pools. parking is free. complimentary breakfast is served daily.</p>
-<p></p> 899543212b<br />
-<br />
-<br />
spaces/1phancelerku/anime-remove-background/8 Ball Pool Long Line Tool APK The Ultimate Guide for Android Users.md
DELETED
@@ -1,111 +0,0 @@
<h1>8 Ball Pool Long Line Tool APK: A Guide for Beginners</h1>
<p>If you are a fan of billiards games, you might have heard of 8 Ball Pool, one of the most popular and addictive online pool games in the world. But did you know that there is a way to enhance your gaming experience and improve your skills with a simple tool? In this article, we will introduce you to 8 Ball Pool Long Line Tool APK, a modded version of the game that allows you to have longer aiming lines and more accurate shots. We will also show you how to download and install it on your Android device, and share some tips and tricks to win in 8 Ball Pool.</p>
<h2>What is 8 Ball Pool and How to Play It?</h2>
<p>8 Ball Pool is a game developed by Miniclip that simulates the real-life pool game of the same name. You can play it online with millions of players from around the world, or offline with your friends. You can also participate in tournaments, win trophies, and collect coins and cash to buy better cues and enter higher-stakes tables.</p>
<h2>8 ball pool long line tool apk</h2>
<p><b>Download Zip === <a href="https://jinyurl.com/2uNMpE">https://jinyurl.com/2uNMpE</a></b></p>
<h3>The Basics of 8 Ball Pool</h3>
<p>8 Ball Pool is played with a cue ball and fifteen object balls, numbered 1 through 15. Balls 1–7 are solid colors and commonly referred to as “low balls”, and balls 9–15 are striped and commonly referred to as “high balls.” One player must pocket balls of solid colors, while the other player must pocket the striped balls. The player who pockets their entire group and then legally pockets the 8-ball wins the game.</p>
<h3>The Rules of 8 Ball Pool</h3>
<p>For the break shot to be legal, the breaker (with the base of the cue ball placed anywhere behind the head string) must either pocket a number ball or drive at least four (4) number balls to one or more rails. No ball is called, and the cue ball is not required to hit any particular object ball first. If the breaker fails to make the legal break requirement, the balls will be re-racked and the opponent shall have the option of breaking or requesting the offending player to break again.</p>
<p>If any numbered ball is pocketed on a legal break, the breaking player is to continue their inning. If the breaker makes a legal break but commits a foul, the game is to continue with the opponent having ball in hand anywhere behind the head string, but must shoot an object ball beyond the head string (outside of the “kitchen”) or it is a foul.</p>
<p>If the breaker pockets the 8-ball on a legal break shot, they win the game unless they also scratch (pocket or drive off the table) the cue ball, in which case they lose. If any other object ball leaves the table on a legal break shot, it is spotted on its original position before the shooting player plays their next shot.</p>
<p>During normal play, each player remains at the table until they fail to legally pocket a ball of their group or commit a foul. If a player pockets any ball on a legal shot except for their own group or an opponent’s group (if playing an open table), they continue their inning. If they pocket their own group and an opponent’s group on one shot (if playing an open table), they continue their inning but must declare which group they are playing before their next shot.</p>
<p>If a player pockets any ball on a foul shot, it remains pocketed except for the cue ball, which is returned behind the head string or spotted if it leaves the table. If a player pockets the 8-ball on a legal shot, they win the game unless they also scratch, in which case they lose. If a player pockets the 8-ball on an illegal shot, they lose the game.</p>
<p>8 ball pool mod apk with long lines<br />
8 ball pool hack apk long line tool<br />
8 ball pool unlimited coins and long lines apk<br />
8 ball pool long line tool apk download<br />
8 ball pool long line tool apk no root<br />
8 ball pool long line tool apk latest version<br />
8 ball pool long line tool apk for android<br />
8 ball pool long line tool apk free download<br />
8 ball pool long line tool apk online<br />
8 ball pool long line tool apk 2023<br />
8 ball pool cheat apk long line tool<br />
8 ball pool guideline hack apk long line tool<br />
8 ball pool mega mod apk long lines<br />
8 ball pool extended lines apk tool<br />
8 ball pool long line tool apk without ban<br />
8 ball pool aim hack apk long line tool<br />
8 ball pool anti ban apk long line tool<br />
8 ball pool premium apk long lines<br />
8 ball pool cracked apk long line tool<br />
8 ball pool modded apk with long lines<br />
8 ball pool unlimited guideline apk tool<br />
8 ball pool pro apk long line tool<br />
8 ball pool full version apk long lines<br />
8 ball pool unlocked apk long line tool<br />
8 ball pool patcher apk long lines<br />
8 ball pool generator apk long line tool<br />
8 ball pool trainer apk long lines<br />
8 ball pool mod menu apk with long lines<br />
8 ball pool glitch apk long line tool<br />
8 ball pool update apk long lines<br />
8 ball pool best mod apk with long lines<br />
8 ball pool easy win apk long line tool<br />
8 ball pool legendary cues apk with long lines<br />
8 ball pool rewards apk long line tool<br />
8 ball pool cash hack apk with long lines<br />
8 ball pool instant win apk long line tool<br />
8 ball pool level up hack apk with long lines<br />
8 ball pool auto win apk long line tool<br />
8 ball pool all cues unlocked apk with long lines<br />
8 ball pool vip mod apk with long lines</p>
<p>A foul occurs when a player fails to hit their own group of balls first, fails to hit any ball at all, scratches the cue ball, drives any ball off the table, touches any ball with their hand or cue, or violates any other rule of the game. When a foul is committed, the opponent gets ball in hand anywhere on the table. However, if the cue ball is behind the head string and an object ball is outside of the head string, the player must shoot an object ball outside of the head string or it is a foul.</p>
<h2>What is 8 Ball Pool Long Line Tool APK and How to Download It?</h2>
<p>8 Ball Pool Long Line Tool APK is a modified version of the original 8 Ball Pool game that gives you some extra advantages over your opponents. It is not an official app from Miniclip, but a third-party app that you can download and install on your Android device for free.</p>
<h3>The Features of 8 Ball Pool Long Line Tool APK</h3>
<p>Some of the features that 8 Ball Pool Long Line Tool APK offers are:</p>
<ul>
<li>Longer aiming lines: You can see the trajectory of your shots more clearly and accurately, which helps you to aim better and avoid mistakes.</li>
<li>No root required: You don't need to root your device to use this app, which means you don't have to risk damaging your device or voiding your warranty.</li>
<li>Anti-ban protection: You can use this app without worrying about getting banned by Miniclip, as it has a built-in anti-ban system that prevents detection.</li>
<li>Easy to use: You don't need any special skills or knowledge to use this app, as it has a simple and user-friendly interface that guides you through the process.</li>
</ul>
<h3>The Benefits of 8 Ball Pool Long Line Tool APK</h3>
<p>Some of the benefits that 8 Ball Pool Long Line Tool APK provides are:</p>
<ul>
<li>More fun and enjoyment: You can have more fun and enjoyment playing 8 Ball Pool with this app, as you can make more impressive shots and win more games.</li>
<li>More coins and cash: You can earn more coins and cash by winning more games with this app, which allows you to buy better cues and enter higher-stakes tables.</li>
<li>More confidence and skill: You can improve your confidence and skill in playing 8 Ball Pool with this app, as you can learn from your mistakes and practice your techniques.</li>
</ul>
<h3>The Installation Process of 8 Ball Pool Long Line Tool APK</h3>
<p>To install 8 Ball Pool Long Line Tool APK on your Android device, you need to follow these steps:</p>
<ol>
<li>Download the APK file from a trusted source. You can search for it online or use this link: .</li>
<li>Enable unknown sources on your device. Go to Settings > Security > Unknown Sources and toggle it on.</li>
<li>Locate the downloaded APK file on your device and tap on it to start the installation.</li>
<li>Follow the instructions on the screen and wait for the installation to finish.</li>
<li>Launch the app and enjoy playing 8 Ball Pool with longer aiming lines.</li>
</ol>
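Before tapping an APK downloaded outside the Play Store, it is worth a quick sanity check that the file is at least a real archive. This is a minimal sketch, not a safety guarantee: APK files are ZIP archives, so a genuine download must begin with the bytes "PK" (a truncated or fake download often will not). The file name here is an assumption for illustration.

```shell
#!/bin/sh
# Check that a downloaded file starts with the ZIP magic bytes "PK".
# APKs are ZIP archives, so this catches truncated or bogus downloads.
looks_like_apk() {
  [ "$(head -c 2 "$1" 2>/dev/null)" = "PK" ]
}

# Hypothetical usage over adb (requires USB debugging on the device):
#   looks_like_apk 8ball-longline.apk && adb install 8ball-longline.apk
```

Passing this check only means the file is a ZIP-style archive; it says nothing about whether the mod is safe or legitimate.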
<h2>What are Some Tips and Tricks to Win in 8 Ball Pool?</h2>
<p>Besides using 8 Ball Pool Long Line Tool APK, there are some other tips and tricks that you can apply to win in 8 Ball Pool. Here are some of them:</p>
<h3>Choose Your Tables Wisely</h3>
<p>When you play online, you can choose from different tables with different entry fees and prizes. The higher the entry fee, the higher the prize, but also the higher the risk. If you are a beginner, you should start with lower-level tables and work your way up gradually. Don't play on tables that are too expensive for your budget or skill level, as you might lose more than you gain.</p>
<h3>Buy a Better Cue</h3>
<p>A cue is one of the most important factors that affect your performance in 8 Ball Pool. A better cue can give you more power, spin, aim, and time. You can buy cues with coins or cash in the game shop, or win them in tournaments or surprise boxes. You can also upgrade your cues with coins to improve their attributes. A good cue can make a big difference in your game, so don't hesitate to invest in one.</p>
<h3>Use a Little English</h3>
<p>English is a term that refers to the amount of spin you put on the cue ball when you hit it. By using English, you can control the direction and speed of the cue ball after it hits an object ball or a rail. You can use English to avoid scratches, make difficult shots, or set up your next shot. To use English, you need to hit the cue ball on the left or right side, rather than the center. You can also adjust the power and angle of your shot to achieve the desired effect.</p>
<h3>Shoot Faster</h3>
<p>One of the challenges of playing online is that you have a limited time to make your shot. If you take too long, you might lose your turn or even the game. To avoid this, you should try to shoot faster and more confidently. You can do this by planning your shots ahead, using 8 Ball Pool Long Line Tool APK to aim better, and practicing your skills offline. Shooting faster can also put pressure on your opponent and make them nervous or impatient.</p>
<h3>Extend Your Aim</h3>
<p>Another way to improve your accuracy and precision in 8 Ball Pool is to extend your aim beyond the object ball. This means that you should visualize where you want the cue ball to go after it hits the object ball, and align your cue accordingly. This can help you to avoid scratches, position your cue ball better, and make more complex shots. You can also use 8 Ball Pool Long Line Tool APK to see the extended aiming lines and adjust your shots accordingly.</p>
<h2>Conclusion</h2>
<p>8 Ball Pool is a fun and exciting game that can keep you entertained for hours. However, if you want to take your game to the next level, you might want to try 8 Ball Pool Long Line Tool APK, a modded version of the game that gives you longer aiming lines and more accurate shots. You can download and install it on your Android device for free and enjoy playing 8 Ball Pool with an edge over your opponents. You can also use some tips and tricks to win in 8 Ball Pool, such as choosing your tables wisely, buying a better cue, using a little English, shooting faster, and extending your aim. With these tools and techniques, you can become a master of 8 Ball Pool in no time.</p>
<h2>FAQs</h2>
<p>Here are some frequently asked questions about 8 Ball Pool Long Line Tool APK:</p>
<ol>
<li>Is 8 Ball Pool Long Line Tool APK safe to use?
<p>Yes, 8 Ball Pool Long Line Tool APK is safe to use as long as you download it from a trusted source and follow the installation instructions carefully. It has an anti-ban system that prevents detection by Miniclip, so you don't have to worry about getting banned or losing your account.</p></li>
<li>Is 8 Ball Pool Long Line Tool APK compatible with all devices?
<p>No, 8 Ball Pool Long Line Tool APK is only compatible with Android devices that have Android 4.1 or higher versions. It is not compatible with iOS devices or other platforms.</p></li>
<li>Can I play online with 8 Ball Pool Long Line Tool APK?
<p>Yes, you can play online with 8 Ball Pool Long Line Tool APK as long as you have a stable internet connection and a valid Miniclip account. You can play with other players who are using the same app or the original game.</p></li>
<li>Can I update 8 Ball Pool Long Line Tool APK?
<p>No, you cannot update 8 Ball Pool Long Line Tool APK as it is not an official app from Miniclip. If you update it, you might lose the modded features or encounter errors. You should always check for new versions of the app from the source where you downloaded it.</p></li>
<li>Can I use 8 Ball Pool Long Line Tool APK with other mods or hacks?
<p>No, you should not use 8 Ball Pool Long Line Tool APK with other mods or hacks as they might interfere with each other or cause problems. You should only use one mod or hack at a time for optimal performance and safety.</p></li>
</ol>
spaces/1phancelerku/anime-remove-background/Download FIFA 20 Mod APK with OBB Data - Enjoy Realistic Football Experience on Your Phone.md
DELETED
@@ -1,104 +0,0 @@
<h1>Download APK FIFA 20: How to Install and Play the Latest Version of the Popular Soccer Game on Your Android Device</h1>
<p>If you are a fan of soccer games, you have probably heard of FIFA 20, the latest installment of the popular FIFA series by Electronic Arts. FIFA 20 is a realistic and immersive soccer simulation game that lets you experience the thrill of playing with your favorite teams and players in various modes and competitions. Whether you want to play solo or with friends, offline or online, FIFA 20 has something for everyone.</p>
<h2>download apk fifa 20</h2>
<p><b>Download Zip ⚙⚙⚙ <a href="https://jinyurl.com/2uNR2u">https://jinyurl.com/2uNR2u</a></b></p>
<p>But what if you don't have a console or a PC to play FIFA 20? Don't worry, you can still enjoy this amazing game on your Android device. All you need is to download and install the FIFA 20 APK and OBB data files, which are modified versions of the original game that can run on Android devices without any issues. In this article, we will show you how to do that, as well as give you some tips and tricks to play FIFA 20 like a pro.</p>
<h2>What are the features and benefits of FIFA 20</h2>
<p>FIFA 20 is not just another soccer game. It is a game that offers you a lot of features and benefits that make it stand out from other games in the genre. Here are some of them:</p>
<ul>
<li><b>Stunning graphics and sound:</b> FIFA 20 boasts of high-quality graphics and sound that make you feel like you are watching a real soccer match. The players, stadiums, crowds, kits, balls, and animations are all detailed and realistic. The commentary, sound effects, and music are also immersive and dynamic.</li>
<li><b>Realistic gameplay and physics:</b> FIFA 20 uses a sophisticated gameplay engine that simulates the physics and mechanics of soccer in a realistic way. The players move, dribble, pass, shoot, tackle, and react according to their attributes, skills, and situations. The ball also behaves realistically, bouncing, spinning, curving, and swerving according to its speed, direction, and contact.</li>
<li><b>Various modes and competitions:</b> FIFA 20 offers you a variety of modes and competitions to choose from, depending on your preference and mood. You can play quick matches, tournaments, leagues, career mode, ultimate team mode, volta mode, online seasons, online friendlies, online co-op seasons, online draft mode, online squad battles, online champions league mode, online world cup mode, online pro clubs mode, online division rivals mode, online weekend league mode, online fut champions mode, online fut friendlies mode, online fut events mode, online fut seasons mode.</li>
<li><b>Ultimate team mode:</b> This is one of the most popular modes in FIFA 20. It allows you to create your own dream team by collecting and trading players from different leagues and nations. You can customize your team's formation, tactics, kits, badges, stadiums, managers, chemistry styles, consumables, etc. You can also compete with other players' teams in various online modes.</li>
<li><b>Volta mode:</b> This is a new mode in FIFA 20 that brings back the street soccer style of previous FIFA games. It allows you to play in small-sided matches with different rules and settings. You can play in various locations around the world, such as rooftops, cages, courts, etc. You can also customize your avatar's appearance, clothing, accessories, tattoos, etc.</li>
</ul>
<h2>How to download and install FIFA 20 APK and OBB data on your Android device</h2>
<p>Now that you know the features and benefits of FIFA 20, you might be wondering how to download and install it on your Android device. Well, it's not as hard as you might think. Just follow these simple steps:</p>
<h4>Step 1: Enable unknown sources on your device</h4>
<p>Before you can install any APK file on your device, you need to enable the option to allow unknown sources. This will let you install apps that are not from the Google Play Store. To do this, go to your device's settings, then security, then unknown sources. Toggle the switch to enable it.</p>
<p>download apk fifa 20 mod<br />
download apk fifa 20 offline<br />
download apk fifa 20 mobile<br />
download apk fifa 20 android<br />
download apk fifa 20 latest version<br />
download apk fifa 20 ultimate team<br />
download apk fifa 20 for free<br />
download apk fifa 20 with obb data<br />
download apk fifa 20 update<br />
download apk fifa 20 hack<br />
download apk fifa 20 full version<br />
download apk fifa 20 online<br />
download apk fifa 20 cracked<br />
download apk fifa 20 no verification<br />
download apk fifa 20 without human verification<br />
download apk fifa 20 from apkpure<br />
download apk fifa 20 from google play store<br />
download apk fifa 20 from uptodown<br />
download apk fifa 20 from apkmirror<br />
download apk fifa 20 from apksfull<br />
download apk fifa 20 with commentary<br />
download apk fifa 20 with real faces<br />
download apk fifa 20 with new kits<br />
download apk fifa 20 with new transfers<br />
download apk fifa 20 with unlimited coins<br />
download apk fifa 20 manager mode<br />
download apk fifa 20 tournament mode<br />
download apk fifa 20 career mode<br />
download apk fifa 20 volta mode<br />
download apk fifa 20 street mode<br />
download apk fifa 20 ps4 camera view<br />
download apk fifa 20 hd graphics<br />
download apk fifa 20 high compress<br />
download apk fifa 20 low mb<br />
download apk fifa 20 original<br />
download apk fifa 20 beta<br />
download apk fifa 20 demo<br />
download apk fifa 20 pro evolution soccer (pes)<br />
download apk fifa 20 dream league soccer (dls)<br />
download apk fifa 20 first touch soccer (fts)<br />
download apk fifa 20 efootball (efootball)<br />
download apk fifa 20 world cup edition (wc)<br />
download apk fifa 20 champions league edition (cl)<br />
download apk fifa 20 euro cup edition (ec)<br />
download apk fifa 20 copa america edition (ca)<br />
download apk fifa 20 africa cup of nations edition (afcon)<br />
download apk fifa 20 women's world cup edition (wwc)<br />
download apk fifa 20 fut companion app (fut)<br />
download apk fifa 20 pack opener app (pack)<br />
download apk fifa 20 player potentials app (potentials)</p>
<h4>Step 2: Download the FIFA 20 APK and OBB files from a trusted source</h4>
<p>The next step is to download the FIFA 20 APK and OBB files from a trusted source. There are many websites that offer these files, but be careful not to download from shady or malicious ones. You can use this link to download the files safely and securely. The APK file is about 30 MB, while the OBB file is about 1.5 GB.</p>
<h4>Step 3: Install the APK file and extract the OBB file to the right folder</h4>
<p>After downloading the files, you need to install the APK file and extract the OBB file to the right folder. To do this, locate the APK file in your device's file manager and tap on it to install it. Then, use a file extractor app like ZArchiver to extract the OBB file. You will get a folder named com.ea.gp.fifaworld. Move this folder to Android/OBB in your device's internal storage.</p>
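The one detail that matters in the step above is that the extracted folder keeps the exact package name, or the game will not find its data. If you script the move instead of using a file manager, a minimal sketch might look like this (the package id com.ea.gp.fifaworld comes from the article; the paths in the usage comment are typical locations and otherwise assumptions):

```shell
#!/bin/sh
# Move an extracted OBB folder into the device's OBB root,
# refusing to proceed if the folder name does not match the
# game's package id (the game looks data up by that exact name).
move_obb() {
  src="$1"   # e.g. ./com.ea.gp.fifaworld, produced by extraction
  dst="$2"   # e.g. /sdcard/Android/obb on most devices
  if [ "$(basename "$src")" != "com.ea.gp.fifaworld" ]; then
    echo "unexpected folder name: $(basename "$src")" >&2
    return 1
  fi
  mkdir -p "$dst" && mv "$src" "$dst/"
}

# Hypothetical usage:
#   move_obb ./com.ea.gp.fifaworld /sdcard/Android/obb
```

The guard against a renamed folder is the point of the sketch: a mistyped or auto-renamed folder (for example one ending in "(1)") is the most common reason a sideloaded game fails to see its OBB data.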
<h4>Step 4: Launch the game and enjoy</h4>
<p>The final step is to launch the game and enjoy. To do this, go to your app drawer and tap on the FIFA 20 icon. The game will start and ask you to verify your data. Just tap on OK and wait for a few seconds. The game will then load and take you to the main menu. You can now choose your mode and start playing.</p>
<h2>What are the tips and tricks to play FIFA 20 like a pro</h2>
<p>FIFA 20 is a fun and challenging game that requires skill and strategy to master. If you want to play like a pro, you need to know some tips and tricks that will help you improve your performance and win more matches. Here are some of them:</p>
<h3>Customize your controls and settings</h3>
<p>One of the first things you should do is customize your controls and settings according to your preference and comfort. You can do this by going to settings, then controls, then customize controls. You can choose between classic or casual controls, adjust the sensitivity and size of the buttons, enable or disable auto-switching, auto-sprint, auto-shoot, etc.</p>
<h3>Choose your game mode and difficulty level</h3>
<p>The next thing you should do is choose your game mode and difficulty level according to your skill and goal. You can do this by going to play, then select mode. You can choose between quick match, tournament, league, career mode, ultimate team mode, volta mode, etc. You can also choose between beginner, amateur, semi-pro, professional, world class, legendary, or ultimate difficulty level.</p>
<h3>Master the skills and tactics</h3>
<p>The most important thing you should do is master the skills and tactics that will help you win more matches. You can do this by practicing in training mode or playing against AI opponents. You should learn how to dribble, pass, shoot, tackle, cross, head, defend, etc. You should also learn how to use different tactics, such as formation, style, mentality, instructions, etc.</p>
<h3>Build your ultimate team and manage your players</h3>
<p>If you are playing ultimate team mode, you should build your ultimate team and manage your players effectively. You can do this by collecting and trading players from different leagues and nations. You should aim for high-rated players with good chemistry and attributes. You should also manage your players' fitness, morale, contracts, injuries, etc.</p>
<h3>Participate in online tournaments and events</h3>
<p>If you want to challenge yourself and compete with other players, you should participate in online tournaments and events. You can do this by going to play online, then select mode. You can choose between online seasons, online friendlies, online co-op seasons, online draft mode, online squad battles, online champions league mode, online world cup mode, online pro clubs mode, online division rivals mode, online weekend league mode, online fut champions mode, online fut friendlies mode, online fut events mode, online fut seasons mode. You can win rewards and trophies by playing and winning these modes.</p>
<h2>Conclusion</h2>
<p>FIFA 20 is a fantastic soccer game that you can download and play on your Android device. It offers you a lot of features and benefits that make it one of the best games in the genre. It also gives you some tips and tricks that will help you play like a pro. So what are you waiting for? Download APK FIFA 20 now and enjoy the ultimate soccer experience.</p>
<h2>FAQs</h2>
<p>Here are some frequently asked questions about FIFA 20:</p>
<ul>
<li><b>Q: Is FIFA 20 free to download and play?</b> A: Yes, FIFA 20 is free to download and play on your Android device. However, some features and modes may require in-app purchases or subscriptions.</li>
<li><b>Q: Is FIFA 20 compatible with my device?</b> A: FIFA 20 is compatible with most Android devices that have at least 2 GB of RAM and 4 GB of free storage space. However, some devices may experience performance issues or crashes due to hardware limitations.</li>
<li><b>Q: Is FIFA 20 safe to download and install?</b> A: Yes, FIFA 20 is safe to download and install on your device. However, you should always download it from a trusted source and scan it for viruses or malware before installing it.</li>
<li><b>Q: How can I update FIFA 20 to the latest version?</b> A: You can update FIFA 20 to the latest version by downloading and installing the latest APK and OBB files from the same source you downloaded them from. You should also delete the old files before installing the new ones.</li>
<li><b>Q: How can I contact the developers or support team of FIFA 20?</b> A: You can contact the developers or support team of FIFA 20 by visiting their official website or social media pages. You can also email them at [email protected] or call them at +1-866-543-5435.</li>
</ul>
spaces/1phancelerku/anime-remove-background/Download and Play FINAL FANTASY XIII on Android - Cloud Game with TV Integration Support.md
DELETED
@@ -1,103 +0,0 @@
<h1>Final Fantasy XIII APK Full Download: How to Play the Epic JRPG on Your Android Device</h1>
<p>Are you a fan of Final Fantasy, one of the most popular and influential JRPG series of all time? If so, you might be interested in playing Final Fantasy XIII, the thirteenth installment of the main series, on your Android device. In this article, we will show you how to download Final Fantasy XIII APK full version and enjoy the epic adventure on your smartphone or tablet. We will also share some tips and tricks to enhance your gaming experience. Let's get started!</p>
<h2>Introduction</h2>
<h3>What is Final Fantasy XIII?</h3>
<p>Final Fantasy XIII is a role-playing game developed and published by Square Enix in 2009. It is set in a futuristic world where two opposing forces, Cocoon and Pulse, are locked in a conflict. The game follows the story of six characters who are branded as traitors by Cocoon's government and must fight against their fate. The game features a fast-paced combat system, stunning graphics, and a rich soundtrack. It received critical acclaim and sold over seven million copies worldwide.</p>
<h2>final fantasy xiii apk full download</h2>
<p><b>Download File ⇒ <a href="https://jinyurl.com/2uNPGh">https://jinyurl.com/2uNPGh</a></b></p>
<h3>Why play Final Fantasy XIII on your Android device?</h3>
<p>Playing Final Fantasy XIII on your Android device has many benefits. First of all, you can enjoy the game anytime and anywhere, without being tied to a console or a PC. You can also save space on your device, as you don't need to download a large file or install anything. Moreover, you can take advantage of the touch screen, gyroscope, and other features of your device to enhance your gameplay. Finally, you can connect your device to a TV or a monitor and play on a bigger screen.</p>
<h2>How to download Final Fantasy XIII APK</h2>
<h3>Option 1: Use the official cloud game service from Square Enix</h3>
<p>The easiest and safest way to play Final Fantasy XIII on your Android device is to use the official cloud game service from Square Enix. This service allows you to stream high-definition games over a Wi-Fi connection, without downloading or installing anything. Here are the steps to follow:</p>
<h4>Step 1: Download the FINAL FANTASY XIII app from APKCombo</h4>
<p>The first step is to download the FINAL FANTASY XIII app from APKCombo, a website that provides free APK files for Android apps and games. You can use this link to access the app page and click on the "Download APK" button. The app size is about 12 MB and it requires Android 5.0 or higher.</p>
<h4>Step 2: Launch the app and sign up for the cloud game service</h4>
<p>The next step is to launch the app and sign up for the cloud game service. You will need to create an account with your email address and password, or log in with your existing Square Enix account. You will also need to agree to the terms of service and privacy policy.</p>
<h4>Step 3: Enjoy the free trial and purchase the license if you like it</h4>
<p>The final step is to enjoy the free trial and purchase the license if you like it. You can play the first 30 minutes of the game for free, and then decide whether to buy the full game for $15.99. You can also choose to pay $5.99 per month and access other cloud games from Square Enix, such as Final Fantasy VII and Final Fantasy VIII.</p>
<h3>Option 2: Use an unofficial source from the Internet Archive</h3>
<p>If you don't want to use the official cloud game service from Square Enix, you can try another option: use an unofficial source from the Internet Archive. The Internet Archive is a non-profit organization that preserves digital content, such as books, music, videos, and games. You can find a copy of Final Fantasy XIII for PC on their website and play it on your Android device with an emulator or a streaming app. However, this option is not recommended, as it may be illegal, unsafe, or unstable. Here are the steps to follow:</p>
<h4>Step 1: Download the final fantasy xiii file from the Internet Archive</h4>
<p>The first step is to download the final fantasy xiii file from the Internet Archive. You can use this link to access the file page and click on the "DOWNLOAD OPTIONS" button. You will see several formats available, such as ISO, ZIP, or TORRENT. The file size is about 13 GB and it requires a PC with Windows XP or higher.</p>
<h4>Step 2: Extract the file and install the game on your PC</h4>
<p>The next step is to extract the file and install the game on your PC. You will need a software like WinRAR or 7-Zip to unzip the file and get the game folder. Then, you will need to run the setup.exe file and follow the instructions to install the game on your PC. You may also need to install some additional components, such as DirectX or Visual C++.</p>
<h4>Step 3: Use an emulator or a streaming app to play the game on your Android device</h4>
<p>The final step is to use an emulator or a streaming app to play the game on your Android device. An emulator is software that mimics the behavior of another device, such as a PC or a console. A streaming app lets you stream games from your PC to your Android device over a Wi-Fi connection. Examples of emulators are ExaGear RPG or Wine, and examples of streaming apps are Steam Link or Moonlight. You will need to configure these apps according to your preferences and requirements.</p>
<h2>Tips and tricks for playing Final Fantasy XIII on your Android device</h2>
<h3>Adjust the settings to optimize the performance and battery life</h3>
<p>One challenge of playing Final Fantasy XIII on an Android device is optimizing performance and battery life. Depending on your device model and specifications, you may experience lag, crashes, overheating, or battery drain. To avoid these problems, adjust the settings in your device or the app: lower the resolution, brightness, volume, or frame rate; close other apps running in the background; turn off notifications; or activate airplane mode.</p>
<h3>Use a controller or a keyboard for better control and comfort</h3>
<p>Another challenge is controlling the game with touch-screen gestures. While this may be convenient for some players, others may find it difficult, uncomfortable, or inaccurate. For better control and comfort, use a controller or a keyboard instead. You can connect it to your device via Bluetooth, USB, or Wi-Fi, and customize the button or key layout according to your preferences.</p>
<h3>Save your progress frequently and back up your data online</h3>
<p>The last challenge is protecting your save data. Unlike playing on a console or a PC, playing on an Android device exposes you to the risk of losing your data for various reasons: deleting the app by mistake, running out of storage space, resetting your device, or losing your device. To prevent these scenarios, save your progress frequently in different slots and back up your data online using cloud services like Google Drive or Dropbox.</p>
<h2>Conclusion</h2>
<h3>Summary of the main points</h3>
<p>In conclusion, playing Final Fantasy XIII on your Android device is possible and enjoyable if you follow a few simple steps. You can get Final Fantasy XIII either through the official cloud game service from Square Enix or from an unofficial source on the Internet Archive. You can also adjust the settings, use a controller or a keyboard, and save your progress frequently with online backups to optimize your gaming experience. Final Fantasy XIII is a great game that deserves to be played on any device you want.</p>
<h3>Call to action and invitation to comment</h3>
<p>If you are ready to play Final Fantasy XIII on your Android device, don't hesitate to download the APK file and follow the instructions in this article. You will be amazed by the quality and fun of this game. And if you have any questions, comments, or feedback, feel free to leave them below. We would love to hear from you and help you with any issues you may encounter. Happy gaming!</p>
<h2>FAQs</h2>
<p>Here are some frequently asked questions about playing Final Fantasy XIII on your Android device:</p>
<ul>
<li><b>Is Final Fantasy XIII APK safe to download?</b></li>
<p>Yes, if you use the official cloud game service from Square Enix or a reputable website like APKCombo. An unofficial source from the Internet Archive may carry risks such as viruses, malware, or legal issues. We therefore recommend using the official option or scanning the file with an antivirus before installing it.</p>
<li><b>How much data does Final Fantasy XIII APK use?</b></li>
<p>A lot, since it streams high-definition gameplay over a Wi-Fi connection. The exact amount depends on factors such as resolution, frame rate, and session length, but by some estimates streaming a game can use up to 3 GB of data per hour. We suggest a Wi-Fi connection with unlimited data or a generous data plan.</p>
<li><b>Can I play Final Fantasy XIII APK offline?</b></li>
<p>No. It requires a constant internet connection to stream the game from the cloud server. If you lose your connection or have a weak signal, you may experience interruptions, lag, or disconnection, so play in a place with a stable, strong Wi-Fi connection.</p>
<li><b>Can I play Final Fantasy XIII APK with friends?</b></li>
<p>Yes, it supports an online multiplayer mode. You can join other players from around the world, cooperate or compete in various missions and battles, and chat with them using voice or text messages. To play with friends, create or join a party in the game menu and invite or accept other players.</p>
<li><b>Can I transfer my save data from Final Fantasy XIII APK to another device?</b></li>
<p>Yes, as long as you use the same account and service. With the official cloud game service from Square Enix, you can access your save data from any device that supports the service, such as another Android device, an iOS device, or a PC. With an unofficial source from the Internet Archive, transferring save data may not be straightforward.</p>
</ul>
spaces/221091lstwcm/textgenerator/app.py
DELETED
@@ -1,11 +0,0 @@
-# libraries
-import gradio as gr
-from gradio.mix import Parallel
-
-# variables, functions and parameters
-model1 = gr.Interface.load("huggingface/gpt2")
-model2 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B")
-model3 = gr.Interface.load("huggingface/distilgpt2")
-
-# functions, parameters and variables
-Parallel(model1, model2, model3).launch()
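The deleted app fed one prompt to three models side by side via `gradio.mix.Parallel`. As a rough illustration of that fan-out pattern, here is a minimal dependency-free sketch (the `parallel` helper and the toy models are hypothetical, not Gradio APIs):

```python
def parallel(*models):
    # Return a callable that feeds a single input to every model
    # and collects the outputs in order.
    def run(prompt):
        return [m(prompt) for m in models]
    return run

# toy "models" standing in for the three text generators
shout = lambda s: s.upper()
echo = lambda s: s + "!"

run_all = parallel(shout, echo)
print(run_all("hello"))  # ['HELLO', 'hello!']
```

Gradio's `Parallel` additionally builds a shared UI around the fan-out; the sketch only shows the dispatch logic.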
spaces/232labs/VToonify/vtoonify/model/stylegan/op/__init__.py
DELETED
@@ -1,2 +0,0 @@
-from .fused_act import FusedLeakyReLU, fused_leaky_relu
-from .upfirdn2d import upfirdn2d
spaces/4Taps/SadTalker/src/audio2exp_models/audio2exp.py
DELETED
@@ -1,40 +0,0 @@
-from tqdm import tqdm
-import torch
-from torch import nn
-
-
-class Audio2Exp(nn.Module):
-    def __init__(self, netG, cfg, device, prepare_training_loss=False):
-        super(Audio2Exp, self).__init__()
-        self.cfg = cfg
-        self.device = device
-        self.netG = netG.to(device)
-
-    def test(self, batch):
-
-        mel_input = batch['indiv_mels']  # bs T 1 80 16
-        bs = mel_input.shape[0]
-        T = mel_input.shape[1]
-
-        exp_coeff_pred = []
-
-        for i in tqdm(range(0, T, 10), 'audio2exp:'):  # every 10 frames
-
-            current_mel_input = mel_input[:, i:i+10]
-
-            ref = batch['ref'][:, :, :64].repeat((1, current_mel_input.shape[1], 1))  # bs T 64
-            ratio = batch['ratio_gt'][:, i:i+10]  # bs T
-
-            audiox = current_mel_input.view(-1, 1, 80, 16)  # bs*T 1 80 16
-
-            curr_exp_coeff_pred = self.netG(audiox, ref, ratio)  # bs T 64
-
-            exp_coeff_pred += [curr_exp_coeff_pred]
-
-        # BS x T x 64
-        results_dict = {
-            'exp_coeff_pred': torch.cat(exp_coeff_pred, axis=1)
-        }
-        return results_dict
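The `test` method above walks the mel spectrogram in fixed 10-frame windows and concatenates the per-chunk predictions along the time axis. A minimal pure-Python sketch of the same chunk-and-concatenate pattern (list-based, no torch; the function and the toy model are hypothetical):

```python
def predict_in_chunks(frames, model, chunk_size=10):
    """Run `model` over `frames` in fixed-size windows and concatenate
    the per-chunk outputs, mirroring the Audio2Exp.test loop."""
    outputs = []
    for i in range(0, len(frames), chunk_size):
        chunk = frames[i:i + chunk_size]  # last chunk may be shorter
        outputs.extend(model(chunk))      # model returns one value per frame
    return outputs

# toy "model": doubles every frame value
double = lambda chunk: [2 * x for x in chunk]
print(predict_in_chunks(list(range(25)), double))  # 25 frames -> chunks of 10, 10, 5
```

Chunking keeps peak memory proportional to the window size rather than the full sequence length, which is the point of the 10-frame loop in the original.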
spaces/801artistry/RVC801/demucs/utils.py
DELETED
@@ -1,323 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import errno
-import functools
-import hashlib
-import inspect
-import io
-import os
-import random
-import socket
-import tempfile
-import warnings
-import zlib
-from contextlib import contextmanager
-
-from diffq import UniformQuantizer, DiffQuantizer
-import torch as th
-import tqdm
-from torch import distributed
-from torch.nn import functional as F
-
-
-def center_trim(tensor, reference):
-    """
-    Center trim `tensor` with respect to `reference`, along the last dimension.
-    `reference` can also be a number, representing the length to trim to.
-    If the size difference != 0 mod 2, the extra sample is removed on the right side.
-    """
-    if hasattr(reference, "size"):
-        reference = reference.size(-1)
-    delta = tensor.size(-1) - reference
-    if delta < 0:
-        raise ValueError("tensor must be larger than reference. " f"Delta is {delta}.")
-    if delta:
-        tensor = tensor[..., delta // 2:-(delta - delta // 2)]
-    return tensor
-
-
-def average_metric(metric, count=1.):
-    """
-    Average `metric` which should be a float across all hosts. `count` should be
-    the weight for this particular host (i.e. number of examples).
-    """
-    metric = th.tensor([count, count * metric], dtype=th.float32, device='cuda')
-    distributed.all_reduce(metric, op=distributed.ReduceOp.SUM)
-    return metric[1].item() / metric[0].item()
-
-
-def free_port(host='', low=20000, high=40000):
-    """
-    Return a port number that is most likely free.
-    This could suffer from a race condition although
-    it should be quite rare.
-    """
-    sock = socket.socket()
-    while True:
-        port = random.randint(low, high)
-        try:
-            sock.bind((host, port))
-        except OSError as error:
-            if error.errno == errno.EADDRINUSE:
-                continue
-            raise
-        return port
-
-
-def sizeof_fmt(num, suffix='B'):
-    """
-    Given `num` bytes, return human readable size.
-    Taken from https://stackoverflow.com/a/1094933
-    """
-    for unit in ['', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi']:
-        if abs(num) < 1024.0:
-            return "%3.1f%s%s" % (num, unit, suffix)
-        num /= 1024.0
-    return "%.1f%s%s" % (num, 'Yi', suffix)
-
-
-def human_seconds(seconds, display='.2f'):
-    """
-    Given `seconds` seconds, return human readable duration.
-    """
-    value = seconds * 1e6
-    ratios = [1e3, 1e3, 60, 60, 24]
-    names = ['us', 'ms', 's', 'min', 'hrs', 'days']
-    last = names.pop(0)
-    for name, ratio in zip(names, ratios):
-        if value / ratio < 0.3:
-            break
-        value /= ratio
-        last = name
-    return f"{format(value, display)} {last}"
-
-
-class TensorChunk:
-    def __init__(self, tensor, offset=0, length=None):
-        total_length = tensor.shape[-1]
-        assert offset >= 0
-        assert offset < total_length
-
-        if length is None:
-            length = total_length - offset
-        else:
-            length = min(total_length - offset, length)
-
-        self.tensor = tensor
-        self.offset = offset
-        self.length = length
-        self.device = tensor.device
-
-    @property
-    def shape(self):
-        shape = list(self.tensor.shape)
-        shape[-1] = self.length
-        return shape
-
-    def padded(self, target_length):
-        delta = target_length - self.length
-        total_length = self.tensor.shape[-1]
-        assert delta >= 0
-
-        start = self.offset - delta // 2
-        end = start + target_length
-
-        correct_start = max(0, start)
-        correct_end = min(total_length, end)
-
-        pad_left = correct_start - start
-        pad_right = end - correct_end
-
-        out = F.pad(self.tensor[..., correct_start:correct_end], (pad_left, pad_right))
-        assert out.shape[-1] == target_length
-        return out
-
-
-def tensor_chunk(tensor_or_chunk):
-    if isinstance(tensor_or_chunk, TensorChunk):
-        return tensor_or_chunk
-    else:
-        assert isinstance(tensor_or_chunk, th.Tensor)
-        return TensorChunk(tensor_or_chunk)
-
-
-def apply_model(model, mix, shifts=None, split=False,
-                overlap=0.25, transition_power=1., progress=False):
-    """
-    Apply model to a given mixture.
-
-    Args:
-        shifts (int): if > 0, will shift in time `mix` by a random amount between 0 and 0.5 sec
-            and apply the opposite shift to the output. This is repeated `shifts` times and
-            all predictions are averaged. This effectively makes the model time equivariant
-            and improves SDR by up to 0.2 points.
-        split (bool): if True, the input will be broken down in 8 seconds extracts
-            and predictions will be performed individually on each and concatenated.
-            Useful for model with large memory footprint like Tasnet.
-        progress (bool): if True, show a progress bar (requires split=True)
-    """
-    assert transition_power >= 1, "transition_power < 1 leads to weird behavior."
-    device = mix.device
-    channels, length = mix.shape
-    if split:
-        out = th.zeros(len(model.sources), channels, length, device=device)
-        sum_weight = th.zeros(length, device=device)
-        segment = model.segment_length
-        stride = int((1 - overlap) * segment)
-        offsets = range(0, length, stride)
-        scale = stride / model.samplerate
-        if progress:
-            offsets = tqdm.tqdm(offsets, unit_scale=scale, ncols=120, unit='seconds')
-        # We start from a triangle shaped weight, with maximal weight in the middle
-        # of the segment. Then we normalize and take to the power `transition_power`.
-        # Large values of transition power will lead to sharper transitions.
-        weight = th.cat([th.arange(1, segment // 2 + 1),
-                         th.arange(segment - segment // 2, 0, -1)]).to(device)
-        assert len(weight) == segment
-        # If the overlap < 50%, this will translate to linear transition when
-        # transition_power is 1.
-        weight = (weight / weight.max())**transition_power
-        for offset in offsets:
-            chunk = TensorChunk(mix, offset, segment)
-            chunk_out = apply_model(model, chunk, shifts=shifts)
-            chunk_length = chunk_out.shape[-1]
-            out[..., offset:offset + segment] += weight[:chunk_length] * chunk_out
-            sum_weight[offset:offset + segment] += weight[:chunk_length]
-            offset += segment
-        assert sum_weight.min() > 0
-        out /= sum_weight
-        return out
-    elif shifts:
-        max_shift = int(0.5 * model.samplerate)
-        mix = tensor_chunk(mix)
-        padded_mix = mix.padded(length + 2 * max_shift)
-        out = 0
-        for _ in range(shifts):
-            offset = random.randint(0, max_shift)
-            shifted = TensorChunk(padded_mix, offset, length + max_shift - offset)
-            shifted_out = apply_model(model, shifted)
-            out += shifted_out[..., max_shift - offset:]
-        out /= shifts
-        return out
-    else:
-        valid_length = model.valid_length(length)
-        mix = tensor_chunk(mix)
-        padded_mix = mix.padded(valid_length)
-        with th.no_grad():
-            out = model(padded_mix.unsqueeze(0))[0]
-        return center_trim(out, length)
-
-
-@contextmanager
-def temp_filenames(count, delete=True):
-    names = []
-    try:
-        for _ in range(count):
-            names.append(tempfile.NamedTemporaryFile(delete=False).name)
-        yield names
-    finally:
-        if delete:
-            for name in names:
-                os.unlink(name)
-
-
-def get_quantizer(model, args, optimizer=None):
-    quantizer = None
-    if args.diffq:
-        quantizer = DiffQuantizer(
-            model, min_size=args.q_min_size, group_size=8)
-        if optimizer is not None:
-            quantizer.setup_optimizer(optimizer)
-    elif args.qat:
-        quantizer = UniformQuantizer(
-            model, bits=args.qat, min_size=args.q_min_size)
-    return quantizer
-
-
-def load_model(path, strict=False):
-    with warnings.catch_warnings():
-        warnings.simplefilter("ignore")
-        load_from = path
-        package = th.load(load_from, 'cpu')
-
-    klass = package["klass"]
-    args = package["args"]
-    kwargs = package["kwargs"]
-
-    if strict:
-        model = klass(*args, **kwargs)
-    else:
-        sig = inspect.signature(klass)
-        for key in list(kwargs):
-            if key not in sig.parameters:
-                warnings.warn("Dropping inexistant parameter " + key)
-                del kwargs[key]
-        model = klass(*args, **kwargs)
-
-    state = package["state"]
-    training_args = package["training_args"]
-    quantizer = get_quantizer(model, training_args)
-
-    set_state(model, quantizer, state)
-    return model
-
-
-def get_state(model, quantizer):
-    if quantizer is None:
-        state = {k: p.data.to('cpu') for k, p in model.state_dict().items()}
-    else:
-        state = quantizer.get_quantized_state()
-        buf = io.BytesIO()
-        th.save(state, buf)
-        state = {'compressed': zlib.compress(buf.getvalue())}
-    return state
-
-
-def set_state(model, quantizer, state):
-    if quantizer is None:
-        model.load_state_dict(state)
-    else:
-        buf = io.BytesIO(zlib.decompress(state["compressed"]))
-        state = th.load(buf, "cpu")
-        quantizer.restore_quantized_state(state)
-
-    return state
-
-
-def save_state(state, path):
-    buf = io.BytesIO()
-    th.save(state, buf)
-    sig = hashlib.sha256(buf.getvalue()).hexdigest()[:8]
-
-    path = path.parent / (path.stem + "-" + sig + path.suffix)
-    path.write_bytes(buf.getvalue())
-
-
-def save_model(model, quantizer, training_args, path):
-    args, kwargs = model._init_args_kwargs
-    klass = model.__class__
-
-    state = get_state(model, quantizer)
-
-    save_to = path
-    package = {
-        'klass': klass,
-        'args': args,
-        'kwargs': kwargs,
-        'state': state,
-        'training_args': training_args,
-    }
-    th.save(package, save_to)
-
-
-def capture_init(init):
-    @functools.wraps(init)
-    def __init__(self, *args, **kwargs):
-        self._init_args_kwargs = (args, kwargs)
-        init(self, *args, **kwargs)
-
-    return __init__
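The `split` branch of `apply_model` blends overlapping segments with a triangular weight (maximal mid-segment, normalized, raised to `transition_power`) and divides by the accumulated weight so the reconstruction is unbiased. A pure-Python sketch of that overlap-add bookkeeping with an identity "model" and toy sizes (all names hypothetical):

```python
def triangle_weight(segment, transition_power=1.0):
    # Triangle shaped weight, maximal in the middle of the segment,
    # normalized to a max of 1 and raised to `transition_power`.
    w = list(range(1, segment // 2 + 1)) + list(range(segment - segment // 2, 0, -1))
    m = max(w)
    return [(x / m) ** transition_power for x in w]

def overlap_add(signal, segment, overlap=0.25):
    # With an identity per-segment "model", the weighted sum divided by the
    # summed weights must reproduce the input exactly (up to float rounding).
    stride = int((1 - overlap) * segment)
    out = [0.0] * len(signal)
    sum_w = [0.0] * len(signal)
    w = triangle_weight(segment)
    for offset in range(0, len(signal), stride):
        chunk = signal[offset:offset + segment]  # last chunk may be shorter
        for j, x in enumerate(chunk):
            out[offset + j] += w[j] * x
            sum_w[offset + j] += w[j]
    return [o / s for o, s in zip(out, sum_w)]

sig = [float(i) for i in range(20)]
print(overlap_add(sig, segment=8))  # recovers the input
```

The division by `sum_w` mirrors `out /= sum_weight` in the original; with a real separation model the same scheme cross-fades neighboring segment predictions instead of reproducing the input.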
spaces/801artistry/RVC801/infer/lib/infer_pack/transforms.py
DELETED
@@ -1,207 +0,0 @@
|
|
1 |
-
import numpy as np
|
2 |
-
import torch
|
3 |
-
from torch.nn import functional as F
|
4 |
-
|
5 |
-
DEFAULT_MIN_BIN_WIDTH = 1e-3
|
6 |
-
DEFAULT_MIN_BIN_HEIGHT = 1e-3
|
7 |
-
DEFAULT_MIN_DERIVATIVE = 1e-3
|
8 |
-
|
9 |
-
|
10 |
-
def piecewise_rational_quadratic_transform(
|
11 |
-
inputs,
|
12 |
-
unnormalized_widths,
|
13 |
-
unnormalized_heights,
|
14 |
-
unnormalized_derivatives,
|
15 |
-
inverse=False,
|
16 |
-
tails=None,
|
17 |
-
tail_bound=1.0,
|
18 |
-
min_bin_width=DEFAULT_MIN_BIN_WIDTH,
|
19 |
-
min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
|
20 |
-
min_derivative=DEFAULT_MIN_DERIVATIVE,
|
21 |
-
):
|
22 |
-
if tails is None:
|
23 |
-
spline_fn = rational_quadratic_spline
|
24 |
-
spline_kwargs = {}
|
25 |
-
else:
|
26 |
-
spline_fn = unconstrained_rational_quadratic_spline
|
27 |
-
spline_kwargs = {"tails": tails, "tail_bound": tail_bound}
|
28 |
-
|
29 |
-
outputs, logabsdet = spline_fn(
|
30 |
-
inputs=inputs,
|
31 |
-
unnormalized_widths=unnormalized_widths,
|
32 |
-
unnormalized_heights=unnormalized_heights,
|
33 |
-
unnormalized_derivatives=unnormalized_derivatives,
|
34 |
-
inverse=inverse,
|
35 |
-
min_bin_width=min_bin_width,
|
36 |
-
min_bin_height=min_bin_height,
|
37 |
-
min_derivative=min_derivative,
|
38 |
-
**spline_kwargs
|
39 |
-
)
|
40 |
-
return outputs, logabsdet
|
41 |
-
|
42 |
-
|
43 |
-
def searchsorted(bin_locations, inputs, eps=1e-6):
|
44 |
-
bin_locations[..., -1] += eps
|
45 |
-
return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1
|
46 |
-
|
47 |
-
|
48 |
-
def unconstrained_rational_quadratic_spline(
|
49 |
-
inputs,
|
50 |
-
unnormalized_widths,
|
51 |
-
unnormalized_heights,
|
52 |
-
unnormalized_derivatives,
|
53 |
-
inverse=False,
|
54 |
-
tails="linear",
|
55 |
-
tail_bound=1.0,
|
56 |
-
min_bin_width=DEFAULT_MIN_BIN_WIDTH,
|
57 |
-
min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
|
58 |
-
min_derivative=DEFAULT_MIN_DERIVATIVE,
|
59 |
-
):
|
60 |
-
inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound)
|
61 |
-
outside_interval_mask = ~inside_interval_mask
|
62 |
-
|
63 |
-
outputs = torch.zeros_like(inputs)
|
64 |
-
logabsdet = torch.zeros_like(inputs)
|
65 |
-
|
66 |
-
if tails == "linear":
|
67 |
-
unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1))
|
68 |
-
constant = np.log(np.exp(1 - min_derivative) - 1)
|
69 |
-
unnormalized_derivatives[..., 0] = constant
|
70 |
-
unnormalized_derivatives[..., -1] = constant
|
71 |
-
|
72 |
-
outputs[outside_interval_mask] = inputs[outside_interval_mask]
|
73 |
-
logabsdet[outside_interval_mask] = 0
|
74 |
-
else:
|
75 |
-
raise RuntimeError("{} tails are not implemented.".format(tails))
|
76 |
-
|
77 |
-
(
|
78 |
-
outputs[inside_interval_mask],
|
79 |
-
logabsdet[inside_interval_mask],
|
80 |
-
) = rational_quadratic_spline(
|
81 |
-
inputs=inputs[inside_interval_mask],
|
82 |
-
unnormalized_widths=unnormalized_widths[inside_interval_mask, :],
|
83 |
-
unnormalized_heights=unnormalized_heights[inside_interval_mask, :],
|
84 |
-
unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :],
|
85 |
-
inverse=inverse,
|
86 |
-
left=-tail_bound,
|
87 |
-
right=tail_bound,
|
88 |
-
bottom=-tail_bound,
|
89 |
-
top=tail_bound,
|
90 |
-
min_bin_width=min_bin_width,
|
91 |
-
min_bin_height=min_bin_height,
|
92 |
-
min_derivative=min_derivative,
|
93 |
-
)
|
94 |
-
|
95 |
-
return outputs, logabsdet
|
96 |
-
|
97 |
-
|
98 |
-
def rational_quadratic_spline(
|
99 |
-
inputs,
|
100 |
-
unnormalized_widths,
|
101 |
-
unnormalized_heights,
|
102 |
-
unnormalized_derivatives,
|
103 |
-
inverse=False,
|
104 |
-
left=0.0,
|
105 |
-
right=1.0,
|
106 |
-
bottom=0.0,
|
107 |
-
top=1.0,
|
108 |
-
min_bin_width=DEFAULT_MIN_BIN_WIDTH,
|
109 |
-
min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
|
110 |
-
min_derivative=DEFAULT_MIN_DERIVATIVE,
|
111 |
-
):
|
112 |
-
if torch.min(inputs) < left or torch.max(inputs) > right:
|
113 |
-
raise ValueError("Input to a transform is not within its domain")
|
114 |
-
|
115 |
-
num_bins = unnormalized_widths.shape[-1]
|
116 |
-
|
117 |
-
if min_bin_width * num_bins > 1.0:
|
118 |
-
raise ValueError("Minimal bin width too large for the number of bins")
|
119 |
-
if min_bin_height * num_bins > 1.0:
|
120 |
-
```python
        raise ValueError("Minimal bin height too large for the number of bins")

    widths = F.softmax(unnormalized_widths, dim=-1)
    widths = min_bin_width + (1 - min_bin_width * num_bins) * widths
    cumwidths = torch.cumsum(widths, dim=-1)
    cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0)
    cumwidths = (right - left) * cumwidths + left
    cumwidths[..., 0] = left
    cumwidths[..., -1] = right
    widths = cumwidths[..., 1:] - cumwidths[..., :-1]

    derivatives = min_derivative + F.softplus(unnormalized_derivatives)

    heights = F.softmax(unnormalized_heights, dim=-1)
    heights = min_bin_height + (1 - min_bin_height * num_bins) * heights
    cumheights = torch.cumsum(heights, dim=-1)
    cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0)
    cumheights = (top - bottom) * cumheights + bottom
    cumheights[..., 0] = bottom
    cumheights[..., -1] = top
    heights = cumheights[..., 1:] - cumheights[..., :-1]

    if inverse:
        bin_idx = searchsorted(cumheights, inputs)[..., None]
    else:
        bin_idx = searchsorted(cumwidths, inputs)[..., None]

    input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0]
    input_bin_widths = widths.gather(-1, bin_idx)[..., 0]

    input_cumheights = cumheights.gather(-1, bin_idx)[..., 0]
    delta = heights / widths
    input_delta = delta.gather(-1, bin_idx)[..., 0]

    input_derivatives = derivatives.gather(-1, bin_idx)[..., 0]
    input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0]

    input_heights = heights.gather(-1, bin_idx)[..., 0]

    if inverse:
        a = (inputs - input_cumheights) * (
            input_derivatives + input_derivatives_plus_one - 2 * input_delta
        ) + input_heights * (input_delta - input_derivatives)
        b = input_heights * input_derivatives - (inputs - input_cumheights) * (
            input_derivatives + input_derivatives_plus_one - 2 * input_delta
        )
        c = -input_delta * (inputs - input_cumheights)

        discriminant = b.pow(2) - 4 * a * c
        assert (discriminant >= 0).all()

        root = (2 * c) / (-b - torch.sqrt(discriminant))
        outputs = root * input_bin_widths + input_cumwidths

        theta_one_minus_theta = root * (1 - root)
        denominator = input_delta + (
            (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
            * theta_one_minus_theta
        )
        derivative_numerator = input_delta.pow(2) * (
            input_derivatives_plus_one * root.pow(2)
            + 2 * input_delta * theta_one_minus_theta
            + input_derivatives * (1 - root).pow(2)
        )
        logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)

        return outputs, -logabsdet
    else:
        theta = (inputs - input_cumwidths) / input_bin_widths
        theta_one_minus_theta = theta * (1 - theta)

        numerator = input_heights * (
            input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta
        )
        denominator = input_delta + (
            (input_derivatives + input_derivatives_plus_one - 2 * input_delta)
            * theta_one_minus_theta
        )
        outputs = input_cumheights + numerator / denominator

        derivative_numerator = input_delta.pow(2) * (
            input_derivatives_plus_one * theta.pow(2)
            + 2 * input_delta * theta_one_minus_theta
            + input_derivatives * (1 - theta).pow(2)
        )
        logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator)

        return outputs, logabsdet
```
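The inverse branch of the spline solves a quadratic a·θ² + b·θ + c = 0 for θ, and it uses the form 2c / (-b - √(b² - 4ac)) instead of the textbook (-b + √(b² - 4ac)) / 2a, which avoids catastrophic cancellation when -b and the square root are close in magnitude. A minimal standalone sketch of that root formula in plain Python (the function name is illustrative, not from the source):

```python
import math

def stable_quadratic_root(a, b, c):
    # Same algebraic form as `root = (2 * c) / (-b - torch.sqrt(discriminant))`
    # above: numerically stable when -b and sqrt(discriminant) nearly cancel.
    discriminant = b * b - 4 * a * c
    assert discriminant >= 0
    return (2 * c) / (-b - math.sqrt(discriminant))

print(stable_quadratic_root(1.0, -3.0, 2.0))  # → 2.0 (a root of x^2 - 3x + 2)
```

Both forms are algebraically equivalent; the spline relies on the chosen root landing inside the current bin, i.e. θ in [0, 1].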
spaces/AI-Hobbyist/Hoyo-RVC/docs/README.ko.han.md
DELETED
@@ -1,100 +0,0 @@
<div align="center">

<h1>Retrieval-based-Voice-Conversion-WebUI</h1>
A simple, easy-to-use voice conversion framework based on VITS<br><br>

[](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI)

<img src="https://counter.seku.su/cmoe?name=rvc&theme=r34" /><br>

[](https://colab.research.google.com/github/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Retrieval_based_Voice_Conversion_WebUI.ipynb)
[](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/%E4%BD%BF%E7%94%A8%E9%9C%80%E9%81%B5%E5%AE%88%E7%9A%84%E5%8D%8F%E8%AE%AE-LICENSE.txt)
[](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/)

[](https://discord.gg/HcsmBBGyVk)

</div>

------
[**Changelog**](https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/blob/main/Changelog_CN.md)

[**English**](./README.en.md) | [**中文简体**](../README.md) | [**日本語**](./README.ja.md) | [**한국어**](./README.ko.md) ([**韓國語**](./README.ko.han.md))

> Check out the [demo video](https://www.bilibili.com/video/BV1pm4y1z7Gm/)!

> Real-time voice conversion with RVC: [w-okada/voice-changer](https://github.com/w-okada/voice-changer)

> The base model was trained on roughly 50 hours of the high-quality, open-source VCTK dataset, so there are no copyright concerns; feel free to use it.

> We will keep training base models on high-quality, copyright-free songs.

## Introduction
This repo has the following features:
+ Suppresses timbre leakage by replacing the input source's timbre features with training-set features via top-1 retrieval;
+ Fast training, even on relatively weak GPUs;
+ Good results from small amounts of data (at least 10 minutes of low-noise speech is recommended);
+ Timbre morphing via model fusion (ckpt processing tab -> ckpt merge);
+ An easy-to-use WebUI;
+ Fast separation of vocals and accompaniment with UVR5 models;

## Preparing the environment
We recommend installing dependencies through poetry.

The following commands must be run in a Python 3.8+ environment:
```bash
# Install the main PyTorch dependencies; skip if already installed
# Reference: https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio

# On Windows with an Nvidia Ampere GPU (RTX 30xx), you need to pin the CUDA version matching PyTorch, as noted in #21.
#pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117

# Install Poetry; skip if already installed
# Reference: https://python-poetry.org/docs/#installation
curl -sSL https://install.python-poetry.org | python3 -

# Install dependencies
poetry install
```
Installing the dependencies with pip works as well:

```bash
pip install -r requirements.txt
```

## Preparing other pretrained models
RVC needs additional pretrained models for inference and training.

They can be downloaded from the [Huggingface space](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/).

Here is the list of pretrained models and other files RVC requires:
```bash
hubert_base.pt

./pretrained

./uvr5_weights

# On Windows you may also need this file; skip if FFmpeg is already installed.
ffmpeg.exe
```
You can then launch the WebUI with:
```bash
python infer-web.py
```
On Windows, you can instead download and extract `RVC-beta.7z` to use RVC directly, or run `go-web.bat` to launch the WebUI.

## References
+ [ContentVec](https://github.com/auspicious3000/contentvec/)
+ [VITS](https://github.com/jaywalnut310/vits)
+ [HIFIGAN](https://github.com/jik876/hifi-gan)
+ [Gradio](https://github.com/gradio-app/gradio)
+ [FFmpeg](https://github.com/FFmpeg/FFmpeg)
+ [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui)
+ [audio-slicer](https://github.com/openvpi/audio-slicer)

## Thanks to all contributors for their efforts

<a href="https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/graphs/contributors" target="_blank">
<img src="https://contrib.rocks/image?repo=liujing04/Retrieval-based-Voice-Conversion-WebUI" />
</a>
spaces/AIGC-Audio/AudioGPT/audio_to_text/captioning/utils/build_vocab_spacy.py
DELETED
@@ -1,152 +0,0 @@
```python
import json
from tqdm import tqdm
import logging
import pickle
from collections import Counter
import re
import fire


class Vocabulary(object):
    """Simple vocabulary wrapper."""

    def __init__(self):
        self.word2idx = {}
        self.idx2word = {}
        self.idx = 0

    def add_word(self, word):
        if word not in self.word2idx:
            self.word2idx[word] = self.idx
            self.idx2word[self.idx] = word
            self.idx += 1

    def __call__(self, word):
        if word not in self.word2idx:
            return self.word2idx["<unk>"]
        return self.word2idx[word]

    def __len__(self):
        return len(self.word2idx)


def build_vocab(input_json: str,
                output_json: str,
                threshold: int,
                keep_punctuation: bool,
                host_address: str,
                character_level: bool = False,
                retokenize: bool = True,
                zh: bool = True):
    """Build vocabulary from csv file with a given threshold to drop all counts < threshold

    Args:
        input_json(string): Preprocessed json file. Structure like this:
            {
              'audios': [
                {
                  'audio_id': 'xxx',
                  'captions': [
                    {
                      'caption': 'xxx',
                      'cap_id': 'xxx'
                    }
                  ]
                },
                ...
              ]
            }
        threshold (int): Threshold to drop all words with counts < threshold
        keep_punctuation (bool): Includes or excludes punctuation.

    Returns:
        vocab (Vocab): Object with the processed vocabulary
    """
    data = json.load(open(input_json, "r"))["audios"]
    counter = Counter()
    if retokenize:
        pretokenized = False
    else:
        pretokenized = "tokens" in data[0]["captions"][0]

    if zh:
        from nltk.parse.corenlp import CoreNLPParser
        from zhon.hanzi import punctuation
        if not pretokenized:
            parser = CoreNLPParser(host_address)
        for audio_idx in tqdm(range(len(data)), leave=False, ascii=True):
            for cap_idx in range(len(data[audio_idx]["captions"])):
                if pretokenized:
                    tokens = data[audio_idx]["captions"][cap_idx]["tokens"].split()
                else:
                    caption = data[audio_idx]["captions"][cap_idx]["caption"]
                    # Remove all punctuation
                    if not keep_punctuation:
                        caption = re.sub("[{}]".format(punctuation), "", caption)
                    if character_level:
                        tokens = list(caption)
                    else:
                        tokens = list(parser.tokenize(caption))
                    data[audio_idx]["captions"][cap_idx]["tokens"] = " ".join(tokens)
                counter.update(tokens)
    else:
        if pretokenized:
            for audio_idx in tqdm(range(len(data)), leave=False, ascii=True):
                for cap_idx in range(len(data[audio_idx]["captions"])):
                    tokens = data[audio_idx]["captions"][cap_idx]["tokens"].split()
                    counter.update(tokens)
        else:
            import spacy
            tokenizer = spacy.load("en_core_web_sm", disable=["parser", "ner"])
            for audio_idx in tqdm(range(len(data)), leave=False, ascii=True):
                captions = data[audio_idx]["captions"]
                for cap_idx in range(len(captions)):
                    caption = captions[cap_idx]["caption"]
                    doc = tokenizer(caption)
                    tokens = " ".join([str(token).lower() for token in doc])
                    data[audio_idx]["captions"][cap_idx]["tokens"] = tokens
                    counter.update(tokens.split(" "))

    if not pretokenized:
        if output_json is None:
            json.dump({"audios": data}, open(input_json, "w"),
                      indent=4, ensure_ascii=not zh)
        else:
            json.dump({"audios": data}, open(output_json, "w"),
                      indent=4, ensure_ascii=not zh)

    words = [word for word, cnt in counter.items() if cnt >= threshold]

    # Create a vocab wrapper and add some special tokens.
    vocab = Vocabulary()
    vocab.add_word("<pad>")
    vocab.add_word("<start>")
    vocab.add_word("<end>")
    vocab.add_word("<unk>")

    # Add the words to the vocabulary.
    for word in words:
        vocab.add_word(word)
    return vocab


def process(input_json: str,
            output_file: str,
            output_json: str = None,
            threshold: int = 1,
            keep_punctuation: bool = False,
            character_level: bool = False,
            retokenize: bool = False,
            host_address: str = "http://localhost:9000",
            zh: bool = True):
    logfmt = "%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s"
    logging.basicConfig(level=logging.INFO, format=logfmt)
    logging.info("Build Vocab")
    vocabulary = build_vocab(
        input_json=input_json, output_json=output_json, threshold=threshold,
        keep_punctuation=keep_punctuation, host_address=host_address,
        character_level=character_level, retokenize=retokenize, zh=zh)
    pickle.dump(vocabulary, open(output_file, "wb"))
    logging.info("Total vocabulary size: {}".format(len(vocabulary)))
    logging.info("Saved vocab to '{}'".format(output_file))


if __name__ == '__main__':
    fire.Fire(process)
```
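The `Vocabulary` wrapper maps tokens to contiguous integer ids, with out-of-vocabulary words falling back to `<unk>`. A minimal standalone sketch of how `build_vocab` populates it (the class body is restated so the snippet runs on its own; the toy corpus and threshold value are illustrative):

```python
from collections import Counter

class Vocabulary:
    """Restated copy of the wrapper above, for a self-contained demo."""
    def __init__(self):
        self.word2idx, self.idx2word, self.idx = {}, {}, 0
    def add_word(self, word):
        if word not in self.word2idx:
            self.word2idx[word] = self.idx
            self.idx2word[self.idx] = word
            self.idx += 1
    def __call__(self, word):
        # OOV tokens map to <unk>, exactly as in the wrapper above.
        return self.word2idx.get(word, self.word2idx["<unk>"])
    def __len__(self):
        return len(self.word2idx)

counter = Counter("a dog barks a dog runs".split())
vocab = Vocabulary()
for tok in ("<pad>", "<start>", "<end>", "<unk>"):
    vocab.add_word(tok)           # special tokens get ids 0..3
for word, cnt in counter.items():
    if cnt >= 2:                  # count threshold, as in build_vocab
        vocab.add_word(word)

print(len(vocab))                 # → 6: four specials + "a" + "dog"
print(vocab("a"), vocab("zebra")) # → 4 3: known id, OOV falls back to <unk>
```

Words below the threshold ("barks", "runs") never enter the vocabulary, which is why `__call__` needs the `<unk>` fallback at lookup time.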
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/work_dirs_1-x/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192.py
DELETED
@@ -1,2861 +0,0 @@
```python
default_scope = 'mmpose'
default_hooks = dict(
    timer=dict(type='IterTimerHook'),
    logger=dict(type='LoggerHook', interval=50),
    param_scheduler=dict(type='ParamSchedulerHook'),
    checkpoint=dict(
        type='CheckpointHook', interval=10, save_best='PCK', rule='greater'),
    sampler_seed=dict(type='DistSamplerSeedHook'),
    visualization=dict(type='PoseVisualizationHook', enable=False))
custom_hooks = [dict(type='SyncBuffersHook')]
env_cfg = dict(
    cudnn_benchmark=False,
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    dist_cfg=dict(backend='nccl'))
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
    type='PoseLocalVisualizer',
    vis_backends=[dict(type='LocalVisBackend'),
                  dict(type='WandbVisBackend')],
    name='visualizer')
log_processor = dict(
    type='LogProcessor', window_size=50, by_epoch=True, num_digits=6)
log_level = 'INFO'
load_from = None
resume = False
backend_args = dict(backend='local')
train_cfg = dict(by_epoch=True, max_epochs=120, val_interval=10)
val_cfg = dict()
test_cfg = dict()
colors = dict(
    sss=[255, 128, 0], lss=[255, 0, 128], sso=[128, 0, 255],
    lso=[0, 128, 255], vest=[0, 128, 128], sling=[0, 0, 128],
    shorts=[128, 128, 128], trousers=[128, 0, 128], skirt=[64, 128, 128],
    ssd=[64, 64, 128], lsd=[128, 64, 0], vd=[128, 64, 255], sd=[128, 64, 0])
dataset_info = dict(
    dataset_name='deepfashion2',
    paper_info=dict(
        author='Yuying Ge and Ruimao Zhang and Lingyun Wu and Xiaogang Wang and Xiaoou Tang and Ping Luo',
        title='DeepFashion2: A Versatile Benchmark for Detection, Pose Estimation, Segmentation and Re-Identification of Clothing Images',
        container='Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)',
        year='2019',
        homepage='https://github.com/switchablenorms/DeepFashion2'),
    keypoint_info=dict({
        0: dict(name='sss_kpt1', id=0, color=[255, 128, 0], type='', swap=''),
        1: dict(name='sss_kpt2', id=1, color=[255, 128, 0], type='', swap='sss_kpt6'),
        2: dict(name='sss_kpt3', id=2, color=[255, 128, 0], type='', swap='sss_kpt5'),
        3: dict(name='sss_kpt4', id=3, color=[255, 128, 0], type='', swap=''),
        4: dict(name='sss_kpt5', id=4, color=[255, 128, 0], type='', swap='sss_kpt3'),
        5: dict(name='sss_kpt6', id=5, color=[255, 128, 0], type='', swap='sss_kpt2'),
        6: dict(name='sss_kpt7', id=6, color=[255, 128, 0], type='', swap='sss_kpt25'),
        7: dict(name='sss_kpt8', id=7, color=[255, 128, 0], type='', swap='sss_kpt24'),
        8: dict(name='sss_kpt9', id=8, color=[255, 128, 0], type='', swap='sss_kpt23'),
        9: dict(name='sss_kpt10', id=9, color=[255, 128, 0], type='', swap='sss_kpt22'),
        10: dict(name='sss_kpt11', id=10, color=[255, 128, 0], type='', swap='sss_kpt21'),
        11: dict(name='sss_kpt12', id=11, color=[255, 128, 0], type='', swap='sss_kpt20'),
        12: dict(name='sss_kpt13', id=12, color=[255, 128, 0], type='', swap='sss_kpt19'),
        13: dict(name='sss_kpt14', id=13, color=[255, 128, 0], type='', swap='sss_kpt18'),
        14: dict(name='sss_kpt15', id=14, color=[255, 128, 0], type='', swap='sss_kpt17'),
        15: dict(name='sss_kpt16', id=15, color=[255, 128, 0], type='', swap=''),
        16: dict(name='sss_kpt17', id=16, color=[255, 128, 0], type='', swap='sss_kpt15'),
        17: dict(name='sss_kpt18', id=17, color=[255, 128, 0], type='', swap='sss_kpt14'),
        18: dict(name='sss_kpt19', id=18, color=[255, 128, 0], type='', swap='sss_kpt13'),
        19: dict(name='sss_kpt20', id=19, color=[255, 128, 0], type='', swap='sss_kpt12'),
        20: dict(name='sss_kpt21', id=20, color=[255, 128, 0], type='', swap='sss_kpt11'),
        21: dict(name='sss_kpt22', id=21, color=[255, 128, 0], type='', swap='sss_kpt10'),
        22: dict(name='sss_kpt23', id=22, color=[255, 128, 0], type='', swap='sss_kpt9'),
        23: dict(name='sss_kpt24', id=23, color=[255, 128, 0], type='', swap='sss_kpt8'),
        24: dict(name='sss_kpt25', id=24, color=[255, 128, 0], type='', swap='sss_kpt7'),
        25: dict(name='lss_kpt1', id=25, color=[255, 0, 128], type='', swap=''),
        26: dict(name='lss_kpt2', id=26, color=[255, 0, 128], type='', swap='lss_kpt6'),
        27: dict(name='lss_kpt3', id=27, color=[255, 0, 128], type='', swap='lss_kpt5'),
        28: dict(name='lss_kpt4', id=28, color=[255, 0, 128], type='', swap=''),
        29: dict(name='lss_kpt5', id=29, color=[255, 0, 128], type='', swap='lss_kpt3'),
        30: dict(name='lss_kpt6', id=30, color=[255, 0, 128], type='', swap='lss_kpt2'),
        31: dict(name='lss_kpt7', id=31, color=[255, 0, 128], type='', swap='lss_kpt33'),
        32: dict(name='lss_kpt8', id=32, color=[255, 0, 128], type='', swap='lss_kpt32'),
        33: dict(name='lss_kpt9', id=33, color=[255, 0, 128], type='', swap='lss_kpt31'),
        34: dict(name='lss_kpt10', id=34, color=[255, 0, 128], type='', swap='lss_kpt30'),
        35: dict(name='lss_kpt11', id=35, color=[255, 0, 128], type='', swap='lss_kpt29'),
        36: dict(name='lss_kpt12', id=36, color=[255, 0, 128], type='', swap='lss_kpt28'),
        37: dict(name='lss_kpt13', id=37, color=[255, 0, 128], type='', swap='lss_kpt27'),
        38: dict(name='lss_kpt14', id=38, color=[255, 0, 128], type='', swap='lss_kpt26'),
        39: dict(name='lss_kpt15', id=39, color=[255, 0, 128], type='', swap='lss_kpt25'),
        40: dict(name='lss_kpt16', id=40, color=[255, 0, 128], type='', swap='lss_kpt24'),
        41: dict(name='lss_kpt17', id=41, color=[255, 0, 128], type='', swap='lss_kpt23'),
        42: dict(name='lss_kpt18', id=42, color=[255, 0, 128], type='', swap='lss_kpt22'),
        43: dict(name='lss_kpt19', id=43, color=[255, 0, 128], type='', swap='lss_kpt21'),
        44: dict(name='lss_kpt20', id=44, color=[255, 0, 128], type='', swap=''),
        45: dict(name='lss_kpt21', id=45, color=[255, 0, 128], type='', swap='lss_kpt19'),
        46: dict(name='lss_kpt22', id=46, color=[255, 0, 128], type='', swap='lss_kpt18'),
        47: dict(name='lss_kpt23', id=47, color=[255, 0, 128], type='', swap='lss_kpt17'),
        48: dict(name='lss_kpt24', id=48, color=[255, 0, 128], type='', swap='lss_kpt16'),
        49: dict(name='lss_kpt25', id=49, color=[255, 0, 128], type='', swap='lss_kpt15'),
        50: dict(name='lss_kpt26', id=50, color=[255, 0, 128], type='', swap='lss_kpt14'),
        51: dict(name='lss_kpt27', id=51, color=[255, 0, 128], type='', swap='lss_kpt13'),
        52: dict(name='lss_kpt28', id=52, color=[255, 0, 128], type='', swap='lss_kpt12'),
        53: dict(name='lss_kpt29', id=53, color=[255, 0, 128], type='', swap='lss_kpt11'),
        54: dict(name='lss_kpt30', id=54, color=[255, 0, 128], type='', swap='lss_kpt10'),
        55: dict(name='lss_kpt31', id=55, color=[255, 0, 128], type='', swap='lss_kpt9'),
        56: dict(name='lss_kpt32', id=56, color=[255, 0, 128], type='', swap='lss_kpt8'),
        57: dict(name='lss_kpt33', id=57, color=[255, 0, 128], type='', swap='lss_kpt7'),
        58: dict(name='sso_kpt1', id=58, color=[128, 0, 255], type='', swap=''),
        59: dict(name='sso_kpt2', id=59, color=[128, 0, 255], type='', swap='sso_kpt26'),
        60: dict(name='sso_kpt3', id=60, color=[128, 0, 255], type='', swap='sso_kpt5'),
        61: dict(name='sso_kpt4', id=61, color=[128, 0, 255], type='', swap='sso_kpt6'),
        62: dict(name='sso_kpt5', id=62, color=[128, 0, 255], type='', swap='sso_kpt3'),
        63: dict(name='sso_kpt6', id=63, color=[128, 0, 255], type='', swap='sso_kpt4'),
        64: dict(name='sso_kpt7', id=64, color=[128, 0, 255], type='', swap='sso_kpt25'),
        65: dict(name='sso_kpt8', id=65, color=[128, 0, 255], type='', swap='sso_kpt24'),
        66: dict(name='sso_kpt9', id=66, color=[128, 0, 255], type='', swap='sso_kpt23'),
        67: dict(name='sso_kpt10', id=67, color=[128, 0, 255], type='', swap='sso_kpt22'),
        68: dict(name='sso_kpt11', id=68, color=[128, 0, 255], type='', swap='sso_kpt21'),
        69: dict(name='sso_kpt12', id=69, color=[128, 0, 255], type='', swap='sso_kpt20'),
        70: dict(name='sso_kpt13', id=70, color=[128, 0, 255], type='', swap='sso_kpt19'),
        71: dict(name='sso_kpt14', id=71, color=[128, 0, 255], type='', swap='sso_kpt18'),
        72: dict(name='sso_kpt15', id=72, color=[128, 0, 255], type='', swap='sso_kpt17'),
        73: dict(name='sso_kpt16', id=73, color=[128, 0, 255], type='', swap='sso_kpt29'),
        74: dict(name='sso_kpt17', id=74, color=[128, 0, 255], type='', swap='sso_kpt15'),
        75: dict(name='sso_kpt18', id=75, color=[128, 0, 255], type='', swap='sso_kpt14'),
        76: dict(name='sso_kpt19', id=76, color=[128, 0, 255], type='', swap='sso_kpt13'),
        77: dict(name='sso_kpt20', id=77, color=[128, 0, 255], type='', swap='sso_kpt12'),
        78: dict(name='sso_kpt21', id=78, color=[128, 0, 255], type='', swap='sso_kpt11'),
        79: dict(name='sso_kpt22', id=79, color=[128, 0, 255], type='', swap='sso_kpt10'),
        80: dict(name='sso_kpt23', id=80, color=[128, 0, 255], type='', swap='sso_kpt9'),
        81: dict(name='sso_kpt24', id=81, color=[128, 0, 255], type='', swap='sso_kpt8'),
        82: dict(name='sso_kpt25', id=82, color=[128, 0, 255], type='', swap='sso_kpt7'),
        83: dict(name='sso_kpt26', id=83, color=[128, 0, 255], type='', swap='sso_kpt2'),
        84: dict(name='sso_kpt27', id=84, color=[128, 0, 255], type='', swap='sso_kpt30'),
        85: dict(name='sso_kpt28', id=85, color=[128, 0, 255], type='', swap='sso_kpt31'),
        86: dict(name='sso_kpt29', id=86, color=[128, 0, 255], type='', swap='sso_kpt16'),
        87: dict(name='sso_kpt30', id=87, color=[128, 0, 255], type='', swap='sso_kpt27'),
        88: dict(name='sso_kpt31', id=88, color=[128, 0, 255], type='', swap='sso_kpt28'),
        89: dict(name='lso_kpt1', id=89, color=[0, 128, 255], type='', swap=''),
        90: dict(name='lso_kpt2', id=90, color=[0, 128, 255], type='', swap='lso_kpt6'),
        91: dict(name='lso_kpt3', id=91, color=[0, 128, 255], type='', swap='lso_kpt5'),
        92: dict(name='lso_kpt4', id=92, color=[0, 128, 255], type='', swap='lso_kpt34'),
        93: dict(name='lso_kpt5', id=93, color=[0, 128, 255], type='', swap='lso_kpt3'),
        94: dict(name='lso_kpt6', id=94, color=[0, 128, 255], type='', swap='lso_kpt2'),
        95: dict(name='lso_kpt7', id=95, color=[0, 128, 255], type='', swap='lso_kpt33'),
        96: dict(name='lso_kpt8', id=96, color=[0, 128, 255], type='', swap='lso_kpt32'),
        97: dict(name='lso_kpt9', id=97, color=[0, 128, 255], type='', swap='lso_kpt31'),
        98: dict(name='lso_kpt10', id=98, color=[0, 128, 255], type='', swap='lso_kpt30'),
        99: dict(name='lso_kpt11', id=99, color=[0, 128, 255], type='', swap='lso_kpt29'),
        100: dict(name='lso_kpt12', id=100, color=[0, 128, 255], type='', swap='lso_kpt28'),
        101: dict(name='lso_kpt13', id=101, color=[0, 128, 255], type='', swap='lso_kpt27'),
        102: dict(name='lso_kpt14', id=102, color=[0, 128, 255], type='', swap='lso_kpt26'),
        103: dict(name='lso_kpt15', id=103, color=[0, 128, 255], type='', swap='lso_kpt25'),
        104: dict(name='lso_kpt16', id=104, color=[0, 128, 255], type='', swap='lso_kpt24'),
        105: dict(name='lso_kpt17', id=105, color=[0, 128, 255], type='', swap='lso_kpt23'),
        106: dict(name='lso_kpt18', id=106, color=[0, 128, 255], type='', swap='lso_kpt22'),
        107: dict(name='lso_kpt19', id=107, color=[0, 128, 255], type='', swap='lso_kpt21'),
        108: dict(name='lso_kpt20', id=108, color=[0, 128, 255], type='', swap='lso_kpt37'),
        109: dict(name='lso_kpt21', id=109, color=[0, 128, 255], type='', swap='lso_kpt19'),
        110: dict(name='lso_kpt22', id=110, color=[0, 128, 255], type='', swap='lso_kpt18'),
        111: dict(name='lso_kpt23', id=111, color=[0, 128, 255], type='', swap='lso_kpt17'),
        112: dict(name='lso_kpt24', id=112, color=[0, 128, 255], type='', swap='lso_kpt16'),
        113: dict(name='lso_kpt25', id=113, color=[0, 128, 255], type='', swap='lso_kpt15'),
        114: dict(name='lso_kpt26', id=114, color=[0, 128, 255], type='', swap='lso_kpt14'),
        115: dict(name='lso_kpt27', id=115, color=[0, 128, 255], type='', swap='lso_kpt13'),
        116: dict(name='lso_kpt28', id=116, color=[0, 128, 255], type='', swap='lso_kpt12'),
        117: dict(name='lso_kpt29', id=117, color=[0, 128, 255], type='', swap='lso_kpt11'),
        118: dict(name='lso_kpt30', id=118, color=[0, 128, 255], type='', swap='lso_kpt10'),
        119: dict(name='lso_kpt31', id=119, color=[0, 128, 255], type='', swap='lso_kpt9'),
        120: dict(name='lso_kpt32',
```
|
859 |
-
id=120,
|
860 |
-
color=[0, 128, 255],
|
861 |
-
type='',
|
862 |
-
swap='lso_kpt8'),
|
863 |
-
121:
|
864 |
-
dict(
|
865 |
-
name='lso_kpt33',
|
866 |
-
id=121,
|
867 |
-
color=[0, 128, 255],
|
868 |
-
type='',
|
869 |
-
swap='lso_kpt7'),
|
870 |
-
122:
|
871 |
-
dict(
|
872 |
-
name='lso_kpt34',
|
873 |
-
id=122,
|
874 |
-
color=[0, 128, 255],
|
875 |
-
type='',
|
876 |
-
swap='lso_kpt4'),
|
877 |
-
123:
|
878 |
-
dict(
|
879 |
-
name='lso_kpt35',
|
880 |
-
id=123,
|
881 |
-
color=[0, 128, 255],
|
882 |
-
type='',
|
883 |
-
swap='lso_kpt38'),
|
884 |
-
124:
|
885 |
-
dict(
|
886 |
-
name='lso_kpt36',
|
887 |
-
id=124,
|
888 |
-
color=[0, 128, 255],
|
889 |
-
type='',
|
890 |
-
swap='lso_kpt39'),
|
891 |
-
125:
|
892 |
-
dict(
|
893 |
-
name='lso_kpt37',
|
894 |
-
id=125,
|
895 |
-
color=[0, 128, 255],
|
896 |
-
type='',
|
897 |
-
swap='lso_kpt20'),
|
898 |
-
126:
|
899 |
-
dict(
|
900 |
-
name='lso_kpt38',
|
901 |
-
id=126,
|
902 |
-
color=[0, 128, 255],
|
903 |
-
type='',
|
904 |
-
swap='lso_kpt35'),
|
905 |
-
127:
|
906 |
-
dict(
|
907 |
-
name='lso_kpt39',
|
908 |
-
id=127,
|
909 |
-
color=[0, 128, 255],
|
910 |
-
type='',
|
911 |
-
swap='lso_kpt36'),
|
912 |
-
128:
|
913 |
-
dict(name='vest_kpt1', id=128, color=[0, 128, 128], type='', swap=''),
|
914 |
-
129:
|
915 |
-
dict(
|
916 |
-
name='vest_kpt2',
|
917 |
-
id=129,
|
918 |
-
color=[0, 128, 128],
|
919 |
-
type='',
|
920 |
-
swap='vest_kpt6'),
|
921 |
-
130:
|
922 |
-
dict(
|
923 |
-
name='vest_kpt3',
|
924 |
-
id=130,
|
925 |
-
color=[0, 128, 128],
|
926 |
-
type='',
|
927 |
-
swap='vest_kpt5'),
|
928 |
-
131:
|
929 |
-
dict(name='vest_kpt4', id=131, color=[0, 128, 128], type='', swap=''),
|
930 |
-
132:
|
931 |
-
dict(
|
932 |
-
name='vest_kpt5',
|
933 |
-
id=132,
|
934 |
-
color=[0, 128, 128],
|
935 |
-
type='',
|
936 |
-
swap='vest_kpt3'),
|
937 |
-
133:
|
938 |
-
dict(
|
939 |
-
name='vest_kpt6',
|
940 |
-
id=133,
|
941 |
-
color=[0, 128, 128],
|
942 |
-
type='',
|
943 |
-
swap='vest_kpt2'),
|
944 |
-
134:
|
945 |
-
dict(
|
946 |
-
name='vest_kpt7',
|
947 |
-
id=134,
|
948 |
-
color=[0, 128, 128],
|
949 |
-
type='',
|
950 |
-
swap='vest_kpt15'),
|
951 |
-
135:
|
952 |
-
dict(
|
953 |
-
name='vest_kpt8',
|
954 |
-
id=135,
|
955 |
-
color=[0, 128, 128],
|
956 |
-
type='',
|
957 |
-
swap='vest_kpt14'),
|
958 |
-
136:
|
959 |
-
dict(
|
960 |
-
name='vest_kpt9',
|
961 |
-
id=136,
|
962 |
-
color=[0, 128, 128],
|
963 |
-
type='',
|
964 |
-
swap='vest_kpt13'),
|
965 |
-
137:
|
966 |
-
dict(
|
967 |
-
name='vest_kpt10',
|
968 |
-
id=137,
|
969 |
-
color=[0, 128, 128],
|
970 |
-
type='',
|
971 |
-
swap='vest_kpt12'),
|
972 |
-
138:
|
973 |
-
dict(name='vest_kpt11', id=138, color=[0, 128, 128], type='', swap=''),
|
974 |
-
139:
|
975 |
-
dict(
|
976 |
-
name='vest_kpt12',
|
977 |
-
id=139,
|
978 |
-
color=[0, 128, 128],
|
979 |
-
type='',
|
980 |
-
swap='vest_kpt10'),
|
981 |
-
140:
|
982 |
-
dict(name='vest_kpt13', id=140, color=[0, 128, 128], type='', swap=''),
|
983 |
-
141:
|
984 |
-
dict(
|
985 |
-
name='vest_kpt14',
|
986 |
-
id=141,
|
987 |
-
color=[0, 128, 128],
|
988 |
-
type='',
|
989 |
-
swap='vest_kpt8'),
|
990 |
-
142:
|
991 |
-
dict(
|
992 |
-
name='vest_kpt15',
|
993 |
-
id=142,
|
994 |
-
color=[0, 128, 128],
|
995 |
-
type='',
|
996 |
-
swap='vest_kpt7'),
|
997 |
-
143:
|
998 |
-
dict(name='sling_kpt1', id=143, color=[0, 0, 128], type='', swap=''),
|
999 |
-
144:
|
1000 |
-
dict(
|
1001 |
-
name='sling_kpt2',
|
1002 |
-
id=144,
|
1003 |
-
color=[0, 0, 128],
|
1004 |
-
type='',
|
1005 |
-
swap='sling_kpt6'),
|
1006 |
-
145:
|
1007 |
-
dict(
|
1008 |
-
name='sling_kpt3',
|
1009 |
-
id=145,
|
1010 |
-
color=[0, 0, 128],
|
1011 |
-
type='',
|
1012 |
-
swap='sling_kpt5'),
|
1013 |
-
146:
|
1014 |
-
dict(name='sling_kpt4', id=146, color=[0, 0, 128], type='', swap=''),
|
1015 |
-
147:
|
1016 |
-
dict(
|
1017 |
-
name='sling_kpt5',
|
1018 |
-
id=147,
|
1019 |
-
color=[0, 0, 128],
|
1020 |
-
type='',
|
1021 |
-
swap='sling_kpt3'),
|
1022 |
-
148:
|
1023 |
-
dict(
|
1024 |
-
name='sling_kpt6',
|
1025 |
-
id=148,
|
1026 |
-
color=[0, 0, 128],
|
1027 |
-
type='',
|
1028 |
-
swap='sling_kpt2'),
|
1029 |
-
149:
|
1030 |
-
dict(
|
1031 |
-
name='sling_kpt7',
|
1032 |
-
id=149,
|
1033 |
-
color=[0, 0, 128],
|
1034 |
-
type='',
|
1035 |
-
swap='sling_kpt15'),
|
1036 |
-
150:
|
1037 |
-
dict(
|
1038 |
-
name='sling_kpt8',
|
1039 |
-
id=150,
|
1040 |
-
color=[0, 0, 128],
|
1041 |
-
type='',
|
1042 |
-
swap='sling_kpt14'),
|
1043 |
-
151:
|
1044 |
-
dict(
|
1045 |
-
name='sling_kpt9',
|
1046 |
-
id=151,
|
1047 |
-
color=[0, 0, 128],
|
1048 |
-
type='',
|
1049 |
-
swap='sling_kpt13'),
|
1050 |
-
152:
|
1051 |
-
dict(
|
1052 |
-
name='sling_kpt10',
|
1053 |
-
id=152,
|
1054 |
-
color=[0, 0, 128],
|
1055 |
-
type='',
|
1056 |
-
swap='sling_kpt12'),
|
1057 |
-
153:
|
1058 |
-
dict(name='sling_kpt11', id=153, color=[0, 0, 128], type='', swap=''),
|
1059 |
-
154:
|
1060 |
-
dict(
|
1061 |
-
name='sling_kpt12',
|
1062 |
-
id=154,
|
1063 |
-
color=[0, 0, 128],
|
1064 |
-
type='',
|
1065 |
-
swap='sling_kpt10'),
|
1066 |
-
155:
|
1067 |
-
dict(
|
1068 |
-
name='sling_kpt13',
|
1069 |
-
id=155,
|
1070 |
-
color=[0, 0, 128],
|
1071 |
-
type='',
|
1072 |
-
swap='sling_kpt9'),
|
1073 |
-
156:
|
1074 |
-
dict(
|
1075 |
-
name='sling_kpt14',
|
1076 |
-
id=156,
|
1077 |
-
color=[0, 0, 128],
|
1078 |
-
type='',
|
1079 |
-
swap='sling_kpt8'),
|
1080 |
-
157:
|
1081 |
-
dict(
|
1082 |
-
name='sling_kpt15',
|
1083 |
-
id=157,
|
1084 |
-
color=[0, 0, 128],
|
1085 |
-
type='',
|
1086 |
-
swap='sling_kpt7'),
|
1087 |
-
158:
|
1088 |
-
dict(
|
1089 |
-
name='shorts_kpt1',
|
1090 |
-
id=158,
|
1091 |
-
color=[128, 128, 128],
|
1092 |
-
type='',
|
1093 |
-
swap='shorts_kpt3'),
|
1094 |
-
159:
|
1095 |
-
dict(
|
1096 |
-
name='shorts_kpt2',
|
1097 |
-
id=159,
|
1098 |
-
color=[128, 128, 128],
|
1099 |
-
type='',
|
1100 |
-
swap=''),
|
1101 |
-
160:
|
1102 |
-
dict(
|
1103 |
-
name='shorts_kpt3',
|
1104 |
-
id=160,
|
1105 |
-
color=[128, 128, 128],
|
1106 |
-
type='',
|
1107 |
-
swap='shorts_kpt1'),
|
1108 |
-
161:
|
1109 |
-
dict(
|
1110 |
-
name='shorts_kpt4',
|
1111 |
-
id=161,
|
1112 |
-
color=[128, 128, 128],
|
1113 |
-
type='',
|
1114 |
-
swap='shorts_kpt10'),
|
1115 |
-
162:
|
1116 |
-
dict(
|
1117 |
-
name='shorts_kpt5',
|
1118 |
-
id=162,
|
1119 |
-
color=[128, 128, 128],
|
1120 |
-
type='',
|
1121 |
-
swap='shorts_kpt9'),
|
1122 |
-
163:
|
1123 |
-
dict(
|
1124 |
-
name='shorts_kpt6',
|
1125 |
-
id=163,
|
1126 |
-
color=[128, 128, 128],
|
1127 |
-
type='',
|
1128 |
-
swap='shorts_kpt8'),
|
1129 |
-
164:
|
1130 |
-
dict(
|
1131 |
-
name='shorts_kpt7',
|
1132 |
-
id=164,
|
1133 |
-
color=[128, 128, 128],
|
1134 |
-
type='',
|
1135 |
-
swap=''),
|
1136 |
-
165:
|
1137 |
-
dict(
|
1138 |
-
name='shorts_kpt8',
|
1139 |
-
id=165,
|
1140 |
-
color=[128, 128, 128],
|
1141 |
-
type='',
|
1142 |
-
swap='shorts_kpt6'),
|
1143 |
-
166:
|
1144 |
-
dict(
|
1145 |
-
name='shorts_kpt9',
|
1146 |
-
id=166,
|
1147 |
-
color=[128, 128, 128],
|
1148 |
-
type='',
|
1149 |
-
swap='shorts_kpt5'),
|
1150 |
-
167:
|
1151 |
-
dict(
|
1152 |
-
name='shorts_kpt10',
|
1153 |
-
id=167,
|
1154 |
-
color=[128, 128, 128],
|
1155 |
-
type='',
|
1156 |
-
swap='shorts_kpt4'),
|
1157 |
-
168:
|
1158 |
-
dict(
|
1159 |
-
name='trousers_kpt1',
|
1160 |
-
id=168,
|
1161 |
-
color=[128, 0, 128],
|
1162 |
-
type='',
|
1163 |
-
swap='trousers_kpt3'),
|
1164 |
-
169:
|
1165 |
-
dict(
|
1166 |
-
name='trousers_kpt2',
|
1167 |
-
id=169,
|
1168 |
-
color=[128, 0, 128],
|
1169 |
-
type='',
|
1170 |
-
swap=''),
|
1171 |
-
170:
|
1172 |
-
dict(
|
1173 |
-
name='trousers_kpt3',
|
1174 |
-
id=170,
|
1175 |
-
color=[128, 0, 128],
|
1176 |
-
type='',
|
1177 |
-
swap='trousers_kpt1'),
|
1178 |
-
171:
|
1179 |
-
dict(
|
1180 |
-
name='trousers_kpt4',
|
1181 |
-
id=171,
|
1182 |
-
color=[128, 0, 128],
|
1183 |
-
type='',
|
1184 |
-
swap='trousers_kpt14'),
|
1185 |
-
172:
|
1186 |
-
dict(
|
1187 |
-
name='trousers_kpt5',
|
1188 |
-
id=172,
|
1189 |
-
color=[128, 0, 128],
|
1190 |
-
type='',
|
1191 |
-
swap='trousers_kpt13'),
|
1192 |
-
173:
|
1193 |
-
dict(
|
1194 |
-
name='trousers_kpt6',
|
1195 |
-
id=173,
|
1196 |
-
color=[128, 0, 128],
|
1197 |
-
type='',
|
1198 |
-
swap='trousers_kpt12'),
|
1199 |
-
174:
|
1200 |
-
dict(
|
1201 |
-
name='trousers_kpt7',
|
1202 |
-
id=174,
|
1203 |
-
color=[128, 0, 128],
|
1204 |
-
type='',
|
1205 |
-
swap='trousers_kpt11'),
|
1206 |
-
175:
|
1207 |
-
dict(
|
1208 |
-
name='trousers_kpt8',
|
1209 |
-
id=175,
|
1210 |
-
color=[128, 0, 128],
|
1211 |
-
type='',
|
1212 |
-
swap='trousers_kpt10'),
|
1213 |
-
176:
|
1214 |
-
dict(
|
1215 |
-
name='trousers_kpt9',
|
1216 |
-
id=176,
|
1217 |
-
color=[128, 0, 128],
|
1218 |
-
type='',
|
1219 |
-
swap=''),
|
1220 |
-
177:
|
1221 |
-
dict(
|
1222 |
-
name='trousers_kpt10',
|
1223 |
-
id=177,
|
1224 |
-
color=[128, 0, 128],
|
1225 |
-
type='',
|
1226 |
-
swap='trousers_kpt8'),
|
1227 |
-
178:
|
1228 |
-
dict(
|
1229 |
-
name='trousers_kpt11',
|
1230 |
-
id=178,
|
1231 |
-
color=[128, 0, 128],
|
1232 |
-
type='',
|
1233 |
-
swap='trousers_kpt7'),
|
1234 |
-
179:
|
1235 |
-
dict(
|
1236 |
-
name='trousers_kpt12',
|
1237 |
-
id=179,
|
1238 |
-
color=[128, 0, 128],
|
1239 |
-
type='',
|
1240 |
-
swap='trousers_kpt6'),
|
1241 |
-
180:
|
1242 |
-
dict(
|
1243 |
-
name='trousers_kpt13',
|
1244 |
-
id=180,
|
1245 |
-
color=[128, 0, 128],
|
1246 |
-
type='',
|
1247 |
-
swap='trousers_kpt5'),
|
1248 |
-
181:
|
1249 |
-
dict(
|
1250 |
-
name='trousers_kpt14',
|
1251 |
-
id=181,
|
1252 |
-
color=[128, 0, 128],
|
1253 |
-
type='',
|
1254 |
-
swap='trousers_kpt4'),
|
1255 |
-
182:
|
1256 |
-
dict(
|
1257 |
-
name='skirt_kpt1',
|
1258 |
-
id=182,
|
1259 |
-
color=[64, 128, 128],
|
1260 |
-
type='',
|
1261 |
-
swap='skirt_kpt3'),
|
1262 |
-
183:
|
1263 |
-
dict(
|
1264 |
-
name='skirt_kpt2', id=183, color=[64, 128, 128], type='', swap=''),
|
1265 |
-
184:
|
1266 |
-
dict(
|
1267 |
-
name='skirt_kpt3',
|
1268 |
-
id=184,
|
1269 |
-
color=[64, 128, 128],
|
1270 |
-
type='',
|
1271 |
-
swap='skirt_kpt1'),
|
1272 |
-
185:
|
1273 |
-
dict(
|
1274 |
-
name='skirt_kpt4',
|
1275 |
-
id=185,
|
1276 |
-
color=[64, 128, 128],
|
1277 |
-
type='',
|
1278 |
-
swap='skirt_kpt8'),
|
1279 |
-
186:
|
1280 |
-
dict(
|
1281 |
-
name='skirt_kpt5',
|
1282 |
-
id=186,
|
1283 |
-
color=[64, 128, 128],
|
1284 |
-
type='',
|
1285 |
-
swap='skirt_kpt7'),
|
1286 |
-
187:
|
1287 |
-
dict(
|
1288 |
-
name='skirt_kpt6', id=187, color=[64, 128, 128], type='', swap=''),
|
1289 |
-
188:
|
1290 |
-
dict(
|
1291 |
-
name='skirt_kpt7',
|
1292 |
-
id=188,
|
1293 |
-
color=[64, 128, 128],
|
1294 |
-
type='',
|
1295 |
-
swap='skirt_kpt5'),
|
1296 |
-
189:
|
1297 |
-
dict(
|
1298 |
-
name='skirt_kpt8',
|
1299 |
-
id=189,
|
1300 |
-
color=[64, 128, 128],
|
1301 |
-
type='',
|
1302 |
-
swap='skirt_kpt4'),
|
1303 |
-
190:
|
1304 |
-
dict(name='ssd_kpt1', id=190, color=[64, 64, 128], type='', swap=''),
|
1305 |
-
191:
|
1306 |
-
dict(
|
1307 |
-
name='ssd_kpt2',
|
1308 |
-
id=191,
|
1309 |
-
color=[64, 64, 128],
|
1310 |
-
type='',
|
1311 |
-
swap='ssd_kpt6'),
|
1312 |
-
192:
|
1313 |
-
dict(
|
1314 |
-
name='ssd_kpt3',
|
1315 |
-
id=192,
|
1316 |
-
color=[64, 64, 128],
|
1317 |
-
type='',
|
1318 |
-
swap='ssd_kpt5'),
|
1319 |
-
193:
|
1320 |
-
dict(name='ssd_kpt4', id=193, color=[64, 64, 128], type='', swap=''),
|
1321 |
-
194:
|
1322 |
-
dict(
|
1323 |
-
name='ssd_kpt5',
|
1324 |
-
id=194,
|
1325 |
-
color=[64, 64, 128],
|
1326 |
-
type='',
|
1327 |
-
swap='ssd_kpt3'),
|
1328 |
-
195:
|
1329 |
-
dict(
|
1330 |
-
name='ssd_kpt6',
|
1331 |
-
id=195,
|
1332 |
-
color=[64, 64, 128],
|
1333 |
-
type='',
|
1334 |
-
swap='ssd_kpt2'),
|
1335 |
-
196:
|
1336 |
-
dict(
|
1337 |
-
name='ssd_kpt7',
|
1338 |
-
id=196,
|
1339 |
-
color=[64, 64, 128],
|
1340 |
-
type='',
|
1341 |
-
swap='ssd_kpt29'),
|
1342 |
-
197:
|
1343 |
-
dict(
|
1344 |
-
name='ssd_kpt8',
|
1345 |
-
id=197,
|
1346 |
-
color=[64, 64, 128],
|
1347 |
-
type='',
|
1348 |
-
swap='ssd_kpt28'),
|
1349 |
-
198:
|
1350 |
-
dict(
|
1351 |
-
name='ssd_kpt9',
|
1352 |
-
id=198,
|
1353 |
-
color=[64, 64, 128],
|
1354 |
-
type='',
|
1355 |
-
swap='ssd_kpt27'),
|
1356 |
-
199:
|
1357 |
-
dict(
|
1358 |
-
name='ssd_kpt10',
|
1359 |
-
id=199,
|
1360 |
-
color=[64, 64, 128],
|
1361 |
-
type='',
|
1362 |
-
swap='ssd_kpt26'),
|
1363 |
-
200:
|
1364 |
-
dict(
|
1365 |
-
name='ssd_kpt11',
|
1366 |
-
id=200,
|
1367 |
-
color=[64, 64, 128],
|
1368 |
-
type='',
|
1369 |
-
swap='ssd_kpt25'),
|
1370 |
-
201:
|
1371 |
-
dict(
|
1372 |
-
name='ssd_kpt12',
|
1373 |
-
id=201,
|
1374 |
-
color=[64, 64, 128],
|
1375 |
-
type='',
|
1376 |
-
swap='ssd_kpt24'),
|
1377 |
-
202:
|
1378 |
-
dict(
|
1379 |
-
name='ssd_kpt13',
|
1380 |
-
id=202,
|
1381 |
-
color=[64, 64, 128],
|
1382 |
-
type='',
|
1383 |
-
swap='ssd_kpt23'),
|
1384 |
-
203:
|
1385 |
-
dict(
|
1386 |
-
name='ssd_kpt14',
|
1387 |
-
id=203,
|
1388 |
-
color=[64, 64, 128],
|
1389 |
-
type='',
|
1390 |
-
swap='ssd_kpt22'),
|
1391 |
-
204:
|
1392 |
-
dict(
|
1393 |
-
name='ssd_kpt15',
|
1394 |
-
id=204,
|
1395 |
-
color=[64, 64, 128],
|
1396 |
-
type='',
|
1397 |
-
swap='ssd_kpt21'),
|
1398 |
-
205:
|
1399 |
-
dict(
|
1400 |
-
name='ssd_kpt16',
|
1401 |
-
id=205,
|
1402 |
-
color=[64, 64, 128],
|
1403 |
-
type='',
|
1404 |
-
swap='ssd_kpt20'),
|
1405 |
-
206:
|
1406 |
-
dict(
|
1407 |
-
name='ssd_kpt17',
|
1408 |
-
id=206,
|
1409 |
-
color=[64, 64, 128],
|
1410 |
-
type='',
|
1411 |
-
swap='ssd_kpt19'),
|
1412 |
-
207:
|
1413 |
-
dict(name='ssd_kpt18', id=207, color=[64, 64, 128], type='', swap=''),
|
1414 |
-
208:
|
1415 |
-
dict(
|
1416 |
-
name='ssd_kpt19',
|
1417 |
-
id=208,
|
1418 |
-
color=[64, 64, 128],
|
1419 |
-
type='',
|
1420 |
-
swap='ssd_kpt17'),
|
1421 |
-
209:
|
1422 |
-
dict(
|
1423 |
-
name='ssd_kpt20',
|
1424 |
-
id=209,
|
1425 |
-
color=[64, 64, 128],
|
1426 |
-
type='',
|
1427 |
-
swap='ssd_kpt16'),
|
1428 |
-
210:
|
1429 |
-
dict(
|
1430 |
-
name='ssd_kpt21',
|
1431 |
-
id=210,
|
1432 |
-
color=[64, 64, 128],
|
1433 |
-
type='',
|
1434 |
-
swap='ssd_kpt15'),
|
1435 |
-
211:
|
1436 |
-
dict(
|
1437 |
-
name='ssd_kpt22',
|
1438 |
-
id=211,
|
1439 |
-
color=[64, 64, 128],
|
1440 |
-
type='',
|
1441 |
-
swap='ssd_kpt14'),
|
1442 |
-
212:
|
1443 |
-
dict(
|
1444 |
-
name='ssd_kpt23',
|
1445 |
-
id=212,
|
1446 |
-
color=[64, 64, 128],
|
1447 |
-
type='',
|
1448 |
-
swap='ssd_kpt13'),
|
1449 |
-
213:
|
1450 |
-
dict(
|
1451 |
-
name='ssd_kpt24',
|
1452 |
-
id=213,
|
1453 |
-
color=[64, 64, 128],
|
1454 |
-
type='',
|
1455 |
-
swap='ssd_kpt12'),
|
1456 |
-
214:
|
1457 |
-
dict(
|
1458 |
-
name='ssd_kpt25',
|
1459 |
-
id=214,
|
1460 |
-
color=[64, 64, 128],
|
1461 |
-
type='',
|
1462 |
-
swap='ssd_kpt11'),
|
1463 |
-
215:
|
1464 |
-
dict(
|
1465 |
-
name='ssd_kpt26',
|
1466 |
-
id=215,
|
1467 |
-
color=[64, 64, 128],
|
1468 |
-
type='',
|
1469 |
-
swap='ssd_kpt10'),
|
1470 |
-
216:
|
1471 |
-
dict(
|
1472 |
-
name='ssd_kpt27',
|
1473 |
-
id=216,
|
1474 |
-
color=[64, 64, 128],
|
1475 |
-
type='',
|
1476 |
-
swap='ssd_kpt9'),
|
1477 |
-
217:
|
1478 |
-
dict(
|
1479 |
-
name='ssd_kpt28',
|
1480 |
-
id=217,
|
1481 |
-
color=[64, 64, 128],
|
1482 |
-
type='',
|
1483 |
-
swap='ssd_kpt8'),
|
1484 |
-
218:
|
1485 |
-
dict(
|
1486 |
-
name='ssd_kpt29',
|
1487 |
-
id=218,
|
1488 |
-
color=[64, 64, 128],
|
1489 |
-
type='',
|
1490 |
-
swap='ssd_kpt7'),
|
1491 |
-
219:
|
1492 |
-
dict(name='lsd_kpt1', id=219, color=[128, 64, 0], type='', swap=''),
|
1493 |
-
220:
|
1494 |
-
dict(
|
1495 |
-
name='lsd_kpt2',
|
1496 |
-
id=220,
|
1497 |
-
color=[128, 64, 0],
|
1498 |
-
type='',
|
1499 |
-
swap='lsd_kpt6'),
|
1500 |
-
221:
|
1501 |
-
dict(
|
1502 |
-
name='lsd_kpt3',
|
1503 |
-
id=221,
|
1504 |
-
color=[128, 64, 0],
|
1505 |
-
type='',
|
1506 |
-
swap='lsd_kpt5'),
|
1507 |
-
222:
|
1508 |
-
dict(name='lsd_kpt4', id=222, color=[128, 64, 0], type='', swap=''),
|
1509 |
-
223:
|
1510 |
-
dict(
|
1511 |
-
name='lsd_kpt5',
|
1512 |
-
id=223,
|
1513 |
-
color=[128, 64, 0],
|
1514 |
-
type='',
|
1515 |
-
swap='lsd_kpt3'),
|
1516 |
-
224:
|
1517 |
-
dict(
|
1518 |
-
name='lsd_kpt6',
|
1519 |
-
id=224,
|
1520 |
-
color=[128, 64, 0],
|
1521 |
-
type='',
|
1522 |
-
swap='lsd_kpt2'),
|
1523 |
-
225:
|
1524 |
-
dict(
|
1525 |
-
name='lsd_kpt7',
|
1526 |
-
id=225,
|
1527 |
-
color=[128, 64, 0],
|
1528 |
-
type='',
|
1529 |
-
swap='lsd_kpt37'),
|
1530 |
-
226:
|
1531 |
-
dict(
|
1532 |
-
name='lsd_kpt8',
|
1533 |
-
id=226,
|
1534 |
-
color=[128, 64, 0],
|
1535 |
-
type='',
|
1536 |
-
swap='lsd_kpt36'),
|
1537 |
-
227:
|
1538 |
-
dict(
|
1539 |
-
name='lsd_kpt9',
|
1540 |
-
id=227,
|
1541 |
-
color=[128, 64, 0],
|
1542 |
-
type='',
|
1543 |
-
swap='lsd_kpt35'),
|
1544 |
-
228:
|
1545 |
-
dict(
|
1546 |
-
name='lsd_kpt10',
|
1547 |
-
id=228,
|
1548 |
-
color=[128, 64, 0],
|
1549 |
-
type='',
|
1550 |
-
swap='lsd_kpt34'),
|
1551 |
-
229:
|
1552 |
-
dict(
|
1553 |
-
name='lsd_kpt11',
|
1554 |
-
id=229,
|
1555 |
-
color=[128, 64, 0],
|
1556 |
-
type='',
|
1557 |
-
swap='lsd_kpt33'),
|
1558 |
-
230:
|
1559 |
-
dict(
|
1560 |
-
name='lsd_kpt12',
|
1561 |
-
id=230,
|
1562 |
-
color=[128, 64, 0],
|
1563 |
-
type='',
|
1564 |
-
swap='lsd_kpt32'),
|
1565 |
-
231:
|
1566 |
-
dict(
|
1567 |
-
name='lsd_kpt13',
|
1568 |
-
id=231,
|
1569 |
-
color=[128, 64, 0],
|
1570 |
-
type='',
|
1571 |
-
swap='lsd_kpt31'),
|
1572 |
-
232:
|
1573 |
-
dict(
|
1574 |
-
name='lsd_kpt14',
|
1575 |
-
id=232,
|
1576 |
-
color=[128, 64, 0],
|
1577 |
-
type='',
|
1578 |
-
swap='lsd_kpt30'),
|
1579 |
-
233:
|
1580 |
-
dict(
|
1581 |
-
name='lsd_kpt15',
|
1582 |
-
id=233,
|
1583 |
-
color=[128, 64, 0],
|
1584 |
-
type='',
|
1585 |
-
swap='lsd_kpt29'),
|
1586 |
-
234:
|
1587 |
-
dict(
|
1588 |
-
name='lsd_kpt16',
|
1589 |
-
id=234,
|
1590 |
-
color=[128, 64, 0],
|
1591 |
-
type='',
|
1592 |
-
swap='lsd_kpt28'),
|
1593 |
-
235:
|
1594 |
-
dict(
|
1595 |
-
name='lsd_kpt17',
|
1596 |
-
id=235,
|
1597 |
-
color=[128, 64, 0],
|
1598 |
-
type='',
|
1599 |
-
swap='lsd_kpt27'),
|
1600 |
-
236:
|
1601 |
-
dict(
|
1602 |
-
name='lsd_kpt18',
|
1603 |
-
id=236,
|
1604 |
-
color=[128, 64, 0],
|
1605 |
-
type='',
|
1606 |
-
swap='lsd_kpt26'),
|
1607 |
-
237:
|
1608 |
-
dict(
|
1609 |
-
name='lsd_kpt19',
|
1610 |
-
id=237,
|
1611 |
-
color=[128, 64, 0],
|
1612 |
-
type='',
|
1613 |
-
swap='lsd_kpt25'),
|
1614 |
-
238:
|
1615 |
-
dict(
|
1616 |
-
name='lsd_kpt20',
|
1617 |
-
id=238,
|
1618 |
-
color=[128, 64, 0],
|
1619 |
-
type='',
|
1620 |
-
swap='lsd_kpt24'),
|
1621 |
-
239:
|
1622 |
-
dict(
|
1623 |
-
name='lsd_kpt21',
|
1624 |
-
id=239,
|
1625 |
-
color=[128, 64, 0],
|
1626 |
-
type='',
|
1627 |
-
swap='lsd_kpt23'),
|
1628 |
-
240:
|
1629 |
-
dict(name='lsd_kpt22', id=240, color=[128, 64, 0], type='', swap=''),
|
1630 |
-
241:
|
1631 |
-
dict(
|
1632 |
-
name='lsd_kpt23',
|
1633 |
-
id=241,
|
1634 |
-
color=[128, 64, 0],
|
1635 |
-
type='',
|
1636 |
-
swap='lsd_kpt21'),
|
1637 |
-
242:
|
1638 |
-
dict(
|
1639 |
-
name='lsd_kpt24',
|
1640 |
-
id=242,
|
1641 |
-
color=[128, 64, 0],
|
1642 |
-
type='',
|
1643 |
-
swap='lsd_kpt20'),
|
1644 |
-
243:
|
1645 |
-
dict(
|
1646 |
-
name='lsd_kpt25',
|
1647 |
-
id=243,
|
1648 |
-
color=[128, 64, 0],
|
1649 |
-
type='',
|
1650 |
-
swap='lsd_kpt19'),
|
1651 |
-
244:
|
1652 |
-
dict(
|
1653 |
-
name='lsd_kpt26',
|
1654 |
-
id=244,
|
1655 |
-
color=[128, 64, 0],
|
1656 |
-
type='',
|
1657 |
-
swap='lsd_kpt18'),
|
1658 |
-
245:
|
1659 |
-
dict(
|
1660 |
-
name='lsd_kpt27',
|
1661 |
-
id=245,
|
1662 |
-
color=[128, 64, 0],
|
1663 |
-
type='',
|
1664 |
-
swap='lsd_kpt17'),
|
1665 |
-
246:
|
1666 |
-
dict(
|
1667 |
-
name='lsd_kpt28',
|
1668 |
-
id=246,
|
1669 |
-
color=[128, 64, 0],
|
1670 |
-
type='',
|
1671 |
-
swap='lsd_kpt16'),
|
1672 |
-
247:
|
1673 |
-
dict(
|
1674 |
-
name='lsd_kpt29',
|
1675 |
-
id=247,
|
1676 |
-
color=[128, 64, 0],
|
1677 |
-
type='',
|
1678 |
-
swap='lsd_kpt15'),
|
1679 |
-
248:
|
1680 |
-
dict(
|
1681 |
-
name='lsd_kpt30',
|
1682 |
-
id=248,
|
1683 |
-
color=[128, 64, 0],
|
1684 |
-
type='',
|
1685 |
-
swap='lsd_kpt14'),
|
1686 |
-
249:
|
1687 |
-
dict(
|
1688 |
-
name='lsd_kpt31',
|
1689 |
-
id=249,
|
1690 |
-
color=[128, 64, 0],
|
1691 |
-
type='',
|
1692 |
-
swap='lsd_kpt13'),
|
1693 |
-
250:
|
1694 |
-
dict(
|
1695 |
-
name='lsd_kpt32',
|
1696 |
-
id=250,
|
1697 |
-
color=[128, 64, 0],
|
1698 |
-
type='',
|
1699 |
-
swap='lsd_kpt12'),
|
1700 |
-
251:
|
1701 |
-
dict(
|
1702 |
-
name='lsd_kpt33',
|
1703 |
-
id=251,
|
1704 |
-
color=[128, 64, 0],
|
1705 |
-
type='',
|
1706 |
-
swap='lsd_kpt11'),
|
1707 |
-
252:
|
1708 |
-
dict(
|
1709 |
-
name='lsd_kpt34',
|
1710 |
-
id=252,
|
1711 |
-
color=[128, 64, 0],
|
1712 |
-
type='',
|
1713 |
-
swap='lsd_kpt10'),
|
1714 |
-
253:
|
1715 |
-
dict(
|
1716 |
-
name='lsd_kpt35',
|
1717 |
-
id=253,
|
1718 |
-
color=[128, 64, 0],
|
1719 |
-
type='',
|
1720 |
-
swap='lsd_kpt9'),
|
1721 |
-
254:
|
1722 |
-
dict(
|
1723 |
-
name='lsd_kpt36',
|
1724 |
-
id=254,
|
1725 |
-
color=[128, 64, 0],
|
1726 |
-
type='',
|
1727 |
-
swap='lsd_kpt8'),
|
1728 |
-
255:
|
1729 |
-
dict(
|
1730 |
-
name='lsd_kpt37',
|
1731 |
-
id=255,
|
1732 |
-
color=[128, 64, 0],
|
1733 |
-
type='',
|
1734 |
-
swap='lsd_kpt7'),
|
1735 |
-
256:
|
1736 |
-
dict(name='vd_kpt1', id=256, color=[128, 64, 255], type='', swap=''),
|
1737 |
-
257:
|
1738 |
-
dict(
|
1739 |
-
name='vd_kpt2',
|
1740 |
-
id=257,
|
1741 |
-
color=[128, 64, 255],
|
1742 |
-
type='',
|
1743 |
-
swap='vd_kpt6'),
|
1744 |
-
258:
|
1745 |
-
dict(
|
1746 |
-
name='vd_kpt3',
|
1747 |
-
id=258,
|
1748 |
-
color=[128, 64, 255],
|
1749 |
-
type='',
|
1750 |
-
swap='vd_kpt5'),
|
1751 |
-
259:
|
1752 |
-
dict(name='vd_kpt4', id=259, color=[128, 64, 255], type='', swap=''),
|
1753 |
-
260:
|
1754 |
-
dict(
|
1755 |
-
name='vd_kpt5',
|
1756 |
-
id=260,
|
1757 |
-
color=[128, 64, 255],
|
1758 |
-
type='',
|
1759 |
-
swap='vd_kpt3'),
|
1760 |
-
261:
|
1761 |
-
dict(
|
1762 |
-
name='vd_kpt6',
|
1763 |
-
id=261,
|
1764 |
-
color=[128, 64, 255],
|
1765 |
-
type='',
|
1766 |
-
swap='vd_kpt2'),
|
1767 |
-
262:
|
1768 |
-
dict(
|
1769 |
-
name='vd_kpt7',
|
1770 |
-
id=262,
|
1771 |
-
color=[128, 64, 255],
|
1772 |
-
type='',
|
1773 |
-
swap='vd_kpt19'),
|
1774 |
-
263:
|
1775 |
-
dict(
|
1776 |
-
name='vd_kpt8',
|
1777 |
-
id=263,
|
1778 |
-
color=[128, 64, 255],
|
1779 |
-
type='',
|
1780 |
-
swap='vd_kpt18'),
|
1781 |
-
264:
|
1782 |
-
dict(
|
1783 |
-
name='vd_kpt9',
|
1784 |
-
id=264,
|
1785 |
-
color=[128, 64, 255],
|
1786 |
-
type='',
|
1787 |
-
swap='vd_kpt17'),
|
1788 |
-
265:
|
1789 |
-
dict(
|
1790 |
-
name='vd_kpt10',
|
1791 |
-
id=265,
|
1792 |
-
color=[128, 64, 255],
|
1793 |
-
type='',
|
1794 |
-
swap='vd_kpt16'),
|
1795 |
-
266:
|
1796 |
-
dict(
|
1797 |
-
name='vd_kpt11',
|
1798 |
-
id=266,
|
1799 |
-
color=[128, 64, 255],
|
1800 |
-
type='',
|
1801 |
-
swap='vd_kpt15'),
|
1802 |
-
267:
|
1803 |
-
dict(
|
1804 |
-
name='vd_kpt12',
|
1805 |
-
id=267,
|
1806 |
-
color=[128, 64, 255],
|
1807 |
-
type='',
|
1808 |
-
swap='vd_kpt14'),
|
1809 |
-
268:
|
1810 |
-
dict(name='vd_kpt13', id=268, color=[128, 64, 255], type='', swap=''),
|
1811 |
-
269:
|
1812 |
-
dict(
|
1813 |
-
name='vd_kpt14',
|
1814 |
-
id=269,
|
1815 |
-
color=[128, 64, 255],
|
1816 |
-
type='',
|
1817 |
-
swap='vd_kpt12'),
|
1818 |
-
270:
|
1819 |
-
dict(
|
1820 |
-
name='vd_kpt15',
|
1821 |
-
id=270,
|
1822 |
-
color=[128, 64, 255],
|
1823 |
-
type='',
|
1824 |
-
swap='vd_kpt11'),
|
1825 |
-
271:
|
1826 |
-
dict(
|
1827 |
-
name='vd_kpt16',
|
1828 |
-
id=271,
|
1829 |
-
color=[128, 64, 255],
|
1830 |
-
type='',
|
1831 |
-
swap='vd_kpt10'),
|
1832 |
-
272:
|
1833 |
-
dict(
|
1834 |
-
name='vd_kpt17',
|
1835 |
-
id=272,
|
1836 |
-
color=[128, 64, 255],
|
1837 |
-
type='',
|
1838 |
-
swap='vd_kpt9'),
|
1839 |
-
273:
|
1840 |
-
dict(
|
1841 |
-
name='vd_kpt18',
|
1842 |
-
id=273,
|
1843 |
-
color=[128, 64, 255],
|
1844 |
-
type='',
|
1845 |
-
swap='vd_kpt8'),
|
1846 |
-
274:
|
1847 |
-
dict(
|
1848 |
-
name='vd_kpt19',
|
1849 |
-
id=274,
|
1850 |
-
color=[128, 64, 255],
|
1851 |
-
type='',
|
1852 |
-
swap='vd_kpt7'),
|
1853 |
-
275:
|
1854 |
-
dict(name='sd_kpt1', id=275, color=[128, 64, 0], type='', swap=''),
|
1855 |
-
276:
|
1856 |
-
dict(
|
1857 |
-
name='sd_kpt2',
|
1858 |
-
id=276,
|
1859 |
-
color=[128, 64, 0],
|
1860 |
-
type='',
|
1861 |
-
swap='sd_kpt6'),
|
1862 |
-
277:
|
1863 |
-
dict(
|
1864 |
-
name='sd_kpt3',
|
1865 |
-
id=277,
|
1866 |
-
color=[128, 64, 0],
|
1867 |
-
type='',
|
1868 |
-
swap='sd_kpt5'),
|
1869 |
-
278:
|
1870 |
-
dict(name='sd_kpt4', id=278, color=[128, 64, 0], type='', swap=''),
|
1871 |
-
279:
|
1872 |
-
dict(
|
1873 |
-
name='sd_kpt5',
|
1874 |
-
id=279,
|
1875 |
-
color=[128, 64, 0],
|
1876 |
-
type='',
|
1877 |
-
swap='sd_kpt3'),
|
1878 |
-
280:
|
1879 |
-
dict(
|
1880 |
-
name='sd_kpt6',
|
1881 |
-
id=280,
|
1882 |
-
color=[128, 64, 0],
|
1883 |
-
type='',
|
1884 |
-
swap='sd_kpt2'),
|
1885 |
-
281:
|
1886 |
-
dict(
|
1887 |
-
name='sd_kpt7',
|
1888 |
-
id=281,
|
1889 |
-
color=[128, 64, 0],
|
1890 |
-
type='',
|
1891 |
-
swap='sd_kpt19'),
|
1892 |
-
282:
|
1893 |
-
dict(
|
1894 |
-
name='sd_kpt8',
|
1895 |
-
id=282,
|
1896 |
-
color=[128, 64, 0],
|
1897 |
-
type='',
|
1898 |
-
swap='sd_kpt18'),
|
1899 |
-
283:
|
1900 |
-
dict(
|
1901 |
-
name='sd_kpt9',
|
1902 |
-
id=283,
|
1903 |
-
color=[128, 64, 0],
|
1904 |
-
type='',
|
1905 |
-
swap='sd_kpt17'),
|
1906 |
-
284:
|
1907 |
-
dict(
|
1908 |
-
name='sd_kpt10',
|
1909 |
-
id=284,
|
1910 |
-
color=[128, 64, 0],
|
1911 |
-
type='',
|
1912 |
-
swap='sd_kpt16'),
|
1913 |
-
285:
|
1914 |
-
dict(
|
1915 |
-
name='sd_kpt11',
|
1916 |
-
id=285,
|
1917 |
-
color=[128, 64, 0],
|
1918 |
-
type='',
|
1919 |
-
swap='sd_kpt15'),
|
1920 |
-
286:
|
1921 |
-
dict(
|
1922 |
-
name='sd_kpt12',
|
1923 |
-
id=286,
|
1924 |
-
color=[128, 64, 0],
|
1925 |
-
type='',
|
1926 |
-
swap='sd_kpt14'),
|
1927 |
-
287:
|
1928 |
-
dict(name='sd_kpt13', id=287, color=[128, 64, 0], type='', swap=''),
|
1929 |
-
288:
|
1930 |
-
dict(
|
1931 |
-
name='sd_kpt14',
|
1932 |
-
id=288,
|
1933 |
-
color=[128, 64, 0],
|
1934 |
-
type='',
|
1935 |
-
swap='sd_kpt12'),
|
1936 |
-
289:
|
1937 |
-
dict(
|
1938 |
-
name='sd_kpt15',
|
1939 |
-
id=289,
|
1940 |
-
color=[128, 64, 0],
|
1941 |
-
type='',
|
1942 |
-
swap='sd_kpt11'),
|
1943 |
-
290:
|
1944 |
-
dict(
|
1945 |
-
name='sd_kpt16',
|
1946 |
-
id=290,
|
1947 |
-
color=[128, 64, 0],
|
1948 |
-
type='',
|
1949 |
-
swap='sd_kpt10'),
|
1950 |
-
291:
|
1951 |
-
dict(
|
1952 |
-
name='sd_kpt17',
|
1953 |
-
id=291,
|
1954 |
-
color=[128, 64, 0],
|
1955 |
-
type='',
|
1956 |
-
swap='sd_kpt9'),
|
1957 |
-
292:
|
1958 |
-
dict(
|
1959 |
-
name='sd_kpt18',
|
1960 |
-
id=292,
|
1961 |
-
color=[128, 64, 0],
|
1962 |
-
type='',
|
1963 |
-
swap='sd_kpt8'),
|
1964 |
-
293:
|
1965 |
-
dict(
|
1966 |
-
name='sd_kpt19',
|
1967 |
-
id=293,
|
1968 |
-
color=[128, 64, 0],
|
1969 |
-
type='',
|
1970 |
-
swap='sd_kpt7')
|
1971 |
-
}),
|
1972 |
-
skeleton_info=dict({
|
1973 |
-
0:
|
1974 |
-
dict(link=('sss_kpt1', 'sss_kpt2'), id=0, color=[255, 128, 0]),
|
1975 |
-
1:
|
1976 |
-
dict(link=('sss_kpt2', 'sss_kpt7'), id=1, color=[255, 128, 0]),
|
1977 |
-
2:
|
1978 |
-
dict(link=('sss_kpt7', 'sss_kpt8'), id=2, color=[255, 128, 0]),
|
1979 |
-
3:
|
1980 |
-
dict(link=('sss_kpt8', 'sss_kpt9'), id=3, color=[255, 128, 0]),
|
1981 |
-
4:
|
1982 |
-
dict(link=('sss_kpt9', 'sss_kpt10'), id=4, color=[255, 128, 0]),
|
1983 |
-
5:
|
1984 |
-
dict(link=('sss_kpt10', 'sss_kpt11'), id=5, color=[255, 128, 0]),
|
1985 |
-
6:
|
1986 |
-
dict(link=('sss_kpt11', 'sss_kpt12'), id=6, color=[255, 128, 0]),
|
1987 |
-
7:
|
1988 |
-
dict(link=('sss_kpt12', 'sss_kpt13'), id=7, color=[255, 128, 0]),
|
1989 |
-
8:
|
1990 |
-
dict(link=('sss_kpt13', 'sss_kpt14'), id=8, color=[255, 128, 0]),
|
1991 |
-
9:
|
1992 |
-
dict(link=('sss_kpt14', 'sss_kpt15'), id=9, color=[255, 128, 0]),
|
1993 |
-
10:
|
1994 |
-
dict(link=('sss_kpt15', 'sss_kpt16'), id=10, color=[255, 128, 0]),
|
1995 |
-
11:
|
1996 |
-
dict(link=('sss_kpt16', 'sss_kpt17'), id=11, color=[255, 128, 0]),
|
1997 |
-
12:
|
1998 |
-
        12:
        dict(link=('sss_kpt17', 'sss_kpt18'), id=12, color=[255, 128, 0]),
        13:
        dict(link=('sss_kpt18', 'sss_kpt19'), id=13, color=[255, 128, 0]),
        14:
        dict(link=('sss_kpt19', 'sss_kpt20'), id=14, color=[255, 128, 0]),
        15:
        dict(link=('sss_kpt20', 'sss_kpt21'), id=15, color=[255, 128, 0]),
        16:
        dict(link=('sss_kpt21', 'sss_kpt22'), id=16, color=[255, 128, 0]),
        17:
        dict(link=('sss_kpt22', 'sss_kpt23'), id=17, color=[255, 128, 0]),
        18:
        dict(link=('sss_kpt23', 'sss_kpt24'), id=18, color=[255, 128, 0]),
        19:
        dict(link=('sss_kpt24', 'sss_kpt25'), id=19, color=[255, 128, 0]),
        20:
        dict(link=('sss_kpt25', 'sss_kpt6'), id=20, color=[255, 128, 0]),
        21:
        dict(link=('sss_kpt6', 'sss_kpt1'), id=21, color=[255, 128, 0]),
        22:
        dict(link=('sss_kpt2', 'sss_kpt3'), id=22, color=[255, 128, 0]),
        23:
        dict(link=('sss_kpt3', 'sss_kpt4'), id=23, color=[255, 128, 0]),
        24:
        dict(link=('sss_kpt4', 'sss_kpt5'), id=24, color=[255, 128, 0]),
        25:
        dict(link=('sss_kpt5', 'sss_kpt6'), id=25, color=[255, 128, 0]),
        26:
        dict(link=('lss_kpt1', 'lss_kpt2'), id=26, color=[255, 0, 128]),
        27:
        dict(link=('lss_kpt2', 'lss_kpt7'), id=27, color=[255, 0, 128]),
        28:
        dict(link=('lss_kpt7', 'lss_kpt8'), id=28, color=[255, 0, 128]),
        29:
        dict(link=('lss_kpt8', 'lss_kpt9'), id=29, color=[255, 0, 128]),
        30:
        dict(link=('lss_kpt9', 'lss_kpt10'), id=30, color=[255, 0, 128]),
        31:
        dict(link=('lss_kpt10', 'lss_kpt11'), id=31, color=[255, 0, 128]),
        32:
        dict(link=('lss_kpt11', 'lss_kpt12'), id=32, color=[255, 0, 128]),
        33:
        dict(link=('lss_kpt12', 'lss_kpt13'), id=33, color=[255, 0, 128]),
        34:
        dict(link=('lss_kpt13', 'lss_kpt14'), id=34, color=[255, 0, 128]),
        35:
        dict(link=('lss_kpt14', 'lss_kpt15'), id=35, color=[255, 0, 128]),
        36:
        dict(link=('lss_kpt15', 'lss_kpt16'), id=36, color=[255, 0, 128]),
        37:
        dict(link=('lss_kpt16', 'lss_kpt17'), id=37, color=[255, 0, 128]),
        38:
        dict(link=('lss_kpt17', 'lss_kpt18'), id=38, color=[255, 0, 128]),
        39:
        dict(link=('lss_kpt18', 'lss_kpt19'), id=39, color=[255, 0, 128]),
        40:
        dict(link=('lss_kpt19', 'lss_kpt20'), id=40, color=[255, 0, 128]),
        41:
        dict(link=('lss_kpt20', 'lss_kpt21'), id=41, color=[255, 0, 128]),
        42:
        dict(link=('lss_kpt21', 'lss_kpt22'), id=42, color=[255, 0, 128]),
        43:
        dict(link=('lss_kpt22', 'lss_kpt23'), id=43, color=[255, 0, 128]),
        44:
        dict(link=('lss_kpt23', 'lss_kpt24'), id=44, color=[255, 0, 128]),
        45:
        dict(link=('lss_kpt24', 'lss_kpt25'), id=45, color=[255, 0, 128]),
        46:
        dict(link=('lss_kpt25', 'lss_kpt26'), id=46, color=[255, 0, 128]),
        47:
        dict(link=('lss_kpt26', 'lss_kpt27'), id=47, color=[255, 0, 128]),
        48:
        dict(link=('lss_kpt27', 'lss_kpt28'), id=48, color=[255, 0, 128]),
        49:
        dict(link=('lss_kpt28', 'lss_kpt29'), id=49, color=[255, 0, 128]),
        50:
        dict(link=('lss_kpt29', 'lss_kpt30'), id=50, color=[255, 0, 128]),
        51:
        dict(link=('lss_kpt30', 'lss_kpt31'), id=51, color=[255, 0, 128]),
        52:
        dict(link=('lss_kpt31', 'lss_kpt32'), id=52, color=[255, 0, 128]),
        53:
        dict(link=('lss_kpt32', 'lss_kpt33'), id=53, color=[255, 0, 128]),
        54:
        dict(link=('lss_kpt33', 'lss_kpt6'), id=54, color=[255, 0, 128]),
        55:
        dict(link=('lss_kpt6', 'lss_kpt5'), id=55, color=[255, 0, 128]),
        56:
        dict(link=('lss_kpt5', 'lss_kpt4'), id=56, color=[255, 0, 128]),
        57:
        dict(link=('lss_kpt4', 'lss_kpt3'), id=57, color=[255, 0, 128]),
        58:
        dict(link=('lss_kpt3', 'lss_kpt2'), id=58, color=[255, 0, 128]),
        59:
        dict(link=('lss_kpt6', 'lss_kpt1'), id=59, color=[255, 0, 128]),
        60:
        dict(link=('sso_kpt1', 'sso_kpt4'), id=60, color=[128, 0, 255]),
        61:
        dict(link=('sso_kpt4', 'sso_kpt7'), id=61, color=[128, 0, 255]),
        62:
        dict(link=('sso_kpt7', 'sso_kpt8'), id=62, color=[128, 0, 255]),
        63:
        dict(link=('sso_kpt8', 'sso_kpt9'), id=63, color=[128, 0, 255]),
        64:
        dict(link=('sso_kpt9', 'sso_kpt10'), id=64, color=[128, 0, 255]),
        65:
        dict(link=('sso_kpt10', 'sso_kpt11'), id=65, color=[128, 0, 255]),
        66:
        dict(link=('sso_kpt11', 'sso_kpt12'), id=66, color=[128, 0, 255]),
        67:
        dict(link=('sso_kpt12', 'sso_kpt13'), id=67, color=[128, 0, 255]),
        68:
        dict(link=('sso_kpt13', 'sso_kpt14'), id=68, color=[128, 0, 255]),
        69:
        dict(link=('sso_kpt14', 'sso_kpt15'), id=69, color=[128, 0, 255]),
        70:
        dict(link=('sso_kpt15', 'sso_kpt16'), id=70, color=[128, 0, 255]),
        71:
        dict(link=('sso_kpt16', 'sso_kpt31'), id=71, color=[128, 0, 255]),
        72:
        dict(link=('sso_kpt31', 'sso_kpt30'), id=72, color=[128, 0, 255]),
        73:
        dict(link=('sso_kpt30', 'sso_kpt2'), id=73, color=[128, 0, 255]),
        74:
        dict(link=('sso_kpt2', 'sso_kpt3'), id=74, color=[128, 0, 255]),
        75:
        dict(link=('sso_kpt3', 'sso_kpt4'), id=75, color=[128, 0, 255]),
        76:
        dict(link=('sso_kpt1', 'sso_kpt6'), id=76, color=[128, 0, 255]),
        77:
        dict(link=('sso_kpt6', 'sso_kpt25'), id=77, color=[128, 0, 255]),
        78:
        dict(link=('sso_kpt25', 'sso_kpt24'), id=78, color=[128, 0, 255]),
        79:
        dict(link=('sso_kpt24', 'sso_kpt23'), id=79, color=[128, 0, 255]),
        80:
        dict(link=('sso_kpt23', 'sso_kpt22'), id=80, color=[128, 0, 255]),
        81:
        dict(link=('sso_kpt22', 'sso_kpt21'), id=81, color=[128, 0, 255]),
        82:
        dict(link=('sso_kpt21', 'sso_kpt20'), id=82, color=[128, 0, 255]),
        83:
        dict(link=('sso_kpt20', 'sso_kpt19'), id=83, color=[128, 0, 255]),
        84:
        dict(link=('sso_kpt19', 'sso_kpt18'), id=84, color=[128, 0, 255]),
        85:
        dict(link=('sso_kpt18', 'sso_kpt17'), id=85, color=[128, 0, 255]),
        86:
        dict(link=('sso_kpt17', 'sso_kpt29'), id=86, color=[128, 0, 255]),
        87:
        dict(link=('sso_kpt29', 'sso_kpt28'), id=87, color=[128, 0, 255]),
        88:
        dict(link=('sso_kpt28', 'sso_kpt27'), id=88, color=[128, 0, 255]),
        89:
        dict(link=('sso_kpt27', 'sso_kpt26'), id=89, color=[128, 0, 255]),
        90:
        dict(link=('sso_kpt26', 'sso_kpt5'), id=90, color=[128, 0, 255]),
        91:
        dict(link=('sso_kpt5', 'sso_kpt6'), id=91, color=[128, 0, 255]),
        92:
        dict(link=('lso_kpt1', 'lso_kpt2'), id=92, color=[0, 128, 255]),
        93:
        dict(link=('lso_kpt2', 'lso_kpt7'), id=93, color=[0, 128, 255]),
        94:
        dict(link=('lso_kpt7', 'lso_kpt8'), id=94, color=[0, 128, 255]),
        95:
        dict(link=('lso_kpt8', 'lso_kpt9'), id=95, color=[0, 128, 255]),
        96:
        dict(link=('lso_kpt9', 'lso_kpt10'), id=96, color=[0, 128, 255]),
        97:
        dict(link=('lso_kpt10', 'lso_kpt11'), id=97, color=[0, 128, 255]),
        98:
        dict(link=('lso_kpt11', 'lso_kpt12'), id=98, color=[0, 128, 255]),
        99:
        dict(link=('lso_kpt12', 'lso_kpt13'), id=99, color=[0, 128, 255]),
        100:
        dict(link=('lso_kpt13', 'lso_kpt14'), id=100, color=[0, 128, 255]),
        101:
        dict(link=('lso_kpt14', 'lso_kpt15'), id=101, color=[0, 128, 255]),
        102:
        dict(link=('lso_kpt15', 'lso_kpt16'), id=102, color=[0, 128, 255]),
        103:
        dict(link=('lso_kpt16', 'lso_kpt17'), id=103, color=[0, 128, 255]),
        104:
        dict(link=('lso_kpt17', 'lso_kpt18'), id=104, color=[0, 128, 255]),
        105:
        dict(link=('lso_kpt18', 'lso_kpt19'), id=105, color=[0, 128, 255]),
        106:
        dict(link=('lso_kpt19', 'lso_kpt20'), id=106, color=[0, 128, 255]),
        107:
        dict(link=('lso_kpt20', 'lso_kpt39'), id=107, color=[0, 128, 255]),
        108:
        dict(link=('lso_kpt39', 'lso_kpt38'), id=108, color=[0, 128, 255]),
        109:
        dict(link=('lso_kpt38', 'lso_kpt4'), id=109, color=[0, 128, 255]),
        110:
        dict(link=('lso_kpt4', 'lso_kpt3'), id=110, color=[0, 128, 255]),
        111:
        dict(link=('lso_kpt3', 'lso_kpt2'), id=111, color=[0, 128, 255]),
        112:
        dict(link=('lso_kpt1', 'lso_kpt6'), id=112, color=[0, 128, 255]),
        113:
        dict(link=('lso_kpt6', 'lso_kpt33'), id=113, color=[0, 128, 255]),
        114:
        dict(link=('lso_kpt33', 'lso_kpt32'), id=114, color=[0, 128, 255]),
        115:
        dict(link=('lso_kpt32', 'lso_kpt31'), id=115, color=[0, 128, 255]),
        116:
        dict(link=('lso_kpt31', 'lso_kpt30'), id=116, color=[0, 128, 255]),
        117:
        dict(link=('lso_kpt30', 'lso_kpt29'), id=117, color=[0, 128, 255]),
        118:
        dict(link=('lso_kpt29', 'lso_kpt28'), id=118, color=[0, 128, 255]),
        119:
        dict(link=('lso_kpt28', 'lso_kpt27'), id=119, color=[0, 128, 255]),
        120:
        dict(link=('lso_kpt27', 'lso_kpt26'), id=120, color=[0, 128, 255]),
        121:
        dict(link=('lso_kpt26', 'lso_kpt25'), id=121, color=[0, 128, 255]),
        122:
        dict(link=('lso_kpt25', 'lso_kpt24'), id=122, color=[0, 128, 255]),
        123:
        dict(link=('lso_kpt24', 'lso_kpt23'), id=123, color=[0, 128, 255]),
        124:
        dict(link=('lso_kpt23', 'lso_kpt22'), id=124, color=[0, 128, 255]),
        125:
        dict(link=('lso_kpt22', 'lso_kpt21'), id=125, color=[0, 128, 255]),
        126:
        dict(link=('lso_kpt21', 'lso_kpt37'), id=126, color=[0, 128, 255]),
        127:
        dict(link=('lso_kpt37', 'lso_kpt36'), id=127, color=[0, 128, 255]),
        128:
        dict(link=('lso_kpt36', 'lso_kpt35'), id=128, color=[0, 128, 255]),
        129:
        dict(link=('lso_kpt35', 'lso_kpt34'), id=129, color=[0, 128, 255]),
        130:
        dict(link=('lso_kpt34', 'lso_kpt5'), id=130, color=[0, 128, 255]),
        131:
        dict(link=('lso_kpt5', 'lso_kpt6'), id=131, color=[0, 128, 255]),
        132:
        dict(link=('vest_kpt1', 'vest_kpt2'), id=132, color=[0, 128, 128]),
        133:
        dict(link=('vest_kpt2', 'vest_kpt7'), id=133, color=[0, 128, 128]),
        134:
        dict(link=('vest_kpt7', 'vest_kpt8'), id=134, color=[0, 128, 128]),
        135:
        dict(link=('vest_kpt8', 'vest_kpt9'), id=135, color=[0, 128, 128]),
        136:
        dict(link=('vest_kpt9', 'vest_kpt10'), id=136, color=[0, 128, 128]),
        137:
        dict(link=('vest_kpt10', 'vest_kpt11'), id=137, color=[0, 128, 128]),
        138:
        dict(link=('vest_kpt11', 'vest_kpt12'), id=138, color=[0, 128, 128]),
        139:
        dict(link=('vest_kpt12', 'vest_kpt13'), id=139, color=[0, 128, 128]),
        140:
        dict(link=('vest_kpt13', 'vest_kpt14'), id=140, color=[0, 128, 128]),
        141:
        dict(link=('vest_kpt14', 'vest_kpt15'), id=141, color=[0, 128, 128]),
        142:
        dict(link=('vest_kpt15', 'vest_kpt6'), id=142, color=[0, 128, 128]),
        143:
        dict(link=('vest_kpt6', 'vest_kpt1'), id=143, color=[0, 128, 128]),
        144:
        dict(link=('vest_kpt2', 'vest_kpt3'), id=144, color=[0, 128, 128]),
        145:
        dict(link=('vest_kpt3', 'vest_kpt4'), id=145, color=[0, 128, 128]),
        146:
        dict(link=('vest_kpt4', 'vest_kpt5'), id=146, color=[0, 128, 128]),
        147:
        dict(link=('vest_kpt5', 'vest_kpt6'), id=147, color=[0, 128, 128]),
        148:
        dict(link=('sling_kpt1', 'sling_kpt2'), id=148, color=[0, 0, 128]),
        149:
        dict(link=('sling_kpt2', 'sling_kpt8'), id=149, color=[0, 0, 128]),
        150:
        dict(link=('sling_kpt8', 'sling_kpt9'), id=150, color=[0, 0, 128]),
        151:
        dict(link=('sling_kpt9', 'sling_kpt10'), id=151, color=[0, 0, 128]),
        152:
        dict(link=('sling_kpt10', 'sling_kpt11'), id=152, color=[0, 0, 128]),
        153:
        dict(link=('sling_kpt11', 'sling_kpt12'), id=153, color=[0, 0, 128]),
        154:
        dict(link=('sling_kpt12', 'sling_kpt13'), id=154, color=[0, 0, 128]),
        155:
        dict(link=('sling_kpt13', 'sling_kpt14'), id=155, color=[0, 0, 128]),
        156:
        dict(link=('sling_kpt14', 'sling_kpt6'), id=156, color=[0, 0, 128]),
        157:
        dict(link=('sling_kpt2', 'sling_kpt7'), id=157, color=[0, 0, 128]),
        158:
        dict(link=('sling_kpt6', 'sling_kpt15'), id=158, color=[0, 0, 128]),
        159:
        dict(link=('sling_kpt2', 'sling_kpt3'), id=159, color=[0, 0, 128]),
        160:
        dict(link=('sling_kpt3', 'sling_kpt4'), id=160, color=[0, 0, 128]),
        161:
        dict(link=('sling_kpt4', 'sling_kpt5'), id=161, color=[0, 0, 128]),
        162:
        dict(link=('sling_kpt5', 'sling_kpt6'), id=162, color=[0, 0, 128]),
        163:
        dict(link=('sling_kpt1', 'sling_kpt6'), id=163, color=[0, 0, 128]),
        164:
        dict(link=('shorts_kpt1', 'shorts_kpt4'), id=164, color=[128, 128, 128]),
        165:
        dict(link=('shorts_kpt4', 'shorts_kpt5'), id=165, color=[128, 128, 128]),
        166:
        dict(link=('shorts_kpt5', 'shorts_kpt6'), id=166, color=[128, 128, 128]),
        167:
        dict(link=('shorts_kpt6', 'shorts_kpt7'), id=167, color=[128, 128, 128]),
        168:
        dict(link=('shorts_kpt7', 'shorts_kpt8'), id=168, color=[128, 128, 128]),
        169:
        dict(link=('shorts_kpt8', 'shorts_kpt9'), id=169, color=[128, 128, 128]),
        170:
        dict(link=('shorts_kpt9', 'shorts_kpt10'), id=170, color=[128, 128, 128]),
        171:
        dict(link=('shorts_kpt10', 'shorts_kpt3'), id=171, color=[128, 128, 128]),
        172:
        dict(link=('shorts_kpt3', 'shorts_kpt2'), id=172, color=[128, 128, 128]),
        173:
        dict(link=('shorts_kpt2', 'shorts_kpt1'), id=173, color=[128, 128, 128]),
        174:
        dict(link=('trousers_kpt1', 'trousers_kpt4'), id=174, color=[128, 0, 128]),
        175:
        dict(link=('trousers_kpt4', 'trousers_kpt5'), id=175, color=[128, 0, 128]),
        176:
        dict(link=('trousers_kpt5', 'trousers_kpt6'), id=176, color=[128, 0, 128]),
        177:
        dict(link=('trousers_kpt6', 'trousers_kpt7'), id=177, color=[128, 0, 128]),
        178:
        dict(link=('trousers_kpt7', 'trousers_kpt8'), id=178, color=[128, 0, 128]),
        179:
        dict(link=('trousers_kpt8', 'trousers_kpt9'), id=179, color=[128, 0, 128]),
        180:
        dict(link=('trousers_kpt9', 'trousers_kpt10'), id=180, color=[128, 0, 128]),
        181:
        dict(link=('trousers_kpt10', 'trousers_kpt11'), id=181, color=[128, 0, 128]),
        182:
        dict(link=('trousers_kpt11', 'trousers_kpt12'), id=182, color=[128, 0, 128]),
        183:
        dict(link=('trousers_kpt12', 'trousers_kpt13'), id=183, color=[128, 0, 128]),
        184:
        dict(link=('trousers_kpt13', 'trousers_kpt14'), id=184, color=[128, 0, 128]),
        185:
        dict(link=('trousers_kpt14', 'trousers_kpt3'), id=185, color=[128, 0, 128]),
        186:
        dict(link=('trousers_kpt3', 'trousers_kpt2'), id=186, color=[128, 0, 128]),
        187:
        dict(link=('trousers_kpt2', 'trousers_kpt1'), id=187, color=[128, 0, 128]),
        188:
        dict(link=('skirt_kpt1', 'skirt_kpt4'), id=188, color=[64, 128, 128]),
        189:
        dict(link=('skirt_kpt4', 'skirt_kpt5'), id=189, color=[64, 128, 128]),
        190:
        dict(link=('skirt_kpt5', 'skirt_kpt6'), id=190, color=[64, 128, 128]),
        191:
        dict(link=('skirt_kpt6', 'skirt_kpt7'), id=191, color=[64, 128, 128]),
        192:
        dict(link=('skirt_kpt7', 'skirt_kpt8'), id=192, color=[64, 128, 128]),
        193:
        dict(link=('skirt_kpt8', 'skirt_kpt3'), id=193, color=[64, 128, 128]),
        194:
        dict(link=('skirt_kpt3', 'skirt_kpt2'), id=194, color=[64, 128, 128]),
        195:
        dict(link=('skirt_kpt2', 'skirt_kpt1'), id=195, color=[64, 128, 128]),
        196:
        dict(link=('ssd_kpt1', 'ssd_kpt2'), id=196, color=[64, 64, 128]),
        197:
        dict(link=('ssd_kpt2', 'ssd_kpt7'), id=197, color=[64, 64, 128]),
        198:
        dict(link=('ssd_kpt7', 'ssd_kpt8'), id=198, color=[64, 64, 128]),
        199:
        dict(link=('ssd_kpt8', 'ssd_kpt9'), id=199, color=[64, 64, 128]),
        200:
        dict(link=('ssd_kpt9', 'ssd_kpt10'), id=200, color=[64, 64, 128]),
        201:
        dict(link=('ssd_kpt10', 'ssd_kpt11'), id=201, color=[64, 64, 128]),
        202:
        dict(link=('ssd_kpt11', 'ssd_kpt12'), id=202, color=[64, 64, 128]),
        203:
        dict(link=('ssd_kpt12', 'ssd_kpt13'), id=203, color=[64, 64, 128]),
        204:
        dict(link=('ssd_kpt13', 'ssd_kpt14'), id=204, color=[64, 64, 128]),
        205:
        dict(link=('ssd_kpt14', 'ssd_kpt15'), id=205, color=[64, 64, 128]),
        206:
        dict(link=('ssd_kpt15', 'ssd_kpt16'), id=206, color=[64, 64, 128]),
        207:
        dict(link=('ssd_kpt16', 'ssd_kpt17'), id=207, color=[64, 64, 128]),
        208:
        dict(link=('ssd_kpt17', 'ssd_kpt18'), id=208, color=[64, 64, 128]),
        209:
        dict(link=('ssd_kpt18', 'ssd_kpt19'), id=209, color=[64, 64, 128]),
        210:
        dict(link=('ssd_kpt19', 'ssd_kpt20'), id=210, color=[64, 64, 128]),
        211:
        dict(link=('ssd_kpt20', 'ssd_kpt21'), id=211, color=[64, 64, 128]),
        212:
        dict(link=('ssd_kpt21', 'ssd_kpt22'), id=212, color=[64, 64, 128]),
        213:
        dict(link=('ssd_kpt22', 'ssd_kpt23'), id=213, color=[64, 64, 128]),
        214:
        dict(link=('ssd_kpt23', 'ssd_kpt24'), id=214, color=[64, 64, 128]),
        215:
        dict(link=('ssd_kpt24', 'ssd_kpt25'), id=215, color=[64, 64, 128]),
        216:
        dict(link=('ssd_kpt25', 'ssd_kpt26'), id=216, color=[64, 64, 128]),
        217:
        dict(link=('ssd_kpt26', 'ssd_kpt27'), id=217, color=[64, 64, 128]),
        218:
        dict(link=('ssd_kpt27', 'ssd_kpt28'), id=218, color=[64, 64, 128]),
        219:
        dict(link=('ssd_kpt28', 'ssd_kpt29'), id=219, color=[64, 64, 128]),
        220:
        dict(link=('ssd_kpt29', 'ssd_kpt6'), id=220, color=[64, 64, 128]),
        221:
        dict(link=('ssd_kpt6', 'ssd_kpt5'), id=221, color=[64, 64, 128]),
        222:
        dict(link=('ssd_kpt5', 'ssd_kpt4'), id=222, color=[64, 64, 128]),
        223:
        dict(link=('ssd_kpt4', 'ssd_kpt3'), id=223, color=[64, 64, 128]),
        224:
        dict(link=('ssd_kpt3', 'ssd_kpt2'), id=224, color=[64, 64, 128]),
        225:
        dict(link=('ssd_kpt6', 'ssd_kpt1'), id=225, color=[64, 64, 128]),
        226:
        dict(link=('lsd_kpt1', 'lsd_kpt2'), id=226, color=[128, 64, 0]),
        227:
        dict(link=('lsd_kpt2', 'lsd_kpt7'), id=227, color=[128, 64, 0]),
        228:
        dict(link=('lsd_kpt7', 'lsd_kpt8'), id=228, color=[128, 64, 0]),
        229:
        dict(link=('lsd_kpt8', 'lsd_kpt9'), id=229, color=[128, 64, 0]),
        230:
        dict(link=('lsd_kpt9', 'lsd_kpt10'), id=230, color=[128, 64, 0]),
        231:
        dict(link=('lsd_kpt10', 'lsd_kpt11'), id=231, color=[128, 64, 0]),
        232:
        dict(link=('lsd_kpt11', 'lsd_kpt12'), id=232, color=[128, 64, 0]),
        233:
        dict(link=('lsd_kpt12', 'lsd_kpt13'), id=233, color=[128, 64, 0]),
        234:
        dict(link=('lsd_kpt13', 'lsd_kpt14'), id=234, color=[128, 64, 0]),
        235:
        dict(link=('lsd_kpt14', 'lsd_kpt15'), id=235, color=[128, 64, 0]),
        236:
        dict(link=('lsd_kpt15', 'lsd_kpt16'), id=236, color=[128, 64, 0]),
        237:
        dict(link=('lsd_kpt16', 'lsd_kpt17'), id=237, color=[128, 64, 0]),
        238:
        dict(link=('lsd_kpt17', 'lsd_kpt18'), id=238, color=[128, 64, 0]),
        239:
        dict(link=('lsd_kpt18', 'lsd_kpt19'), id=239, color=[128, 64, 0]),
        240:
        dict(link=('lsd_kpt19', 'lsd_kpt20'), id=240, color=[128, 64, 0]),
        241:
        dict(link=('lsd_kpt20', 'lsd_kpt21'), id=241, color=[128, 64, 0]),
        242:
        dict(link=('lsd_kpt21', 'lsd_kpt22'), id=242, color=[128, 64, 0]),
        243:
        dict(link=('lsd_kpt22', 'lsd_kpt23'), id=243, color=[128, 64, 0]),
        244:
        dict(link=('lsd_kpt23', 'lsd_kpt24'), id=244, color=[128, 64, 0]),
        245:
        dict(link=('lsd_kpt24', 'lsd_kpt25'), id=245, color=[128, 64, 0]),
        246:
        dict(link=('lsd_kpt25', 'lsd_kpt26'), id=246, color=[128, 64, 0]),
        247:
        dict(link=('lsd_kpt26', 'lsd_kpt27'), id=247, color=[128, 64, 0]),
        248:
        dict(link=('lsd_kpt27', 'lsd_kpt28'), id=248, color=[128, 64, 0]),
        249:
        dict(link=('lsd_kpt28', 'lsd_kpt29'), id=249, color=[128, 64, 0]),
        250:
        dict(link=('lsd_kpt29', 'lsd_kpt30'), id=250, color=[128, 64, 0]),
        251:
        dict(link=('lsd_kpt30', 'lsd_kpt31'), id=251, color=[128, 64, 0]),
        252:
        dict(link=('lsd_kpt31', 'lsd_kpt32'), id=252, color=[128, 64, 0]),
        253:
        dict(link=('lsd_kpt32', 'lsd_kpt33'), id=253, color=[128, 64, 0]),
        254:
        dict(link=('lsd_kpt33', 'lsd_kpt34'), id=254, color=[128, 64, 0]),
        255:
        dict(link=('lsd_kpt34', 'lsd_kpt35'), id=255, color=[128, 64, 0]),
        256:
        dict(link=('lsd_kpt35', 'lsd_kpt36'), id=256, color=[128, 64, 0]),
        257:
        dict(link=('lsd_kpt36', 'lsd_kpt37'), id=257, color=[128, 64, 0]),
        258:
        dict(link=('lsd_kpt37', 'lsd_kpt6'), id=258, color=[128, 64, 0]),
        259:
        dict(link=('lsd_kpt6', 'lsd_kpt5'), id=259, color=[128, 64, 0]),
        260:
        dict(link=('lsd_kpt5', 'lsd_kpt4'), id=260, color=[128, 64, 0]),
        261:
        dict(link=('lsd_kpt4', 'lsd_kpt3'), id=261, color=[128, 64, 0]),
        262:
        dict(link=('lsd_kpt3', 'lsd_kpt2'), id=262, color=[128, 64, 0]),
        263:
        dict(link=('lsd_kpt6', 'lsd_kpt1'), id=263, color=[128, 64, 0]),
        264:
        dict(link=('vd_kpt1', 'vd_kpt2'), id=264, color=[128, 64, 255]),
        265:
        dict(link=('vd_kpt2', 'vd_kpt7'), id=265, color=[128, 64, 255]),
        266:
        dict(link=('vd_kpt7', 'vd_kpt8'), id=266, color=[128, 64, 255]),
        267:
        dict(link=('vd_kpt8', 'vd_kpt9'), id=267, color=[128, 64, 255]),
        268:
        dict(link=('vd_kpt9', 'vd_kpt10'), id=268, color=[128, 64, 255]),
        269:
        dict(link=('vd_kpt10', 'vd_kpt11'), id=269, color=[128, 64, 255]),
        270:
        dict(link=('vd_kpt11', 'vd_kpt12'), id=270, color=[128, 64, 255]),
        271:
        dict(link=('vd_kpt12', 'vd_kpt13'), id=271, color=[128, 64, 255]),
        272:
        dict(link=('vd_kpt13', 'vd_kpt14'), id=272, color=[128, 64, 255]),
        273:
        dict(link=('vd_kpt14', 'vd_kpt15'), id=273, color=[128, 64, 255]),
        274:
        dict(link=('vd_kpt15', 'vd_kpt16'), id=274, color=[128, 64, 255]),
        275:
        dict(link=('vd_kpt16', 'vd_kpt17'), id=275, color=[128, 64, 255]),
        276:
        dict(link=('vd_kpt17', 'vd_kpt18'), id=276, color=[128, 64, 255]),
        277:
        dict(link=('vd_kpt18', 'vd_kpt19'), id=277, color=[128, 64, 255]),
        278:
        dict(link=('vd_kpt19', 'vd_kpt6'), id=278, color=[128, 64, 255]),
        279:
        dict(link=('vd_kpt6', 'vd_kpt5'), id=279, color=[128, 64, 255]),
        280:
        dict(link=('vd_kpt5', 'vd_kpt4'), id=280, color=[128, 64, 255]),
        281:
        dict(link=('vd_kpt4', 'vd_kpt3'), id=281, color=[128, 64, 255]),
        282:
        dict(link=('vd_kpt3', 'vd_kpt2'), id=282, color=[128, 64, 255]),
        283:
        dict(link=('vd_kpt6', 'vd_kpt1'), id=283, color=[128, 64, 255]),
        284:
        dict(link=('sd_kpt1', 'sd_kpt2'), id=284, color=[128, 64, 0]),
        285:
        dict(link=('sd_kpt2', 'sd_kpt8'), id=285, color=[128, 64, 0]),
        286:
        dict(link=('sd_kpt8', 'sd_kpt9'), id=286, color=[128, 64, 0]),
        287:
        dict(link=('sd_kpt9', 'sd_kpt10'), id=287, color=[128, 64, 0]),
        288:
        dict(link=('sd_kpt10', 'sd_kpt11'), id=288, color=[128, 64, 0]),
        289:
        dict(link=('sd_kpt11', 'sd_kpt12'), id=289, color=[128, 64, 0]),
        290:
        dict(link=('sd_kpt12', 'sd_kpt13'), id=290, color=[128, 64, 0]),
        291:
        dict(link=('sd_kpt13', 'sd_kpt14'), id=291, color=[128, 64, 0]),
        292:
        dict(link=('sd_kpt14', 'sd_kpt15'), id=292, color=[128, 64, 0]),
        293:
        dict(link=('sd_kpt15', 'sd_kpt16'), id=293, color=[128, 64, 0]),
        294:
        dict(link=('sd_kpt16', 'sd_kpt17'), id=294, color=[128, 64, 0]),
        295:
        dict(link=('sd_kpt17', 'sd_kpt18'), id=295, color=[128, 64, 0]),
        296:
        dict(link=('sd_kpt18', 'sd_kpt6'), id=296, color=[128, 64, 0]),
        297:
        dict(link=('sd_kpt6', 'sd_kpt5'), id=297, color=[128, 64, 0]),
        298:
        dict(link=('sd_kpt5', 'sd_kpt4'), id=298, color=[128, 64, 0]),
        299:
        dict(link=('sd_kpt4', 'sd_kpt3'), id=299, color=[128, 64, 0]),
        300:
        dict(link=('sd_kpt3', 'sd_kpt2'), id=300, color=[128, 64, 0]),
        301:
        dict(link=('sd_kpt2', 'sd_kpt7'), id=301, color=[128, 64, 0]),
        302:
        dict(link=('sd_kpt6', 'sd_kpt19'), id=302, color=[128, 64, 0]),
        303:
        dict(link=('sd_kpt6', 'sd_kpt1'), id=303, color=[128, 64, 0])
    }),
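The skeleton table above keys each link by an integer and repeats that integer in the entry's `id` field; endpoints follow the `<garment>_kpt<N>` naming scheme. A small standalone sketch (not part of the config itself; `check_skeleton` and its excerpt table are illustrative names) can verify both invariants before a table this long is trusted:

```python
import re

# A small excerpt of the skeleton table above (key -> link definition).
skeleton_info = {
    226: dict(link=('lsd_kpt1', 'lsd_kpt2'), id=226, color=[128, 64, 0]),
    227: dict(link=('lsd_kpt2', 'lsd_kpt7'), id=227, color=[128, 64, 0]),
    303: dict(link=('sd_kpt6', 'sd_kpt1'), id=303, color=[128, 64, 0]),
}


def check_skeleton(skeleton):
    """Return a list of problems found in a skeleton_info-style dict."""
    problems = []
    name_pat = re.compile(r'^[a-z]+_kpt\d+$')
    for key, entry in skeleton.items():
        if entry['id'] != key:  # the id field should mirror the dict key
            problems.append(f'key {key} has mismatched id {entry["id"]}')
        for kpt in entry['link']:  # endpoints must follow <prefix>_kpt<N>
            if not name_pat.match(kpt):
                problems.append(f'key {key} has malformed endpoint {kpt!r}')
    return problems


print(check_skeleton(skeleton_info))  # [] when the table is consistent
```

Run against the full 304-entry table, a check like this catches exactly the kind of key/id mismatch fixed in entry 227 above.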
    joint_weights=[1.0] * 294,  # all 294 keypoints weighted equally
    sigmas=[])
param_scheduler = [
    dict(
        type='LinearLR', begin=0, end=500, start_factor=0.001, by_epoch=False),
    dict(
        type='MultiStepLR',
        begin=0,
        end=120,
        milestones=[80, 100],
        gamma=0.1,
        by_epoch=True)
]
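Composed as above, the learning rate warms up linearly from 0.001x over the first 500 iterations, then the epoch-based `MultiStepLR` multiplies the base lr by `gamma=0.1` at epochs 80 and 100. A minimal sketch of the step-decay part (warmup omitted; `multistep_lr` is an illustrative helper, not an MMEngine API):

```python
def multistep_lr(base_lr, epoch, milestones=(80, 100), gamma=0.1):
    """Epoch-based step decay: multiply by gamma once per passed milestone."""
    factor = gamma ** sum(epoch >= m for m in milestones)
    return base_lr * factor


# With the Adam lr of 5e-4 configured below:
print(multistep_lr(5e-4, 79))   # still 0.0005
print(multistep_lr(5e-4, 80))   # decayed 10x
print(multistep_lr(5e-4, 100))  # decayed 100x
```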
optim_wrapper = dict(optimizer=dict(type='Adam', lr=0.0005))
auto_scale_lr = dict(base_batch_size=512)
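When auto LR scaling is enabled at launch, MMEngine scales the configured lr linearly by the ratio of the actual total batch size to `base_batch_size` (512 here). The rule itself is just (a sketch; `scale_lr` is an illustrative name, not the framework's function):

```python
def scale_lr(base_lr, total_batch_size, base_batch_size=512):
    """Linear LR scaling rule applied by auto_scale_lr."""
    return base_lr * total_batch_size / base_batch_size


# train_dataloader below uses batch_size=64 per GPU; e.g. 4 GPUs -> 256 total
print(scale_lr(0.0005, 4 * 64))  # 0.00025
```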
dataset_type = 'DeepFashion2Dataset'
data_mode = 'topdown'
data_root = 'data/deepfashion2/'
codec = dict(
    type='MSRAHeatmap', input_size=(192, 256), heatmap_size=(48, 64), sigma=2)
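The `MSRAHeatmap` codec renders each keypoint as a 2D Gaussian with `sigma=2` on a 48x64 grid, one quarter of the 192x256 input. A minimal pure-Python sketch of that encoding (the real MMPose codec additionally handles coordinate quantization and visibility flags; `gaussian_heatmap` is an illustrative name):

```python
import math


def gaussian_heatmap(width, height, cx, cy, sigma=2.0):
    """Render one keypoint at (cx, cy) as a row-major Gaussian heatmap."""
    return [
        [math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
         for x in range(width)]
        for y in range(height)
    ]


# 48x64 target matching heatmap_size=(48, 64); keypoint at grid (24, 32)
hm = gaussian_heatmap(48, 64, 24, 32)
print(hm[32][24])  # 1.0 at the keypoint location, falling off around it
```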
train_pipeline = [
    dict(type='LoadImage'),
    dict(type='GetBBoxCenterScale'),
    dict(type='RandomFlip', direction='horizontal'),
    dict(
        type='RandomBBoxTransform',
        shift_prob=0,
        rotate_factor=60,
        scale_factor=(0.75, 1.25)),
    dict(type='TopdownAffine', input_size=(192, 256)),
    dict(
        type='GenerateTarget',
        encoder=dict(
            type='MSRAHeatmap',
            input_size=(192, 256),
            heatmap_size=(48, 64),
            sigma=2)),
    dict(type='PackPoseInputs')
]
val_pipeline = [
    dict(type='LoadImage', backend_args=dict(backend='local')),
    dict(type='GetBBoxCenterScale'),
    dict(type='TopdownAffine', input_size=(192, 256)),
    dict(type='PackPoseInputs')
]
train_dataloader = dict(
    batch_size=64,
    num_workers=6,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type='DeepFashion2Dataset',
        data_root='data/deepfashion2/',
        data_mode='topdown',
        ann_file='train/deepfashion2_skirt.json',
        data_prefix=dict(img='train/image/'),
        pipeline=[
            dict(type='LoadImage'),
            dict(type='GetBBoxCenterScale'),
            dict(type='RandomFlip', direction='horizontal'),
            dict(
                type='RandomBBoxTransform',
                shift_prob=0,
                rotate_factor=60,
                scale_factor=(0.75, 1.25)),
            dict(type='TopdownAffine', input_size=(192, 256)),
            dict(
                type='GenerateTarget',
                encoder=dict(
                    type='MSRAHeatmap',
                    input_size=(192, 256),
                    heatmap_size=(48, 64),
                    sigma=2)),
            dict(type='PackPoseInputs')
        ]))
val_dataloader = dict(
    batch_size=32,
    num_workers=6,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='DeepFashion2Dataset',
        data_root='data/deepfashion2/',
        data_mode='topdown',
        ann_file='validation/deepfashion2_skirt.json',
        data_prefix=dict(img='validation/image/'),
        test_mode=True,
        pipeline=[
            dict(type='LoadImage', backend_args=dict(backend='local')),
            dict(type='GetBBoxCenterScale'),
            dict(type='TopdownAffine', input_size=(192, 256)),
            dict(type='PackPoseInputs')
        ]))
test_dataloader = dict(
    batch_size=32,
    num_workers=6,
    persistent_workers=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='DeepFashion2Dataset',
        data_root='data/deepfashion2/',
        data_mode='topdown',
        ann_file='validation/deepfashion2_skirt.json',
        data_prefix=dict(img='validation/image/'),
        test_mode=True,
        pipeline=[
            dict(type='LoadImage', backend_args=dict(backend='local')),
            dict(type='GetBBoxCenterScale'),
            dict(type='TopdownAffine', input_size=(192, 256)),
            dict(type='PackPoseInputs')
        ]))
channel_cfg = dict(
    num_output_channels=294,
    dataset_joints=294,
    dataset_channel=[list(range(294))],  # keypoint channels 0-293
    inference_channel=[
        0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19,
        20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37,
        38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55,
|
2810 |
-
56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73,
|
2811 |
-
74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
|
2812 |
-
92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
|
2813 |
-
108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121,
|
2814 |
-
122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135,
|
2815 |
-
136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149,
|
2816 |
-
150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163,
|
2817 |
-
164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177,
|
2818 |
-
178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191,
|
2819 |
-
192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205,
|
2820 |
-
206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219,
|
2821 |
-
220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233,
|
2822 |
-
234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247,
|
2823 |
-
248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261,
|
2824 |
-
262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275,
|
2825 |
-
276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289,
|
2826 |
-
290, 291, 292, 293
|
2827 |
-
])
|
2828 |
-
model = dict(
|
2829 |
-
type='TopdownPoseEstimator',
|
2830 |
-
data_preprocessor=dict(
|
2831 |
-
type='PoseDataPreprocessor',
|
2832 |
-
mean=[123.675, 116.28, 103.53],
|
2833 |
-
std=[58.395, 57.12, 57.375],
|
2834 |
-
bgr_to_rgb=True),
|
2835 |
-
backbone=dict(
|
2836 |
-
type='ResNet',
|
2837 |
-
depth=50,
|
2838 |
-
init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
|
2839 |
-
head=dict(
|
2840 |
-
type='HeatmapHead',
|
2841 |
-
in_channels=2048,
|
2842 |
-
out_channels=294,
|
2843 |
-
loss=dict(type='KeypointMSELoss', use_target_weight=True),
|
2844 |
-
decoder=dict(
|
2845 |
-
type='MSRAHeatmap',
|
2846 |
-
input_size=(192, 256),
|
2847 |
-
heatmap_size=(48, 64),
|
2848 |
-
sigma=2)),
|
2849 |
-
test_cfg=dict(flip_test=True, flip_mode='heatmap', shift_heatmap=True))
|
2850 |
-
val_evaluator = [
|
2851 |
-
dict(type='PCKAccuracy', thr=0.2),
|
2852 |
-
dict(type='AUC'),
|
2853 |
-
dict(type='EPE')
|
2854 |
-
]
|
2855 |
-
test_evaluator = [
|
2856 |
-
dict(type='PCKAccuracy', thr=0.2),
|
2857 |
-
dict(type='AUC'),
|
2858 |
-
dict(type='EPE')
|
2859 |
-
]
|
2860 |
-
launcher = 'pytorch'
|
2861 |
-
work_dir = './work_dirs/td_hm_res50_4xb64-120e_deepfashion2_skirt_256x192'
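The deleted config evaluates with `PCKAccuracy`, `AUC`, and `EPE`. As a minimal illustrative sketch (not mmpose's actual implementation), EPE is the mean Euclidean distance between predicted and ground-truth keypoints:

```python
import math

def end_point_error(pred, gt):
    """Mean Euclidean distance between predicted and ground-truth keypoints.

    pred, gt: lists of (x, y) tuples of equal length.
    """
    dists = [math.dist(p, g) for p, g in zip(pred, gt)]
    return sum(dists) / len(dists)

# Two keypoints, each off by (3, 4): distance 5 each, so EPE = 5.0
print(end_point_error([(0, 0), (10, 10)], [(3, 4), (13, 14)]))  # -> 5.0
```

mmpose's `EPE` metric additionally masks out invisible keypoints via target weights; this sketch assumes all keypoints are valid.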
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/.ipynb_checkpoints/hr_4xb16_1024e_4channel-checkpoint.py
DELETED
@@ -1,113 +0,0 @@
_base_ = [  # this config file inherits everything in `_base_`
    '../configs/_base_/schedules/custom_schedule.py',  # training schedule config
    '../configs/_base_/default_runtime.py'  # default runtime settings
]

default_hooks = dict(
    # print log every 50 iterations.
    logger=dict(type='LoggerHook', interval=50),
    # save checkpoint every 16 epochs.
    checkpoint=dict(save_best='auto', interval=16)
)

visualizer = dict(
    vis_backends=[dict(type='LocalVisBackend'),
                  dict(type='WandbVisBackend')])

dataset_type = 'CustomDataset'

# config of pipeline
train_pipeline = [
    dict(type='LoadImageFromFile', imdecode_backend='pillow', color_type='unchanged'),  # load image
    dict(type='RandomResizedCrop', scale=224),  # random resized crop
    dict(type='RandomFlip', prob=0.5, direction='horizontal'),  # random horizontal flip
    dict(type='PackInputs'),  # pack images and labels
]

test_pipeline = [
    dict(type='LoadImageFromFile', imdecode_backend='pillow', color_type='unchanged'),  # load image
    dict(type='ResizeEdge', scale=256, edge='short'),  # resize the short edge to 256 px
    dict(type='CenterCrop', crop_size=224),  # center crop
    dict(type='PackInputs'),  # pack images and labels
]

# config of dataloader
train_dataloader = dict(
    batch_size=16,  # batch size per GPU
    num_workers=5,  # number of workers per GPU
    dataset=dict(  # training dataset
        type=dataset_type,
        data_root='../2_preprocess_data_3000',
        with_label=True,
        ann_file='',
        data_prefix='train',
        pipeline=train_pipeline),
    sampler=dict(type='DefaultSampler', shuffle=True),  # default sampler
    persistent_workers=True,  # keep worker processes alive to shorten per-epoch startup time
)

# build the validation dataloader
val_dataloader = dict(
    batch_size=16,
    num_workers=5,
    dataset=dict(
        type=dataset_type,
        data_root='../2_preprocess_data_3000',
        with_label=True,
        ann_file='',
        data_prefix='val',
        pipeline=test_pipeline),
    sampler=dict(type='DefaultSampler', shuffle=False),
    persistent_workers=True,
)

# set evaluator of validation dataset. Here uses top-1 and top-3 accuracy
val_evaluator = dict(type='Accuracy', topk=(1, 3))

test_dataloader = val_dataloader
test_evaluator = val_evaluator

model = dict(
    type='ImageClassifier',  # main model type (`ImageClassifier` for image classification tasks)
    backbone=dict(
        type='HRNet',  # backbone type
        arch='w32',  # backbone architecture
        in_channels=4,
        extra=dict(
            stage1=dict(
                num_modules=1,
                num_branches=1,
                block='BOTTLENECK',
                num_blocks=(4, ),
                num_channels=(64, )),
            stage2=dict(
                num_modules=1,
                num_branches=2,
                block='BASIC',
                num_blocks=(4, 4),
                num_channels=(32, 64)),
            stage3=dict(
                num_modules=4,
                num_branches=3,
                block='BASIC',
                num_blocks=(4, 4, 4),
                num_channels=(32, 64, 128)),
            stage4=dict(
                num_modules=3,
                num_branches=4,
                block='BASIC',
                num_blocks=(4, 4, 4, 4),
                num_channels=(32, 64, 128, 256))),
    ),
    neck=dict(type='GlobalAveragePooling'),  # neck type
    head=dict(
        type='LinearClsHead',  # classification head type
        # all fields other than `type` come from the __init__ method of the `LinearClsHead` class
        # see https://mmpretrain.readthedocs.io/zh_CN/latest/api/generated/mmpretrain.models.heads.LinearClsHead.html
        num_classes=7,  # number of classes
        in_channels=256,
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),  # loss function config
        topk=(1, 3),  # evaluation metric: top-k accuracy
    ))
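The config's `val_evaluator = dict(type='Accuracy', topk=(1, 3))` reports both top-1 and top-3 accuracy. A minimal sketch of the metric (a stand-alone illustration, not mmpretrain's `Accuracy` implementation):

```python
def topk_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    hits = 0
    for row, label in zip(scores, labels):
        # indices of the k largest scores in this row
        topk = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in topk
    return hits / len(labels)

scores = [[0.1, 0.7, 0.2],   # top-1 prediction is class 1 (correct)
          [0.5, 0.3, 0.2]]   # top-1 prediction is class 0, true class is 2
labels = [1, 2]
print(topk_accuracy(scores, labels, 1))  # -> 0.5
print(topk_accuracy(scores, labels, 3))  # -> 1.0
```

With `num_classes=7` in this config, top-3 accuracy is naturally more forgiving than top-1, which is why both are tracked.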
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/datasets/__init__.py
DELETED
File without changes
|
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32-lbs_in1k.py
DELETED
@@ -1,5 +0,0 @@
_base_ = [
    '../_base_/models/resnet50_label_smooth.py',
    '../_base_/datasets/imagenet_bs32.py',
    '../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py'
]
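The first `_base_` entry, `resnet50_label_smooth.py`, suggests this config trains with a label-smoothed cross-entropy ("lbs" in the filename). As a hedged, stand-alone sketch of what label smoothing does (not mmpretrain's `LabelSmoothLoss`, and the smoothing factor `eps=0.1` here is an assumed illustrative value):

```python
import math

def smooth_cross_entropy(logits, label, eps=0.1):
    """Cross-entropy against a smoothed target distribution:
    (1 - eps) on the true class, plus eps / num_classes spread over all classes."""
    n = len(logits)
    # numerically stable log-softmax
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    log_probs = [l - log_z for l in logits]
    target = [eps / n + (1 - eps) * (i == label) for i in range(n)]
    return -sum(t * lp for t, lp in zip(target, log_probs))

# With eps=0.0 this reduces to plain cross-entropy (-log p of the true class).
print(smooth_cross_entropy([2.0, 0.5, 0.1], label=0))
```

The smoothed target still sums to 1, so the loss stays a proper cross-entropy; it simply discourages the model from becoming overconfident in the true class.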
spaces/Ababababababbababa/Ashaar/poetry_diacritizer/train.py
DELETED
@@ -1,49 +0,0 @@
import argparse
import random

import numpy as np
import torch

from trainer import CBHGTrainer, Seq2SeqTrainer, GPTTrainer

SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False


def train_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument("--model_kind", dest="model_kind", type=str, required=True)
    parser.add_argument(
        "--model_desc", dest="model_desc", type=str, required=False, default=""
    )
    parser.add_argument("--config", dest="config", type=str, required=True)
    parser.add_argument(
        "--reset_dir",
        dest="clear_dir",
        action="store_true",
        help="deletes everything under this config's folder.",
    )
    return parser


parser = train_parser()
args = parser.parse_args()


if args.model_kind in ["seq2seq"]:
    trainer = Seq2SeqTrainer(args.config, args.model_kind, args.model_desc)
elif args.model_kind in ["tacotron_based"]:
    trainer = Seq2SeqTrainer(args.config, args.model_kind, args.model_desc)
elif args.model_kind in ["baseline", "cbhg"]:
    trainer = CBHGTrainer(args.config, args.model_kind, args.model_desc)
elif args.model_kind in ["gpt"]:
    trainer = GPTTrainer(args.config, args.model_kind, args.model_desc)
else:
    raise ValueError("The model kind is not supported")

trainer.run()
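The if/elif chain mapping `model_kind` to a trainer class can equivalently be written as a dict dispatch, which keeps the kind-to-class mapping in one place. A hedged sketch with empty stand-in classes (the real trainers live in `trainer.py`):

```python
class CBHGTrainer: pass      # stand-ins for the real classes imported from trainer.py
class Seq2SeqTrainer: pass
class GPTTrainer: pass

# one table instead of four elif branches
TRAINERS = {
    "seq2seq": Seq2SeqTrainer,
    "tacotron_based": Seq2SeqTrainer,
    "baseline": CBHGTrainer,
    "cbhg": CBHGTrainer,
    "gpt": GPTTrainer,
}

def pick_trainer(model_kind):
    try:
        return TRAINERS[model_kind]
    except KeyError:
        raise ValueError("The model kind is not supported")

print(pick_trainer("cbhg").__name__)  # -> CBHGTrainer
```

This is behaviorally equivalent to the chain above, including the `ValueError` for unsupported kinds.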
spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversations/+page.server.ts
DELETED
@@ -1,10 +0,0 @@
import { base } from "$app/paths";
import { authCondition } from "$lib/server/auth";
import { collections } from "$lib/server/database";
import { redirect } from "@sveltejs/kit";

export const actions = {
	delete: async function ({ locals }) {
		throw redirect(303, `${base}/`);
	},
};
spaces/Adapter/CoAdapter/ldm/inference_base.py
DELETED
@@ -1,292 +0,0 @@
|
|
1 |
-
import argparse
|
2 |
-
import torch
|
3 |
-
from omegaconf import OmegaConf
|
4 |
-
|
5 |
-
from ldm.models.diffusion.ddim import DDIMSampler
|
6 |
-
from ldm.models.diffusion.plms import PLMSSampler
|
7 |
-
from ldm.modules.encoders.adapter import Adapter, StyleAdapter, Adapter_light
|
8 |
-
from ldm.modules.extra_condition.api import ExtraCondition
|
9 |
-
from ldm.util import fix_cond_shapes, load_model_from_config, read_state_dict
|
10 |
-
|
11 |
-
DEFAULT_NEGATIVE_PROMPT = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, ' \
|
12 |
-
'fewer digits, cropped, worst quality, low quality'
|
13 |
-
|
14 |
-
|
15 |
-
def get_base_argument_parser() -> argparse.ArgumentParser:
|
16 |
-
"""get the base argument parser for inference scripts"""
|
17 |
-
parser = argparse.ArgumentParser()
|
18 |
-
parser.add_argument(
|
19 |
-
'--outdir',
|
20 |
-
type=str,
|
21 |
-
help='dir to write results to',
|
22 |
-
default=None,
|
23 |
-
)
|
24 |
-
|
25 |
-
parser.add_argument(
|
26 |
-
'--prompt',
|
27 |
-
type=str,
|
28 |
-
nargs='?',
|
29 |
-
default=None,
|
30 |
-
help='positive prompt',
|
31 |
-
)
|
32 |
-
|
33 |
-
parser.add_argument(
|
34 |
-
'--neg_prompt',
|
35 |
-
type=str,
|
36 |
-
default=DEFAULT_NEGATIVE_PROMPT,
|
37 |
-
help='negative prompt',
|
38 |
-
)
|
39 |
-
|
40 |
-
parser.add_argument(
|
41 |
-
'--cond_path',
|
42 |
-
type=str,
|
43 |
-
default=None,
|
44 |
-
help='condition image path',
|
45 |
-
)
|
46 |
-
|
47 |
-
parser.add_argument(
|
48 |
-
'--cond_inp_type',
|
49 |
-
type=str,
|
50 |
-
default='image',
|
51 |
-
help='the type of the input condition image, take depth T2I as example, the input can be raw image, '
|
52 |
-
'which depth will be calculated, or the input can be a directly a depth map image',
|
53 |
-
)
|
54 |
-
|
55 |
-
parser.add_argument(
|
56 |
-
'--sampler',
|
57 |
-
type=str,
|
58 |
-
default='ddim',
|
59 |
-
choices=['ddim', 'plms'],
|
60 |
-
help='sampling algorithm, currently, only ddim and plms are supported, more are on the way',
|
61 |
-
)
|
62 |
-
|
63 |
-
parser.add_argument(
|
64 |
-
'--steps',
|
65 |
-
type=int,
|
66 |
-
default=50,
|
67 |
-
help='number of sampling steps',
|
68 |
-
)
|
69 |
-
|
70 |
-
parser.add_argument(
|
71 |
-
'--sd_ckpt',
|
72 |
-
type=str,
|
73 |
-
default='models/sd-v1-4.ckpt',
|
74 |
-
help='path to checkpoint of stable diffusion model, both .ckpt and .safetensor are supported',
|
75 |
-
)
|
76 |
-
|
77 |
-
parser.add_argument(
|
78 |
-
'--vae_ckpt',
|
79 |
-
type=str,
|
80 |
-
default=None,
|
81 |
-
help='vae checkpoint, anime SD models usually have seperate vae ckpt that need to be loaded',
|
82 |
-
)
|
83 |
-
|
84 |
-
parser.add_argument(
|
85 |
-
'--adapter_ckpt',
|
86 |
-
type=str,
|
87 |
-
default=None,
|
88 |
-
help='path to checkpoint of adapter',
|
89 |
-
)
|
90 |
-
|
91 |
-
parser.add_argument(
|
92 |
-
'--config',
|
93 |
-
type=str,
|
94 |
-
default='configs/stable-diffusion/sd-v1-inference.yaml',
|
95 |
-
help='path to config which constructs SD model',
|
96 |
-
)
|
97 |
-
|
98 |
-
parser.add_argument(
|
99 |
-
'--max_resolution',
|
100 |
-
type=float,
|
101 |
-
default=512 * 512,
|
102 |
-
help='max image height * width, only for computer with limited vram',
|
103 |
-
)
|
104 |
-
|
105 |
-
parser.add_argument(
|
106 |
-
'--resize_short_edge',
|
107 |
-
type=int,
|
108 |
-
default=None,
|
109 |
-
help='resize short edge of the input image, if this arg is set, max_resolution will not be used',
|
110 |
-
)
|
111 |
-
|
112 |
-
parser.add_argument(
|
113 |
-
'--C',
|
114 |
-
type=int,
|
115 |
-
default=4,
|
116 |
-
help='latent channels',
|
117 |
-
)
|
118 |
-
|
119 |
-
parser.add_argument(
|
120 |
-
'--f',
|
121 |
-
type=int,
|
122 |
-
default=8,
|
123 |
-
help='downsampling factor',
|
124 |
-
)
|
125 |
-
|
126 |
-
parser.add_argument(
|
127 |
-
'--scale',
|
128 |
-
type=float,
|
129 |
-
default=7.5,
|
130 |
-
help='unconditional guidance scale: eps = eps(x, empty) + scale * (eps(x, cond) - eps(x, empty))',
|
131 |
-
)
|
132 |
-
|
133 |
-
parser.add_argument(
|
134 |
-
'--cond_tau',
|
135 |
-
type=float,
|
136 |
-
default=1.0,
|
137 |
-
help='timestamp parameter that determines until which step the adapter is applied, '
|
138 |
-
'similar as Prompt-to-Prompt tau',
|
139 |
-
)
|
140 |
-
|
141 |
-
parser.add_argument(
|
142 |
-
'--style_cond_tau',
|
143 |
-
type=float,
|
144 |
-
default=1.0,
|
145 |
-
help='timestamp parameter that determines until which step the adapter is applied, '
|
146 |
-
'similar as Prompt-to-Prompt tau',
|
147 |
-
)
|
148 |
-
|
149 |
-
parser.add_argument(
|
150 |
-
'--cond_weight',
|
151 |
-
type=float,
|
152 |
-
default=1.0,
|
153 |
-
help='the adapter features are multiplied by the cond_weight. The larger the cond_weight, the more aligned '
|
154 |
-
'the generated image and condition will be, but the generated quality may be reduced',
|
155 |
-
)
|
156 |
-
|
157 |
-
parser.add_argument(
|
158 |
-
'--seed',
|
159 |
-
type=int,
|
160 |
-
default=42,
|
161 |
-
)
|
162 |
-
|
163 |
-
parser.add_argument(
|
164 |
-
'--n_samples',
|
165 |
-
type=int,
|
166 |
-
default=4,
|
167 |
-
help='# of samples to generate',
|
168 |
-
)
|
169 |
-
|
170 |
-
return parser
|
171 |
-
|
172 |
-
|
173 |
-
def get_sd_models(opt):
|
174 |
-
"""
|
175 |
-
build stable diffusion model, sampler
|
176 |
-
"""
|
177 |
-
# SD
|
178 |
-
config = OmegaConf.load(f"{opt.config}")
|
179 |
-
model = load_model_from_config(config, opt.sd_ckpt, opt.vae_ckpt)
|
180 |
-
sd_model = model.to(opt.device)
|
181 |
-
|
182 |
-
# sampler
|
183 |
-
if opt.sampler == 'plms':
|
184 |
-
sampler = PLMSSampler(model)
|
185 |
-
elif opt.sampler == 'ddim':
|
186 |
-
sampler = DDIMSampler(model)
|
187 |
-
else:
|
188 |
-
raise NotImplementedError
|
189 |
-
|
190 |
-
return sd_model, sampler
|
191 |
-
|
192 |
-
|
193 |
-
def get_t2i_adapter_models(opt):
|
194 |
-
config = OmegaConf.load(f"{opt.config}")
|
195 |
-
model = load_model_from_config(config, opt.sd_ckpt, opt.vae_ckpt)
|
196 |
-
adapter_ckpt_path = getattr(opt, f'{opt.which_cond}_adapter_ckpt', None)
|
197 |
-
if adapter_ckpt_path is None:
|
198 |
-
adapter_ckpt_path = getattr(opt, 'adapter_ckpt')
|
199 |
-
adapter_ckpt = read_state_dict(adapter_ckpt_path)
|
200 |
-
new_state_dict = {}
|
201 |
-
for k, v in adapter_ckpt.items():
|
202 |
-
        if not k.startswith('adapter.'):
            new_state_dict[f'adapter.{k}'] = v
        else:
            new_state_dict[k] = v
    m, u = model.load_state_dict(new_state_dict, strict=False)
    if len(u) > 0:
        print(f"unexpected keys in loading adapter ckpt {adapter_ckpt_path}:")
        print(u)

    model = model.to(opt.device)

    # sampler
    if opt.sampler == 'plms':
        sampler = PLMSSampler(model)
    elif opt.sampler == 'ddim':
        sampler = DDIMSampler(model)
    else:
        raise NotImplementedError

    return model, sampler


def get_cond_ch(cond_type: ExtraCondition):
    if cond_type == ExtraCondition.sketch or cond_type == ExtraCondition.canny:
        return 1
    return 3


def get_adapters(opt, cond_type: ExtraCondition):
    adapter = {}
    cond_weight = getattr(opt, f'{cond_type.name}_weight', None)
    if cond_weight is None:
        cond_weight = getattr(opt, 'cond_weight')
    adapter['cond_weight'] = cond_weight

    if cond_type == ExtraCondition.style:
        adapter['model'] = StyleAdapter(width=1024, context_dim=768, num_head=8, n_layes=3, num_token=8).to(opt.device)
    elif cond_type == ExtraCondition.color:
        adapter['model'] = Adapter_light(
            cin=64 * get_cond_ch(cond_type),
            channels=[320, 640, 1280, 1280],
            nums_rb=4).to(opt.device)
    else:
        adapter['model'] = Adapter(
            cin=64 * get_cond_ch(cond_type),
            channels=[320, 640, 1280, 1280][:4],
            nums_rb=2,
            ksize=1,
            sk=True,
            use_conv=False).to(opt.device)
    ckpt_path = getattr(opt, f'{cond_type.name}_adapter_ckpt', None)
    if ckpt_path is None:
        ckpt_path = getattr(opt, 'adapter_ckpt')
    adapter['model'].load_state_dict(torch.load(ckpt_path))

    return adapter


def diffusion_inference(opt, model, sampler, adapter_features, append_to_context=None):
    # get text embedding
    c = model.get_learned_conditioning([opt.prompt])
    if opt.scale != 1.0:
        uc = model.get_learned_conditioning([opt.neg_prompt])
    else:
        uc = None
    c, uc = fix_cond_shapes(model, c, uc)

    if not hasattr(opt, 'H'):
        opt.H = 512
        opt.W = 512
    shape = [opt.C, opt.H // opt.f, opt.W // opt.f]

    samples_latents, _ = sampler.sample(
        S=opt.steps,
        conditioning=c,
        batch_size=1,
        shape=shape,
        verbose=False,
        unconditional_guidance_scale=opt.scale,
        unconditional_conditioning=uc,
        x_T=None,
        features_adapter=adapter_features,
        append_to_context=append_to_context,
        cond_tau=opt.cond_tau,
        style_cond_tau=opt.style_cond_tau,
    )

    x_samples = model.decode_first_stage(samples_latents)
    x_samples = torch.clamp((x_samples + 1.0) / 2.0, min=0.0, max=1.0)

    return x_samples
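The checkpoint-loading branch above prefixes bare adapter weight names with `adapter.` so they match the key namespace of the wrapped model before `load_state_dict` is called. A minimal, self-contained sketch of just that remapping step (the dictionary contents here are illustrative, not taken from a real checkpoint):

```python
def remap_adapter_keys(state_dict):
    """Prefix bare keys with 'adapter.' so they match the wrapped model's namespace."""
    new_state_dict = {}
    for k, v in state_dict.items():
        if not k.startswith('adapter.'):
            new_state_dict[f'adapter.{k}'] = v
        else:
            new_state_dict[k] = v
    return new_state_dict

# A checkpoint saved from the bare adapter may mix prefixed and unprefixed keys:
ckpt = {'body.0.block1.weight': 1, 'adapter.conv_in.bias': 2}
print(remap_adapter_keys(ckpt))
# {'adapter.body.0.block1.weight': 1, 'adapter.conv_in.bias': 2}
```

Loading with `strict=False`, as the code above does, then tolerates any remaining mismatched keys and only reports them.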
spaces/Adapter/T2I-Adapter/ldm/modules/image_degradation/utils_image.py
DELETED
@@ -1,916 +0,0 @@
import os
import math
import random
import numpy as np
import torch
import cv2
from torchvision.utils import make_grid
from datetime import datetime
#import matplotlib.pyplot as plt   # TODO: check with Dominik, also bsrgan.py vs bsrgan_light.py


os.environ["KMP_DUPLICATE_LIB_OK"]="TRUE"


'''
# --------------------------------------------
# Kai Zhang (github: https://github.com/cszn)
# 03/Mar/2019
# --------------------------------------------
# https://github.com/twhui/SRGAN-pyTorch
# https://github.com/xinntao/BasicSR
# --------------------------------------------
'''


IMG_EXTENSIONS = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', '.tif']


def is_image_file(filename):
    return any(filename.endswith(extension) for extension in IMG_EXTENSIONS)


def get_timestamp():
    return datetime.now().strftime('%y%m%d-%H%M%S')


def imshow(x, title=None, cbar=False, figsize=None):
    plt.figure(figsize=figsize)
    plt.imshow(np.squeeze(x), interpolation='nearest', cmap='gray')
    if title:
        plt.title(title)
    if cbar:
        plt.colorbar()
    plt.show()


def surf(Z, cmap='rainbow', figsize=None):
    plt.figure(figsize=figsize)
    ax3 = plt.axes(projection='3d')

    w, h = Z.shape[:2]
    xx = np.arange(0, w, 1)
    yy = np.arange(0, h, 1)
    X, Y = np.meshgrid(xx, yy)
    ax3.plot_surface(X, Y, Z, cmap=cmap)
    #ax3.contour(X, Y, Z, zdim='z', offset=-2, cmap=cmap)
    plt.show()


'''
# --------------------------------------------
# get image paths
# --------------------------------------------
'''


def get_image_paths(dataroot):
    paths = None  # return None if dataroot is None
    if dataroot is not None:
        paths = sorted(_get_paths_from_images(dataroot))
    return paths


def _get_paths_from_images(path):
    assert os.path.isdir(path), '{:s} is not a valid directory'.format(path)
    images = []
    for dirpath, _, fnames in sorted(os.walk(path)):
        for fname in sorted(fnames):
            if is_image_file(fname):
                img_path = os.path.join(dirpath, fname)
                images.append(img_path)
    assert images, '{:s} has no valid image file'.format(path)
    return images
'''
# --------------------------------------------
# split large images into small images
# --------------------------------------------
'''


def patches_from_image(img, p_size=512, p_overlap=64, p_max=800):
    w, h = img.shape[:2]
    patches = []
    if w > p_max and h > p_max:
        # np.int is deprecated in modern NumPy; plain int is equivalent here
        w1 = list(np.arange(0, w - p_size, p_size - p_overlap, dtype=int))
        h1 = list(np.arange(0, h - p_size, p_size - p_overlap, dtype=int))
        w1.append(w - p_size)
        h1.append(h - p_size)
        for i in w1:
            for j in h1:
                patches.append(img[i:i + p_size, j:j + p_size, :])
    else:
        patches.append(img)

    return patches


def imssave(imgs, img_path):
    """
    imgs: list, N images of size WxHxC
    """
    img_name, ext = os.path.splitext(os.path.basename(img_path))

    for i, img in enumerate(imgs):
        if img.ndim == 3:
            img = img[:, :, [2, 1, 0]]
        new_path = os.path.join(os.path.dirname(img_path), img_name + str('_s{:04d}'.format(i)) + '.png')
        cv2.imwrite(new_path, img)


def split_imageset(original_dataroot, taget_dataroot, n_channels=3, p_size=800, p_overlap=96, p_max=1000):
    """
    Split the large images from original_dataroot into small overlapped images of size
    (p_size)x(p_size), and save them into taget_dataroot; only images larger than
    (p_max)x(p_max) are split.
    Args:
        original_dataroot:
        taget_dataroot:
        p_size: size of small images
        p_overlap: patch size in training is a good choice
        p_max: images smaller than (p_max)x(p_max) are kept unchanged.
    """
    paths = get_image_paths(original_dataroot)
    for img_path in paths:
        # img_name, ext = os.path.splitext(os.path.basename(img_path))
        img = imread_uint(img_path, n_channels=n_channels)
        patches = patches_from_image(img, p_size, p_overlap, p_max)
        imssave(patches, os.path.join(taget_dataroot, os.path.basename(img_path)))
        #if original_dataroot == taget_dataroot:
        #del img_path

'''
# --------------------------------------------
# makedir
# --------------------------------------------
'''

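`patches_from_image` above builds its tile origins with `np.arange` at stride `p_size - p_overlap`, then appends one extra origin flush with the far edge so coverage is complete. A torch/cv2-free sketch of that one-axis index math (the lengths below are chosen for illustration):

```python
import numpy as np

def patch_starts(length, p_size=512, p_overlap=64):
    """Tile origins along one axis: stride p_size - p_overlap,
    plus a final origin flush with the far edge for full coverage."""
    starts = list(np.arange(0, length - p_size, p_size - p_overlap, dtype=int))
    starts.append(length - p_size)
    return starts

print(patch_starts(1200))  # [0, 448, 688]
```

Note the appended flush-right origin can overlap the previous tile more than `p_overlap`; that matches the original function's behaviour.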
def mkdir(path):
    if not os.path.exists(path):
        os.makedirs(path)


def mkdirs(paths):
    if isinstance(paths, str):
        mkdir(paths)
    else:
        for path in paths:
            mkdir(path)


def mkdir_and_rename(path):
    if os.path.exists(path):
        new_name = path + '_archived_' + get_timestamp()
        print('Path already exists. Rename it to [{:s}]'.format(new_name))
        os.rename(path, new_name)
    os.makedirs(path)


'''
# --------------------------------------------
# read image from path
# opencv is fast, but reads BGR numpy images
# --------------------------------------------
'''


# --------------------------------------------
# get uint8 image of size HxWxn_channels (RGB)
# --------------------------------------------
def imread_uint(path, n_channels=3):
    # input: path
    # output: HxWx3 (RGB or GGG), or HxWx1 (G)
    if n_channels == 1:
        img = cv2.imread(path, 0)  # cv2.IMREAD_GRAYSCALE
        img = np.expand_dims(img, axis=2)  # HxWx1
    elif n_channels == 3:
        img = cv2.imread(path, cv2.IMREAD_UNCHANGED)  # BGR or G
        if img.ndim == 2:
            img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)  # GGG
        else:
            img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # RGB
    return img


# --------------------------------------------
# matlab's imwrite
# --------------------------------------------
def imsave(img, img_path):
    img = np.squeeze(img)
    if img.ndim == 3:
        img = img[:, :, [2, 1, 0]]
    cv2.imwrite(img_path, img)

def imwrite(img, img_path):
    img = np.squeeze(img)
    if img.ndim == 3:
        img = img[:, :, [2, 1, 0]]
    cv2.imwrite(img_path, img)



# --------------------------------------------
# get single image of size HxWxn_channels (BGR)
# --------------------------------------------
def read_img(path):
    # read image by cv2
    # return: Numpy float32, HWC, BGR, [0,1]
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)  # cv2.IMREAD_GRAYSCALE
    img = img.astype(np.float32) / 255.
    if img.ndim == 2:
        img = np.expand_dims(img, axis=2)
    # some images have 4 channels
    if img.shape[2] > 3:
        img = img[:, :, :3]
    return img

'''
# --------------------------------------------
# image format conversion
# --------------------------------------------
# numpy(single) <---> numpy(uint)
# numpy(single) <---> tensor
# numpy(uint) <---> tensor
# --------------------------------------------
'''


# --------------------------------------------
# numpy(single) [0, 1] <---> numpy(uint)
# --------------------------------------------


def uint2single(img):
    return np.float32(img/255.)


def single2uint(img):
    return np.uint8((img.clip(0, 1)*255.).round())


def uint162single(img):
    return np.float32(img/65535.)


def single2uint16(img):
    return np.uint16((img.clip(0, 1)*65535.).round())


# --------------------------------------------
# numpy(uint) (HxWxC or HxW) <---> tensor
# --------------------------------------------


# convert uint to 4-dimensional torch tensor
def uint2tensor4(img):
    if img.ndim == 2:
        img = np.expand_dims(img, axis=2)
    return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.).unsqueeze(0)


# convert uint to 3-dimensional torch tensor
def uint2tensor3(img):
    if img.ndim == 2:
        img = np.expand_dims(img, axis=2)
    return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().div(255.)


# convert 2/3/4-dimensional torch tensor to uint
def tensor2uint(img):
    img = img.data.squeeze().float().clamp_(0, 1).cpu().numpy()
    if img.ndim == 3:
        img = np.transpose(img, (1, 2, 0))
    return np.uint8((img*255.0).round())


# --------------------------------------------
# numpy(single) (HxWxC) <---> tensor
# --------------------------------------------


# convert single (HxWxC) to 3-dimensional torch tensor
def single2tensor3(img):
    return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float()


# convert single (HxWxC) to 4-dimensional torch tensor
def single2tensor4(img):
    return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1).float().unsqueeze(0)


# convert torch tensor to single
def tensor2single(img):
    img = img.data.squeeze().float().cpu().numpy()
    if img.ndim == 3:
        img = np.transpose(img, (1, 2, 0))

    return img

# convert torch tensor to single
def tensor2single3(img):
    img = img.data.squeeze().float().cpu().numpy()
    if img.ndim == 3:
        img = np.transpose(img, (1, 2, 0))
    elif img.ndim == 2:
        img = np.expand_dims(img, axis=2)
    return img


def single2tensor5(img):
    return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float().unsqueeze(0)


def single32tensor5(img):
    return torch.from_numpy(np.ascontiguousarray(img)).float().unsqueeze(0).unsqueeze(0)


def single42tensor4(img):
    return torch.from_numpy(np.ascontiguousarray(img)).permute(2, 0, 1, 3).float()


# from skimage.io import imread, imsave
def tensor2img(tensor, out_type=np.uint8, min_max=(0, 1)):
    '''
    Converts a torch Tensor into an image Numpy array of BGR channel order
    Input: 4D(B,(3/1),H,W), 3D(C,H,W), or 2D(H,W), any range, RGB channel order
    Output: 3D(H,W,C) or 2D(H,W), [0,255], np.uint8 (default)
    '''
    tensor = tensor.squeeze().float().cpu().clamp_(*min_max)  # squeeze first, then clamp
    tensor = (tensor - min_max[0]) / (min_max[1] - min_max[0])  # to range [0,1]
    n_dim = tensor.dim()
    if n_dim == 4:
        n_img = len(tensor)
        img_np = make_grid(tensor, nrow=int(math.sqrt(n_img)), normalize=False).numpy()
        img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0))  # HWC, BGR
    elif n_dim == 3:
        img_np = tensor.numpy()
        img_np = np.transpose(img_np[[2, 1, 0], :, :], (1, 2, 0))  # HWC, BGR
    elif n_dim == 2:
        img_np = tensor.numpy()
    else:
        raise TypeError(
            'Only support 4D, 3D and 2D tensor. But received with dimension: {:d}'.format(n_dim))
    if out_type == np.uint8:
        img_np = (img_np * 255.0).round()
        # Important. Unlike matlab, numpy.uint8() WILL NOT round by default.
    return img_np.astype(out_type)
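The `uint2single`/`single2uint` pair above scales between uint8 [0, 255] and float [0, 1]; because `single2uint` rounds, the pair is an exact round trip on every representable uint8 value. A quick torch-free check using the same formulas:

```python
import numpy as np

def uint2single(img):
    return np.float32(img / 255.)

def single2uint(img):
    return np.uint8((img.clip(0, 1) * 255.).round())

x = np.array([[0, 1, 127, 128, 254, 255]], dtype=np.uint8)
assert np.array_equal(single2uint(uint2single(x)), x)  # lossless round trip
```

The clip before scaling is what makes the inverse direction safe for out-of-range floats produced by a network.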
'''
# --------------------------------------------
# Augmentation, flip and/or rotate
# --------------------------------------------
# The following two are enough.
# (1) augment_img: numpy image of WxHxC or WxH
# (2) augment_img_tensor4: tensor image 1xCxWxH
# --------------------------------------------
'''


def augment_img(img, mode=0):
    '''Kai Zhang (github: https://github.com/cszn)
    '''
    if mode == 0:
        return img
    elif mode == 1:
        return np.flipud(np.rot90(img))
    elif mode == 2:
        return np.flipud(img)
    elif mode == 3:
        return np.rot90(img, k=3)
    elif mode == 4:
        return np.flipud(np.rot90(img, k=2))
    elif mode == 5:
        return np.rot90(img)
    elif mode == 6:
        return np.rot90(img, k=2)
    elif mode == 7:
        return np.flipud(np.rot90(img, k=3))


def augment_img_tensor4(img, mode=0):
    '''Kai Zhang (github: https://github.com/cszn)
    '''
    if mode == 0:
        return img
    elif mode == 1:
        return img.rot90(1, [2, 3]).flip([2])
    elif mode == 2:
        return img.flip([2])
    elif mode == 3:
        return img.rot90(3, [2, 3])
    elif mode == 4:
        return img.rot90(2, [2, 3]).flip([2])
    elif mode == 5:
        return img.rot90(1, [2, 3])
    elif mode == 6:
        return img.rot90(2, [2, 3])
    elif mode == 7:
        return img.rot90(3, [2, 3]).flip([2])


def augment_img_tensor(img, mode=0):
    '''Kai Zhang (github: https://github.com/cszn)
    '''
    img_size = img.size()
    img_np = img.data.cpu().numpy()
    if len(img_size) == 3:
        img_np = np.transpose(img_np, (1, 2, 0))
    elif len(img_size) == 4:
        img_np = np.transpose(img_np, (2, 3, 1, 0))
    img_np = augment_img(img_np, mode=mode)
    img_tensor = torch.from_numpy(np.ascontiguousarray(img_np))
    if len(img_size) == 3:
        img_tensor = img_tensor.permute(2, 0, 1)
    elif len(img_size) == 4:
        img_tensor = img_tensor.permute(3, 2, 0, 1)

    return img_tensor.type_as(img)


def augment_img_np3(img, mode=0):
    if mode == 0:
        return img
    elif mode == 1:
        return img.transpose(1, 0, 2)
    elif mode == 2:
        return img[::-1, :, :]
    elif mode == 3:
        img = img[::-1, :, :]
        img = img.transpose(1, 0, 2)
        return img
    elif mode == 4:
        return img[:, ::-1, :]
    elif mode == 5:
        img = img[:, ::-1, :]
        img = img.transpose(1, 0, 2)
        return img
    elif mode == 6:
        img = img[:, ::-1, :]
        img = img[::-1, :, :]
        return img
    elif mode == 7:
        img = img[:, ::-1, :]
        img = img[::-1, :, :]
        img = img.transpose(1, 0, 2)
        return img


def augment_imgs(img_list, hflip=True, rot=True):
    # horizontal flip OR rotate
    hflip = hflip and random.random() < 0.5
    vflip = rot and random.random() < 0.5
    rot90 = rot and random.random() < 0.5

    def _augment(img):
        if hflip:
            img = img[:, ::-1, :]
        if vflip:
            img = img[::-1, :, :]
        if rot90:
            img = img.transpose(1, 0, 2)
        return img

    return [_augment(img) for img in img_list]
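The eight `augment_img` modes are the dihedral group of the square: the identity, three rotations, and those four composed with a vertical flip. A torch-free sketch of the same mode table, checking on an asymmetric array that all eight views really are distinct:

```python
import numpy as np

def augment_img(img, mode=0):
    """Same mode table as the numpy augment_img above, written as a dispatch list."""
    ops = [
        lambda x: x,
        lambda x: np.flipud(np.rot90(x)),
        lambda x: np.flipud(x),
        lambda x: np.rot90(x, k=3),
        lambda x: np.flipud(np.rot90(x, k=2)),
        lambda x: np.rot90(x),
        lambda x: np.rot90(x, k=2),
        lambda x: np.flipud(np.rot90(x, k=3)),
    ]
    return ops[mode](img)

img = np.arange(6).reshape(2, 3)  # asymmetric, so no two views coincide
views = [augment_img(img, m) for m in range(8)]
assert len({(v.shape, v.tobytes()) for v in views}) == 8
```

This is why test-time augmentation pipelines built on these helpers average exactly eight forward passes: the group has eight elements and no duplicates.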
'''
# --------------------------------------------
# modcrop and shave
# --------------------------------------------
'''


def modcrop(img_in, scale):
    # img_in: Numpy, HWC or HW
    img = np.copy(img_in)
    if img.ndim == 2:
        H, W = img.shape
        H_r, W_r = H % scale, W % scale
        img = img[:H - H_r, :W - W_r]
    elif img.ndim == 3:
        H, W, C = img.shape
        H_r, W_r = H % scale, W % scale
        img = img[:H - H_r, :W - W_r, :]
    else:
        raise ValueError('Wrong img ndim: [{:d}].'.format(img.ndim))
    return img


def shave(img_in, border=0):
    # img_in: Numpy, HWC or HW
    img = np.copy(img_in)
    h, w = img.shape[:2]
    img = img[border:h-border, border:w-border]
    return img
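`modcrop` trims each spatial dimension down to the nearest multiple of the scale factor, which super-resolution evaluation needs so the HR and upscaled-LR grids line up exactly. A simplified numpy-only sketch of the arithmetic (slicing the first two axes covers both HW and HWC inputs, so the ndim branch above collapses to one line):

```python
import numpy as np

def modcrop(img_in, scale):
    """Crop H and W down to multiples of `scale` (HW or HWC input)."""
    img = np.copy(img_in)
    h, w = img.shape[:2]
    return img[:h - h % scale, :w - w % scale]

img = np.zeros((7, 10))
print(modcrop(img, 4).shape)  # (4, 8)
```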
'''
# --------------------------------------------
# image processing process on numpy image
# channel_convert(in_c, tar_type, img_list):
# rgb2ycbcr(img, only_y=True):
# bgr2ycbcr(img, only_y=True):
# ycbcr2rgb(img):
# --------------------------------------------
'''


def rgb2ycbcr(img, only_y=True):
    '''same as matlab rgb2ycbcr
    only_y: only return Y channel
    Input:
        uint8, [0, 255]
        float, [0, 1]
    '''
    in_img_type = img.dtype
    img.astype(np.float32)
    if in_img_type != np.uint8:
        img *= 255.
    # convert
    if only_y:
        rlt = np.dot(img, [65.481, 128.553, 24.966]) / 255.0 + 16.0
    else:
        rlt = np.matmul(img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786],
                              [24.966, 112.0, -18.214]]) / 255.0 + [16, 128, 128]
    if in_img_type == np.uint8:
        rlt = rlt.round()
    else:
        rlt /= 255.
    return rlt.astype(in_img_type)


def ycbcr2rgb(img):
    '''same as matlab ycbcr2rgb
    Input:
        uint8, [0, 255]
        float, [0, 1]
    '''
    in_img_type = img.dtype
    img.astype(np.float32)
    if in_img_type != np.uint8:
        img *= 255.
    # convert
    rlt = np.matmul(img, [[0.00456621, 0.00456621, 0.00456621], [0, -0.00153632, 0.00791071],
                          [0.00625893, -0.00318811, 0]]) * 255.0 + [-222.921, 135.576, -276.836]
    if in_img_type == np.uint8:
        rlt = rlt.round()
    else:
        rlt /= 255.
    return rlt.astype(in_img_type)


def bgr2ycbcr(img, only_y=True):
    '''bgr version of rgb2ycbcr
    only_y: only return Y channel
    Input:
        uint8, [0, 255]
        float, [0, 1]
    '''
    in_img_type = img.dtype
    img.astype(np.float32)
    if in_img_type != np.uint8:
        img *= 255.
    # convert
    if only_y:
        rlt = np.dot(img, [24.966, 128.553, 65.481]) / 255.0 + 16.0
    else:
        rlt = np.matmul(img, [[24.966, 112.0, -18.214], [128.553, -74.203, -93.786],
                              [65.481, -37.797, 112.0]]) / 255.0 + [16, 128, 128]
    if in_img_type == np.uint8:
        rlt = rlt.round()
    else:
        rlt /= 255.
    return rlt.astype(in_img_type)


def channel_convert(in_c, tar_type, img_list):
    # conversion among BGR, gray and y
    if in_c == 3 and tar_type == 'gray':  # BGR to gray
        gray_list = [cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) for img in img_list]
        return [np.expand_dims(img, axis=2) for img in gray_list]
    elif in_c == 3 and tar_type == 'y':  # BGR to y
        y_list = [bgr2ycbcr(img, only_y=True) for img in img_list]
        return [np.expand_dims(img, axis=2) for img in y_list]
    elif in_c == 1 and tar_type == 'RGB':  # gray/y to BGR
        return [cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) for img in img_list]
    else:
        return img_list
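The Y-channel coefficients used above ([65.481, 128.553, 24.966] plus an offset of 16) are the MATLAB/BT.601 "studio swing" convention, so luma spans 16 to 235 rather than the full 0 to 255. A small numpy check of the extremes on the uint8 path of the same formula:

```python
import numpy as np

def rgb_to_y_uint8(pixel):
    """Luma for one uint8 RGB pixel, matching the Y channel of rgb2ycbcr above."""
    return np.dot(pixel.astype(np.float64), [65.481, 128.553, 24.966]) / 255.0 + 16.0

black = np.array([0, 0, 0])
white = np.array([255, 255, 255])
print(f"{rgb_to_y_uint8(black):.1f}")  # 16.0  (studio-swing black)
print(f"{rgb_to_y_uint8(white):.1f}")  # 235.0 (studio-swing white)
```

The three coefficients sum to exactly 219.0, which is why white lands on 219 + 16 = 235.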
'''
# --------------------------------------------
# metric, PSNR and SSIM
# --------------------------------------------
'''


# --------------------------------------------
# PSNR
# --------------------------------------------
def calculate_psnr(img1, img2, border=0):
    # img1 and img2 have range [0, 255]
    #img1 = img1.squeeze()
    #img2 = img2.squeeze()
    if not img1.shape == img2.shape:
        raise ValueError('Input images must have the same dimensions.')
    h, w = img1.shape[:2]
    img1 = img1[border:h-border, border:w-border]
    img2 = img2[border:h-border, border:w-border]

    img1 = img1.astype(np.float64)
    img2 = img2.astype(np.float64)
    mse = np.mean((img1 - img2)**2)
    if mse == 0:
        return float('inf')
    return 20 * math.log10(255.0 / math.sqrt(mse))
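`calculate_psnr` above is the standard 20·log10(255/√MSE) in dB: with a uniform error of exactly one gray level, MSE is 1 and the PSNR is 20·log10(255) ≈ 48.13 dB. A torch/cv2-free check of that, without the border-cropping step:

```python
import math
import numpy as np

def calculate_psnr(img1, img2):
    """PSNR in dB for images in [0, 255]; inf for identical inputs."""
    mse = np.mean((img1.astype(np.float64) - img2.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 20 * math.log10(255.0 / math.sqrt(mse))

a = np.full((8, 8), 100, dtype=np.uint8)
b = np.full((8, 8), 101, dtype=np.uint8)
print(f"{calculate_psnr(a, b):.2f} dB")  # 48.13 dB
print(calculate_psnr(a, a))              # inf
```

Casting to float64 before subtracting matters: subtracting uint8 arrays directly would wrap around instead of going negative.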
# --------------------------------------------
# SSIM
# --------------------------------------------
def calculate_ssim(img1, img2, border=0):
    '''calculate SSIM
    the same outputs as MATLAB's
    img1, img2: [0, 255]
    '''
    #img1 = img1.squeeze()
    #img2 = img2.squeeze()
    if not img1.shape == img2.shape:
        raise ValueError('Input images must have the same dimensions.')
    h, w = img1.shape[:2]
    img1 = img1[border:h-border, border:w-border]
    img2 = img2[border:h-border, border:w-border]

    if img1.ndim == 2:
        return ssim(img1, img2)
    elif img1.ndim == 3:
        if img1.shape[2] == 3:
            ssims = []
            for i in range(3):
                ssims.append(ssim(img1[:, :, i], img2[:, :, i]))
            return np.array(ssims).mean()
        elif img1.shape[2] == 1:
            return ssim(np.squeeze(img1), np.squeeze(img2))
    else:
        raise ValueError('Wrong input image dimensions.')


def ssim(img1, img2):
    C1 = (0.01 * 255)**2
    C2 = (0.03 * 255)**2

    img1 = img1.astype(np.float64)
    img2 = img2.astype(np.float64)
    kernel = cv2.getGaussianKernel(11, 1.5)
    window = np.outer(kernel, kernel.transpose())

    mu1 = cv2.filter2D(img1, -1, window)[5:-5, 5:-5]  # valid
    mu2 = cv2.filter2D(img2, -1, window)[5:-5, 5:-5]
    mu1_sq = mu1**2
    mu2_sq = mu2**2
    mu1_mu2 = mu1 * mu2
    sigma1_sq = cv2.filter2D(img1**2, -1, window)[5:-5, 5:-5] - mu1_sq
    sigma2_sq = cv2.filter2D(img2**2, -1, window)[5:-5, 5:-5] - mu2_sq
    sigma12 = cv2.filter2D(img1 * img2, -1, window)[5:-5, 5:-5] - mu1_mu2

    ssim_map = ((2 * mu1_mu2 + C1) * (2 * sigma12 + C2)) / ((mu1_sq + mu2_sq + C1) *
                                                            (sigma1_sq + sigma2_sq + C2))
    return ssim_map.mean()
'''
# --------------------------------------------
# matlab's bicubic imresize (numpy and torch) [0, 1]
# --------------------------------------------
'''


# matlab 'imresize' function, now only support 'bicubic'
def cubic(x):
    absx = torch.abs(x)
    absx2 = absx**2
    absx3 = absx**3
    return (1.5*absx3 - 2.5*absx2 + 1) * ((absx <= 1).type_as(absx)) + \
        (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * (((absx > 1)*(absx <= 2)).type_as(absx))


def calculate_weights_indices(in_length, out_length, scale, kernel, kernel_width, antialiasing):
    if (scale < 1) and (antialiasing):
        # Use a modified kernel to simultaneously interpolate and antialias: larger kernel width
        kernel_width = kernel_width / scale

    # Output-space coordinates
    x = torch.linspace(1, out_length, out_length)

    # Input-space coordinates. Calculate the inverse mapping such that 0.5
    # in output space maps to 0.5 in input space, and 0.5+scale in output
    # space maps to 1.5 in input space.
    u = x / scale + 0.5 * (1 - 1 / scale)

    # What is the left-most pixel that can be involved in the computation?
    left = torch.floor(u - kernel_width / 2)

    # What is the maximum number of pixels that can be involved in the
    # computation?  Note: it's OK to use an extra pixel here; if the
    # corresponding weights are all zero, it will be eliminated at the end
    # of this function.
    P = math.ceil(kernel_width) + 2

    # The indices of the input pixels involved in computing the k-th output
    # pixel are in row k of the indices matrix.
    indices = left.view(out_length, 1).expand(out_length, P) + torch.linspace(0, P - 1, P).view(
        1, P).expand(out_length, P)

    # The weights used to compute the k-th output pixel are in row k of the
    # weights matrix.
    distance_to_center = u.view(out_length, 1).expand(out_length, P) - indices
    # apply cubic kernel
    if (scale < 1) and (antialiasing):
        weights = scale * cubic(distance_to_center * scale)
    else:
        weights = cubic(distance_to_center)
    # Normalize the weights matrix so that each row sums to 1.
    weights_sum = torch.sum(weights, 1).view(out_length, 1)
    weights = weights / weights_sum.expand(out_length, P)

    # If a column in weights is all zero, get rid of it. Only consider the first and last column.
    weights_zero_tmp = torch.sum((weights == 0), 0)
    if not math.isclose(weights_zero_tmp[0], 0, rel_tol=1e-6):
        indices = indices.narrow(1, 1, P - 2)
        weights = weights.narrow(1, 1, P - 2)
    if not math.isclose(weights_zero_tmp[-1], 0, rel_tol=1e-6):
        indices = indices.narrow(1, 0, P - 2)
        weights = weights.narrow(1, 0, P - 2)
    weights = weights.contiguous()
    indices = indices.contiguous()
    sym_len_s = -indices.min() + 1
    sym_len_e = indices.max() - in_length
    indices = indices + sym_len_s - 1
    return weights, indices, int(sym_len_s), int(sym_len_e)
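`cubic` above is the Keys bicubic kernel with a = -0.5 and support [-2, 2]: it is interpolating (value 1 at 0, value 0 at every other integer), and its integer translates sum to 1 at any fractional offset, which is why the per-row normalization in `calculate_weights_indices` changes the weights only negligibly. A numpy version of the same piecewise formula, with those two properties checked:

```python
import numpy as np

def cubic(x):
    """Keys bicubic kernel with a = -0.5, support [-2, 2] (numpy version)."""
    absx = np.abs(x)
    absx2, absx3 = absx**2, absx**3
    return ((1.5*absx3 - 2.5*absx2 + 1) * (absx <= 1)
            + (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) * ((absx > 1) & (absx <= 2)))

assert float(cubic(np.array(0.0))) == 1.0                    # interpolating at the sample
assert np.allclose(cubic(np.array([1.0, 2.0, -1.0])), 0.0)   # zero at other integers
# partition of unity: the four contributing translates sum to 1 at any offset
t = 0.3
assert abs(float(cubic(np.array([t + 1, t, t - 1, t - 2])).sum()) - 1.0) < 1e-9
```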
# --------------------------------------------
# imresize for tensor image [0, 1]
# --------------------------------------------
def imresize(img, scale, antialiasing=True):
    # Now the scale should be the same for H and W
    # input: img: pytorch tensor, CHW or HW [0,1]
    # output: CHW or HW [0,1] w/o round
    need_squeeze = True if img.dim() == 2 else False
    if need_squeeze:
        img.unsqueeze_(0)
    in_C, in_H, in_W = img.size()
    out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale)
    kernel_width = 4
    kernel = 'cubic'

    # Return the desired dimension order for performing the resize. The
    # strategy is to perform the resize first along the dimension with the
    # smallest scale factor.
    # Now we do not support this.

    # get weights and indices
    weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices(
        in_H, out_H, scale, kernel, kernel_width, antialiasing)
    weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices(
        in_W, out_W, scale, kernel, kernel_width, antialiasing)
    # process H dimension
    # symmetric copying
    img_aug = torch.FloatTensor(in_C, in_H + sym_len_Hs + sym_len_He, in_W)
    img_aug.narrow(1, sym_len_Hs, in_H).copy_(img)

    sym_patch = img[:, :sym_len_Hs, :]
    inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
    sym_patch_inv = sym_patch.index_select(1, inv_idx)
    img_aug.narrow(1, 0, sym_len_Hs).copy_(sym_patch_inv)

    sym_patch = img[:, -sym_len_He:, :]
    inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
    sym_patch_inv = sym_patch.index_select(1, inv_idx)
    img_aug.narrow(1, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv)

    out_1 = torch.FloatTensor(in_C, out_H, in_W)
    kernel_width = weights_H.size(1)
    for i in range(out_H):
        idx = int(indices_H[i][0])
        for j in range(out_C):
            out_1[j, i, :] = img_aug[j, idx:idx + kernel_width, :].transpose(0, 1).mv(weights_H[i])

    # process W dimension
    # symmetric copying
    out_1_aug = torch.FloatTensor(in_C, out_H, in_W + sym_len_Ws + sym_len_We)
    out_1_aug.narrow(2, sym_len_Ws, in_W).copy_(out_1)

    sym_patch = out_1[:, :, :sym_len_Ws]
    inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()
    sym_patch_inv = sym_patch.index_select(2, inv_idx)
    out_1_aug.narrow(2, 0, sym_len_Ws).copy_(sym_patch_inv)

    sym_patch = out_1[:, :, -sym_len_We:]
    inv_idx = torch.arange(sym_patch.size(2) - 1, -1, -1).long()
    sym_patch_inv = sym_patch.index_select(2, inv_idx)
    out_1_aug.narrow(2, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv)

    out_2 = torch.FloatTensor(in_C, out_H, out_W)
    kernel_width = weights_W.size(1)
    for i in range(out_W):
        idx = int(indices_W[i][0])
        for j in range(out_C):
            out_2[j, :, i] = out_1_aug[j, :, idx:idx + kernel_width].mv(weights_W[i])
    if need_squeeze:
        out_2.squeeze_()
    return out_2


# --------------------------------------------
# imresize for numpy image [0, 1]
# --------------------------------------------
def imresize_np(img, scale, antialiasing=True):
    # Now the scale should be the same for H and W
    # input: img: Numpy, HWC or HW [0,1]
    # output: HWC or HW [0,1] w/o round
    img = torch.from_numpy(img)
    need_squeeze = True if img.dim() == 2 else False
    if need_squeeze:
        img.unsqueeze_(2)

    in_H, in_W, in_C = img.size()
|
849 |
-
out_C, out_H, out_W = in_C, math.ceil(in_H * scale), math.ceil(in_W * scale)
|
850 |
-
kernel_width = 4
|
851 |
-
kernel = 'cubic'
|
852 |
-
|
853 |
-
# Return the desired dimension order for performing the resize. The
|
854 |
-
# strategy is to perform the resize first along the dimension with the
|
855 |
-
# smallest scale factor.
|
856 |
-
# Now we do not support this.
|
857 |
-
|
858 |
-
# get weights and indices
|
859 |
-
weights_H, indices_H, sym_len_Hs, sym_len_He = calculate_weights_indices(
|
860 |
-
in_H, out_H, scale, kernel, kernel_width, antialiasing)
|
861 |
-
weights_W, indices_W, sym_len_Ws, sym_len_We = calculate_weights_indices(
|
862 |
-
in_W, out_W, scale, kernel, kernel_width, antialiasing)
|
863 |
-
# process H dimension
|
864 |
-
# symmetric copying
|
865 |
-
img_aug = torch.FloatTensor(in_H + sym_len_Hs + sym_len_He, in_W, in_C)
|
866 |
-
img_aug.narrow(0, sym_len_Hs, in_H).copy_(img)
|
867 |
-
|
868 |
-
sym_patch = img[:sym_len_Hs, :, :]
|
869 |
-
inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long()
|
870 |
-
sym_patch_inv = sym_patch.index_select(0, inv_idx)
|
871 |
-
img_aug.narrow(0, 0, sym_len_Hs).copy_(sym_patch_inv)
|
872 |
-
|
873 |
-
sym_patch = img[-sym_len_He:, :, :]
|
874 |
-
inv_idx = torch.arange(sym_patch.size(0) - 1, -1, -1).long()
|
875 |
-
sym_patch_inv = sym_patch.index_select(0, inv_idx)
|
876 |
-
img_aug.narrow(0, sym_len_Hs + in_H, sym_len_He).copy_(sym_patch_inv)
|
877 |
-
|
878 |
-
out_1 = torch.FloatTensor(out_H, in_W, in_C)
|
879 |
-
kernel_width = weights_H.size(1)
|
880 |
-
for i in range(out_H):
|
881 |
-
idx = int(indices_H[i][0])
|
882 |
-
for j in range(out_C):
|
883 |
-
out_1[i, :, j] = img_aug[idx:idx + kernel_width, :, j].transpose(0, 1).mv(weights_H[i])
|
884 |
-
|
885 |
-
# process W dimension
|
886 |
-
# symmetric copying
|
887 |
-
out_1_aug = torch.FloatTensor(out_H, in_W + sym_len_Ws + sym_len_We, in_C)
|
888 |
-
out_1_aug.narrow(1, sym_len_Ws, in_W).copy_(out_1)
|
889 |
-
|
890 |
-
sym_patch = out_1[:, :sym_len_Ws, :]
|
891 |
-
inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
|
892 |
-
sym_patch_inv = sym_patch.index_select(1, inv_idx)
|
893 |
-
out_1_aug.narrow(1, 0, sym_len_Ws).copy_(sym_patch_inv)
|
894 |
-
|
895 |
-
sym_patch = out_1[:, -sym_len_We:, :]
|
896 |
-
inv_idx = torch.arange(sym_patch.size(1) - 1, -1, -1).long()
|
897 |
-
sym_patch_inv = sym_patch.index_select(1, inv_idx)
|
898 |
-
out_1_aug.narrow(1, sym_len_Ws + in_W, sym_len_We).copy_(sym_patch_inv)
|
899 |
-
|
900 |
-
out_2 = torch.FloatTensor(out_H, out_W, in_C)
|
901 |
-
kernel_width = weights_W.size(1)
|
902 |
-
for i in range(out_W):
|
903 |
-
idx = int(indices_W[i][0])
|
904 |
-
for j in range(out_C):
|
905 |
-
out_2[:, i, j] = out_1_aug[:, idx:idx + kernel_width, j].mv(weights_W[i])
|
906 |
-
if need_squeeze:
|
907 |
-
out_2.squeeze_()
|
908 |
-
|
909 |
-
return out_2.numpy()
|
910 |
-
|
911 |
-
|
912 |
-
if __name__ == '__main__':
|
913 |
-
print('---')
|
914 |
-
# img = imread_uint('test.bmp', 3)
|
915 |
-
# img = uint2single(img)
|
916 |
-
# img_bicubic = imresize_np(img, 1/4)
|
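The resize routines above select a `cubic` kernel via `calculate_weights_indices`, whose definition falls outside this hunk. As an illustration only, here is a minimal pure-Python sketch of the standard cubic-convolution kernel such `imresize` implementations conventionally use; the a = -0.5 coefficients are an assumption (the MATLAB-compatible choice), not necessarily the exact code that was deleted.

```python
def cubic(x, a=-0.5):
    """Cubic-convolution interpolation kernel (Keys, a = -0.5).

    Assumption: this mirrors the 'cubic' kernel selected by
    calculate_weights_indices() above; the deleted definition is not
    shown in this hunk.
    """
    absx = abs(x)
    if absx <= 1:
        return (a + 2) * absx**3 - (a + 3) * absx**2 + 1
    if absx < 2:
        return a * absx**3 - 5 * a * absx**2 + 8 * a * absx - 4 * a
    return 0.0

# The four taps used at any sampling phase sum to 1 (partition of unity),
# which is why the resized image preserves overall brightness.
weights = [cubic(d) for d in (-1.5, -0.5, 0.5, 1.5)]
print(sum(weights))  # 1.0
```

With `antialiasing=True` and scale < 1, the kernel is additionally stretched by `1/scale`, widening the support to avoid aliasing when downscaling.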
spaces/Aditya9790/yolo7-object-tracking/utils/add_nms.py
DELETED
@@ -1,155 +0,0 @@
-import numpy as np
-import onnx
-from onnx import shape_inference
-try:
-    import onnx_graphsurgeon as gs
-except Exception as e:
-    print('Import onnx_graphsurgeon failure: %s' % e)
-
-import logging
-
-LOGGER = logging.getLogger(__name__)
-
-class RegisterNMS(object):
-    def __init__(
-        self,
-        onnx_model_path: str,
-        precision: str = "fp32",
-    ):
-
-        self.graph = gs.import_onnx(onnx.load(onnx_model_path))
-        assert self.graph
-        LOGGER.info("ONNX graph created successfully")
-        # Fold constants via ONNX-GS that PyTorch2ONNX may have missed
-        self.graph.fold_constants()
-        self.precision = precision
-        self.batch_size = 1
-    def infer(self):
-        """
-        Sanitize the graph by cleaning any unconnected nodes, do a topological resort,
-        and fold constant inputs values. When possible, run shape inference on the
-        ONNX graph to determine tensor shapes.
-        """
-        for _ in range(3):
-            count_before = len(self.graph.nodes)
-
-            self.graph.cleanup().toposort()
-            try:
-                for node in self.graph.nodes:
-                    for o in node.outputs:
-                        o.shape = None
-                model = gs.export_onnx(self.graph)
-                model = shape_inference.infer_shapes(model)
-                self.graph = gs.import_onnx(model)
-            except Exception as e:
-                LOGGER.info(f"Shape inference could not be performed at this time:\n{e}")
-            try:
-                self.graph.fold_constants(fold_shapes=True)
-            except TypeError as e:
-                LOGGER.error(
-                    "This version of ONNX GraphSurgeon does not support folding shapes, "
-                    f"please upgrade your onnx_graphsurgeon module. Error:\n{e}"
-                )
-                raise
-
-            count_after = len(self.graph.nodes)
-            if count_before == count_after:
-                # No new folding occurred in this iteration, so we can stop for now.
-                break
-
-    def save(self, output_path):
-        """
-        Save the ONNX model to the given location.
-        Args:
-            output_path: Path pointing to the location where to write
-                out the updated ONNX model.
-        """
-        self.graph.cleanup().toposort()
-        model = gs.export_onnx(self.graph)
-        onnx.save(model, output_path)
-        LOGGER.info(f"Saved ONNX model to {output_path}")
-
-    def register_nms(
-        self,
-        *,
-        score_thresh: float = 0.25,
-        nms_thresh: float = 0.45,
-        detections_per_img: int = 100,
-    ):
-        """
-        Register the ``EfficientNMS_TRT`` plugin node.
-        NMS expects these shapes for its input tensors:
-            - box_net: [batch_size, number_boxes, 4]
-            - class_net: [batch_size, number_boxes, number_labels]
-        Args:
-            score_thresh (float): The scalar threshold for score (low scoring boxes are removed).
-            nms_thresh (float): The scalar threshold for IOU (new boxes that have high IOU
-                overlap with previously selected boxes are removed).
-            detections_per_img (int): Number of best detections to keep after NMS.
-        """
-
-        self.infer()
-        # Find the concat node at the end of the network
-        op_inputs = self.graph.outputs
-        op = "EfficientNMS_TRT"
-        attrs = {
-            "plugin_version": "1",
-            "background_class": -1,  # no background class
-            "max_output_boxes": detections_per_img,
-            "score_threshold": score_thresh,
-            "iou_threshold": nms_thresh,
-            "score_activation": False,
-            "box_coding": 0,
-        }
-
-        if self.precision == "fp32":
-            dtype_output = np.float32
-        elif self.precision == "fp16":
-            dtype_output = np.float16
-        else:
-            raise NotImplementedError(f"Currently not supports precision: {self.precision}")
-
-        # NMS Outputs
-        output_num_detections = gs.Variable(
-            name="num_dets",
-            dtype=np.int32,
-            shape=[self.batch_size, 1],
-        )  # A scalar indicating the number of valid detections per batch image.
-        output_boxes = gs.Variable(
-            name="det_boxes",
-            dtype=dtype_output,
-            shape=[self.batch_size, detections_per_img, 4],
-        )
-        output_scores = gs.Variable(
-            name="det_scores",
-            dtype=dtype_output,
-            shape=[self.batch_size, detections_per_img],
-        )
-        output_labels = gs.Variable(
-            name="det_classes",
-            dtype=np.int32,
-            shape=[self.batch_size, detections_per_img],
-        )
-
-        op_outputs = [output_num_detections, output_boxes, output_scores, output_labels]
-
-        # Create the NMS Plugin node with the selected inputs. The outputs of the node will also
-        # become the final outputs of the graph.
-        self.graph.layer(op=op, name="batched_nms", inputs=op_inputs, outputs=op_outputs, attrs=attrs)
-        LOGGER.info(f"Created NMS plugin '{op}' with attributes: {attrs}")
-
-        self.graph.outputs = op_outputs
-
-        self.infer()
-
-    def save(self, output_path):
-        """
-        Save the ONNX model to the given location.
-        Args:
-            output_path: Path pointing to the location where to write
-                out the updated ONNX model.
-        """
-        self.graph.cleanup().toposort()
-        model = gs.export_onnx(self.graph)
-        onnx.save(model, output_path)
-        LOGGER.info(f"Saved ONNX model to {output_path}")
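`register_nms` above delegates the actual suppression to TensorRT's `EfficientNMS_TRT` plugin. As a rough illustration of what score thresholding plus greedy IoU suppression means (this is a sketch, not the plugin's real implementation), a pure-Python version with the same default thresholds:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, score_thresh=0.25, iou_thresh=0.45, max_det=100):
    """Greedy NMS: keep the highest-scoring box, drop overlapping ones."""
    # Drop low-scoring boxes, then visit the rest in descending score order.
    order = sorted(
        (i for i, s in enumerate(scores) if s >= score_thresh),
        key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
            if len(keep) == max_det:
                break
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2] -- box 1 overlaps box 0 too much
```

The plugin does the same filtering on-device and emits fixed-shape `num_dets` / `det_boxes` / `det_scores` / `det_classes` tensors, which is why the graph outputs are rewired above.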
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/ball/Ball.d.ts
DELETED
@@ -1,2 +0,0 @@
-import Base from '../base/Base';
-export default class Ball extends Base { }
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/Methods.js
DELETED
@@ -1,25 +0,0 @@
-import GetChildrenWidth from './GetChildrenWidth.js';
-import GetChildrenHeight from './GetChildrenHeight.js';
-import GetExpandedChildWidth from './GetExpandedChildWidth.js';
-import GetExpandedChildHeight from './GetExpandedChildHeight.js';
-import GetChildrenSizers from './GetChildrenSizers.js';
-import LayoutChildren from './LayoutChildren.js';
-import AddChildMethods from './AddChildMethods.js';
-import RemoveChildMethods from './RemoveChildMethods.js';
-
-var methods = {
-    getChildrenWidth: GetChildrenWidth,
-    getChildrenHeight: GetChildrenHeight,
-    getExpandedChildWidth: GetExpandedChildWidth,
-    getExpandedChildHeight: GetExpandedChildHeight,
-    getChildrenSizers: GetChildrenSizers,
-    layoutChildren: LayoutChildren,
-};
-
-Object.assign(
-    methods,
-    AddChildMethods,
-    RemoveChildMethods
-);
-
-export default methods;
spaces/AlexWang/lama/saicinpainting/training/modules/depthwise_sep_conv.py
DELETED
@@ -1,17 +0,0 @@
-import torch
-import torch.nn as nn
-
-class DepthWiseSeperableConv(nn.Module):
-    def __init__(self, in_dim, out_dim, *args, **kwargs):
-        super().__init__()
-        if 'groups' in kwargs:
-            # ignoring groups for Depthwise Sep Conv
-            del kwargs['groups']
-
-        self.depthwise = nn.Conv2d(in_dim, in_dim, *args, groups=in_dim, **kwargs)
-        self.pointwise = nn.Conv2d(in_dim, out_dim, kernel_size=1)
-
-    def forward(self, x):
-        out = self.depthwise(x)
-        out = self.pointwise(out)
-        return out
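The point of the deleted module above is parameter efficiency: a per-channel depthwise k×k pass (`groups=in_dim`) followed by a 1×1 pointwise mix replaces one dense k×k convolution. A quick back-of-the-envelope sketch of the weight counts (bias terms ignored):

```python
def standard_conv_params(c_in, c_out, k):
    # a dense conv learns one k x k filter per (input, output) channel pair
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # depthwise: one k x k filter per input channel (groups = c_in)
    # pointwise: a 1 x 1 conv mixing c_in -> c_out channels
    return c_in * k * k + c_in * c_out

print(standard_conv_params(64, 128, 3))        # 73728
print(depthwise_separable_params(64, 128, 3))  # 8768
```

For a 3×3 kernel the ratio approaches roughly 1/9 as the channel counts grow, which is why the module above silently drops any `groups` argument and imposes its own factorization.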
spaces/Alichuan/VITS-Umamusume-voice-synthesizer/ONNXVITS_inference.py
DELETED
@@ -1,36 +0,0 @@
-import logging
-logging.getLogger('numba').setLevel(logging.WARNING)
-import IPython.display as ipd
-import torch
-import commons
-import utils
-import ONNXVITS_infer
-from text import text_to_sequence
-
-def get_text(text, hps):
-    text_norm = text_to_sequence(text, hps.symbols, hps.data.text_cleaners)
-    if hps.data.add_blank:
-        text_norm = commons.intersperse(text_norm, 0)
-    text_norm = torch.LongTensor(text_norm)
-    return text_norm
-
-hps = utils.get_hparams_from_file("../vits/pretrained_models/uma87.json")
-
-net_g = ONNXVITS_infer.SynthesizerTrn(
-    len(hps.symbols),
-    hps.data.filter_length // 2 + 1,
-    hps.train.segment_size // hps.data.hop_length,
-    n_speakers=hps.data.n_speakers,
-    **hps.model)
-_ = net_g.eval()
-
-_ = utils.load_checkpoint("../vits/pretrained_models/uma_1153000.pth", net_g)
-
-text1 = get_text("おはようございます。", hps)
-stn_tst = text1
-with torch.no_grad():
-    x_tst = stn_tst.unsqueeze(0)
-    x_tst_lengths = torch.LongTensor([stn_tst.size(0)])
-    sid = torch.LongTensor([0])
-    audio = net_g.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=.667, noise_scale_w=0.8, length_scale=1)[0][0,0].data.cpu().float().numpy()
-print(audio)
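`commons.intersperse`, imported above but defined elsewhere in the VITS codebase, is used when `add_blank` is set: it interleaves a blank token id (here `0`) between every symbol id, which the model was trained to expect. A minimal sketch under that assumption:

```python
def intersperse(lst, item):
    """Interleave `item` between (and around) the elements of `lst`.

    Sketch of the behavior VITS' commons.intersperse is assumed to have:
    [a, b, c] -> [item, a, item, b, item, c, item]
    """
    result = [item] * (len(lst) * 2 + 1)
    result[1::2] = lst  # drop the originals into the odd slots
    return result

print(intersperse([5, 9, 7], 0))  # [0, 5, 0, 9, 0, 7, 0]
```

The result is then wrapped in a `torch.LongTensor` by `get_text` before being fed to `net_g.infer`.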
spaces/AlignmentResearch/tuned-lens/Dockerfile
DELETED
@@ -1,25 +0,0 @@
-FROM python:3.9
-
-WORKDIR /code
-
-COPY ./requirements.txt /code/requirements.txt
-
-RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
-
-# Set up a new user named "user" with user ID 1000
-RUN useradd -m -u 1000 user
-
-# Switch to the "user" user
-USER user
-
-# Set home to the user's home directory
-ENV HOME=/home/user \
-    PATH=/home/user/.local/bin:$PATH
-
-# Set the working directory to the user's home directory
-WORKDIR $HOME/app
-
-# Copy the current directory contents into the container at $HOME/app setting the owner to the user
-COPY --chown=user . $HOME/app
-
-CMD ["python", "app.py"]
spaces/Ameaou/academic-chatgpt3.1/check_proxy.py
DELETED
@@ -1,149 +0,0 @@
-
-def check_proxy(proxies):
-    import requests
-    proxies_https = proxies['https'] if proxies is not None else '无'
-    try:
-        response = requests.get("https://ipapi.co/json/",
-                                proxies=proxies, timeout=4)
-        data = response.json()
-        print(f'查询代理的地理位置,返回的结果是{data}')
-        if 'country_name' in data:
-            country = data['country_name']
-            result = f"代理配置 {proxies_https}, 代理所在地:{country}"
-        elif 'error' in data:
-            result = f"代理配置 {proxies_https}, 代理所在地:未知,IP查询频率受限"
-        print(result)
-        return result
-    except:
-        result = f"代理配置 {proxies_https}, 代理所在地查询超时,代理可能无效"
-        print(result)
-        return result
-
-
-def backup_and_download(current_version, remote_version):
-    """
-    One-click update protocol: back up and download
-    """
-    from toolbox import get_conf
-    import shutil
-    import os
-    import requests
-    import zipfile
-    os.makedirs(f'./history', exist_ok=True)
-    backup_dir = f'./history/backup-{current_version}/'
-    new_version_dir = f'./history/new-version-{remote_version}/'
-    if os.path.exists(new_version_dir):
-        return new_version_dir
-    os.makedirs(new_version_dir)
-    shutil.copytree('./', backup_dir, ignore=lambda x, y: ['history'])
-    proxies, = get_conf('proxies')
-    r = requests.get(
-        'https://github.com/binary-husky/chatgpt_academic/archive/refs/heads/master.zip', proxies=proxies, stream=True)
-    zip_file_path = backup_dir+'/master.zip'
-    with open(zip_file_path, 'wb+') as f:
-        f.write(r.content)
-    dst_path = new_version_dir
-    with zipfile.ZipFile(zip_file_path, "r") as zip_ref:
-        for zip_info in zip_ref.infolist():
-            dst_file_path = os.path.join(dst_path, zip_info.filename)
-            if os.path.exists(dst_file_path):
-                os.remove(dst_file_path)
-            zip_ref.extract(zip_info, dst_path)
-    return new_version_dir
-
-
-def patch_and_restart(path):
-    """
-    One-click update protocol: overwrite and restart
-    """
-    import distutils
-    import shutil
-    import os
-    import sys
-    import time
-    from colorful import print亮黄, print亮绿, print亮红
-    # if not using config_private, move origin config.py as config_private.py
-    if not os.path.exists('config_private.py'):
-        print亮黄('由于您没有设置config_private.py私密配置,现将您的现有配置移动至config_private.py以防止配置丢失,',
-              '另外您可以随时在history子文件夹下找回旧版的程序。')
-        shutil.copyfile('config.py', 'config_private.py')
-    distutils.dir_util.copy_tree(path+'/chatgpt_academic-master', './')
-    import subprocess
-    print亮绿('代码已经更新,即将更新pip包依赖……')
-    for i in reversed(range(5)): time.sleep(1); print(i)
-    try:
-        subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-r', 'requirements.txt'])
-    except:
-        print亮红('pip包依赖安装出现问题,需要手动安装新增的依赖库 `python -m pip install -r requirements.txt`,然后在用常规的`python main.py`的方式启动。')
-    print亮绿('更新完成,您可以随时在history子文件夹下找回旧版的程序,5s之后重启')
-    print亮红('假如重启失败,您可能需要手动安装新增的依赖库 `python -m pip install -r requirements.txt`,然后在用常规的`python main.py`的方式启动。')
-    print(' ------------------------------ -----------------------------------')
-    for i in reversed(range(8)): time.sleep(1); print(i)
-    os.execl(sys.executable, sys.executable, *sys.argv)
-
-
-def get_current_version():
-    import json
-    try:
-        with open('./version', 'r', encoding='utf8') as f:
-            current_version = json.loads(f.read())['version']
-    except:
-        current_version = ""
-    return current_version
-
-
-def auto_update():
-    """
-    One-click update protocol: check the remote version and ask the user
-    """
-    try:
-        from toolbox import get_conf
-        import requests
-        import time
-        import json
-        proxies, = get_conf('proxies')
-        response = requests.get(
-            "https://raw.githubusercontent.com/binary-husky/chatgpt_academic/master/version", proxies=proxies, timeout=5)
-        remote_json_data = json.loads(response.text)
-        remote_version = remote_json_data['version']
-        if remote_json_data["show_feature"]:
-            new_feature = "新功能:" + remote_json_data["new_feature"]
-        else:
-            new_feature = ""
-        with open('./version', 'r', encoding='utf8') as f:
-            current_version = f.read()
-            current_version = json.loads(current_version)['version']
-        if (remote_version - current_version) >= 0.01:
-            from colorful import print亮黄
-            print亮黄(
-                f'\n新版本可用。新版本:{remote_version},当前版本:{current_version}。{new_feature}')
-            print('(1)Github更新地址:\nhttps://github.com/binary-husky/chatgpt_academic\n')
-            user_instruction = input('(2)是否一键更新代码(Y+回车=确认,输入其他/无输入+回车=不更新)?')
-            if user_instruction in ['Y', 'y']:
-                path = backup_and_download(current_version, remote_version)
-                try:
-                    patch_and_restart(path)
-                except:
-                    print('更新失败。')
-            else:
-                print('自动更新程序:已禁用')
-                return
-        else:
-            return
-    except:
-        print('自动更新程序:已禁用')
-
-def warm_up_modules():
-    print('正在执行一些模块的预热...')
-    from request_llm.bridge_all import model_info
-    enc = model_info["gpt-3.5-turbo"]['tokenizer']
-    enc.encode("模块预热", disallowed_special=())
-    enc = model_info["gpt-4"]['tokenizer']
-    enc.encode("模块预热", disallowed_special=())
-
-if __name__ == '__main__':
-    import os
-    os.environ['no_proxy'] = '*'  # avoid unexpected pollution from proxy networks
-    from toolbox import get_conf
-    proxies, = get_conf('proxies')
-    check_proxy(proxies)
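`auto_update` above stores the version as a plain float (read from a JSON `./version` file) and only triggers when the remote is at least 0.01 ahead. A minimal sketch of that comparison, mirroring the script's logic; the function name here is illustrative, not from the original file:

```python
import json

def needs_update(current_version: float, remote_version: float) -> bool:
    # Same test auto_update() applies above; the 0.01 margin also
    # absorbs small float-representation noise in the subtraction.
    return (remote_version - current_version) >= 0.01

# e.g. a ./version file containing {"version": 3.2}
remote = json.loads('{"version": 3.2}')['version']
print(needs_update(3.1, remote))  # True
```

Float-encoded versions only work while releases stay single-dotted (3.1, 3.2, ...); a scheme like 3.10 vs 3.9 would compare the wrong way, which is a known limitation of this style of check.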
spaces/Andy1621/uniformer_image_detection/README.md
DELETED
@@ -1,13 +0,0 @@
----
-title: Uniformer_image_detection
-emoji: 🌍
-colorFrom: indigo
-colorTo: red
-sdk: gradio
-sdk_version: 3.0.4
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
spaces/Andy1621/uniformer_image_detection/configs/rpn/rpn_r50_fpn_1x_coco.py
DELETED
@@ -1,18 +0,0 @@
-_base_ = [
-    '../_base_/models/rpn_r50_fpn.py', '../_base_/datasets/coco_detection.py',
-    '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-img_norm_cfg = dict(
-    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
-train_pipeline = [
-    dict(type='LoadImageFromFile'),
-    dict(type='LoadAnnotations', with_bbox=True, with_label=False),
-    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
-    dict(type='RandomFlip', flip_ratio=0.5),
-    dict(type='Normalize', **img_norm_cfg),
-    dict(type='Pad', size_divisor=32),
-    dict(type='DefaultFormatBundle'),
-    dict(type='Collect', keys=['img', 'gt_bboxes']),
-]
-data = dict(train=dict(pipeline=train_pipeline))
-evaluation = dict(interval=1, metric='proposal_fast')
spaces/Andy1621/uniformer_image_segmentation/configs/psanet/psanet_r50-d8_512x1024_80k_cityscapes.py
DELETED
@@ -1,4 +0,0 @@
-_base_ = [
-    '../_base_/models/psanet_r50-d8.py', '../_base_/datasets/cityscapes.py',
-    '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py'
-]
spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r18b-d8_769x769_80k_cityscapes.py
DELETED
@@ -1,9 +0,0 @@
-_base_ = './pspnet_r50-d8_769x769_80k_cityscapes.py'
-model = dict(
-    pretrained='torchvision://resnet18',
-    backbone=dict(type='ResNet', depth=18),
-    decode_head=dict(
-        in_channels=512,
-        channels=128,
-    ),
-    auxiliary_head=dict(in_channels=256, channels=64))
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/api-examples/api-example-stream.py
DELETED
@@ -1,86 +0,0 @@
-import asyncio
-import json
-import sys
-
-try:
-    import websockets
-except ImportError:
-    print("Websockets package not found. Make sure it's installed.")
-
-# For local streaming, the websockets are hosted without ssl - ws://
-HOST = 'localhost:5005'
-URI = f'ws://{HOST}/api/v1/stream'
-
-# For reverse-proxied streaming, the remote will likely host with ssl - wss://
-# URI = 'wss://your-uri-here.trycloudflare.com/api/v1/stream'
-
-
-async def run(context):
-    # Note: the selected defaults change from time to time.
-    request = {
-        'prompt': context,
-        'max_new_tokens': 250,
-        'auto_max_new_tokens': False,
-        'max_tokens_second': 0,
-
-        # Generation params. If 'preset' is set to different than 'None', the values
-        # in presets/preset-name.yaml are used instead of the individual numbers.
-        'preset': 'None',
-        'do_sample': True,
-        'temperature': 0.7,
-        'top_p': 0.1,
-        'typical_p': 1,
-        'epsilon_cutoff': 0,  # In units of 1e-4
-        'eta_cutoff': 0,  # In units of 1e-4
-        'tfs': 1,
-        'top_a': 0,
-        'repetition_penalty': 1.18,
-        'repetition_penalty_range': 0,
-        'top_k': 40,
-        'min_length': 0,
-        'no_repeat_ngram_size': 0,
-        'num_beams': 1,
-        'penalty_alpha': 0,
-        'length_penalty': 1,
-        'early_stopping': False,
-        'mirostat_mode': 0,
-        'mirostat_tau': 5,
-        'mirostat_eta': 0.1,
-        'grammar_string': '',
-        'guidance_scale': 1,
-        'negative_prompt': '',
-
-        'seed': -1,
-        'add_bos_token': True,
-        'truncation_length': 2048,
-        'ban_eos_token': False,
-        'custom_token_bans': '',
-        'skip_special_tokens': True,
-        'stopping_strings': []
-    }
-
-    async with websockets.connect(URI, ping_interval=None) as websocket:
-        await websocket.send(json.dumps(request))
-
-        yield context  # Remove this if you just want to see the reply
-
-        while True:
-            incoming_data = await websocket.recv()
-            incoming_data = json.loads(incoming_data)
-
-            match incoming_data['event']:
-                case 'text_stream':
-                    yield incoming_data['text']
-                case 'stream_end':
-                    return
-
-
-async def print_response_stream(prompt):
-    async for response in run(prompt):
-        print(response, end='')
-        sys.stdout.flush()  # If we don't flush, we won't see tokens in realtime.
-
-
-if __name__ == '__main__':
-    prompt = "In order to make homemade bread, follow these steps:\n1)"
-    asyncio.run(print_response_stream(prompt))
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/network/session.py
DELETED
@@ -1,517 +0,0 @@
"""PipSession and supporting code, containing all pip-specific
network request configuration and behavior.
"""

import email.utils
import io
import ipaddress
import json
import logging
import mimetypes
import os
import platform
import shutil
import subprocess
import sys
import urllib.parse
import warnings
from typing import (
    TYPE_CHECKING,
    Any,
    Dict,
    Generator,
    List,
    Mapping,
    Optional,
    Sequence,
    Tuple,
    Union,
)

from pip._vendor import requests, urllib3
from pip._vendor.cachecontrol import CacheControlAdapter as _BaseCacheControlAdapter
from pip._vendor.requests.adapters import DEFAULT_POOLBLOCK, BaseAdapter
from pip._vendor.requests.adapters import HTTPAdapter as _BaseHTTPAdapter
from pip._vendor.requests.models import PreparedRequest, Response
from pip._vendor.requests.structures import CaseInsensitiveDict
from pip._vendor.urllib3.connectionpool import ConnectionPool
from pip._vendor.urllib3.exceptions import InsecureRequestWarning

from pip import __version__
from pip._internal.metadata import get_default_environment
from pip._internal.models.link import Link
from pip._internal.network.auth import MultiDomainBasicAuth
from pip._internal.network.cache import SafeFileCache

# Import ssl from compat so the initial import occurs in only one place.
from pip._internal.utils.compat import has_tls
from pip._internal.utils.glibc import libc_ver
from pip._internal.utils.misc import build_url_from_netloc, parse_netloc
from pip._internal.utils.urls import url_to_path

if TYPE_CHECKING:
    from ssl import SSLContext

    from pip._vendor.urllib3.poolmanager import PoolManager


logger = logging.getLogger(__name__)

SecureOrigin = Tuple[str, str, Optional[Union[int, str]]]


# Ignore warning raised when using --trusted-host.
warnings.filterwarnings("ignore", category=InsecureRequestWarning)


SECURE_ORIGINS: List[SecureOrigin] = [
    # protocol, hostname, port
    # Taken from Chrome's list of secure origins (See: http://bit.ly/1qrySKC)
    ("https", "*", "*"),
    ("*", "localhost", "*"),
    ("*", "127.0.0.0/8", "*"),
    ("*", "::1/128", "*"),
    ("file", "*", None),
    # ssh is always secure.
    ("ssh", "*", "*"),
]


# These are environment variables present when running under various
# CI systems.  For each variable, some CI systems that use the variable
# are indicated.  The collection was chosen so that for each of a number
# of popular systems, at least one of the environment variables is used.
# This list is used to provide some indication of and lower bound for
# CI traffic to PyPI.  Thus, it is okay if the list is not comprehensive.
# For more background, see: https://github.com/pypa/pip/issues/5499
CI_ENVIRONMENT_VARIABLES = (
    # Azure Pipelines
    "BUILD_BUILDID",
    # Jenkins
    "BUILD_ID",
    # AppVeyor, CircleCI, Codeship, Gitlab CI, Shippable, Travis CI
    "CI",
    # Explicit environment variable.
    "PIP_IS_CI",
)


def looks_like_ci() -> bool:
    """
    Return whether it looks like pip is running under CI.
    """
    # We don't use the method of checking for a tty (e.g. using isatty())
    # because some CI systems mimic a tty (e.g. Travis CI).  Thus that
    # method doesn't provide definitive information in either direction.
    return any(name in os.environ for name in CI_ENVIRONMENT_VARIABLES)


def user_agent() -> str:
    """
    Return a string representing the user agent.
    """
    data: Dict[str, Any] = {
        "installer": {"name": "pip", "version": __version__},
        "python": platform.python_version(),
        "implementation": {
            "name": platform.python_implementation(),
        },
    }

    if data["implementation"]["name"] == "CPython":
        data["implementation"]["version"] = platform.python_version()
    elif data["implementation"]["name"] == "PyPy":
        pypy_version_info = sys.pypy_version_info  # type: ignore
        if pypy_version_info.releaselevel == "final":
            pypy_version_info = pypy_version_info[:3]
        data["implementation"]["version"] = ".".join(
            [str(x) for x in pypy_version_info]
        )
    elif data["implementation"]["name"] == "Jython":
        # Complete Guess
        data["implementation"]["version"] = platform.python_version()
    elif data["implementation"]["name"] == "IronPython":
        # Complete Guess
        data["implementation"]["version"] = platform.python_version()

    if sys.platform.startswith("linux"):
        from pip._vendor import distro

        linux_distribution = distro.name(), distro.version(), distro.codename()
        distro_infos: Dict[str, Any] = dict(
            filter(
                lambda x: x[1],
                zip(["name", "version", "id"], linux_distribution),
            )
        )
        libc = dict(
            filter(
                lambda x: x[1],
                zip(["lib", "version"], libc_ver()),
            )
        )
        if libc:
            distro_infos["libc"] = libc
        if distro_infos:
            data["distro"] = distro_infos

    if sys.platform.startswith("darwin") and platform.mac_ver()[0]:
        data["distro"] = {"name": "macOS", "version": platform.mac_ver()[0]}

    if platform.system():
        data.setdefault("system", {})["name"] = platform.system()

    if platform.release():
        data.setdefault("system", {})["release"] = platform.release()

    if platform.machine():
        data["cpu"] = platform.machine()

    if has_tls():
        import _ssl as ssl

        data["openssl_version"] = ssl.OPENSSL_VERSION

    setuptools_dist = get_default_environment().get_distribution("setuptools")
    if setuptools_dist is not None:
        data["setuptools_version"] = str(setuptools_dist.version)

    if shutil.which("rustc") is not None:
        # If for any reason `rustc --version` fails, silently ignore it
        try:
            rustc_output = subprocess.check_output(
                ["rustc", "--version"], stderr=subprocess.STDOUT, timeout=0.5
            )
        except Exception:
            pass
        else:
            if rustc_output.startswith(b"rustc "):
                # The format of `rustc --version` is:
                # `b'rustc 1.52.1 (9bc8c42bb 2021-05-09)\n'`
                # We extract just the middle (1.52.1) part
                data["rustc_version"] = rustc_output.split(b" ")[1].decode()

    # Use None rather than False so as not to give the impression that
    # pip knows it is not being run under CI.  Rather, it is a null or
    # inconclusive result.  Also, we include some value rather than no
    # value to make it easier to know that the check has been run.
    data["ci"] = True if looks_like_ci() else None

    user_data = os.environ.get("PIP_USER_AGENT_USER_DATA")
    if user_data is not None:
        data["user_data"] = user_data

    return "{data[installer][name]}/{data[installer][version]} {json}".format(
        data=data,
        json=json.dumps(data, separators=(",", ":"), sort_keys=True),
    )


class LocalFSAdapter(BaseAdapter):
    def send(
        self,
        request: PreparedRequest,
        stream: bool = False,
        timeout: Optional[Union[float, Tuple[float, float]]] = None,
        verify: Union[bool, str] = True,
        cert: Optional[Union[str, Tuple[str, str]]] = None,
        proxies: Optional[Mapping[str, str]] = None,
    ) -> Response:
        pathname = url_to_path(request.url)

        resp = Response()
        resp.status_code = 200
        resp.url = request.url

        try:
            stats = os.stat(pathname)
        except OSError as exc:
            # format the exception raised as a io.BytesIO object,
            # to return a better error message:
            resp.status_code = 404
            resp.reason = type(exc).__name__
            resp.raw = io.BytesIO(f"{resp.reason}: {exc}".encode("utf8"))
        else:
            modified = email.utils.formatdate(stats.st_mtime, usegmt=True)
            content_type = mimetypes.guess_type(pathname)[0] or "text/plain"
            resp.headers = CaseInsensitiveDict(
                {
                    "Content-Type": content_type,
                    "Content-Length": stats.st_size,
                    "Last-Modified": modified,
                }
            )

            resp.raw = open(pathname, "rb")
            resp.close = resp.raw.close

        return resp

    def close(self) -> None:
        pass


class _SSLContextAdapterMixin:
    """Mixin to add the ``ssl_context`` constructor argument to HTTP adapters.

    The additional argument is forwarded directly to the pool manager. This allows us
    to dynamically decide what SSL store to use at runtime, which is used to implement
    the optional ``truststore`` backend.
    """

    def __init__(
        self,
        *,
        ssl_context: Optional["SSLContext"] = None,
        **kwargs: Any,
    ) -> None:
        self._ssl_context = ssl_context
        super().__init__(**kwargs)

    def init_poolmanager(
        self,
        connections: int,
        maxsize: int,
        block: bool = DEFAULT_POOLBLOCK,
        **pool_kwargs: Any,
    ) -> "PoolManager":
        if self._ssl_context is not None:
            pool_kwargs.setdefault("ssl_context", self._ssl_context)
        return super().init_poolmanager(  # type: ignore[misc]
            connections=connections,
            maxsize=maxsize,
            block=block,
            **pool_kwargs,
        )


class HTTPAdapter(_SSLContextAdapterMixin, _BaseHTTPAdapter):
    pass


class CacheControlAdapter(_SSLContextAdapterMixin, _BaseCacheControlAdapter):
    pass


class InsecureHTTPAdapter(HTTPAdapter):
    def cert_verify(
        self,
        conn: ConnectionPool,
        url: str,
        verify: Union[bool, str],
        cert: Optional[Union[str, Tuple[str, str]]],
    ) -> None:
        super().cert_verify(conn=conn, url=url, verify=False, cert=cert)


class InsecureCacheControlAdapter(CacheControlAdapter):
    def cert_verify(
        self,
        conn: ConnectionPool,
        url: str,
        verify: Union[bool, str],
        cert: Optional[Union[str, Tuple[str, str]]],
    ) -> None:
        super().cert_verify(conn=conn, url=url, verify=False, cert=cert)


class PipSession(requests.Session):
    timeout: Optional[int] = None

    def __init__(
        self,
        *args: Any,
        retries: int = 0,
        cache: Optional[str] = None,
        trusted_hosts: Sequence[str] = (),
        index_urls: Optional[List[str]] = None,
        ssl_context: Optional["SSLContext"] = None,
        **kwargs: Any,
    ) -> None:
        """
        :param trusted_hosts: Domains not to emit warnings for when not using
            HTTPS.
        """
        super().__init__(*args, **kwargs)

        # Namespace the attribute with "pip_" just in case to prevent
        # possible conflicts with the base class.
        self.pip_trusted_origins: List[Tuple[str, Optional[int]]] = []

        # Attach our User Agent to the request
        self.headers["User-Agent"] = user_agent()

        # Attach our Authentication handler to the session
        self.auth = MultiDomainBasicAuth(index_urls=index_urls)

        # Create our urllib3.Retry instance which will allow us to customize
        # how we handle retries.
        retries = urllib3.Retry(
            # Set the total number of retries that a particular request can
            # have.
            total=retries,
            # A 503 error from PyPI typically means that the Fastly -> Origin
            # connection got interrupted in some way. A 503 error in general
            # is typically considered a transient error so we'll go ahead and
            # retry it.
            # A 500 may indicate transient error in Amazon S3
            # A 520 or 527 - may indicate transient error in CloudFlare
            status_forcelist=[500, 503, 520, 527],
            # Add a small amount of back off between failed requests in
            # order to prevent hammering the service.
            backoff_factor=0.25,
        )  # type: ignore

        # Our Insecure HTTPAdapter disables HTTPS validation. It does not
        # support caching so we'll use it for all http:// URLs.
        # If caching is disabled, we will also use it for
        # https:// hosts that we've marked as ignoring
        # TLS errors for (trusted-hosts).
        insecure_adapter = InsecureHTTPAdapter(max_retries=retries)

        # We want to _only_ cache responses on securely fetched origins or when
        # the host is specified as trusted. We do this because
        # we can't validate the response of an insecurely/untrusted fetched
        # origin, and we don't want someone to be able to poison the cache and
        # require manual eviction from the cache to fix it.
        if cache:
            secure_adapter = CacheControlAdapter(
                cache=SafeFileCache(cache),
                max_retries=retries,
                ssl_context=ssl_context,
            )
            self._trusted_host_adapter = InsecureCacheControlAdapter(
                cache=SafeFileCache(cache),
                max_retries=retries,
            )
        else:
            secure_adapter = HTTPAdapter(max_retries=retries, ssl_context=ssl_context)
            self._trusted_host_adapter = insecure_adapter

        self.mount("https://", secure_adapter)
        self.mount("http://", insecure_adapter)

        # Enable file:// urls
        self.mount("file://", LocalFSAdapter())

        for host in trusted_hosts:
            self.add_trusted_host(host, suppress_logging=True)

    def update_index_urls(self, new_index_urls: List[str]) -> None:
        """
        :param new_index_urls: New index urls to update the authentication
            handler with.
        """
        self.auth.index_urls = new_index_urls

    def add_trusted_host(
        self, host: str, source: Optional[str] = None, suppress_logging: bool = False
    ) -> None:
        """
        :param host: It is okay to provide a host that has previously been
            added.
        :param source: An optional source string, for logging where the host
            string came from.
        """
        if not suppress_logging:
            msg = f"adding trusted host: {host!r}"
            if source is not None:
                msg += f" (from {source})"
            logger.info(msg)

        host_port = parse_netloc(host)
        if host_port not in self.pip_trusted_origins:
            self.pip_trusted_origins.append(host_port)

        self.mount(
            build_url_from_netloc(host, scheme="http") + "/", self._trusted_host_adapter
        )
        self.mount(build_url_from_netloc(host) + "/", self._trusted_host_adapter)
        if not host_port[1]:
            self.mount(
                build_url_from_netloc(host, scheme="http") + ":",
                self._trusted_host_adapter,
            )
            # Mount wildcard ports for the same host.
            self.mount(build_url_from_netloc(host) + ":", self._trusted_host_adapter)

    def iter_secure_origins(self) -> Generator[SecureOrigin, None, None]:
        yield from SECURE_ORIGINS
        for host, port in self.pip_trusted_origins:
            yield ("*", host, "*" if port is None else port)

    def is_secure_origin(self, location: Link) -> bool:
        # Determine if this url used a secure transport mechanism
        parsed = urllib.parse.urlparse(str(location))
        origin_protocol, origin_host, origin_port = (
            parsed.scheme,
            parsed.hostname,
            parsed.port,
        )

        # The protocol to use to see if the protocol matches.
        # Don't count the repository type as part of the protocol: in
        # cases such as "git+ssh", only use "ssh". (I.e., Only verify against
        # the last scheme.)
        origin_protocol = origin_protocol.rsplit("+", 1)[-1]

        # Determine if our origin is a secure origin by looking through our
        # hardcoded list of secure origins, as well as any additional ones
        # configured on this PackageFinder instance.
        for secure_origin in self.iter_secure_origins():
            secure_protocol, secure_host, secure_port = secure_origin
            if origin_protocol != secure_protocol and secure_protocol != "*":
                continue

            try:
                addr = ipaddress.ip_address(origin_host or "")
                network = ipaddress.ip_network(secure_host)
            except ValueError:
                # We don't have both a valid address or a valid network, so
                # we'll check this origin against hostnames.
                if (
                    origin_host
                    and origin_host.lower() != secure_host.lower()
                    and secure_host != "*"
                ):
                    continue
            else:
                # We have a valid address and network, so see if the address
                # is contained within the network.
                if addr not in network:
                    continue

            # Check to see if the port matches.
            if (
                origin_port != secure_port
                and secure_port != "*"
                and secure_port is not None
            ):
                continue

            # If we've gotten here, then this origin matches the current
            # secure origin and we should return True
            return True

        # If we've gotten to this point, then the origin isn't secure and we
        # will not accept it as a valid location to search. We will however
        # log a warning that we are ignoring it.
        logger.warning(
            "The repository located at %s is not a trusted or secure host and "
            "is being ignored. If this repository is available via HTTPS we "
            "recommend you use HTTPS instead, otherwise you may silence "
            "this warning and allow it anyway with '--trusted-host %s'.",
            origin_host,
            origin_host,
        )

        return False

    def request(self, method: str, url: str, *args: Any, **kwargs: Any) -> Response:
        # Allow setting a default timeout on a session
        kwargs.setdefault("timeout", self.timeout)
        # Allow setting a default proxies on a session
        kwargs.setdefault("proxies", self.proxies)

        # Dispatch the actual request
        return super().request(method, url, *args, **kwargs)
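The core of `is_secure_origin` above is a wildcard match of one `(protocol, host, port)` origin against a secure-origin pattern, where `"*"` matches anything and the pattern host may be a CIDR network. The standalone function below is a simplified sketch of that matching step (not pip's code; it omits the scheme-splitting and logging), using only the standard-library `ipaddress` module:

```python
import ipaddress

def origin_matches(origin, secure):
    """Simplified version of PipSession.is_secure_origin's inner check:
    compare one (protocol, host, port) origin against one secure-origin
    pattern; '*' is a wildcard and the pattern host may be a CIDR network."""
    proto, host, port = origin
    s_proto, s_host, s_port = secure
    if proto != s_proto and s_proto != "*":
        return False
    try:
        addr = ipaddress.ip_address(host or "")
        network = ipaddress.ip_network(s_host)
    except ValueError:
        # Not an address-vs-network comparison; fall back to hostnames.
        if host and host.lower() != s_host.lower() and s_host != "*":
            return False
    else:
        if addr not in network:
            return False
    if port != s_port and s_port != "*" and s_port is not None:
        return False
    return True
```

Note the same asymmetry as in pip: a `ValueError` from either `ip_address` or `ip_network` sends both sides down the hostname path, which is why `("*", "127.0.0.0/8", "*")` matches loopback addresses but a literal hostname only matches itself or `"*"`.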
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/cachecontrol/caches/redis_cache.py
DELETED
@@ -1,39 +0,0 @@
# SPDX-FileCopyrightText: 2015 Eric Larson
#
# SPDX-License-Identifier: Apache-2.0

from __future__ import division

from datetime import datetime

from pip._vendor.cachecontrol.cache import BaseCache


class RedisCache(BaseCache):

    def __init__(self, conn):
        self.conn = conn

    def get(self, key):
        return self.conn.get(key)

    def set(self, key, value, expires=None):
        if not expires:
            self.conn.set(key, value)
        elif isinstance(expires, datetime):
            expires = expires - datetime.utcnow()
            self.conn.setex(key, int(expires.total_seconds()), value)
        else:
            self.conn.setex(key, expires, value)

    def delete(self, key):
        self.conn.delete(key)

    def clear(self):
        """Helper for clearing all the keys in a database. Use with
        caution!"""
        for key in self.conn.keys():
            self.conn.delete(key)

    def close(self):
        """Redis uses connection pooling, no need to close the connection."""
        pass
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/results.py
DELETED
@@ -1,760 +0,0 @@
# results.py
from collections.abc import MutableMapping, Mapping, MutableSequence, Iterator
import pprint
from weakref import ref as wkref
from typing import Tuple, Any

str_type: Tuple[type, ...] = (str, bytes)
_generator_type = type((_ for _ in ()))


class _ParseResultsWithOffset:
    __slots__ = ["tup"]

    def __init__(self, p1, p2):
        self.tup = (p1, p2)

    def __getitem__(self, i):
        return self.tup[i]

    def __getstate__(self):
        return self.tup

    def __setstate__(self, *args):
        self.tup = args[0]


class ParseResults:
    """Structured parse results, to provide multiple means of access to
    the parsed data:

    - as a list (``len(results)``)
    - by list index (``results[0], results[1]``, etc.)
    - by attribute (``results.<results_name>`` - see :class:`ParserElement.set_results_name`)

    Example::

        integer = Word(nums)
        date_str = (integer.set_results_name("year") + '/'
                    + integer.set_results_name("month") + '/'
                    + integer.set_results_name("day"))
        # equivalent form:
        # date_str = (integer("year") + '/'
        #             + integer("month") + '/'
        #             + integer("day"))

        # parse_string returns a ParseResults object
        result = date_str.parse_string("1999/12/31")

        def test(s, fn=repr):
            print("{} -> {}".format(s, fn(eval(s))))
        test("list(result)")
        test("result[0]")
        test("result['month']")
        test("result.day")
        test("'month' in result")
        test("'minutes' in result")
        test("result.dump()", str)

    prints::

        list(result) -> ['1999', '/', '12', '/', '31']
        result[0] -> '1999'
        result['month'] -> '12'
        result.day -> '31'
        'month' in result -> True
        'minutes' in result -> False
        result.dump() -> ['1999', '/', '12', '/', '31']
        - day: '31'
        - month: '12'
        - year: '1999'
    """

    _null_values: Tuple[Any, ...] = (None, [], "", ())

    __slots__ = [
        "_name",
        "_parent",
        "_all_names",
        "_modal",
        "_toklist",
        "_tokdict",
        "__weakref__",
    ]

    class List(list):
        """
        Simple wrapper class to distinguish parsed list results that should be preserved
        as actual Python lists, instead of being converted to :class:`ParseResults`:

            LBRACK, RBRACK = map(pp.Suppress, "[]")
            element = pp.Forward()
            item = ppc.integer
            element_list = LBRACK + pp.delimited_list(element) + RBRACK

            # add parse actions to convert from ParseResults to actual Python collection types
            def as_python_list(t):
                return pp.ParseResults.List(t.as_list())
            element_list.add_parse_action(as_python_list)

            element <<= item | element_list

            element.run_tests('''
                100
                [2,3,4]
                [[2, 1],3,4]
                [(2, 1),3,4]
                (2,3,4)
                ''', post_parse=lambda s, r: (r[0], type(r[0])))

        prints:

            100
            (100, <class 'int'>)

            [2,3,4]
            ([2, 3, 4], <class 'list'>)

            [[2, 1],3,4]
            ([[2, 1], 3, 4], <class 'list'>)

        (Used internally by :class:`Group` when `aslist=True`.)
        """

        def __new__(cls, contained=None):
            if contained is None:
                contained = []

            if not isinstance(contained, list):
                raise TypeError(
                    "{} may only be constructed with a list,"
                    " not {}".format(cls.__name__, type(contained).__name__)
                )

            return list.__new__(cls)

    def __new__(cls, toklist=None, name=None, **kwargs):
        if isinstance(toklist, ParseResults):
            return toklist
        self = object.__new__(cls)
        self._name = None
        self._parent = None
        self._all_names = set()

        if toklist is None:
            self._toklist = []
        elif isinstance(toklist, (list, _generator_type)):
            self._toklist = (
                [toklist[:]]
                if isinstance(toklist, ParseResults.List)
                else list(toklist)
            )
        else:
            self._toklist = [toklist]
        self._tokdict = dict()
        return self

    # Performance tuning: we construct a *lot* of these, so keep this
    # constructor as small and fast as possible
    def __init__(
        self, toklist=None, name=None, asList=True, modal=True, isinstance=isinstance
    ):
        self._modal = modal
        if name is not None and name != "":
            if isinstance(name, int):
                name = str(name)
            if not modal:
                self._all_names = {name}
            self._name = name
            if toklist not in self._null_values:
                if isinstance(toklist, (str_type, type)):
                    toklist = [toklist]
                if asList:
                    if isinstance(toklist, ParseResults):
                        self[name] = _ParseResultsWithOffset(
                            ParseResults(toklist._toklist), 0
                        )
                    else:
                        self[name] = _ParseResultsWithOffset(
                            ParseResults(toklist[0]), 0
                        )
                    self[name]._name = name
                else:
                    try:
                        self[name] = toklist[0]
                    except (KeyError, TypeError, IndexError):
                        if toklist is not self:
                            self[name] = toklist
                        else:
                            self._name = name

    def __getitem__(self, i):
        if isinstance(i, (int, slice)):
            return self._toklist[i]
        else:
            if i not in self._all_names:
                return self._tokdict[i][-1][0]
            else:
                return ParseResults([v[0] for v in self._tokdict[i]])

    def __setitem__(self, k, v, isinstance=isinstance):
        if isinstance(v, _ParseResultsWithOffset):
            self._tokdict[k] = self._tokdict.get(k, list()) + [v]
            sub = v[0]
        elif isinstance(k, (int, slice)):
            self._toklist[k] = v
            sub = v
        else:
            self._tokdict[k] = self._tokdict.get(k, list()) + [
                _ParseResultsWithOffset(v, 0)
            ]
            sub = v
        if isinstance(sub, ParseResults):
            sub._parent = wkref(self)

    def __delitem__(self, i):
        if isinstance(i, (int, slice)):
            mylen = len(self._toklist)
            del self._toklist[i]

            # convert int to slice
            if isinstance(i, int):
                if i < 0:
                    i += mylen
                i = slice(i, i + 1)
            # get removed indices
            removed = list(range(*i.indices(mylen)))
            removed.reverse()
            # fixup indices in token dictionary
            for name, occurrences in self._tokdict.items():
                for j in removed:
                    for k, (value, position) in enumerate(occurrences):
                        occurrences[k] = _ParseResultsWithOffset(
                            value, position - (position > j)
                        )
        else:
            del self._tokdict[i]

    def __contains__(self, k) -> bool:
        return k in self._tokdict

    def __len__(self) -> int:
        return len(self._toklist)

    def __bool__(self) -> bool:
        return not not (self._toklist or self._tokdict)

    def __iter__(self) -> Iterator:
        return iter(self._toklist)

    def __reversed__(self) -> Iterator:
        return iter(self._toklist[::-1])

    def keys(self):
        return iter(self._tokdict)

    def values(self):
        return (self[k] for k in self.keys())

    def items(self):
        return ((k, self[k]) for k in self.keys())

    def haskeys(self) -> bool:
        """
        Since ``keys()`` returns an iterator, this method is helpful in bypassing
        code that looks for the existence of any defined results names."""
        return bool(self._tokdict)

    def pop(self, *args, **kwargs):
        """
        Removes and returns item at specified index (default= ``last``).
        Supports both ``list`` and ``dict`` semantics for ``pop()``. If
        passed no argument or an integer argument, it will use ``list``
        semantics and pop tokens from the list of parsed tokens. If passed
        a non-integer argument (most likely a string), it will use ``dict``
        semantics and pop the corresponding value from any defined results
        names. A second default return value argument is supported, just as in
        ``dict.pop()``.

        Example::

            numlist = Word(nums)[...]
            print(numlist.parse_string("0 123 321"))  # -> ['0', '123', '321']

            def remove_first(tokens):
                tokens.pop(0)
            numlist.add_parse_action(remove_first)
            print(numlist.parse_string("0 123 321"))  # -> ['123', '321']

            label = Word(alphas)
            patt = label("LABEL") + Word(nums)[1, ...]
            print(patt.parse_string("AAB 123 321").dump())

            # Use pop() in a parse action to remove named result (note that corresponding value is not
            # removed from list form of results)
            def remove_LABEL(tokens):
|
296 |
-
tokens.pop("LABEL")
|
297 |
-
return tokens
|
298 |
-
patt.add_parse_action(remove_LABEL)
|
299 |
-
print(patt.parse_string("AAB 123 321").dump())
|
300 |
-
|
301 |
-
prints::
|
302 |
-
|
303 |
-
['AAB', '123', '321']
|
304 |
-
- LABEL: 'AAB'
|
305 |
-
|
306 |
-
['AAB', '123', '321']
|
307 |
-
"""
|
308 |
-
if not args:
|
309 |
-
args = [-1]
|
310 |
-
for k, v in kwargs.items():
|
311 |
-
if k == "default":
|
312 |
-
args = (args[0], v)
|
313 |
-
else:
|
314 |
-
raise TypeError(
|
315 |
-
"pop() got an unexpected keyword argument {!r}".format(k)
|
316 |
-
)
|
317 |
-
if isinstance(args[0], int) or len(args) == 1 or args[0] in self:
|
318 |
-
index = args[0]
|
319 |
-
ret = self[index]
|
320 |
-
del self[index]
|
321 |
-
return ret
|
322 |
-
else:
|
323 |
-
defaultvalue = args[1]
|
324 |
-
return defaultvalue
|
325 |
-
|
326 |
-
def get(self, key, default_value=None):
|
327 |
-
"""
|
328 |
-
Returns named result matching the given key, or if there is no
|
329 |
-
such name, then returns the given ``default_value`` or ``None`` if no
|
330 |
-
``default_value`` is specified.
|
331 |
-
|
332 |
-
Similar to ``dict.get()``.
|
333 |
-
|
334 |
-
Example::
|
335 |
-
|
336 |
-
integer = Word(nums)
|
337 |
-
date_str = integer("year") + '/' + integer("month") + '/' + integer("day")
|
338 |
-
|
339 |
-
result = date_str.parse_string("1999/12/31")
|
340 |
-
print(result.get("year")) # -> '1999'
|
341 |
-
print(result.get("hour", "not specified")) # -> 'not specified'
|
342 |
-
print(result.get("hour")) # -> None
|
343 |
-
"""
|
344 |
-
if key in self:
|
345 |
-
return self[key]
|
346 |
-
else:
|
347 |
-
return default_value
|
348 |
-
|
349 |
-
def insert(self, index, ins_string):
|
350 |
-
"""
|
351 |
-
Inserts new element at location index in the list of parsed tokens.
|
352 |
-
|
353 |
-
Similar to ``list.insert()``.
|
354 |
-
|
355 |
-
Example::
|
356 |
-
|
357 |
-
numlist = Word(nums)[...]
|
358 |
-
print(numlist.parse_string("0 123 321")) # -> ['0', '123', '321']
|
359 |
-
|
360 |
-
# use a parse action to insert the parse location in the front of the parsed results
|
361 |
-
def insert_locn(locn, tokens):
|
362 |
-
tokens.insert(0, locn)
|
363 |
-
numlist.add_parse_action(insert_locn)
|
364 |
-
print(numlist.parse_string("0 123 321")) # -> [0, '0', '123', '321']
|
365 |
-
"""
|
366 |
-
self._toklist.insert(index, ins_string)
|
367 |
-
# fixup indices in token dictionary
|
368 |
-
for name, occurrences in self._tokdict.items():
|
369 |
-
for k, (value, position) in enumerate(occurrences):
|
370 |
-
occurrences[k] = _ParseResultsWithOffset(
|
371 |
-
value, position + (position > index)
|
372 |
-
)
|
373 |
-
|
374 |
-
def append(self, item):
|
375 |
-
"""
|
376 |
-
Add single element to end of ``ParseResults`` list of elements.
|
377 |
-
|
378 |
-
Example::
|
379 |
-
|
380 |
-
numlist = Word(nums)[...]
|
381 |
-
print(numlist.parse_string("0 123 321")) # -> ['0', '123', '321']
|
382 |
-
|
383 |
-
# use a parse action to compute the sum of the parsed integers, and add it to the end
|
384 |
-
def append_sum(tokens):
|
385 |
-
tokens.append(sum(map(int, tokens)))
|
386 |
-
numlist.add_parse_action(append_sum)
|
387 |
-
print(numlist.parse_string("0 123 321")) # -> ['0', '123', '321', 444]
|
388 |
-
"""
|
389 |
-
self._toklist.append(item)
|
390 |
-
|
391 |
-
def extend(self, itemseq):
|
392 |
-
"""
|
393 |
-
Add sequence of elements to end of ``ParseResults`` list of elements.
|
394 |
-
|
395 |
-
Example::
|
396 |
-
|
397 |
-
patt = Word(alphas)[1, ...]
|
398 |
-
|
399 |
-
# use a parse action to append the reverse of the matched strings, to make a palindrome
|
400 |
-
def make_palindrome(tokens):
|
401 |
-
tokens.extend(reversed([t[::-1] for t in tokens]))
|
402 |
-
return ''.join(tokens)
|
403 |
-
patt.add_parse_action(make_palindrome)
|
404 |
-
print(patt.parse_string("lskdj sdlkjf lksd")) # -> 'lskdjsdlkjflksddsklfjkldsjdksl'
|
405 |
-
"""
|
406 |
-
if isinstance(itemseq, ParseResults):
|
407 |
-
self.__iadd__(itemseq)
|
408 |
-
else:
|
409 |
-
self._toklist.extend(itemseq)
|
410 |
-
|
411 |
-
def clear(self):
|
412 |
-
"""
|
413 |
-
Clear all elements and results names.
|
414 |
-
"""
|
415 |
-
del self._toklist[:]
|
416 |
-
self._tokdict.clear()
|
417 |
-
|
418 |
-
def __getattr__(self, name):
|
419 |
-
try:
|
420 |
-
return self[name]
|
421 |
-
except KeyError:
|
422 |
-
if name.startswith("__"):
|
423 |
-
raise AttributeError(name)
|
424 |
-
return ""
|
425 |
-
|
426 |
-
def __add__(self, other) -> "ParseResults":
|
427 |
-
ret = self.copy()
|
428 |
-
ret += other
|
429 |
-
return ret
|
430 |
-
|
431 |
-
def __iadd__(self, other) -> "ParseResults":
|
432 |
-
if other._tokdict:
|
433 |
-
offset = len(self._toklist)
|
434 |
-
addoffset = lambda a: offset if a < 0 else a + offset
|
435 |
-
otheritems = other._tokdict.items()
|
436 |
-
otherdictitems = [
|
437 |
-
(k, _ParseResultsWithOffset(v[0], addoffset(v[1])))
|
438 |
-
for k, vlist in otheritems
|
439 |
-
for v in vlist
|
440 |
-
]
|
441 |
-
for k, v in otherdictitems:
|
442 |
-
self[k] = v
|
443 |
-
if isinstance(v[0], ParseResults):
|
444 |
-
v[0]._parent = wkref(self)
|
445 |
-
|
446 |
-
self._toklist += other._toklist
|
447 |
-
self._all_names |= other._all_names
|
448 |
-
return self
|
449 |
-
|
450 |
-
def __radd__(self, other) -> "ParseResults":
|
451 |
-
if isinstance(other, int) and other == 0:
|
452 |
-
# useful for merging many ParseResults using sum() builtin
|
453 |
-
return self.copy()
|
454 |
-
else:
|
455 |
-
# this may raise a TypeError - so be it
|
456 |
-
return other + self
|
457 |
-
|
458 |
-
def __repr__(self) -> str:
|
459 |
-
return "{}({!r}, {})".format(type(self).__name__, self._toklist, self.as_dict())
|
460 |
-
|
461 |
-
def __str__(self) -> str:
|
462 |
-
return (
|
463 |
-
"["
|
464 |
-
+ ", ".join(
|
465 |
-
[
|
466 |
-
str(i) if isinstance(i, ParseResults) else repr(i)
|
467 |
-
for i in self._toklist
|
468 |
-
]
|
469 |
-
)
|
470 |
-
+ "]"
|
471 |
-
)
|
472 |
-
|
473 |
-
def _asStringList(self, sep=""):
|
474 |
-
out = []
|
475 |
-
for item in self._toklist:
|
476 |
-
if out and sep:
|
477 |
-
out.append(sep)
|
478 |
-
if isinstance(item, ParseResults):
|
479 |
-
out += item._asStringList()
|
480 |
-
else:
|
481 |
-
out.append(str(item))
|
482 |
-
return out
|
483 |
-
|
484 |
-
def as_list(self) -> list:
|
485 |
-
"""
|
486 |
-
Returns the parse results as a nested list of matching tokens, all converted to strings.
|
487 |
-
|
488 |
-
Example::
|
489 |
-
|
490 |
-
patt = Word(alphas)[1, ...]
|
491 |
-
result = patt.parse_string("sldkj lsdkj sldkj")
|
492 |
-
# even though the result prints in string-like form, it is actually a pyparsing ParseResults
|
493 |
-
print(type(result), result) # -> <class 'pyparsing.ParseResults'> ['sldkj', 'lsdkj', 'sldkj']
|
494 |
-
|
495 |
-
# Use as_list() to create an actual list
|
496 |
-
result_list = result.as_list()
|
497 |
-
print(type(result_list), result_list) # -> <class 'list'> ['sldkj', 'lsdkj', 'sldkj']
|
498 |
-
"""
|
499 |
-
return [
|
500 |
-
res.as_list() if isinstance(res, ParseResults) else res
|
501 |
-
for res in self._toklist
|
502 |
-
]
|
503 |
-
|
504 |
-
def as_dict(self) -> dict:
|
505 |
-
"""
|
506 |
-
Returns the named parse results as a nested dictionary.
|
507 |
-
|
508 |
-
Example::
|
509 |
-
|
510 |
-
integer = Word(nums)
|
511 |
-
date_str = integer("year") + '/' + integer("month") + '/' + integer("day")
|
512 |
-
|
513 |
-
result = date_str.parse_string('12/31/1999')
|
514 |
-
print(type(result), repr(result)) # -> <class 'pyparsing.ParseResults'> (['12', '/', '31', '/', '1999'], {'day': [('1999', 4)], 'year': [('12', 0)], 'month': [('31', 2)]})
|
515 |
-
|
516 |
-
result_dict = result.as_dict()
|
517 |
-
print(type(result_dict), repr(result_dict)) # -> <class 'dict'> {'day': '1999', 'year': '12', 'month': '31'}
|
518 |
-
|
519 |
-
# even though a ParseResults supports dict-like access, sometime you just need to have a dict
|
520 |
-
import json
|
521 |
-
print(json.dumps(result)) # -> Exception: TypeError: ... is not JSON serializable
|
522 |
-
print(json.dumps(result.as_dict())) # -> {"month": "31", "day": "1999", "year": "12"}
|
523 |
-
"""
|
524 |
-
|
525 |
-
def to_item(obj):
|
526 |
-
if isinstance(obj, ParseResults):
|
527 |
-
return obj.as_dict() if obj.haskeys() else [to_item(v) for v in obj]
|
528 |
-
else:
|
529 |
-
return obj
|
530 |
-
|
531 |
-
return dict((k, to_item(v)) for k, v in self.items())
|
532 |
-
|
533 |
-
def copy(self) -> "ParseResults":
|
534 |
-
"""
|
535 |
-
Returns a new copy of a :class:`ParseResults` object.
|
536 |
-
"""
|
537 |
-
ret = ParseResults(self._toklist)
|
538 |
-
ret._tokdict = self._tokdict.copy()
|
539 |
-
ret._parent = self._parent
|
540 |
-
ret._all_names |= self._all_names
|
541 |
-
ret._name = self._name
|
542 |
-
return ret
|
543 |
-
|
544 |
-
def get_name(self):
|
545 |
-
r"""
|
546 |
-
Returns the results name for this token expression. Useful when several
|
547 |
-
different expressions might match at a particular location.
|
548 |
-
|
549 |
-
Example::
|
550 |
-
|
551 |
-
integer = Word(nums)
|
552 |
-
ssn_expr = Regex(r"\d\d\d-\d\d-\d\d\d\d")
|
553 |
-
house_number_expr = Suppress('#') + Word(nums, alphanums)
|
554 |
-
user_data = (Group(house_number_expr)("house_number")
|
555 |
-
| Group(ssn_expr)("ssn")
|
556 |
-
| Group(integer)("age"))
|
557 |
-
user_info = user_data[1, ...]
|
558 |
-
|
559 |
-
result = user_info.parse_string("22 111-22-3333 #221B")
|
560 |
-
for item in result:
|
561 |
-
print(item.get_name(), ':', item[0])
|
562 |
-
|
563 |
-
prints::
|
564 |
-
|
565 |
-
age : 22
|
566 |
-
ssn : 111-22-3333
|
567 |
-
house_number : 221B
|
568 |
-
"""
|
569 |
-
if self._name:
|
570 |
-
return self._name
|
571 |
-
elif self._parent:
|
572 |
-
par = self._parent()
|
573 |
-
|
574 |
-
def find_in_parent(sub):
|
575 |
-
return next(
|
576 |
-
(
|
577 |
-
k
|
578 |
-
for k, vlist in par._tokdict.items()
|
579 |
-
for v, loc in vlist
|
580 |
-
if sub is v
|
581 |
-
),
|
582 |
-
None,
|
583 |
-
)
|
584 |
-
|
585 |
-
return find_in_parent(self) if par else None
|
586 |
-
elif (
|
587 |
-
len(self) == 1
|
588 |
-
and len(self._tokdict) == 1
|
589 |
-
and next(iter(self._tokdict.values()))[0][1] in (0, -1)
|
590 |
-
):
|
591 |
-
return next(iter(self._tokdict.keys()))
|
592 |
-
else:
|
593 |
-
return None
|
594 |
-
|
595 |
-
def dump(self, indent="", full=True, include_list=True, _depth=0) -> str:
|
596 |
-
"""
|
597 |
-
Diagnostic method for listing out the contents of
|
598 |
-
a :class:`ParseResults`. Accepts an optional ``indent`` argument so
|
599 |
-
that this string can be embedded in a nested display of other data.
|
600 |
-
|
601 |
-
Example::
|
602 |
-
|
603 |
-
integer = Word(nums)
|
604 |
-
date_str = integer("year") + '/' + integer("month") + '/' + integer("day")
|
605 |
-
|
606 |
-
result = date_str.parse_string('1999/12/31')
|
607 |
-
print(result.dump())
|
608 |
-
|
609 |
-
prints::
|
610 |
-
|
611 |
-
['1999', '/', '12', '/', '31']
|
612 |
-
- day: '31'
|
613 |
-
- month: '12'
|
614 |
-
- year: '1999'
|
615 |
-
"""
|
616 |
-
out = []
|
617 |
-
NL = "\n"
|
618 |
-
out.append(indent + str(self.as_list()) if include_list else "")
|
619 |
-
|
620 |
-
if full:
|
621 |
-
if self.haskeys():
|
622 |
-
items = sorted((str(k), v) for k, v in self.items())
|
623 |
-
for k, v in items:
|
624 |
-
if out:
|
625 |
-
out.append(NL)
|
626 |
-
out.append("{}{}- {}: ".format(indent, (" " * _depth), k))
|
627 |
-
if isinstance(v, ParseResults):
|
628 |
-
if v:
|
629 |
-
out.append(
|
630 |
-
v.dump(
|
631 |
-
indent=indent,
|
632 |
-
full=full,
|
633 |
-
include_list=include_list,
|
634 |
-
_depth=_depth + 1,
|
635 |
-
)
|
636 |
-
)
|
637 |
-
else:
|
638 |
-
out.append(str(v))
|
639 |
-
else:
|
640 |
-
out.append(repr(v))
|
641 |
-
if any(isinstance(vv, ParseResults) for vv in self):
|
642 |
-
v = self
|
643 |
-
for i, vv in enumerate(v):
|
644 |
-
if isinstance(vv, ParseResults):
|
645 |
-
out.append(
|
646 |
-
"\n{}{}[{}]:\n{}{}{}".format(
|
647 |
-
indent,
|
648 |
-
(" " * (_depth)),
|
649 |
-
i,
|
650 |
-
indent,
|
651 |
-
(" " * (_depth + 1)),
|
652 |
-
vv.dump(
|
653 |
-
indent=indent,
|
654 |
-
full=full,
|
655 |
-
include_list=include_list,
|
656 |
-
_depth=_depth + 1,
|
657 |
-
),
|
658 |
-
)
|
659 |
-
)
|
660 |
-
else:
|
661 |
-
out.append(
|
662 |
-
"\n%s%s[%d]:\n%s%s%s"
|
663 |
-
% (
|
664 |
-
indent,
|
665 |
-
(" " * (_depth)),
|
666 |
-
i,
|
667 |
-
indent,
|
668 |
-
(" " * (_depth + 1)),
|
669 |
-
str(vv),
|
670 |
-
)
|
671 |
-
)
|
672 |
-
|
673 |
-
return "".join(out)
|
674 |
-
|
675 |
-
def pprint(self, *args, **kwargs):
|
676 |
-
"""
|
677 |
-
Pretty-printer for parsed results as a list, using the
|
678 |
-
`pprint <https://docs.python.org/3/library/pprint.html>`_ module.
|
679 |
-
Accepts additional positional or keyword args as defined for
|
680 |
-
`pprint.pprint <https://docs.python.org/3/library/pprint.html#pprint.pprint>`_ .
|
681 |
-
|
682 |
-
Example::
|
683 |
-
|
684 |
-
ident = Word(alphas, alphanums)
|
685 |
-
num = Word(nums)
|
686 |
-
func = Forward()
|
687 |
-
term = ident | num | Group('(' + func + ')')
|
688 |
-
func <<= ident + Group(Optional(delimited_list(term)))
|
689 |
-
result = func.parse_string("fna a,b,(fnb c,d,200),100")
|
690 |
-
result.pprint(width=40)
|
691 |
-
|
692 |
-
prints::
|
693 |
-
|
694 |
-
['fna',
|
695 |
-
['a',
|
696 |
-
'b',
|
697 |
-
['(', 'fnb', ['c', 'd', '200'], ')'],
|
698 |
-
'100']]
|
699 |
-
"""
|
700 |
-
pprint.pprint(self.as_list(), *args, **kwargs)
|
701 |
-
|
702 |
-
# add support for pickle protocol
|
703 |
-
def __getstate__(self):
|
704 |
-
return (
|
705 |
-
self._toklist,
|
706 |
-
(
|
707 |
-
self._tokdict.copy(),
|
708 |
-
self._parent is not None and self._parent() or None,
|
709 |
-
self._all_names,
|
710 |
-
self._name,
|
711 |
-
),
|
712 |
-
)
|
713 |
-
|
714 |
-
def __setstate__(self, state):
|
715 |
-
self._toklist, (self._tokdict, par, inAccumNames, self._name) = state
|
716 |
-
self._all_names = set(inAccumNames)
|
717 |
-
if par is not None:
|
718 |
-
self._parent = wkref(par)
|
719 |
-
else:
|
720 |
-
self._parent = None
|
721 |
-
|
722 |
-
def __getnewargs__(self):
|
723 |
-
return self._toklist, self._name
|
724 |
-
|
725 |
-
def __dir__(self):
|
726 |
-
return dir(type(self)) + list(self.keys())
|
727 |
-
|
728 |
-
@classmethod
|
729 |
-
def from_dict(cls, other, name=None) -> "ParseResults":
|
730 |
-
"""
|
731 |
-
Helper classmethod to construct a ``ParseResults`` from a ``dict``, preserving the
|
732 |
-
name-value relations as results names. If an optional ``name`` argument is
|
733 |
-
given, a nested ``ParseResults`` will be returned.
|
734 |
-
"""
|
735 |
-
|
736 |
-
def is_iterable(obj):
|
737 |
-
try:
|
738 |
-
iter(obj)
|
739 |
-
except Exception:
|
740 |
-
return False
|
741 |
-
else:
|
742 |
-
return not isinstance(obj, str_type)
|
743 |
-
|
744 |
-
ret = cls([])
|
745 |
-
for k, v in other.items():
|
746 |
-
if isinstance(v, Mapping):
|
747 |
-
ret += cls.from_dict(v, name=k)
|
748 |
-
else:
|
749 |
-
ret += cls([v], name=k, asList=is_iterable(v))
|
750 |
-
if name is not None:
|
751 |
-
ret = cls([ret], name=name)
|
752 |
-
return ret
|
753 |
-
|
754 |
-
asList = as_list
|
755 |
-
asDict = as_dict
|
756 |
-
getName = get_name
|
757 |
-
|
758 |
-
|
759 |
-
MutableMapping.register(ParseResults)
|
760 |
-
MutableSequence.register(ParseResults)
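The `pop()` method in the listing above dispatches between list and dict semantics based on the key type. A minimal stdlib-only sketch of that dispatch rule (the `HybridResults` class is hypothetical, a simplification for illustration, not pyparsing's actual `ParseResults`):

```python
class HybridResults:
    """Illustrative sketch of ParseResults' dual list/dict pop() semantics."""

    def __init__(self, tokens, named):
        self._toklist = list(tokens)   # positional tokens
        self._tokdict = dict(named)    # named results

    def pop(self, *args):
        # No argument -> list semantics: pop the last token (index -1).
        if not args:
            args = (-1,)
        key = args[0]
        # Integer key -> list semantics, like list.pop(i).
        if isinstance(key, int):
            return self._toklist.pop(key)
        # Non-integer key -> dict semantics, like dict.pop(k).
        if key in self._tokdict:
            return self._tokdict.pop(key)
        # Optional second argument acts as the dict.pop() default.
        if len(args) > 1:
            return args[1]
        raise KeyError(key)


r = HybridResults(["0", "123", "321"], {"LABEL": "AAB"})
print(r.pop())                  # -> '321' (list semantics)
print(r.pop(0))                 # -> '0'
print(r.pop("LABEL"))           # -> 'AAB' (dict semantics)
print(r.pop("missing", "n/a"))  # -> 'n/a' (default fallback)
```

Note the real implementation differs in one detail: it also treats a single non-integer argument that is not a known name as an index lookup, which raises `KeyError` via `__getitem__`.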
spaces/Awesimo/jojogan/e4e/utils/common.py
DELETED
@@ -1,55 +0,0 @@
from PIL import Image
import matplotlib.pyplot as plt


# Log images
def log_input_image(x, opts):
    return tensor2im(x)


def tensor2im(var):
    # var shape: (3, H, W)
    var = var.cpu().detach().transpose(0, 2).transpose(0, 1).numpy()
    var = ((var + 1) / 2)
    var[var < 0] = 0
    var[var > 1] = 1
    var = var * 255
    return Image.fromarray(var.astype('uint8'))


def vis_faces(log_hooks):
    display_count = len(log_hooks)
    fig = plt.figure(figsize=(8, 4 * display_count))
    gs = fig.add_gridspec(display_count, 3)
    for i in range(display_count):
        hooks_dict = log_hooks[i]
        fig.add_subplot(gs[i, 0])
        if 'diff_input' in hooks_dict:
            vis_faces_with_id(hooks_dict, fig, gs, i)
        else:
            vis_faces_no_id(hooks_dict, fig, gs, i)
    plt.tight_layout()
    return fig


def vis_faces_with_id(hooks_dict, fig, gs, i):
    plt.imshow(hooks_dict['input_face'])
    plt.title('Input\nOut Sim={:.2f}'.format(float(hooks_dict['diff_input'])))
    fig.add_subplot(gs[i, 1])
    plt.imshow(hooks_dict['target_face'])
    plt.title('Target\nIn={:.2f}, Out={:.2f}'.format(float(hooks_dict['diff_views']),
                                                     float(hooks_dict['diff_target'])))
    fig.add_subplot(gs[i, 2])
    plt.imshow(hooks_dict['output_face'])
    plt.title('Output\n Target Sim={:.2f}'.format(float(hooks_dict['diff_target'])))


def vis_faces_no_id(hooks_dict, fig, gs, i):
    plt.imshow(hooks_dict['input_face'], cmap="gray")
    plt.title('Input')
    fig.add_subplot(gs[i, 1])
    plt.imshow(hooks_dict['target_face'])
    plt.title('Target')
    fig.add_subplot(gs[i, 2])
    plt.imshow(hooks_dict['output_face'])
    plt.title('Output')
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/tests/layers/test_roi_align_rotated.py
DELETED
@@ -1,176 +0,0 @@
# Copyright (c) Facebook, Inc. and its affiliates.
import logging
import unittest
import cv2
import torch
from torch.autograd import Variable, gradcheck

from detectron2.layers.roi_align import ROIAlign
from detectron2.layers.roi_align_rotated import ROIAlignRotated

logger = logging.getLogger(__name__)


class ROIAlignRotatedTest(unittest.TestCase):
    def _box_to_rotated_box(self, box, angle):
        return [
            (box[0] + box[2]) / 2.0,
            (box[1] + box[3]) / 2.0,
            box[2] - box[0],
            box[3] - box[1],
            angle,
        ]

    def _rot90(self, img, num):
        num = num % 4  # note: -1 % 4 == 3
        for _ in range(num):
            img = img.transpose(0, 1).flip(0)
        return img

    def test_forward_output_0_90_180_270(self):
        for i in range(4):
            # i = 0, 1, 2, 3 corresponding to 0, 90, 180, 270 degrees
            img = torch.arange(25, dtype=torch.float32).reshape(5, 5)
            """
            0  1  2  3  4
            5  6  7  8  9
            10 11 12 13 14
            15 16 17 18 19
            20 21 22 23 24
            """
            box = [1, 1, 3, 3]
            rotated_box = self._box_to_rotated_box(box=box, angle=90 * i)

            result = self._simple_roi_align_rotated(img=img, box=rotated_box, resolution=(4, 4))

            # Here's an explanation for 0 degree case:
            # point 0 in the original input lies at [0.5, 0.5]
            # (the center of bin [0, 1] x [0, 1])
            # point 1 in the original input lies at [1.5, 0.5], etc.
            # since the resolution is (4, 4) that divides [1, 3] x [1, 3]
            # into 4 x 4 equal bins,
            # the top-left bin is [1, 1.5] x [1, 1.5], and its center
            # (1.25, 1.25) lies at the 3/4 position
            # between point 0 and point 1, point 5 and point 6,
            # point 0 and point 5, point 1 and point 6, so it can be calculated as
            # 0.25*(0*0.25+1*0.75)+(5*0.25+6*0.75)*0.75 = 4.5
            result_expected = torch.tensor(
                [
                    [4.5, 5.0, 5.5, 6.0],
                    [7.0, 7.5, 8.0, 8.5],
                    [9.5, 10.0, 10.5, 11.0],
                    [12.0, 12.5, 13.0, 13.5],
                ]
            )
            # This is also an upsampled version of [[6, 7], [11, 12]]

            # When the box is rotated by 90 degrees CCW,
            # the result would be rotated by 90 degrees CW, thus it's -i here
            result_expected = self._rot90(result_expected, -i)

            assert torch.allclose(result, result_expected)

    def test_resize(self):
        H, W = 30, 30
        input = torch.rand(H, W) * 100
        box = [10, 10, 20, 20]
        rotated_box = self._box_to_rotated_box(box, angle=0)
        output = self._simple_roi_align_rotated(img=input, box=rotated_box, resolution=(5, 5))

        input2x = cv2.resize(input.numpy(), (W // 2, H // 2), interpolation=cv2.INTER_LINEAR)
        input2x = torch.from_numpy(input2x)
        box2x = [x / 2 for x in box]
        rotated_box2x = self._box_to_rotated_box(box2x, angle=0)
        output2x = self._simple_roi_align_rotated(img=input2x, box=rotated_box2x, resolution=(5, 5))
        assert torch.allclose(output2x, output)

    def _simple_roi_align_rotated(self, img, box, resolution):
        """
        RoiAlignRotated with scale 1.0 and 0 sample ratio.
        """
        op = ROIAlignRotated(output_size=resolution, spatial_scale=1.0, sampling_ratio=0)
        input = img[None, None, :, :]

        rois = [0] + list(box)
        rois = torch.tensor(rois, dtype=torch.float32)[None, :]
        result_cpu = op.forward(input, rois)
        if torch.cuda.is_available():
            result_cuda = op.forward(input.cuda(), rois.cuda())
            assert torch.allclose(result_cpu, result_cuda.cpu())
        return result_cpu[0, 0]

    def test_empty_box(self):
        img = torch.rand(5, 5)
        out = self._simple_roi_align_rotated(img, [2, 3, 0, 0, 0], (7, 7))
        self.assertTrue((out == 0).all())

    def test_roi_align_rotated_gradcheck_cpu(self):
        dtype = torch.float64
        device = torch.device("cpu")
        roi_align_rotated_op = ROIAlignRotated(
            output_size=(5, 5), spatial_scale=0.5, sampling_ratio=1
        ).to(dtype=dtype, device=device)
        x = torch.rand(1, 1, 10, 10, dtype=dtype, device=device, requires_grad=True)
        # roi format is (batch index, x_center, y_center, width, height, angle)
        rois = torch.tensor(
            [[0, 4.5, 4.5, 9, 9, 0], [0, 2, 7, 4, 4, 0], [0, 7, 7, 4, 4, 0]],
            dtype=dtype,
            device=device,
        )

        def func(input):
            return roi_align_rotated_op(input, rois)

        assert gradcheck(func, (x,)), "gradcheck failed for RoIAlignRotated CPU"
        assert gradcheck(func, (x.transpose(2, 3),)), "gradcheck failed for RoIAlignRotated CPU"

    @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available")
    def test_roi_align_rotated_gradient_cuda(self):
        """
        Compute gradients for ROIAlignRotated with multiple bounding boxes on the GPU,
        and compare the result with ROIAlign
        """
        # torch.manual_seed(123)
        dtype = torch.float64
        device = torch.device("cuda")
        pool_h, pool_w = (5, 5)

        roi_align = ROIAlign(output_size=(pool_h, pool_w), spatial_scale=1, sampling_ratio=2).to(
            device=device
        )

        roi_align_rotated = ROIAlignRotated(
            output_size=(pool_h, pool_w), spatial_scale=1, sampling_ratio=2
        ).to(device=device)

        x = torch.rand(1, 1, 10, 10, dtype=dtype, device=device, requires_grad=True)
        # x_rotated = x.clone() won't work (will lead to grad_fun=CloneBackward)!
        x_rotated = Variable(x.data.clone(), requires_grad=True)

        # roi_rotated format is (batch index, x_center, y_center, width, height, angle)
        rois_rotated = torch.tensor(
            [[0, 4.5, 4.5, 9, 9, 0], [0, 2, 7, 4, 4, 0], [0, 7, 7, 4, 4, 0]],
            dtype=dtype,
            device=device,
        )

        y_rotated = roi_align_rotated(x_rotated, rois_rotated)
        s_rotated = y_rotated.sum()
        s_rotated.backward()

        # roi format is (batch index, x1, y1, x2, y2)
        rois = torch.tensor(
            [[0, 0, 0, 9, 9], [0, 0, 5, 4, 9], [0, 5, 5, 9, 9]], dtype=dtype, device=device
        )

        y = roi_align(x, rois)
        s = y.sum()
        s.backward()

        assert torch.allclose(
            x.grad, x_rotated.grad
        ), "gradients for ROIAlign and ROIAlignRotated mismatch on CUDA"


if __name__ == "__main__":
    unittest.main()
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/escprober.py
DELETED
@@ -1,102 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is mozilla.org code.
-#
-# The Initial Developer of the Original Code is
-# Netscape Communications Corporation.
-# Portions created by the Initial Developer are Copyright (C) 1998
-# the Initial Developer. All Rights Reserved.
-#
-# Contributor(s):
-#   Mark Pilgrim - port to Python
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301  USA
-######################### END LICENSE BLOCK #########################
-
-from typing import Optional, Union
-
-from .charsetprober import CharSetProber
-from .codingstatemachine import CodingStateMachine
-from .enums import LanguageFilter, MachineState, ProbingState
-from .escsm import (
-    HZ_SM_MODEL,
-    ISO2022CN_SM_MODEL,
-    ISO2022JP_SM_MODEL,
-    ISO2022KR_SM_MODEL,
-)
-
-
-class EscCharSetProber(CharSetProber):
-    """
-    This CharSetProber uses a "code scheme" approach for detecting encodings,
-    whereby easily recognizable escape or shift sequences are relied on to
-    identify these encodings.
-    """
-
-    def __init__(self, lang_filter: LanguageFilter = LanguageFilter.NONE) -> None:
-        super().__init__(lang_filter=lang_filter)
-        self.coding_sm = []
-        if self.lang_filter & LanguageFilter.CHINESE_SIMPLIFIED:
-            self.coding_sm.append(CodingStateMachine(HZ_SM_MODEL))
-            self.coding_sm.append(CodingStateMachine(ISO2022CN_SM_MODEL))
-        if self.lang_filter & LanguageFilter.JAPANESE:
-            self.coding_sm.append(CodingStateMachine(ISO2022JP_SM_MODEL))
-        if self.lang_filter & LanguageFilter.KOREAN:
-            self.coding_sm.append(CodingStateMachine(ISO2022KR_SM_MODEL))
-        self.active_sm_count = 0
-        self._detected_charset: Optional[str] = None
-        self._detected_language: Optional[str] = None
-        self._state = ProbingState.DETECTING
-        self.reset()
-
-    def reset(self) -> None:
-        super().reset()
-        for coding_sm in self.coding_sm:
-            coding_sm.active = True
-            coding_sm.reset()
-        self.active_sm_count = len(self.coding_sm)
-        self._detected_charset = None
-        self._detected_language = None
-
-    @property
-    def charset_name(self) -> Optional[str]:
-        return self._detected_charset
-
-    @property
-    def language(self) -> Optional[str]:
-        return self._detected_language
-
-    def get_confidence(self) -> float:
-        return 0.99 if self._detected_charset else 0.00
-
-    def feed(self, byte_str: Union[bytes, bytearray]) -> ProbingState:
-        for c in byte_str:
-            for coding_sm in self.coding_sm:
-                if not coding_sm.active:
-                    continue
-                coding_state = coding_sm.next_state(c)
-                if coding_state == MachineState.ERROR:
-                    coding_sm.active = False
-                    self.active_sm_count -= 1
-                    if self.active_sm_count <= 0:
-                        self._state = ProbingState.NOT_ME
-                        return self.state
-                elif coding_state == MachineState.ITS_ME:
-                    self._state = ProbingState.FOUND_IT
-                    self._detected_charset = coding_sm.get_coding_state_machine()
-                    self._detected_language = coding_sm.language
-                    return self.state
-
-        return self.state
spaces/CVPR/LIVE/pydiffvg/render_pytorch.py
DELETED
@@ -1,870 +0,0 @@
-import torch
-import diffvg
-import pydiffvg
-import time
-from enum import IntEnum
-import warnings
-
-print_timing = False
-
-def set_print_timing(val):
-    global print_timing
-    print_timing = val
-
-class OutputType(IntEnum):
-    color = 1
-    sdf = 2
-
-class RenderFunction(torch.autograd.Function):
-    """
-    The PyTorch interface of diffvg.
-    """
-    @staticmethod
-    def serialize_scene(canvas_width,
-                        canvas_height,
-                        shapes,
-                        shape_groups,
-                        filter = pydiffvg.PixelFilter(type = diffvg.FilterType.box,
-                                                      radius = torch.tensor(0.5)),
-                        output_type = OutputType.color,
-                        use_prefiltering = False,
-                        eval_positions = torch.tensor([])):
-        """
-        Given a list of shapes, convert them to a linear list of argument,
-        so that we can use it in PyTorch.
-        """
-        num_shapes = len(shapes)
-        num_shape_groups = len(shape_groups)
-        args = []
-        args.append(canvas_width)
-        args.append(canvas_height)
-        args.append(num_shapes)
-        args.append(num_shape_groups)
-        args.append(output_type)
-        args.append(use_prefiltering)
-        args.append(eval_positions.to(pydiffvg.get_device()))
-        for shape in shapes:
-            use_thickness = False
-            if isinstance(shape, pydiffvg.Circle):
-                assert(shape.center.is_contiguous())
-                args.append(diffvg.ShapeType.circle)
-                args.append(shape.radius.cpu())
-                args.append(shape.center.cpu())
-            elif isinstance(shape, pydiffvg.Ellipse):
-                assert(shape.radius.is_contiguous())
-                assert(shape.center.is_contiguous())
-                args.append(diffvg.ShapeType.ellipse)
-                args.append(shape.radius.cpu())
-                args.append(shape.center.cpu())
-            elif isinstance(shape, pydiffvg.Path):
-                assert(shape.num_control_points.is_contiguous())
-                assert(shape.points.is_contiguous())
-                assert(shape.points.shape[1] == 2)
-                assert(torch.isfinite(shape.points).all())
-                args.append(diffvg.ShapeType.path)
-                args.append(shape.num_control_points.to(torch.int32).cpu())
-                args.append(shape.points.cpu())
-                if len(shape.stroke_width.shape) > 0 and shape.stroke_width.shape[0] > 1:
-                    assert(torch.isfinite(shape.stroke_width).all())
-                    use_thickness = True
-                    args.append(shape.stroke_width.cpu())
-                else:
-                    args.append(None)
-                args.append(shape.is_closed)
-                args.append(shape.use_distance_approx)
-            elif isinstance(shape, pydiffvg.Polygon):
-                assert(shape.points.is_contiguous())
-                assert(shape.points.shape[1] == 2)
-                args.append(diffvg.ShapeType.path)
-                if shape.is_closed:
-                    args.append(torch.zeros(shape.points.shape[0], dtype = torch.int32))
-                else:
-                    args.append(torch.zeros(shape.points.shape[0] - 1, dtype = torch.int32))
-                args.append(shape.points.cpu())
-                args.append(None)
-                args.append(shape.is_closed)
-                args.append(False) # use_distance_approx
-            elif isinstance(shape, pydiffvg.Rect):
-                assert(shape.p_min.is_contiguous())
-                assert(shape.p_max.is_contiguous())
-                args.append(diffvg.ShapeType.rect)
-                args.append(shape.p_min.cpu())
-                args.append(shape.p_max.cpu())
-            else:
-                assert(False)
-            if use_thickness:
-                args.append(torch.tensor(0.0))
-            else:
-                args.append(shape.stroke_width.cpu())
-
-        for shape_group in shape_groups:
-            assert(shape_group.shape_ids.is_contiguous())
-            args.append(shape_group.shape_ids.to(torch.int32).cpu())
-            # Fill color
-            if shape_group.fill_color is None:
-                args.append(None)
-            elif isinstance(shape_group.fill_color, torch.Tensor):
-                assert(shape_group.fill_color.is_contiguous())
-                args.append(diffvg.ColorType.constant)
-                args.append(shape_group.fill_color.cpu())
-            elif isinstance(shape_group.fill_color, pydiffvg.LinearGradient):
-                assert(shape_group.fill_color.begin.is_contiguous())
-                assert(shape_group.fill_color.end.is_contiguous())
-                assert(shape_group.fill_color.offsets.is_contiguous())
-                assert(shape_group.fill_color.stop_colors.is_contiguous())
-                args.append(diffvg.ColorType.linear_gradient)
-                args.append(shape_group.fill_color.begin.cpu())
-                args.append(shape_group.fill_color.end.cpu())
-                args.append(shape_group.fill_color.offsets.cpu())
-                args.append(shape_group.fill_color.stop_colors.cpu())
-            elif isinstance(shape_group.fill_color, pydiffvg.RadialGradient):
-                assert(shape_group.fill_color.center.is_contiguous())
-                assert(shape_group.fill_color.radius.is_contiguous())
-                assert(shape_group.fill_color.offsets.is_contiguous())
-                assert(shape_group.fill_color.stop_colors.is_contiguous())
-                args.append(diffvg.ColorType.radial_gradient)
-                args.append(shape_group.fill_color.center.cpu())
-                args.append(shape_group.fill_color.radius.cpu())
-                args.append(shape_group.fill_color.offsets.cpu())
-                args.append(shape_group.fill_color.stop_colors.cpu())
-
-            if shape_group.fill_color is not None:
-                # go through the underlying shapes and check if they are all closed
-                for shape_id in shape_group.shape_ids:
-                    if isinstance(shapes[shape_id], pydiffvg.Path):
-                        if not shapes[shape_id].is_closed:
-                            warnings.warn("Detected non-closed paths with fill color. This might causes unexpected results.", Warning)
-
-            # Stroke color
-            if shape_group.stroke_color is None:
-                args.append(None)
-            elif isinstance(shape_group.stroke_color, torch.Tensor):
-                assert(shape_group.stroke_color.is_contiguous())
-                args.append(diffvg.ColorType.constant)
-                args.append(shape_group.stroke_color.cpu())
-            elif isinstance(shape_group.stroke_color, pydiffvg.LinearGradient):
-                assert(shape_group.stroke_color.begin.is_contiguous())
-                assert(shape_group.stroke_color.end.is_contiguous())
-                assert(shape_group.stroke_color.offsets.is_contiguous())
-                assert(shape_group.stroke_color.stop_colors.is_contiguous())
-                assert(torch.isfinite(shape_group.stroke_color.stop_colors).all())
-                args.append(diffvg.ColorType.linear_gradient)
-                args.append(shape_group.stroke_color.begin.cpu())
-                args.append(shape_group.stroke_color.end.cpu())
-                args.append(shape_group.stroke_color.offsets.cpu())
-                args.append(shape_group.stroke_color.stop_colors.cpu())
-            elif isinstance(shape_group.stroke_color, pydiffvg.RadialGradient):
-                assert(shape_group.stroke_color.center.is_contiguous())
-                assert(shape_group.stroke_color.radius.is_contiguous())
-                assert(shape_group.stroke_color.offsets.is_contiguous())
-                assert(shape_group.stroke_color.stop_colors.is_contiguous())
-                assert(torch.isfinite(shape_group.stroke_color.stop_colors).all())
-                args.append(diffvg.ColorType.radial_gradient)
-                args.append(shape_group.stroke_color.center.cpu())
-                args.append(shape_group.stroke_color.radius.cpu())
-                args.append(shape_group.stroke_color.offsets.cpu())
-                args.append(shape_group.stroke_color.stop_colors.cpu())
-            args.append(shape_group.use_even_odd_rule)
-            # Transformation
-            args.append(shape_group.shape_to_canvas.contiguous().cpu())
-        args.append(filter.type)
-        args.append(filter.radius.cpu())
-        return args
-
-    @staticmethod
-    def forward(ctx,
-                width,
-                height,
-                num_samples_x,
-                num_samples_y,
-                seed,
-                background_image,
-                *args):
-        """
-        Forward rendering pass.
-        """
-        # Unpack arguments
-        current_index = 0
-        canvas_width = args[current_index]
-        current_index += 1
-        canvas_height = args[current_index]
-        current_index += 1
-        num_shapes = args[current_index]
-        current_index += 1
-        num_shape_groups = args[current_index]
-        current_index += 1
-        output_type = args[current_index]
-        current_index += 1
-        use_prefiltering = args[current_index]
-        current_index += 1
-        eval_positions = args[current_index]
-        current_index += 1
-        shapes = []
-        shape_groups = []
-        shape_contents = [] # Important to avoid GC deleting the shapes
-        color_contents = [] # Same as above
-        for shape_id in range(num_shapes):
-            shape_type = args[current_index]
-            current_index += 1
-            if shape_type == diffvg.ShapeType.circle:
-                radius = args[current_index]
-                current_index += 1
-                center = args[current_index]
-                current_index += 1
-                shape = diffvg.Circle(radius, diffvg.Vector2f(center[0], center[1]))
-            elif shape_type == diffvg.ShapeType.ellipse:
-                radius = args[current_index]
-                current_index += 1
-                center = args[current_index]
-                current_index += 1
-                shape = diffvg.Ellipse(diffvg.Vector2f(radius[0], radius[1]),
-                                       diffvg.Vector2f(center[0], center[1]))
-            elif shape_type == diffvg.ShapeType.path:
-                num_control_points = args[current_index]
-                current_index += 1
-                points = args[current_index]
-                current_index += 1
-                thickness = args[current_index]
-                current_index += 1
-                is_closed = args[current_index]
-                current_index += 1
-                use_distance_approx = args[current_index]
-                current_index += 1
-                shape = diffvg.Path(diffvg.int_ptr(num_control_points.data_ptr()),
-                                    diffvg.float_ptr(points.data_ptr()),
-                                    diffvg.float_ptr(thickness.data_ptr() if thickness is not None else 0),
-                                    num_control_points.shape[0],
-                                    points.shape[0],
-                                    is_closed,
-                                    use_distance_approx)
-            elif shape_type == diffvg.ShapeType.rect:
-                p_min = args[current_index]
-                current_index += 1
-                p_max = args[current_index]
-                current_index += 1
-                shape = diffvg.Rect(diffvg.Vector2f(p_min[0], p_min[1]),
-                                    diffvg.Vector2f(p_max[0], p_max[1]))
-            else:
-                assert(False)
-            stroke_width = args[current_index]
-            current_index += 1
-            shapes.append(diffvg.Shape(\
-                shape_type, shape.get_ptr(), stroke_width.item()))
-            shape_contents.append(shape)
-
-        for shape_group_id in range(num_shape_groups):
-            shape_ids = args[current_index]
-            current_index += 1
-            fill_color_type = args[current_index]
-            current_index += 1
-            if fill_color_type == diffvg.ColorType.constant:
-                color = args[current_index]
-                current_index += 1
-                fill_color = diffvg.Constant(\
-                    diffvg.Vector4f(color[0], color[1], color[2], color[3]))
-            elif fill_color_type == diffvg.ColorType.linear_gradient:
-                beg = args[current_index]
-                current_index += 1
-                end = args[current_index]
-                current_index += 1
-                offsets = args[current_index]
-                current_index += 1
-                stop_colors = args[current_index]
-                current_index += 1
-                assert(offsets.shape[0] == stop_colors.shape[0])
-                fill_color = diffvg.LinearGradient(diffvg.Vector2f(beg[0], beg[1]),
-                                                   diffvg.Vector2f(end[0], end[1]),
-                                                   offsets.shape[0],
-                                                   diffvg.float_ptr(offsets.data_ptr()),
-                                                   diffvg.float_ptr(stop_colors.data_ptr()))
-            elif fill_color_type == diffvg.ColorType.radial_gradient:
-                center = args[current_index]
-                current_index += 1
-                radius = args[current_index]
-                current_index += 1
-                offsets = args[current_index]
-                current_index += 1
-                stop_colors = args[current_index]
-                current_index += 1
-                assert(offsets.shape[0] == stop_colors.shape[0])
-                fill_color = diffvg.RadialGradient(diffvg.Vector2f(center[0], center[1]),
-                                                   diffvg.Vector2f(radius[0], radius[1]),
-                                                   offsets.shape[0],
-                                                   diffvg.float_ptr(offsets.data_ptr()),
-                                                   diffvg.float_ptr(stop_colors.data_ptr()))
-            elif fill_color_type is None:
-                fill_color = None
-            else:
-                assert(False)
-            stroke_color_type = args[current_index]
-            current_index += 1
-            if stroke_color_type == diffvg.ColorType.constant:
-                color = args[current_index]
-                current_index += 1
-                stroke_color = diffvg.Constant(\
-                    diffvg.Vector4f(color[0], color[1], color[2], color[3]))
-            elif stroke_color_type == diffvg.ColorType.linear_gradient:
-                beg = args[current_index]
-                current_index += 1
-                end = args[current_index]
-                current_index += 1
-                offsets = args[current_index]
-                current_index += 1
-                stop_colors = args[current_index]
-                current_index += 1
-                assert(offsets.shape[0] == stop_colors.shape[0])
-                stroke_color = diffvg.LinearGradient(diffvg.Vector2f(beg[0], beg[1]),
-                                                     diffvg.Vector2f(end[0], end[1]),
-                                                     offsets.shape[0],
-                                                     diffvg.float_ptr(offsets.data_ptr()),
-                                                     diffvg.float_ptr(stop_colors.data_ptr()))
-            elif stroke_color_type == diffvg.ColorType.radial_gradient:
-                center = args[current_index]
-                current_index += 1
-                radius = args[current_index]
-                current_index += 1
-                offsets = args[current_index]
-                current_index += 1
-                stop_colors = args[current_index]
-                current_index += 1
-                assert(offsets.shape[0] == stop_colors.shape[0])
-                stroke_color = diffvg.RadialGradient(diffvg.Vector2f(center[0], center[1]),
-                                                     diffvg.Vector2f(radius[0], radius[1]),
-                                                     offsets.shape[0],
-                                                     diffvg.float_ptr(offsets.data_ptr()),
-                                                     diffvg.float_ptr(stop_colors.data_ptr()))
-            elif stroke_color_type is None:
-                stroke_color = None
-            else:
-                assert(False)
-            use_even_odd_rule = args[current_index]
-            current_index += 1
-            shape_to_canvas = args[current_index]
-            current_index += 1
-
-            if fill_color is not None:
-                color_contents.append(fill_color)
-            if stroke_color is not None:
-                color_contents.append(stroke_color)
-            shape_groups.append(diffvg.ShapeGroup(\
-                diffvg.int_ptr(shape_ids.data_ptr()),
-                shape_ids.shape[0],
-                diffvg.ColorType.constant if fill_color_type is None else fill_color_type,
-                diffvg.void_ptr(0) if fill_color is None else fill_color.get_ptr(),
-                diffvg.ColorType.constant if stroke_color_type is None else stroke_color_type,
-                diffvg.void_ptr(0) if stroke_color is None else stroke_color.get_ptr(),
-                use_even_odd_rule,
-                diffvg.float_ptr(shape_to_canvas.data_ptr())))
-
-        filter_type = args[current_index]
-        current_index += 1
-        filter_radius = args[current_index]
-        current_index += 1
-        filt = diffvg.Filter(filter_type, filter_radius)
-
-        start = time.time()
-        scene = diffvg.Scene(canvas_width, canvas_height,
-                             shapes, shape_groups, filt, pydiffvg.get_use_gpu(),
-                             pydiffvg.get_device().index if pydiffvg.get_device().index is not None else -1)
-        time_elapsed = time.time() - start
-        global print_timing
-        if print_timing:
-            print('Scene construction, time: %.5f s' % time_elapsed)
-
-        if output_type == OutputType.color:
-            assert(eval_positions.shape[0] == 0)
-            rendered_image = torch.zeros(height, width, 4, device = pydiffvg.get_device())
-        else:
-            assert(output_type == OutputType.sdf)
-            if eval_positions.shape[0] == 0:
-                rendered_image = torch.zeros(height, width, 1, device = pydiffvg.get_device())
-            else:
-                rendered_image = torch.zeros(eval_positions.shape[0], 1, device = pydiffvg.get_device())
-
-        if background_image is not None:
-            background_image = background_image.to(pydiffvg.get_device())
-            if background_image.shape[2] == 3:
-                background_image = torch.cat((\
-                    background_image, torch.ones(background_image.shape[0], background_image.shape[1], 1,
-                                                 device = background_image.device)), dim = 2)
-            background_image = background_image.contiguous()
-            assert(background_image.shape[0] == rendered_image.shape[0])
-            assert(background_image.shape[1] == rendered_image.shape[1])
-            assert(background_image.shape[2] == 4)
-
-        start = time.time()
-        diffvg.render(scene,
-                      diffvg.float_ptr(background_image.data_ptr() if background_image is not None else 0),
-                      diffvg.float_ptr(rendered_image.data_ptr() if output_type == OutputType.color else 0),
-                      diffvg.float_ptr(rendered_image.data_ptr() if output_type == OutputType.sdf else 0),
-                      width,
-                      height,
-                      num_samples_x,
-                      num_samples_y,
-                      seed,
-                      diffvg.float_ptr(0), # d_background_image
-                      diffvg.float_ptr(0), # d_render_image
-                      diffvg.float_ptr(0), # d_render_sdf
-                      diffvg.float_ptr(0), # d_translation
-                      use_prefiltering,
-                      diffvg.float_ptr(eval_positions.data_ptr()),
-                      eval_positions.shape[0])
-        assert(torch.isfinite(rendered_image).all())
-        time_elapsed = time.time() - start
-        if print_timing:
-            print('Forward pass, time: %.5f s' % time_elapsed)
-
-        ctx.scene = scene
-        ctx.background_image = background_image
-        ctx.shape_contents = shape_contents
-        ctx.color_contents = color_contents
-        ctx.filter = filt
-        ctx.width = width
-        ctx.height = height
-        ctx.num_samples_x = num_samples_x
-        ctx.num_samples_y = num_samples_y
-        ctx.seed = seed
-        ctx.output_type = output_type
-        ctx.use_prefiltering = use_prefiltering
-        ctx.eval_positions = eval_positions
-        return rendered_image
-
-    @staticmethod
-    def render_grad(grad_img,
-                    width,
-                    height,
-                    num_samples_x,
-                    num_samples_y,
-                    seed,
-                    background_image,
-                    *args):
-        if not grad_img.is_contiguous():
-            grad_img = grad_img.contiguous()
-        assert(torch.isfinite(grad_img).all())
-
-        # Unpack arguments
-        current_index = 0
-        canvas_width = args[current_index]
-        current_index += 1
-        canvas_height = args[current_index]
-        current_index += 1
-        num_shapes = args[current_index]
-        current_index += 1
-        num_shape_groups = args[current_index]
-        current_index += 1
-        output_type = args[current_index]
-        current_index += 1
-        use_prefiltering = args[current_index]
-        current_index += 1
-        eval_positions = args[current_index]
-        current_index += 1
-        shapes = []
-        shape_groups = []
-        shape_contents = [] # Important to avoid GC deleting the shapes
-        color_contents = [] # Same as above
-        for shape_id in range(num_shapes):
-            shape_type = args[current_index]
-            current_index += 1
-            if shape_type == diffvg.ShapeType.circle:
-                radius = args[current_index]
-                current_index += 1
-                center = args[current_index]
-                current_index += 1
-                shape = diffvg.Circle(radius, diffvg.Vector2f(center[0], center[1]))
-            elif shape_type == diffvg.ShapeType.ellipse:
-                radius = args[current_index]
-                current_index += 1
-                center = args[current_index]
-                current_index += 1
-                shape = diffvg.Ellipse(diffvg.Vector2f(radius[0], radius[1]),
-                                       diffvg.Vector2f(center[0], center[1]))
-            elif shape_type == diffvg.ShapeType.path:
-                num_control_points = args[current_index]
-                current_index += 1
-                points = args[current_index]
-                current_index += 1
-                thickness = args[current_index]
-                current_index += 1
-                is_closed = args[current_index]
-                current_index += 1
-                use_distance_approx = args[current_index]
-                current_index += 1
-                shape = diffvg.Path(diffvg.int_ptr(num_control_points.data_ptr()),
-                                    diffvg.float_ptr(points.data_ptr()),
-                                    diffvg.float_ptr(thickness.data_ptr() if thickness is not None else 0),
-                                    num_control_points.shape[0],
-                                    points.shape[0],
-                                    is_closed,
-                                    use_distance_approx)
-            elif shape_type == diffvg.ShapeType.rect:
-                p_min = args[current_index]
-                current_index += 1
-                p_max = args[current_index]
-                current_index += 1
-                shape = diffvg.Rect(diffvg.Vector2f(p_min[0], p_min[1]),
-                                    diffvg.Vector2f(p_max[0], p_max[1]))
-            else:
-                assert(False)
-            stroke_width = args[current_index]
-            current_index += 1
-            shapes.append(diffvg.Shape(\
-                shape_type, shape.get_ptr(), stroke_width.item()))
-            shape_contents.append(shape)
-
-        for shape_group_id in range(num_shape_groups):
-            shape_ids = args[current_index]
-            current_index += 1
-            fill_color_type = args[current_index]
-            current_index += 1
-            if fill_color_type == diffvg.ColorType.constant:
-                color = args[current_index]
-                current_index += 1
-                fill_color = diffvg.Constant(\
-                    diffvg.Vector4f(color[0], color[1], color[2], color[3]))
-            elif fill_color_type == diffvg.ColorType.linear_gradient:
-                beg = args[current_index]
-                current_index += 1
-                end = args[current_index]
-                current_index += 1
-                offsets = args[current_index]
-                current_index += 1
-                stop_colors = args[current_index]
-                current_index += 1
-                assert(offsets.shape[0] == stop_colors.shape[0])
-                fill_color = diffvg.LinearGradient(diffvg.Vector2f(beg[0], beg[1]),
-                                                   diffvg.Vector2f(end[0], end[1]),
-                                                   offsets.shape[0],
-                                                   diffvg.float_ptr(offsets.data_ptr()),
-                                                   diffvg.float_ptr(stop_colors.data_ptr()))
-            elif fill_color_type == diffvg.ColorType.radial_gradient:
-                center = args[current_index]
-                current_index += 1
-                radius = args[current_index]
-                current_index += 1
-                offsets = args[current_index]
-                current_index += 1
-                stop_colors = args[current_index]
-                current_index += 1
-                assert(offsets.shape[0] == stop_colors.shape[0])
-                fill_color = diffvg.RadialGradient(diffvg.Vector2f(center[0], center[1]),
-                                                   diffvg.Vector2f(radius[0], radius[1]),
-                                                   offsets.shape[0],
-                                                   diffvg.float_ptr(offsets.data_ptr()),
-                                                   diffvg.float_ptr(stop_colors.data_ptr()))
-            elif fill_color_type is None:
-                fill_color = None
-            else:
-                assert(False)
-            stroke_color_type = args[current_index]
-            current_index += 1
-            if stroke_color_type == diffvg.ColorType.constant:
-                color = args[current_index]
-                current_index += 1
-                stroke_color = diffvg.Constant(\
-                    diffvg.Vector4f(color[0], color[1], color[2], color[3]))
-            elif stroke_color_type == diffvg.ColorType.linear_gradient:
-                beg = args[current_index]
-                current_index += 1
-                end = args[current_index]
-                current_index += 1
-                offsets = args[current_index]
-                current_index += 1
-                stop_colors = args[current_index]
-                current_index += 1
-                assert(offsets.shape[0] == stop_colors.shape[0])
-                stroke_color = diffvg.LinearGradient(diffvg.Vector2f(beg[0], beg[1]),
-                                                     diffvg.Vector2f(end[0], end[1]),
-                                                     offsets.shape[0],
-                                                     diffvg.float_ptr(offsets.data_ptr()),
-                                                     diffvg.float_ptr(stop_colors.data_ptr()))
-            elif stroke_color_type == diffvg.ColorType.radial_gradient:
-                center = args[current_index]
-                current_index += 1
-                radius = args[current_index]
-                current_index += 1
-                offsets = args[current_index]
-                current_index += 1
-                stop_colors = args[current_index]
-                current_index += 1
-                assert(offsets.shape[0] == stop_colors.shape[0])
-                stroke_color = diffvg.RadialGradient(diffvg.Vector2f(center[0], center[1]),
-                                                     diffvg.Vector2f(radius[0], radius[1]),
-                                                     offsets.shape[0],
-                                                     diffvg.float_ptr(offsets.data_ptr()),
-                                                     diffvg.float_ptr(stop_colors.data_ptr()))
-            elif stroke_color_type is None:
-                stroke_color = None
-            else:
-                assert(False)
-            use_even_odd_rule = args[current_index]
-            current_index += 1
-            shape_to_canvas = args[current_index]
-            current_index += 1
-
-            if fill_color is not None:
-                color_contents.append(fill_color)
-            if stroke_color is not None:
-                color_contents.append(stroke_color)
-            shape_groups.append(diffvg.ShapeGroup(\
-                diffvg.int_ptr(shape_ids.data_ptr()),
-                shape_ids.shape[0],
-                diffvg.ColorType.constant if fill_color_type is None else fill_color_type,
-                diffvg.void_ptr(0) if fill_color is None else fill_color.get_ptr(),
-                diffvg.ColorType.constant if stroke_color_type is None else stroke_color_type,
-                diffvg.void_ptr(0) if stroke_color is None else stroke_color.get_ptr(),
-                use_even_odd_rule,
-                diffvg.float_ptr(shape_to_canvas.data_ptr())))
-
-        filter_type = args[current_index]
-        current_index += 1
-        filter_radius = args[current_index]
-        current_index += 1
-        filt = diffvg.Filter(filter_type, filter_radius)
-
-        scene = diffvg.Scene(canvas_width, canvas_height,
-                             shapes, shape_groups, filt, pydiffvg.get_use_gpu(),
-                             pydiffvg.get_device().index if pydiffvg.get_device().index is not None else -1)
-
-        if output_type == OutputType.color:
-            assert(grad_img.shape[2] == 4)
-        else:
-            assert(grad_img.shape[2] == 1)
-
-        if background_image is not None:
-            background_image = background_image.to(pydiffvg.get_device())
-            if background_image.shape[2] == 3:
-                background_image = torch.cat((\
-                    background_image, torch.ones(background_image.shape[0], background_image.shape[1], 1,
-                                                 device = background_image.device)), dim = 2)
-            background_image = background_image.contiguous()
-            assert(background_image.shape[0] == rendered_image.shape[0])
-            assert(background_image.shape[1] == rendered_image.shape[1])
-            assert(background_image.shape[2] == 4)
-
-        translation_grad_image = \
-            torch.zeros(height, width, 2, device = pydiffvg.get_device())
-        start = time.time()
-        diffvg.render(scene,
-                      diffvg.float_ptr(background_image.data_ptr() if background_image is not None else 0),
-                      diffvg.float_ptr(0), # render_image
-                      diffvg.float_ptr(0), # render_sdf
-                      width,
-                      height,
-                      num_samples_x,
-                      num_samples_y,
-                      seed,
-                      diffvg.float_ptr(0), # d_background_image
-                      diffvg.float_ptr(grad_img.data_ptr() if output_type == OutputType.color else 0),
-                      diffvg.float_ptr(grad_img.data_ptr() if output_type == OutputType.sdf else 0),
-                      diffvg.float_ptr(translation_grad_image.data_ptr()),
-                      use_prefiltering,
-                      diffvg.float_ptr(eval_positions.data_ptr()),
-                      eval_positions.shape[0])
-        time_elapsed = time.time() - start
-        if print_timing:
-            print('Gradient pass, time: %.5f s' % time_elapsed)
-        assert(torch.isfinite(translation_grad_image).all())
-
-        return translation_grad_image
-
-    @staticmethod
-    def backward(ctx,
-                 grad_img):
-        if not grad_img.is_contiguous():
-            grad_img = grad_img.contiguous()
-        assert(torch.isfinite(grad_img).all())
-
-        scene = ctx.scene
-        width = ctx.width
-        height = ctx.height
-        num_samples_x = ctx.num_samples_x
-        num_samples_y = ctx.num_samples_y
-        seed = ctx.seed
-        output_type = ctx.output_type
-        use_prefiltering = ctx.use_prefiltering
-        eval_positions = ctx.eval_positions
-        background_image = ctx.background_image
-
-        if background_image is not None:
-            d_background_image = torch.zeros_like(background_image)
-        else:
-            d_background_image = None
-
-        start = time.time()
-        diffvg.render(scene,
-                      diffvg.float_ptr(background_image.data_ptr() if background_image is not None else 0),
-                      diffvg.float_ptr(0), # render_image
-                      diffvg.float_ptr(0), # render_sdf
-                      width,
-                      height,
-                      num_samples_x,
-                      num_samples_y,
-                      seed,
-                      diffvg.float_ptr(d_background_image.data_ptr() if background_image is not None else 0),
-                      diffvg.float_ptr(grad_img.data_ptr() if output_type == OutputType.color else 0),
-                      diffvg.float_ptr(grad_img.data_ptr() if output_type == OutputType.sdf else 0),
-                      diffvg.float_ptr(0), # d_translation
-                      use_prefiltering,
-                      diffvg.float_ptr(eval_positions.data_ptr()),
-                      eval_positions.shape[0])
-        time_elapsed = time.time() - start
-        global print_timing
-        if print_timing:
-            print('Backward pass, time: %.5f s' % time_elapsed)
-
-        d_args = []
-        d_args.append(None) # width
-        d_args.append(None) # height
-        d_args.append(None) # num_samples_x
-        d_args.append(None) # num_samples_y
-        d_args.append(None) # seed
-        d_args.append(d_background_image)
|
722 |
-
d_args.append(None) # canvas_width
|
723 |
-
d_args.append(None) # canvas_height
|
724 |
-
d_args.append(None) # num_shapes
|
725 |
-
d_args.append(None) # num_shape_groups
|
726 |
-
d_args.append(None) # output_type
|
727 |
-
d_args.append(None) # use_prefiltering
|
728 |
-
d_args.append(None) # eval_positions
|
729 |
-
for shape_id in range(scene.num_shapes):
|
730 |
-
d_args.append(None) # type
|
731 |
-
d_shape = scene.get_d_shape(shape_id)
|
732 |
-
use_thickness = False
|
733 |
-
if d_shape.type == diffvg.ShapeType.circle:
|
734 |
-
d_circle = d_shape.as_circle()
|
735 |
-
radius = torch.tensor(d_circle.radius)
|
736 |
-
assert(torch.isfinite(radius).all())
|
737 |
-
d_args.append(radius)
|
738 |
-
c = d_circle.center
|
739 |
-
c = torch.tensor((c.x, c.y))
|
740 |
-
assert(torch.isfinite(c).all())
|
741 |
-
d_args.append(c)
|
742 |
-
elif d_shape.type == diffvg.ShapeType.ellipse:
|
743 |
-
d_ellipse = d_shape.as_ellipse()
|
744 |
-
r = d_ellipse.radius
|
745 |
-
r = torch.tensor((d_ellipse.radius.x, d_ellipse.radius.y))
|
746 |
-
assert(torch.isfinite(r).all())
|
747 |
-
d_args.append(r)
|
748 |
-
c = d_ellipse.center
|
749 |
-
c = torch.tensor((c.x, c.y))
|
750 |
-
assert(torch.isfinite(c).all())
|
751 |
-
d_args.append(c)
|
752 |
-
elif d_shape.type == diffvg.ShapeType.path:
|
753 |
-
d_path = d_shape.as_path()
|
754 |
-
points = torch.zeros((d_path.num_points, 2))
|
755 |
-
thickness = None
|
756 |
-
if d_path.has_thickness():
|
757 |
-
use_thickness = True
|
758 |
-
thickness = torch.zeros(d_path.num_points)
|
759 |
-
d_path.copy_to(diffvg.float_ptr(points.data_ptr()), diffvg.float_ptr(thickness.data_ptr()))
|
760 |
-
else:
|
761 |
-
d_path.copy_to(diffvg.float_ptr(points.data_ptr()), diffvg.float_ptr(0))
|
762 |
-
assert(torch.isfinite(points).all())
|
763 |
-
if thickness is not None:
|
764 |
-
assert(torch.isfinite(thickness).all())
|
765 |
-
d_args.append(None) # num_control_points
|
766 |
-
d_args.append(points)
|
767 |
-
d_args.append(thickness)
|
768 |
-
d_args.append(None) # is_closed
|
769 |
-
d_args.append(None) # use_distance_approx
|
770 |
-
elif d_shape.type == diffvg.ShapeType.rect:
|
771 |
-
d_rect = d_shape.as_rect()
|
772 |
-
p_min = torch.tensor((d_rect.p_min.x, d_rect.p_min.y))
|
773 |
-
p_max = torch.tensor((d_rect.p_max.x, d_rect.p_max.y))
|
774 |
-
assert(torch.isfinite(p_min).all())
|
775 |
-
assert(torch.isfinite(p_max).all())
|
776 |
-
d_args.append(p_min)
|
777 |
-
d_args.append(p_max)
|
778 |
-
else:
|
779 |
-
assert(False)
|
780 |
-
if use_thickness:
|
781 |
-
d_args.append(None)
|
782 |
-
else:
|
783 |
-
w = torch.tensor((d_shape.stroke_width))
|
784 |
-
assert(torch.isfinite(w).all())
|
785 |
-
d_args.append(w)
|
786 |
-
|
787 |
-
for group_id in range(scene.num_shape_groups):
|
788 |
-
d_shape_group = scene.get_d_shape_group(group_id)
|
789 |
-
d_args.append(None) # shape_ids
|
790 |
-
d_args.append(None) # fill_color_type
|
791 |
-
if d_shape_group.has_fill_color():
|
792 |
-
if d_shape_group.fill_color_type == diffvg.ColorType.constant:
|
793 |
-
d_constant = d_shape_group.fill_color_as_constant()
|
794 |
-
c = d_constant.color
|
795 |
-
d_args.append(torch.tensor((c.x, c.y, c.z, c.w)))
|
796 |
-
elif d_shape_group.fill_color_type == diffvg.ColorType.linear_gradient:
|
797 |
-
d_linear_gradient = d_shape_group.fill_color_as_linear_gradient()
|
798 |
-
beg = d_linear_gradient.begin
|
799 |
-
d_args.append(torch.tensor((beg.x, beg.y)))
|
800 |
-
end = d_linear_gradient.end
|
801 |
-
d_args.append(torch.tensor((end.x, end.y)))
|
802 |
-
offsets = torch.zeros((d_linear_gradient.num_stops))
|
803 |
-
stop_colors = torch.zeros((d_linear_gradient.num_stops, 4))
|
804 |
-
d_linear_gradient.copy_to(\
|
805 |
-
diffvg.float_ptr(offsets.data_ptr()),
|
806 |
-
diffvg.float_ptr(stop_colors.data_ptr()))
|
807 |
-
assert(torch.isfinite(stop_colors).all())
|
808 |
-
d_args.append(offsets)
|
809 |
-
d_args.append(stop_colors)
|
810 |
-
elif d_shape_group.fill_color_type == diffvg.ColorType.radial_gradient:
|
811 |
-
d_radial_gradient = d_shape_group.fill_color_as_radial_gradient()
|
812 |
-
center = d_radial_gradient.center
|
813 |
-
d_args.append(torch.tensor((center.x, center.y)))
|
814 |
-
radius = d_radial_gradient.radius
|
815 |
-
d_args.append(torch.tensor((radius.x, radius.y)))
|
816 |
-
offsets = torch.zeros((d_radial_gradient.num_stops))
|
817 |
-
stop_colors = torch.zeros((d_radial_gradient.num_stops, 4))
|
818 |
-
d_radial_gradient.copy_to(\
|
819 |
-
diffvg.float_ptr(offsets.data_ptr()),
|
820 |
-
diffvg.float_ptr(stop_colors.data_ptr()))
|
821 |
-
assert(torch.isfinite(stop_colors).all())
|
822 |
-
d_args.append(offsets)
|
823 |
-
d_args.append(stop_colors)
|
824 |
-
else:
|
825 |
-
assert(False)
|
826 |
-
d_args.append(None) # stroke_color_type
|
827 |
-
if d_shape_group.has_stroke_color():
|
828 |
-
if d_shape_group.stroke_color_type == diffvg.ColorType.constant:
|
829 |
-
d_constant = d_shape_group.stroke_color_as_constant()
|
830 |
-
c = d_constant.color
|
831 |
-
d_args.append(torch.tensor((c.x, c.y, c.z, c.w)))
|
832 |
-
elif d_shape_group.stroke_color_type == diffvg.ColorType.linear_gradient:
|
833 |
-
d_linear_gradient = d_shape_group.stroke_color_as_linear_gradient()
|
834 |
-
beg = d_linear_gradient.begin
|
835 |
-
d_args.append(torch.tensor((beg.x, beg.y)))
|
836 |
-
end = d_linear_gradient.end
|
837 |
-
d_args.append(torch.tensor((end.x, end.y)))
|
838 |
-
offsets = torch.zeros((d_linear_gradient.num_stops))
|
839 |
-
stop_colors = torch.zeros((d_linear_gradient.num_stops, 4))
|
840 |
-
d_linear_gradient.copy_to(\
|
841 |
-
diffvg.float_ptr(offsets.data_ptr()),
|
842 |
-
diffvg.float_ptr(stop_colors.data_ptr()))
|
843 |
-
assert(torch.isfinite(stop_colors).all())
|
844 |
-
d_args.append(offsets)
|
845 |
-
d_args.append(stop_colors)
|
846 |
-
elif d_shape_group.fill_color_type == diffvg.ColorType.radial_gradient:
|
847 |
-
d_radial_gradient = d_shape_group.stroke_color_as_radial_gradient()
|
848 |
-
center = d_radial_gradient.center
|
849 |
-
d_args.append(torch.tensor((center.x, center.y)))
|
850 |
-
radius = d_radial_gradient.radius
|
851 |
-
d_args.append(torch.tensor((radius.x, radius.y)))
|
852 |
-
offsets = torch.zeros((d_radial_gradient.num_stops))
|
853 |
-
stop_colors = torch.zeros((d_radial_gradient.num_stops, 4))
|
854 |
-
d_radial_gradient.copy_to(\
|
855 |
-
diffvg.float_ptr(offsets.data_ptr()),
|
856 |
-
diffvg.float_ptr(stop_colors.data_ptr()))
|
857 |
-
assert(torch.isfinite(stop_colors).all())
|
858 |
-
d_args.append(offsets)
|
859 |
-
d_args.append(stop_colors)
|
860 |
-
else:
|
861 |
-
assert(False)
|
862 |
-
d_args.append(None) # use_even_odd_rule
|
863 |
-
d_shape_to_canvas = torch.zeros((3, 3))
|
864 |
-
d_shape_group.copy_to(diffvg.float_ptr(d_shape_to_canvas.data_ptr()))
|
865 |
-
assert(torch.isfinite(d_shape_to_canvas).all())
|
866 |
-
d_args.append(d_shape_to_canvas)
|
867 |
-
d_args.append(None) # filter_type
|
868 |
-
d_args.append(torch.tensor(scene.get_d_filter_radius()))
|
869 |
-
|
870 |
-
return tuple(d_args)
|
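The `backward` above follows the standard PyTorch custom-autograd convention: it returns exactly one gradient entry per argument of `forward`, in the same order, with `None` for every non-differentiable argument (sizes, sample counts, seeds, enum flags). A minimal, standalone sketch of that positional assembly (the helper and names are hypothetical, not part of pydiffvg):

```python
def assemble_grads(forward_arg_names, grads_by_name):
    """Return one entry per forward argument: the gradient if the argument
    is differentiable, else None (mirroring how d_args is built above)."""
    d_args = []
    for name in forward_arg_names:
        # dict.get yields None for arguments that carry no gradient
        d_args.append(grads_by_name.get(name))
    return tuple(d_args)

# Example: only 'points' and 'stroke_width' are differentiable here.
grads = assemble_grads(
    ["width", "height", "seed", "points", "stroke_width"],
    {"points": [0.1, -0.2], "stroke_width": 0.5},
)
```

A mismatch between the number of returned gradients and the number of forward arguments is a common source of autograd errors, which is why the real code appends a `None` placeholder even for bookkeeping-only arguments.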
spaces/CVPR/LIVE/thrust/thrust/mr/disjoint_tls_pool.h
DELETED
@@ -1,69 +0,0 @@
/*
 *  Copyright 2018 NVIDIA Corporation
 *
 *  Licensed under the Apache License, Version 2.0 (the "License");
 *  you may not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS,
 *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 */

/*! \file disjoint_tls_pool.h
 *  \brief A function wrapping a thread local instance of a \p disjoint_unsynchronized_pool_resource.
 */

#pragma once

#include <thrust/detail/cpp11_required.h>

#if THRUST_CPP_DIALECT >= 2011

#include <thrust/mr/disjoint_pool.h>

namespace thrust
{
namespace mr
{

/*! \addtogroup memory_management Memory Management
 *  \addtogroup memory_resources Memory Resources
 *  \ingroup memory_resources
 *  \{
 */

/*! Potentially constructs, if not yet created, and then returns the address of a thread-local
 *      \p disjoint_unsynchronized_pool_resource,
 *
 *  \tparam Upstream the first template argument to the pool template
 *  \tparam Bookkeeper the second template argument to the pool template
 *  \param upstream the first argument to the constructor, if invoked
 *  \param bookkeeper the second argument to the constructor, if invoked
 */
template<typename Upstream, typename Bookkeeper>
__host__
thrust::mr::disjoint_unsynchronized_pool_resource<Upstream, Bookkeeper> & tls_disjoint_pool(
    Upstream * upstream = NULL,
    Bookkeeper * bookkeeper = NULL)
{
    static thread_local auto adaptor = [&]{
        assert(upstream && bookkeeper);
        return thrust::mr::disjoint_unsynchronized_pool_resource<Upstream, Bookkeeper>(upstream, bookkeeper);
    }();

    return adaptor;
}

/*! \}
 */

} // end mr
} // end thrust

#endif // THRUST_CPP_DIALECT >= 2011
spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/for_each.h
DELETED
@@ -1,23 +0,0 @@
/*
 *  Copyright 2008-2013 NVIDIA Corporation
 *
 *  Licensed under the Apache License, Version 2.0 (the "License");
 *  you may not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS,
 *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 */

#pragma once

#include <thrust/detail/config.h>

// this system inherits for_each
#include <thrust/system/detail/sequential/for_each.h>
spaces/CVPR/lama-example/saicinpainting/training/modules/spatial_transform.py
DELETED
@@ -1,49 +0,0 @@
import torch
import torch.nn as nn
import torch.nn.functional as F
from kornia.geometry.transform import rotate


class LearnableSpatialTransformWrapper(nn.Module):
    def __init__(self, impl, pad_coef=0.5, angle_init_range=80, train_angle=True):
        super().__init__()
        self.impl = impl
        self.angle = torch.rand(1) * angle_init_range
        if train_angle:
            self.angle = nn.Parameter(self.angle, requires_grad=True)
        self.pad_coef = pad_coef

    def forward(self, x):
        if torch.is_tensor(x):
            return self.inverse_transform(self.impl(self.transform(x)), x)
        elif isinstance(x, tuple):
            x_trans = tuple(self.transform(elem) for elem in x)
            y_trans = self.impl(x_trans)
            return tuple(self.inverse_transform(elem, orig_x) for elem, orig_x in zip(y_trans, x))
        else:
            raise ValueError(f'Unexpected input type {type(x)}')

    def transform(self, x):
        height, width = x.shape[2:]
        pad_h, pad_w = int(height * self.pad_coef), int(width * self.pad_coef)
        x_padded = F.pad(x, [pad_w, pad_w, pad_h, pad_h], mode='reflect')
        x_padded_rotated = rotate(x_padded, angle=self.angle.to(x_padded))
        return x_padded_rotated

    def inverse_transform(self, y_padded_rotated, orig_x):
        height, width = orig_x.shape[2:]
        pad_h, pad_w = int(height * self.pad_coef), int(width * self.pad_coef)

        y_padded = rotate(y_padded_rotated, angle=-self.angle.to(y_padded_rotated))
        y_height, y_width = y_padded.shape[2:]
        y = y_padded[:, :, pad_h : y_height - pad_h, pad_w : y_width - pad_w]
        return y


if __name__ == '__main__':
    layer = LearnableSpatialTransformWrapper(nn.Identity())
    x = torch.arange(2 * 3 * 15 * 15).view(2, 3, 15, 15).float()
    y = layer(x)
    assert x.shape == y.shape
    assert torch.allclose(x[:, :, 1:, 1:][:, :, :-1, :-1], y[:, :, 1:, 1:][:, :, :-1, :-1])
    print('all ok')
spaces/Chintan-Donda/KKMS-KSSW-HF/src/translator.py
DELETED
@@ -1,61 +0,0 @@
import src.constants as constants_utils
import requests
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from mosestokenizer import *
from indicnlp.tokenize import sentence_tokenize
from googletrans import Translator, constants


class TRANSLATOR:
    def __init__(self):
        print()

    def split_sentences(self, paragraph, language):
        if language == "en":
            with MosesSentenceSplitter(language) as splitter:
                return splitter([paragraph])
        elif language in constants_utils.INDIC_LANGUAGE:
            return sentence_tokenize.sentence_split(paragraph, lang=language)

    def get_in_hindi(self, payload):
        tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
        model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
        article = self.split_sentences(payload['inputs'], 'en')
        # inputs = tokenizer(payload['input'], return_tensors="pt")
        out_text = ""
        for a in article:
            inputs = tokenizer(a, return_tensors="pt")
            translated_tokens = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["hin_Deva"], max_length=100)
            translated_sent = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
            out_text = out_text.join(translated_sent)
        return out_text

    def get_in_indic(self, text, language='Hindi'):
        tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
        model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
        inputs = tokenizer(text, return_tensors="pt")

        code = "eng_Latn"
        if language == 'Hindi':
            code = "hin_Deva"
        elif language == 'Marathi':
            code = "mar_Deva"

        translated_tokens = model.generate(
            **inputs,
            forced_bos_token_id=tokenizer.lang_code_to_id[code],
            max_length=1000
        )

        out_text = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
        return out_text

    def get_indic_google_translate(self, text, language='Hindi'):
        # Init the Google API translator
        translator = Translator()
        translations = translator.translate(text, dest=constants_utils.INDIC_LANGUAGE.get(language, 'en'))
        return str(translations.text)
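The `get_in_hindi` method above follows a split-then-translate pattern: split the paragraph into sentences, translate each one, and concatenate the results. Note that its accumulator line `out_text = out_text.join(translated_sent)` uses `str.join`, which interleaves the previous value between characters rather than appending. A minimal, dependency-free sketch of the intended pattern (the helper names here are hypothetical, not from the module):

```python
def translate_paragraph(paragraph, split_sentences, translate_one):
    """Split a paragraph, translate each sentence, and join the results.
    Accumulating into a list and joining once avoids the str.join pitfall."""
    translated = [translate_one(sent) for sent in split_sentences(paragraph)]
    return " ".join(translated)

# With identity stand-ins for the splitter and the model call:
text = translate_paragraph(
    "Hello world. How are you?",
    lambda p: p.split(". "),
    lambda s: s.upper(),
)
```

The real implementation would pass the module's `split_sentences` and a closure over `model.generate` in place of the lambdas.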
spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/datasets/datasets/base_dataset.py
DELETED
@@ -1,68 +0,0 @@
"""
 Copyright (c) 2022, salesforce.com, inc.
 All rights reserved.
 SPDX-License-Identifier: BSD-3-Clause
 For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause
"""

import json
from typing import Iterable

from torch.utils.data import Dataset, ConcatDataset
from torch.utils.data.dataloader import default_collate


class BaseDataset(Dataset):
    def __init__(
        self, vis_processor=None, text_processor=None, vis_root=None, ann_paths=[]
    ):
        """
        vis_root (string): Root directory of images (e.g. coco/images/)
        ann_root (string): directory to store the annotation file
        """
        self.vis_root = vis_root

        self.annotation = []
        for ann_path in ann_paths:
            self.annotation.extend(json.load(open(ann_path, "r"))['annotations'])

        self.vis_processor = vis_processor
        self.text_processor = text_processor

        self._add_instance_ids()

    def __len__(self):
        return len(self.annotation)

    def collater(self, samples):
        return default_collate(samples)

    def set_processors(self, vis_processor, text_processor):
        self.vis_processor = vis_processor
        self.text_processor = text_processor

    def _add_instance_ids(self, key="instance_id"):
        for idx, ann in enumerate(self.annotation):
            ann[key] = str(idx)


class ConcatDataset(ConcatDataset):
    def __init__(self, datasets: Iterable[Dataset]) -> None:
        super().__init__(datasets)

    def collater(self, samples):
        # TODO For now only supports datasets with same underlying collater implementations

        all_keys = set()
        for s in samples:
            all_keys.update(s)

        shared_keys = all_keys
        for s in samples:
            shared_keys = shared_keys & set(s.keys())

        samples_shared_keys = []
        for s in samples:
            samples_shared_keys.append({k: s[k] for k in s.keys() if k in shared_keys})

        return self.datasets[0].collater(samples_shared_keys)
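The `ConcatDataset.collater` above intersects the key sets of all samples so that heterogeneous datasets can be batched together: any field missing from even one sample is dropped before collation. A pure-Python sketch of that shared-key filtering, without the torch collate step (the helper name is illustrative, not from the module):

```python
def filter_to_shared_keys(samples):
    """Keep only the keys present in every sample dict, mirroring the
    intersection logic in ConcatDataset.collater above."""
    all_keys = set()
    for s in samples:
        all_keys.update(s)
    shared = all_keys
    for s in samples:
        shared = shared & set(s.keys())
    return [{k: s[k] for k in s.keys() if k in shared} for s in samples]

rows = filter_to_shared_keys(
    [{"image": 1, "text": "a"}, {"image": 2, "text": "b", "extra": 0}]
)
```

In the real collater, the filtered dicts are then handed to the first dataset's `collater`, which is why all underlying datasets must share a compatible collate implementation.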
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImageTk.py
DELETED
@@ -1,283 +0,0 @@
#
# The Python Imaging Library.
# $Id$
#
# a Tk display interface
#
# History:
# 96-04-08 fl   Created
# 96-09-06 fl   Added getimage method
# 96-11-01 fl   Rewritten, removed image attribute and crop method
# 97-05-09 fl   Use PyImagingPaste method instead of image type
# 97-05-12 fl   Minor tweaks to match the IFUNC95 interface
# 97-05-17 fl   Support the "pilbitmap" booster patch
# 97-06-05 fl   Added file= and data= argument to image constructors
# 98-03-09 fl   Added width and height methods to Image classes
# 98-07-02 fl   Use default mode for "P" images without palette attribute
# 98-07-02 fl   Explicitly destroy Tkinter image objects
# 99-07-24 fl   Support multiple Tk interpreters (from Greg Couch)
# 99-07-26 fl   Automatically hook into Tkinter (if possible)
# 99-08-15 fl   Hook uses _imagingtk instead of _imaging
#
# Copyright (c) 1997-1999 by Secret Labs AB
# Copyright (c) 1996-1997 by Fredrik Lundh
#
# See the README file for information on usage and redistribution.
#

import tkinter
from io import BytesIO

from . import Image

# --------------------------------------------------------------------
# Check for Tkinter interface hooks

_pilbitmap_ok = None


def _pilbitmap_check():
    global _pilbitmap_ok
    if _pilbitmap_ok is None:
        try:
            im = Image.new("1", (1, 1))
            tkinter.BitmapImage(data=f"PIL:{im.im.id}")
            _pilbitmap_ok = 1
        except tkinter.TclError:
            _pilbitmap_ok = 0
    return _pilbitmap_ok


def _get_image_from_kw(kw):
    source = None
    if "file" in kw:
        source = kw.pop("file")
    elif "data" in kw:
        source = BytesIO(kw.pop("data"))
    if source:
        return Image.open(source)


def _pyimagingtkcall(command, photo, id):
    tk = photo.tk
    try:
        tk.call(command, photo, id)
    except tkinter.TclError:
        # activate Tkinter hook
        # may raise an error if it cannot attach to Tkinter
        from . import _imagingtk

        _imagingtk.tkinit(tk.interpaddr())
        tk.call(command, photo, id)


# --------------------------------------------------------------------
# PhotoImage


class PhotoImage:
    """
    A Tkinter-compatible photo image. This can be used
    everywhere Tkinter expects an image object. If the image is an RGBA
    image, pixels having alpha 0 are treated as transparent.

    The constructor takes either a PIL image, or a mode and a size.
    Alternatively, you can use the ``file`` or ``data`` options to initialize
    the photo image object.

    :param image: Either a PIL image, or a mode string. If a mode string is
                  used, a size must also be given.
    :param size: If the first argument is a mode string, this defines the size
                 of the image.
    :keyword file: A filename to load the image from (using
                   ``Image.open(file)``).
    :keyword data: An 8-bit string containing image data (as loaded from an
                   image file).
    """

    def __init__(self, image=None, size=None, **kw):
        # Tk compatibility: file or data
        if image is None:
            image = _get_image_from_kw(kw)

        if hasattr(image, "mode") and hasattr(image, "size"):
            # got an image instead of a mode
            mode = image.mode
            if mode == "P":
                # palette mapped data
                image.apply_transparency()
                image.load()
                try:
                    mode = image.palette.mode
                except AttributeError:
                    mode = "RGB"  # default
            size = image.size
            kw["width"], kw["height"] = size
        else:
            mode = image
            image = None

        if mode not in ["1", "L", "RGB", "RGBA"]:
            mode = Image.getmodebase(mode)

        self.__mode = mode
        self.__size = size
        self.__photo = tkinter.PhotoImage(**kw)
        self.tk = self.__photo.tk
        if image:
            self.paste(image)

    def __del__(self):
        name = self.__photo.name
        self.__photo.name = None
        try:
            self.__photo.tk.call("image", "delete", name)
        except Exception:
            pass  # ignore internal errors

    def __str__(self):
        """
        Get the Tkinter photo image identifier. This method is automatically
        called by Tkinter whenever a PhotoImage object is passed to a Tkinter
        method.

        :return: A Tkinter photo image identifier (a string).
        """
        return str(self.__photo)

    def width(self):
        """
        Get the width of the image.

        :return: The width, in pixels.
        """
        return self.__size[0]

    def height(self):
        """
        Get the height of the image.

        :return: The height, in pixels.
        """
        return self.__size[1]

    def paste(self, im):
        """
        Paste a PIL image into the photo image. Note that this can
        be very slow if the photo image is displayed.

        :param im: A PIL image. The size must match the target region. If the
                   mode does not match, the image is converted to the mode of
                   the bitmap image.
        """
        # convert to blittable
        im.load()
        image = im.im
        if image.isblock() and im.mode == self.__mode:
            block = image
        else:
            block = image.new_block(self.__mode, im.size)
            image.convert2(block, image)  # convert directly between buffers

        _pyimagingtkcall("PyImagingPhoto", self.__photo, block.id)


# --------------------------------------------------------------------
# BitmapImage


class BitmapImage:
    """
    A Tkinter-compatible bitmap image. This can be used everywhere Tkinter
    expects an image object.

    The given image must have mode "1". Pixels having value 0 are treated as
    transparent. Options, if any, are passed on to Tkinter. The most commonly
    used option is ``foreground``, which is used to specify the color for the
    non-transparent parts. See the Tkinter documentation for information on
    how to specify colours.

    :param image: A PIL image.
    """

    def __init__(self, image=None, **kw):
        # Tk compatibility: file or data
        if image is None:
            image = _get_image_from_kw(kw)

        self.__mode = image.mode
        self.__size = image.size

        if _pilbitmap_check():
            # fast way (requires the pilbitmap booster patch)
            image.load()
            kw["data"] = f"PIL:{image.im.id}"
            self.__im = image  # must keep a reference
        else:
            # slow but safe way
            kw["data"] = image.tobitmap()
        self.__photo = tkinter.BitmapImage(**kw)

    def __del__(self):
        name = self.__photo.name
        self.__photo.name = None
        try:
            self.__photo.tk.call("image", "delete", name)
        except Exception:
            pass  # ignore internal errors

    def width(self):
        """
        Get the width of the image.

        :return: The width, in pixels.
        """
        return self.__size[0]

    def height(self):
        """
        Get the height of the image.

        :return: The height, in pixels.
        """
        return self.__size[1]

    def __str__(self):
        """
        Get the Tkinter bitmap image identifier. This method is automatically
        called by Tkinter whenever a BitmapImage object is passed to a Tkinter
        method.

        :return: A Tkinter bitmap image identifier (a string).
        """
        return str(self.__photo)


def getimage(photo):
    """Copies the contents of a PhotoImage to a PIL image memory."""
    im = Image.new("RGBA", (photo.width(), photo.height()))
    block = im.im

    _pyimagingtkcall("PyImagingPhotoGet", photo, block.id)

    return im


def _show(image, title):
    """Helper for the Image.show method."""

    class UI(tkinter.Label):
        def __init__(self, master, im):
            if im.mode == "1":
                self.image = BitmapImage(im, foreground="white", master=master)
            else:
                self.image = PhotoImage(im, master=master)
            super().__init__(master, image=self.image, bg="black", bd=0)
|
277 |
-
if not tkinter._default_root:
|
278 |
-
msg = "tkinter not initialized"
|
279 |
-
raise OSError(msg)
|
280 |
-
top = tkinter.Toplevel()
|
281 |
-
if title:
|
282 |
-
top.title(title)
|
283 |
-
UI(top, image).pack()
|
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiofiles/tempfile/temptypes.py
DELETED
@@ -1,73 +0,0 @@
-"""Async wrappers for spooled temp files and temp directory objects"""
-
-# Imports
-import asyncio
-from types import coroutine
-
-from ..base import AsyncBase
-from ..threadpool.utils import (
-    delegate_to_executor,
-    proxy_property_directly,
-    cond_delegate_to_executor,
-)
-from functools import partial
-
-
-@delegate_to_executor("fileno", "rollover")
-@cond_delegate_to_executor(
-    "close",
-    "flush",
-    "isatty",
-    "read",
-    "readline",
-    "readlines",
-    "seek",
-    "tell",
-    "truncate",
-)
-@proxy_property_directly("closed", "encoding", "mode", "name", "newlines")
-class AsyncSpooledTemporaryFile(AsyncBase):
-    """Async wrapper for SpooledTemporaryFile class"""
-
-    async def _check(self):
-        if self._file._rolled:
-            return
-        max_size = self._file._max_size
-        if max_size and self._file.tell() > max_size:
-            await self.rollover()
-
-    async def write(self, s):
-        """Implementation to anticipate rollover"""
-        if self._file._rolled:
-            cb = partial(self._file.write, s)
-            return await self._loop.run_in_executor(self._executor, cb)
-        else:
-            file = self._file._file  # reference underlying base IO object
-            rv = file.write(s)
-            await self._check()
-            return rv
-
-    async def writelines(self, iterable):
-        """Implementation to anticipate rollover"""
-        if self._file._rolled:
-            cb = partial(self._file.writelines, iterable)
-            return await self._loop.run_in_executor(self._executor, cb)
-        else:
-            file = self._file._file  # reference underlying base IO object
-            rv = file.writelines(iterable)
-            await self._check()
-            return rv
-
-
-@delegate_to_executor("cleanup")
-@proxy_property_directly("name")
-class AsyncTemporaryDirectory:
-    """Async wrapper for TemporaryDirectory class"""
-
-    def __init__(self, file, loop, executor):
-        self._file = file
-        self._loop = loop
-        self._executor = executor
-
-    async def close(self):
-        await self.cleanup()
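The rollover logic in `AsyncSpooledTemporaryFile._check` mirrors what the standard library's `SpooledTemporaryFile` already does on its own synchronous write path; the async wrapper has to re-check because it writes through the underlying buffer directly. A minimal stdlib-only sketch of that behavior (note this peeks at the private `_rolled` attribute, the same internal the wrapper inspects):

```python
import tempfile

# SpooledTemporaryFile buffers in memory until more than max_size bytes
# have been written, then transparently rolls over to a real on-disk file.
with tempfile.SpooledTemporaryFile(max_size=10) as f:
    f.write(b"12345")
    in_memory = not f._rolled   # 5 bytes <= max_size: still buffered in memory
    f.write(b"567890123")       # total 14 bytes: exceeds max_size
    on_disk = f._rolled         # now backed by a real temporary file
    print(in_memory, on_disk)   # True True
```

Once rolled over, the wrapper above switches to running all I/O in the executor, since on-disk file operations can block the event loop.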
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/pens/statisticsPen.py
DELETED
@@ -1,122 +0,0 @@
-"""Pen calculating area, center of mass, variance and standard-deviation,
-covariance and correlation, and slant, of glyph shapes."""
-import math
-from fontTools.pens.momentsPen import MomentsPen
-
-__all__ = ["StatisticsPen"]
-
-
-class StatisticsPen(MomentsPen):
-
-    """Pen calculating area, center of mass, variance and
-    standard-deviation, covariance and correlation, and slant,
-    of glyph shapes.
-
-    Note that all the calculated values are 'signed'. Ie. if the
-    glyph shape is self-intersecting, the values are not correct
-    (but well-defined). As such, area will be negative if contour
-    directions are clockwise. Moreover, variance might be negative
-    if the shapes are self-intersecting in certain ways."""
-
-    def __init__(self, glyphset=None):
-        MomentsPen.__init__(self, glyphset=glyphset)
-        self.__zero()
-
-    def _closePath(self):
-        MomentsPen._closePath(self)
-        self.__update()
-
-    def __zero(self):
-        self.meanX = 0
-        self.meanY = 0
-        self.varianceX = 0
-        self.varianceY = 0
-        self.stddevX = 0
-        self.stddevY = 0
-        self.covariance = 0
-        self.correlation = 0
-        self.slant = 0
-
-    def __update(self):
-
-        area = self.area
-        if not area:
-            self.__zero()
-            return
-
-        # Center of mass
-        # https://en.wikipedia.org/wiki/Center_of_mass#A_continuous_volume
-        self.meanX = meanX = self.momentX / area
-        self.meanY = meanY = self.momentY / area
-
-        # Var(X) = E[X^2] - E[X]^2
-        self.varianceX = varianceX = self.momentXX / area - meanX**2
-        self.varianceY = varianceY = self.momentYY / area - meanY**2
-
-        self.stddevX = stddevX = math.copysign(abs(varianceX) ** 0.5, varianceX)
-        self.stddevY = stddevY = math.copysign(abs(varianceY) ** 0.5, varianceY)
-
-        # Covariance(X,Y) = ( E[X.Y] - E[X]E[Y] )
-        self.covariance = covariance = self.momentXY / area - meanX * meanY
-
-        # Correlation(X,Y) = Covariance(X,Y) / ( stddev(X) * stddev(Y) )
-        # https://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient
-        if stddevX * stddevY == 0:
-            correlation = float("NaN")
-        else:
-            correlation = covariance / (stddevX * stddevY)
-        self.correlation = correlation if abs(correlation) > 1e-3 else 0
-
-        slant = covariance / varianceY if varianceY != 0 else float("NaN")
-        self.slant = slant if abs(slant) > 1e-3 else 0
-
-
-def _test(glyphset, upem, glyphs):
-    from fontTools.pens.transformPen import TransformPen
-    from fontTools.misc.transform import Scale
-
-    print("upem", upem)
-
-    for glyph_name in glyphs:
-        print()
-        print("glyph:", glyph_name)
-        glyph = glyphset[glyph_name]
-        pen = StatisticsPen(glyphset=glyphset)
-        transformer = TransformPen(pen, Scale(1.0 / upem))
-        glyph.draw(transformer)
-        for item in [
-            "area",
-            "momentX",
-            "momentY",
-            "momentXX",
-            "momentYY",
-            "momentXY",
-            "meanX",
-            "meanY",
-            "varianceX",
-            "varianceY",
-            "stddevX",
-            "stddevY",
-            "covariance",
-            "correlation",
-            "slant",
-        ]:
-            print("%s: %g" % (item, getattr(pen, item)))
-
-
-def main(args):
-    if not args:
-        return
-    filename, glyphs = args[0], args[1:]
-    from fontTools.ttLib import TTFont
-
-    font = TTFont(filename)
-    if not glyphs:
-        glyphs = font.getGlyphOrder()
-    _test(font.getGlyphSet(), font["head"].unitsPerEm, glyphs)
-
-
-if __name__ == "__main__":
-    import sys
-
-    main(sys.argv[1:])
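`StatisticsPen.__update` derives everything from raw moments via the standard identities Var(X) = E[X^2] - E[X]^2 and Cov(X,Y) = E[XY] - E[X]E[Y]. The pen integrates these over the glyph outline; a discrete analogue over point samples (a hypothetical helper, not part of fontTools) shows the same arithmetic:

```python
import math

def stats(xs, ys):
    # Mirrors StatisticsPen.__update: E[X], Var(X) = E[X^2] - E[X]^2,
    # Cov(X,Y) = E[XY] - E[X]E[Y], Corr = Cov / (stddevX * stddevY),
    # but computed over discrete samples instead of outline moments.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    var_x = sum(x * x for x in xs) / n - mean_x**2
    var_y = sum(y * y for y in ys) / n - mean_y**2
    cov = sum(x * y for x, y in zip(xs, ys)) / n - mean_x * mean_y
    corr = cov / math.sqrt(var_x * var_y) if var_x and var_y else float("nan")
    return mean_x, var_x, cov, corr
```

For perfectly correlated samples such as `xs = [0, 1, 2]`, `ys = [0, 2, 4]`, the correlation comes out as 1, as expected; in the pen the same quantities can be signed because the "weights" are signed area contributions.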
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/T_S_I__5.py
DELETED
@@ -1,46 +0,0 @@
-""" TSI{0,1,2,3,5} are private tables used by Microsoft Visual TrueType (VTT)
-tool to store its hinting source data.
-
-TSI5 contains the VTT character groups.
-"""
-from fontTools.misc.textTools import safeEval
-from . import DefaultTable
-import sys
-import array
-
-
-class table_T_S_I__5(DefaultTable.DefaultTable):
-    def decompile(self, data, ttFont):
-        numGlyphs = ttFont["maxp"].numGlyphs
-        assert len(data) == 2 * numGlyphs
-        a = array.array("H")
-        a.frombytes(data)
-        if sys.byteorder != "big":
-            a.byteswap()
-        self.glyphGrouping = {}
-        for i in range(numGlyphs):
-            self.glyphGrouping[ttFont.getGlyphName(i)] = a[i]
-
-    def compile(self, ttFont):
-        glyphNames = ttFont.getGlyphOrder()
-        a = array.array("H")
-        for i in range(len(glyphNames)):
-            a.append(self.glyphGrouping.get(glyphNames[i], 0))
-        if sys.byteorder != "big":
-            a.byteswap()
-        return a.tobytes()
-
-    def toXML(self, writer, ttFont):
-        names = sorted(self.glyphGrouping.keys())
-        for glyphName in names:
-            writer.simpletag(
-                "glyphgroup", name=glyphName, value=self.glyphGrouping[glyphName]
-            )
-            writer.newline()
-
-    def fromXML(self, name, attrs, content, ttFont):
-        if not hasattr(self, "glyphGrouping"):
-            self.glyphGrouping = {}
-        if name != "glyphgroup":
-            return
-        self.glyphGrouping[attrs["name"]] = safeEval(attrs["value"])
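Both `decompile` and `compile` above rely on `array.byteswap()` to bridge the host byte order and the big-endian layout SFNT tables use. The round-trip can be sketched with the stdlib alone:

```python
import array
import sys

values = [1, 2, 515]  # 515 == 0x0203

# compile(): native unsigned shorts -> big-endian bytes
a = array.array("H", values)
if sys.byteorder != "big":
    a.byteswap()
data = a.tobytes()
print(data.hex())  # 000100020203

# decompile(): big-endian bytes -> native unsigned shorts
b = array.array("H")
b.frombytes(data)
if sys.byteorder != "big":
    b.byteswap()
print(list(b))  # [1, 2, 515]
```

The byteswap is a no-op guard on big-endian hosts, so the same code is portable; the table itself is just one 16-bit group number per glyph, in glyph order.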
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otConverters.py
DELETED
@@ -1,1929 +0,0 @@
|
|
1 |
-
from fontTools.misc.fixedTools import (
|
2 |
-
fixedToFloat as fi2fl,
|
3 |
-
floatToFixed as fl2fi,
|
4 |
-
floatToFixedToStr as fl2str,
|
5 |
-
strToFixedToFloat as str2fl,
|
6 |
-
ensureVersionIsLong as fi2ve,
|
7 |
-
versionToFixed as ve2fi,
|
8 |
-
)
|
9 |
-
from fontTools.misc.roundTools import nearestMultipleShortestRepr, otRound
|
10 |
-
from fontTools.misc.textTools import bytesjoin, tobytes, tostr, pad, safeEval
|
11 |
-
from fontTools.ttLib import getSearchRange
|
12 |
-
from .otBase import (
|
13 |
-
CountReference,
|
14 |
-
FormatSwitchingBaseTable,
|
15 |
-
OTTableReader,
|
16 |
-
OTTableWriter,
|
17 |
-
ValueRecordFactory,
|
18 |
-
)
|
19 |
-
from .otTables import (
|
20 |
-
lookupTypes,
|
21 |
-
AATStateTable,
|
22 |
-
AATState,
|
23 |
-
AATAction,
|
24 |
-
ContextualMorphAction,
|
25 |
-
LigatureMorphAction,
|
26 |
-
InsertionMorphAction,
|
27 |
-
MorxSubtable,
|
28 |
-
ExtendMode as _ExtendMode,
|
29 |
-
CompositeMode as _CompositeMode,
|
30 |
-
NO_VARIATION_INDEX,
|
31 |
-
)
|
32 |
-
from itertools import zip_longest
|
33 |
-
from functools import partial
|
34 |
-
import re
|
35 |
-
import struct
|
36 |
-
from typing import Optional
|
37 |
-
import logging
|
38 |
-
|
39 |
-
|
40 |
-
log = logging.getLogger(__name__)
|
41 |
-
istuple = lambda t: isinstance(t, tuple)
|
42 |
-
|
43 |
-
|
44 |
-
def buildConverters(tableSpec, tableNamespace):
|
45 |
-
"""Given a table spec from otData.py, build a converter object for each
|
46 |
-
field of the table. This is called for each table in otData.py, and
|
47 |
-
the results are assigned to the corresponding class in otTables.py."""
|
48 |
-
converters = []
|
49 |
-
convertersByName = {}
|
50 |
-
for tp, name, repeat, aux, descr in tableSpec:
|
51 |
-
tableName = name
|
52 |
-
if name.startswith("ValueFormat"):
|
53 |
-
assert tp == "uint16"
|
54 |
-
converterClass = ValueFormat
|
55 |
-
elif name.endswith("Count") or name in ("StructLength", "MorphType"):
|
56 |
-
converterClass = {
|
57 |
-
"uint8": ComputedUInt8,
|
58 |
-
"uint16": ComputedUShort,
|
59 |
-
"uint32": ComputedULong,
|
60 |
-
}[tp]
|
61 |
-
elif name == "SubTable":
|
62 |
-
converterClass = SubTable
|
63 |
-
elif name == "ExtSubTable":
|
64 |
-
converterClass = ExtSubTable
|
65 |
-
elif name == "SubStruct":
|
66 |
-
converterClass = SubStruct
|
67 |
-
elif name == "FeatureParams":
|
68 |
-
converterClass = FeatureParams
|
69 |
-
elif name in ("CIDGlyphMapping", "GlyphCIDMapping"):
|
70 |
-
converterClass = StructWithLength
|
71 |
-
else:
|
72 |
-
if not tp in converterMapping and "(" not in tp:
|
73 |
-
tableName = tp
|
74 |
-
converterClass = Struct
|
75 |
-
else:
|
76 |
-
converterClass = eval(tp, tableNamespace, converterMapping)
|
77 |
-
|
78 |
-
conv = converterClass(name, repeat, aux, description=descr)
|
79 |
-
|
80 |
-
if conv.tableClass:
|
81 |
-
# A "template" such as OffsetTo(AType) knowss the table class already
|
82 |
-
tableClass = conv.tableClass
|
83 |
-
elif tp in ("MortChain", "MortSubtable", "MorxChain"):
|
84 |
-
tableClass = tableNamespace.get(tp)
|
85 |
-
else:
|
86 |
-
tableClass = tableNamespace.get(tableName)
|
87 |
-
|
88 |
-
if not conv.tableClass:
|
89 |
-
conv.tableClass = tableClass
|
90 |
-
|
91 |
-
if name in ["SubTable", "ExtSubTable", "SubStruct"]:
|
92 |
-
conv.lookupTypes = tableNamespace["lookupTypes"]
|
93 |
-
# also create reverse mapping
|
94 |
-
for t in conv.lookupTypes.values():
|
95 |
-
for cls in t.values():
|
96 |
-
convertersByName[cls.__name__] = Table(name, repeat, aux, cls)
|
97 |
-
if name == "FeatureParams":
|
98 |
-
conv.featureParamTypes = tableNamespace["featureParamTypes"]
|
99 |
-
conv.defaultFeatureParams = tableNamespace["FeatureParams"]
|
100 |
-
for cls in conv.featureParamTypes.values():
|
101 |
-
convertersByName[cls.__name__] = Table(name, repeat, aux, cls)
|
102 |
-
converters.append(conv)
|
103 |
-
assert name not in convertersByName, name
|
104 |
-
convertersByName[name] = conv
|
105 |
-
return converters, convertersByName
|
106 |
-
|
107 |
-
|
108 |
-
class _MissingItem(tuple):
|
109 |
-
__slots__ = ()
|
110 |
-
|
111 |
-
|
112 |
-
try:
|
113 |
-
from collections import UserList
|
114 |
-
except ImportError:
|
115 |
-
from UserList import UserList
|
116 |
-
|
117 |
-
|
118 |
-
class _LazyList(UserList):
|
119 |
-
def __getslice__(self, i, j):
|
120 |
-
return self.__getitem__(slice(i, j))
|
121 |
-
|
122 |
-
def __getitem__(self, k):
|
123 |
-
if isinstance(k, slice):
|
124 |
-
indices = range(*k.indices(len(self)))
|
125 |
-
return [self[i] for i in indices]
|
126 |
-
item = self.data[k]
|
127 |
-
if isinstance(item, _MissingItem):
|
128 |
-
self.reader.seek(self.pos + item[0] * self.recordSize)
|
129 |
-
item = self.conv.read(self.reader, self.font, {})
|
130 |
-
self.data[k] = item
|
131 |
-
return item
|
132 |
-
|
133 |
-
def __add__(self, other):
|
134 |
-
if isinstance(other, _LazyList):
|
135 |
-
other = list(other)
|
136 |
-
elif isinstance(other, list):
|
137 |
-
pass
|
138 |
-
else:
|
139 |
-
return NotImplemented
|
140 |
-
return list(self) + other
|
141 |
-
|
142 |
-
def __radd__(self, other):
|
143 |
-
if not isinstance(other, list):
|
144 |
-
return NotImplemented
|
145 |
-
return other + list(self)
|
146 |
-
|
147 |
-
|
148 |
-
class BaseConverter(object):
|
149 |
-
|
150 |
-
"""Base class for converter objects. Apart from the constructor, this
|
151 |
-
is an abstract class."""
|
152 |
-
|
153 |
-
def __init__(self, name, repeat, aux, tableClass=None, *, description=""):
|
154 |
-
self.name = name
|
155 |
-
self.repeat = repeat
|
156 |
-
self.aux = aux
|
157 |
-
self.tableClass = tableClass
|
158 |
-
self.isCount = name.endswith("Count") or name in [
|
159 |
-
"DesignAxisRecordSize",
|
160 |
-
"ValueRecordSize",
|
161 |
-
]
|
162 |
-
self.isLookupType = name.endswith("LookupType") or name == "MorphType"
|
163 |
-
self.isPropagated = name in [
|
164 |
-
"ClassCount",
|
165 |
-
"Class2Count",
|
166 |
-
"FeatureTag",
|
167 |
-
"SettingsCount",
|
168 |
-
"VarRegionCount",
|
169 |
-
"MappingCount",
|
170 |
-
"RegionAxisCount",
|
171 |
-
"DesignAxisCount",
|
172 |
-
"DesignAxisRecordSize",
|
173 |
-
"AxisValueCount",
|
174 |
-
"ValueRecordSize",
|
175 |
-
"AxisCount",
|
176 |
-
"BaseGlyphRecordCount",
|
177 |
-
"LayerRecordCount",
|
178 |
-
]
|
179 |
-
self.description = description
|
180 |
-
|
181 |
-
def readArray(self, reader, font, tableDict, count):
|
182 |
-
"""Read an array of values from the reader."""
|
183 |
-
lazy = font.lazy and count > 8
|
184 |
-
if lazy:
|
185 |
-
recordSize = self.getRecordSize(reader)
|
186 |
-
if recordSize is NotImplemented:
|
187 |
-
lazy = False
|
188 |
-
if not lazy:
|
189 |
-
l = []
|
190 |
-
for i in range(count):
|
191 |
-
l.append(self.read(reader, font, tableDict))
|
192 |
-
return l
|
193 |
-
else:
|
194 |
-
l = _LazyList()
|
195 |
-
l.reader = reader.copy()
|
196 |
-
l.pos = l.reader.pos
|
197 |
-
l.font = font
|
198 |
-
l.conv = self
|
199 |
-
l.recordSize = recordSize
|
200 |
-
l.extend(_MissingItem([i]) for i in range(count))
|
201 |
-
reader.advance(count * recordSize)
|
202 |
-
return l
|
203 |
-
|
204 |
-
def getRecordSize(self, reader):
|
205 |
-
if hasattr(self, "staticSize"):
|
206 |
-
return self.staticSize
|
207 |
-
return NotImplemented
|
208 |
-
|
209 |
-
def read(self, reader, font, tableDict):
|
210 |
-
"""Read a value from the reader."""
|
211 |
-
raise NotImplementedError(self)
|
212 |
-
|
213 |
-
def writeArray(self, writer, font, tableDict, values):
|
214 |
-
try:
|
215 |
-
for i, value in enumerate(values):
|
216 |
-
self.write(writer, font, tableDict, value, i)
|
217 |
-
except Exception as e:
|
218 |
-
e.args = e.args + (i,)
|
219 |
-
raise
|
220 |
-
|
221 |
-
def write(self, writer, font, tableDict, value, repeatIndex=None):
|
222 |
-
"""Write a value to the writer."""
|
223 |
-
raise NotImplementedError(self)
|
224 |
-
|
225 |
-
def xmlRead(self, attrs, content, font):
|
226 |
-
"""Read a value from XML."""
|
227 |
-
raise NotImplementedError(self)
|
228 |
-
|
229 |
-
def xmlWrite(self, xmlWriter, font, value, name, attrs):
|
230 |
-
"""Write a value to XML."""
|
231 |
-
raise NotImplementedError(self)
|
232 |
-
|
233 |
-
varIndexBasePlusOffsetRE = re.compile(r"VarIndexBase\s*\+\s*(\d+)")
|
234 |
-
|
235 |
-
def getVarIndexOffset(self) -> Optional[int]:
|
236 |
-
"""If description has `VarIndexBase + {offset}`, return the offset else None."""
|
237 |
-
m = self.varIndexBasePlusOffsetRE.search(self.description)
|
238 |
-
if not m:
|
239 |
-
return None
|
240 |
-
return int(m.group(1))
|
241 |
-
|
242 |
-
|
243 |
-
class SimpleValue(BaseConverter):
|
244 |
-
@staticmethod
|
245 |
-
def toString(value):
|
246 |
-
return value
|
247 |
-
|
248 |
-
@staticmethod
|
249 |
-
def fromString(value):
|
250 |
-
return value
|
251 |
-
|
252 |
-
def xmlWrite(self, xmlWriter, font, value, name, attrs):
|
253 |
-
xmlWriter.simpletag(name, attrs + [("value", self.toString(value))])
|
254 |
-
xmlWriter.newline()
|
255 |
-
|
256 |
-
def xmlRead(self, attrs, content, font):
|
257 |
-
return self.fromString(attrs["value"])
|
258 |
-
|
259 |
-
|
260 |
-
class OptionalValue(SimpleValue):
|
261 |
-
DEFAULT = None
|
262 |
-
|
263 |
-
def xmlWrite(self, xmlWriter, font, value, name, attrs):
|
264 |
-
if value != self.DEFAULT:
|
265 |
-
attrs.append(("value", self.toString(value)))
|
266 |
-
xmlWriter.simpletag(name, attrs)
|
267 |
-
xmlWriter.newline()
|
268 |
-
|
269 |
-
def xmlRead(self, attrs, content, font):
|
270 |
-
if "value" in attrs:
|
271 |
-
return self.fromString(attrs["value"])
|
272 |
-
return self.DEFAULT
|
273 |
-
|
274 |
-
|
275 |
-
class IntValue(SimpleValue):
|
276 |
-
@staticmethod
|
277 |
-
def fromString(value):
|
278 |
-
return int(value, 0)
|
279 |
-
|
280 |
-
|
281 |
-
class Long(IntValue):
|
282 |
-
staticSize = 4
|
283 |
-
|
284 |
-
def read(self, reader, font, tableDict):
|
285 |
-
return reader.readLong()
|
286 |
-
|
287 |
-
def readArray(self, reader, font, tableDict, count):
|
288 |
-
return reader.readLongArray(count)
|
289 |
-
|
290 |
-
def write(self, writer, font, tableDict, value, repeatIndex=None):
|
291 |
-
writer.writeLong(value)
|
292 |
-
|
293 |
-
def writeArray(self, writer, font, tableDict, values):
|
294 |
-
writer.writeLongArray(values)
|
295 |
-
|
296 |
-
|
297 |
-
class ULong(IntValue):
|
298 |
-
staticSize = 4
|
299 |
-
|
300 |
-
def read(self, reader, font, tableDict):
|
301 |
-
return reader.readULong()
|
302 |
-
|
303 |
-
def readArray(self, reader, font, tableDict, count):
|
304 |
-
return reader.readULongArray(count)
|
305 |
-
|
306 |
-
def write(self, writer, font, tableDict, value, repeatIndex=None):
|
307 |
-
writer.writeULong(value)
|
308 |
-
|
309 |
-
def writeArray(self, writer, font, tableDict, values):
|
310 |
-
writer.writeULongArray(values)
|
311 |
-
|
312 |
-
|
313 |
-
class Flags32(ULong):
|
314 |
-
@staticmethod
|
315 |
-
def toString(value):
|
316 |
-
return "0x%08X" % value
|
317 |
-
|
318 |
-
|
319 |
-
class VarIndex(OptionalValue, ULong):
|
320 |
-
DEFAULT = NO_VARIATION_INDEX
|
321 |
-
|
322 |
-
|
323 |
-
class Short(IntValue):
|
324 |
-
staticSize = 2
|
325 |
-
|
326 |
-
def read(self, reader, font, tableDict):
|
327 |
-
return reader.readShort()
|
328 |
-
|
329 |
-
def readArray(self, reader, font, tableDict, count):
|
330 |
-
return reader.readShortArray(count)
|
331 |
-
|
332 |
-
def write(self, writer, font, tableDict, value, repeatIndex=None):
|
333 |
-
writer.writeShort(value)
|
334 |
-
|
335 |
-
def writeArray(self, writer, font, tableDict, values):
|
336 |
-
writer.writeShortArray(values)
|
337 |
-
|
338 |
-
|
339 |
-
class UShort(IntValue):
|
340 |
-
staticSize = 2
|
341 |
-
|
342 |
-
def read(self, reader, font, tableDict):
|
343 |
-
return reader.readUShort()
|
344 |
-
|
345 |
-
def readArray(self, reader, font, tableDict, count):
|
346 |
-
return reader.readUShortArray(count)
|
347 |
-
|
348 |
-
def write(self, writer, font, tableDict, value, repeatIndex=None):
|
349 |
-
writer.writeUShort(value)
|
350 |
-
|
351 |
-
def writeArray(self, writer, font, tableDict, values):
|
352 |
-
writer.writeUShortArray(values)
|
353 |
-
|
354 |
-
|
355 |
-
class Int8(IntValue):
|
356 |
-
staticSize = 1
|
357 |
-
|
358 |
-
def read(self, reader, font, tableDict):
|
359 |
-
return reader.readInt8()
|
360 |
-
|
361 |
-
def readArray(self, reader, font, tableDict, count):
|
362 |
-
return reader.readInt8Array(count)
|
363 |
-
|
364 |
-
def write(self, writer, font, tableDict, value, repeatIndex=None):
|
365 |
-
writer.writeInt8(value)
|
366 |
-
|
367 |
-
def writeArray(self, writer, font, tableDict, values):
|
368 |
-
writer.writeInt8Array(values)
|
369 |
-
|
370 |
-
|
371 |
-
class UInt8(IntValue):
|
372 |
-
staticSize = 1
|
373 |
-
|
374 |
-
def read(self, reader, font, tableDict):
|
375 |
-
return reader.readUInt8()
|
376 |
-
|
377 |
-
def readArray(self, reader, font, tableDict, count):
|
378 |
-
return reader.readUInt8Array(count)
|
379 |
-
|
380 |
-
def write(self, writer, font, tableDict, value, repeatIndex=None):
|
381 |
-
writer.writeUInt8(value)
|
382 |
-
|
383 |
-
def writeArray(self, writer, font, tableDict, values):
|
384 |
-
writer.writeUInt8Array(values)
|
385 |
-
|
386 |
-
|
387 |
-
class UInt24(IntValue):
|
388 |
-
staticSize = 3
|
389 |
-
|
390 |
-
def read(self, reader, font, tableDict):
|
391 |
-
return reader.readUInt24()
|
392 |
-
|
393 |
-
def write(self, writer, font, tableDict, value, repeatIndex=None):
|
394 |
-
writer.writeUInt24(value)
|
395 |
-
|
396 |
-
|
397 |
-
class ComputedInt(IntValue):
|
398 |
-
def xmlWrite(self, xmlWriter, font, value, name, attrs):
|
399 |
-
if value is not None:
|
400 |
-
xmlWriter.comment("%s=%s" % (name, value))
|
401 |
-
xmlWriter.newline()
|
402 |
-
|
403 |
-
|
404 |
-
class ComputedUInt8(ComputedInt, UInt8):
|
405 |
-
pass
|
406 |
-
|
407 |
-
|
408 |
-
class ComputedUShort(ComputedInt, UShort):
|
409 |
-
pass
|
410 |
-
|
411 |
-
|
412 |
-
class ComputedULong(ComputedInt, ULong):
|
413 |
-
pass
|
414 |
-
|
415 |
-
|
416 |
-
class Tag(SimpleValue):
|
417 |
-
staticSize = 4
|
418 |
-
|
419 |
-
def read(self, reader, font, tableDict):
|
420 |
-
return reader.readTag()
|
421 |
-
|
422 |
-
def write(self, writer, font, tableDict, value, repeatIndex=None):
|
423 |
-
writer.writeTag(value)
|
424 |
-
|
425 |
-
|
426 |
-
class GlyphID(SimpleValue):
|
427 |
-
staticSize = 2
|
428 |
-
typecode = "H"
|
429 |
-
|
430 |
-
def readArray(self, reader, font, tableDict, count):
|
431 |
-
return font.getGlyphNameMany(
|
432 |
-
reader.readArray(self.typecode, self.staticSize, count)
|
433 |
-
)
|
434 |
-
|
435 |
-
def read(self, reader, font, tableDict):
|
436 |
-
return font.getGlyphName(reader.readValue(self.typecode, self.staticSize))
|
437 |
-
|
438 |
-
def writeArray(self, writer, font, tableDict, values):
|
439 |
-
writer.writeArray(self.typecode, font.getGlyphIDMany(values))
|
440 |
-
|
441 |
-
def write(self, writer, font, tableDict, value, repeatIndex=None):
|
442 |
-
writer.writeValue(self.typecode, font.getGlyphID(value))
|
443 |
-
|
444 |
-
|
445 |
-
class GlyphID32(GlyphID):
|
446 |
-
staticSize = 4
|
447 |
-
typecode = "L"
|
448 |
-
|
449 |
-
|
450 |
-
class NameID(UShort):
|
451 |
-
def xmlWrite(self, xmlWriter, font, value, name, attrs):
|
452 |
-
xmlWriter.simpletag(name, attrs + [("value", value)])
|
453 |
-
if font and value:
|
454 |
-
nameTable = font.get("name")
|
455 |
-
if nameTable:
|
456 |
-
name = nameTable.getDebugName(value)
|
457 |
-
xmlWriter.write(" ")
|
458 |
-
if name:
|
459 |
-
xmlWriter.comment(name)
|
460 |
-
else:
|
461 |
-
xmlWriter.comment("missing from name table")
|
462 |
-
log.warning("name id %d missing from name table" % value)
|
463 |
-
xmlWriter.newline()
|
464 |
-
|
465 |
-
|
466 |
-
class STATFlags(UShort):
|
467 |
-
def xmlWrite(self, xmlWriter, font, value, name, attrs):
|
468 |
-
xmlWriter.simpletag(name, attrs + [("value", value)])
|
469 |
-
flags = []
|
470 |
-
if value & 0x01:
|
471 |
-
flags.append("OlderSiblingFontAttribute")
|
472 |
-
if value & 0x02:
|
473 |
-
flags.append("ElidableAxisValueName")
|
474 |
-
if flags:
|
475 |
-
xmlWriter.write(" ")
|
476 |
-
xmlWriter.comment(" ".join(flags))
|
477 |
-
xmlWriter.newline()
|
478 |
-
|
479 |
-
|
480 |
-
class FloatValue(SimpleValue):
|
481 |
-
@staticmethod
|
482 |
-
def fromString(value):
|
483 |
-
return float(value)
|
484 |
-
|
485 |
-
|
486 |
-
class DeciPoints(FloatValue):
|
487 |
-
staticSize = 2
|
488 |
-
|
489 |
-
def read(self, reader, font, tableDict):
|
490 |
-
return reader.readUShort() / 10
|
491 |
-
|
492 |
-
def write(self, writer, font, tableDict, value, repeatIndex=None):
|
493 |
-
writer.writeUShort(round(value * 10))
|
494 |
-
|
495 |
-
|
496 |
-
class BaseFixedValue(FloatValue):
|
497 |
-
staticSize = NotImplemented
|
498 |
-
precisionBits = NotImplemented
|
499 |
-
readerMethod = NotImplemented
|
500 |
-
writerMethod = NotImplemented
|
501 |
-
|
502 |
-
def read(self, reader, font, tableDict):
|
503 |
-
return self.fromInt(getattr(reader, self.readerMethod)())
|
504 |
-
|
505 |
-
def write(self, writer, font, tableDict, value, repeatIndex=None):
|
506 |
-
-        getattr(writer, self.writerMethod)(self.toInt(value))
-
-    @classmethod
-    def fromInt(cls, value):
-        return fi2fl(value, cls.precisionBits)
-
-    @classmethod
-    def toInt(cls, value):
-        return fl2fi(value, cls.precisionBits)
-
-    @classmethod
-    def fromString(cls, value):
-        return str2fl(value, cls.precisionBits)
-
-    @classmethod
-    def toString(cls, value):
-        return fl2str(value, cls.precisionBits)
-
-
-class Fixed(BaseFixedValue):
-    staticSize = 4
-    precisionBits = 16
-    readerMethod = "readLong"
-    writerMethod = "writeLong"
-
-
-class F2Dot14(BaseFixedValue):
-    staticSize = 2
-    precisionBits = 14
-    readerMethod = "readShort"
-    writerMethod = "writeShort"
-
-
-class Angle(F2Dot14):
-    # angles are specified in degrees, and encoded as F2Dot14 fractions of half
-    # circle: e.g. 1.0 => 180, -0.5 => -90, -2.0 => -360, etc.
-    bias = 0.0
-    factor = 1.0 / (1 << 14) * 180  # 0.010986328125
-
-    @classmethod
-    def fromInt(cls, value):
-        return (super().fromInt(value) + cls.bias) * 180
-
-    @classmethod
-    def toInt(cls, value):
-        return super().toInt((value / 180) - cls.bias)
-
-    @classmethod
-    def fromString(cls, value):
-        # quantize to nearest multiples of minimum fixed-precision angle
-        return otRound(float(value) / cls.factor) * cls.factor
-
-    @classmethod
-    def toString(cls, value):
-        return nearestMultipleShortestRepr(value, cls.factor)
-
-
-class BiasedAngle(Angle):
-    # A bias of 1.0 is used in the representation of start and end angles
-    # of COLRv1 PaintSweepGradients to allow for encoding +360deg
-    bias = 1.0
-
-
-class Version(SimpleValue):
-    staticSize = 4
-
-    def read(self, reader, font, tableDict):
-        value = reader.readLong()
-        return value
-
-    def write(self, writer, font, tableDict, value, repeatIndex=None):
-        value = fi2ve(value)
-        writer.writeLong(value)
-
-    @staticmethod
-    def fromString(value):
-        return ve2fi(value)
-
-    @staticmethod
-    def toString(value):
-        return "0x%08x" % value
-
-    @staticmethod
-    def fromFloat(v):
-        return fl2fi(v, 16)
-
-
-class Char64(SimpleValue):
-    """An ASCII string with up to 64 characters.
-
-    Unused character positions are filled with 0x00 bytes.
-    Used in Apple AAT fonts in the `gcid` table.
-    """
-
-    staticSize = 64
-
-    def read(self, reader, font, tableDict):
-        data = reader.readData(self.staticSize)
-        zeroPos = data.find(b"\0")
-        if zeroPos >= 0:
-            data = data[:zeroPos]
-        s = tostr(data, encoding="ascii", errors="replace")
-        if s != tostr(data, encoding="ascii", errors="ignore"):
-            log.warning('replaced non-ASCII characters in "%s"' % s)
-        return s
-
-    def write(self, writer, font, tableDict, value, repeatIndex=None):
-        data = tobytes(value, encoding="ascii", errors="replace")
-        if data != tobytes(value, encoding="ascii", errors="ignore"):
-            log.warning('replacing non-ASCII characters in "%s"' % value)
-        if len(data) > self.staticSize:
-            log.warning(
-                'truncating overlong "%s" to %d bytes' % (value, self.staticSize)
-            )
-        data = (data + b"\0" * self.staticSize)[: self.staticSize]
-        writer.writeData(data)
-
-
-class Struct(BaseConverter):
-    def getRecordSize(self, reader):
-        return self.tableClass and self.tableClass.getRecordSize(reader)
-
-    def read(self, reader, font, tableDict):
-        table = self.tableClass()
-        table.decompile(reader, font)
-        return table
-
-    def write(self, writer, font, tableDict, value, repeatIndex=None):
-        value.compile(writer, font)
-
-    def xmlWrite(self, xmlWriter, font, value, name, attrs):
-        if value is None:
-            if attrs:
-                # If there are attributes (probably index), then
-                # don't drop this even if it's NULL. It will mess
-                # up the array indices of the containing element.
-                xmlWriter.simpletag(name, attrs + [("empty", 1)])
-                xmlWriter.newline()
-            else:
-                pass  # NULL table, ignore
-        else:
-            value.toXML(xmlWriter, font, attrs, name=name)
-
-    def xmlRead(self, attrs, content, font):
-        if "empty" in attrs and safeEval(attrs["empty"]):
-            return None
-        table = self.tableClass()
-        Format = attrs.get("Format")
-        if Format is not None:
-            table.Format = int(Format)
-
-        noPostRead = not hasattr(table, "postRead")
-        if noPostRead:
-            # TODO Cache table.hasPropagated.
-            cleanPropagation = False
-            for conv in table.getConverters():
-                if conv.isPropagated:
-                    cleanPropagation = True
-                    if not hasattr(font, "_propagator"):
-                        font._propagator = {}
-                    propagator = font._propagator
-                    assert conv.name not in propagator, (conv.name, propagator)
-                    setattr(table, conv.name, None)
-                    propagator[conv.name] = CountReference(table.__dict__, conv.name)
-
-        for element in content:
-            if isinstance(element, tuple):
-                name, attrs, content = element
-                table.fromXML(name, attrs, content, font)
-            else:
-                pass
-
-        table.populateDefaults(propagator=getattr(font, "_propagator", None))
-
-        if noPostRead:
-            if cleanPropagation:
-                for conv in table.getConverters():
-                    if conv.isPropagated:
-                        propagator = font._propagator
-                        del propagator[conv.name]
-                        if not propagator:
-                            del font._propagator
-
-        return table
-
-    def __repr__(self):
-        return "Struct of " + repr(self.tableClass)
-
-
-class StructWithLength(Struct):
-    def read(self, reader, font, tableDict):
-        pos = reader.pos
-        table = self.tableClass()
-        table.decompile(reader, font)
-        reader.seek(pos + table.StructLength)
-        return table
-
-    def write(self, writer, font, tableDict, value, repeatIndex=None):
-        for convIndex, conv in enumerate(value.getConverters()):
-            if conv.name == "StructLength":
-                break
-        lengthIndex = len(writer.items) + convIndex
-        if isinstance(value, FormatSwitchingBaseTable):
-            lengthIndex += 1  # implicit Format field
-        deadbeef = {1: 0xDE, 2: 0xDEAD, 4: 0xDEADBEEF}[conv.staticSize]
-
-        before = writer.getDataLength()
-        value.StructLength = deadbeef
-        value.compile(writer, font)
-        length = writer.getDataLength() - before
-        lengthWriter = writer.getSubWriter()
-        conv.write(lengthWriter, font, tableDict, length)
-        assert writer.items[lengthIndex] == b"\xde\xad\xbe\xef"[: conv.staticSize]
-        writer.items[lengthIndex] = lengthWriter.getAllData()
-
-
-class Table(Struct):
-
-    staticSize = 2
-
-    def readOffset(self, reader):
-        return reader.readUShort()
-
-    def writeNullOffset(self, writer):
-        writer.writeUShort(0)
-
-    def read(self, reader, font, tableDict):
-        offset = self.readOffset(reader)
-        if offset == 0:
-            return None
-        table = self.tableClass()
-        reader = reader.getSubReader(offset)
-        if font.lazy:
-            table.reader = reader
-            table.font = font
-        else:
-            table.decompile(reader, font)
-        return table
-
-    def write(self, writer, font, tableDict, value, repeatIndex=None):
-        if value is None:
-            self.writeNullOffset(writer)
-        else:
-            subWriter = writer.getSubWriter(offsetSize=self.staticSize)
-            subWriter.name = self.name
-            if repeatIndex is not None:
-                subWriter.repeatIndex = repeatIndex
-            writer.writeSubTable(subWriter)
-            value.compile(subWriter, font)
-
-
-class LTable(Table):
-
-    staticSize = 4
-
-    def readOffset(self, reader):
-        return reader.readULong()
-
-    def writeNullOffset(self, writer):
-        writer.writeULong(0)
-
-
-# Table pointed to by a 24-bit, 3-byte long offset
-class Table24(Table):
-
-    staticSize = 3
-
-    def readOffset(self, reader):
-        return reader.readUInt24()
-
-    def writeNullOffset(self, writer):
-        writer.writeUInt24(0)
-
-
-# TODO Clean / merge the SubTable and SubStruct
-
-
-class SubStruct(Struct):
-    def getConverter(self, tableType, lookupType):
-        tableClass = self.lookupTypes[tableType][lookupType]
-        return self.__class__(self.name, self.repeat, self.aux, tableClass)
-
-    def xmlWrite(self, xmlWriter, font, value, name, attrs):
-        super(SubStruct, self).xmlWrite(xmlWriter, font, value, None, attrs)
-
-
-class SubTable(Table):
-    def getConverter(self, tableType, lookupType):
-        tableClass = self.lookupTypes[tableType][lookupType]
-        return self.__class__(self.name, self.repeat, self.aux, tableClass)
-
-    def xmlWrite(self, xmlWriter, font, value, name, attrs):
-        super(SubTable, self).xmlWrite(xmlWriter, font, value, None, attrs)
-
-
-class ExtSubTable(LTable, SubTable):
-    def write(self, writer, font, tableDict, value, repeatIndex=None):
-        writer.Extension = True  # actually, mere presence of the field flags it as an Ext Subtable writer.
-        Table.write(self, writer, font, tableDict, value, repeatIndex)
-
-
-class FeatureParams(Table):
-    def getConverter(self, featureTag):
-        tableClass = self.featureParamTypes.get(featureTag, self.defaultFeatureParams)
-        return self.__class__(self.name, self.repeat, self.aux, tableClass)
-
-
-class ValueFormat(IntValue):
-    staticSize = 2
-
-    def __init__(self, name, repeat, aux, tableClass=None, *, description=""):
-        BaseConverter.__init__(
-            self, name, repeat, aux, tableClass, description=description
-        )
-        self.which = "ValueFormat" + ("2" if name[-1] == "2" else "1")
-
-    def read(self, reader, font, tableDict):
-        format = reader.readUShort()
-        reader[self.which] = ValueRecordFactory(format)
-        return format
-
-    def write(self, writer, font, tableDict, format, repeatIndex=None):
-        writer.writeUShort(format)
-        writer[self.which] = ValueRecordFactory(format)
-
-
-class ValueRecord(ValueFormat):
-    def getRecordSize(self, reader):
-        return 2 * len(reader[self.which])
-
-    def read(self, reader, font, tableDict):
-        return reader[self.which].readValueRecord(reader, font)
-
-    def write(self, writer, font, tableDict, value, repeatIndex=None):
-        writer[self.which].writeValueRecord(writer, font, value)
-
-    def xmlWrite(self, xmlWriter, font, value, name, attrs):
-        if value is None:
-            pass  # NULL table, ignore
-        else:
-            value.toXML(xmlWriter, font, self.name, attrs)
-
-    def xmlRead(self, attrs, content, font):
-        from .otBase import ValueRecord
-
-        value = ValueRecord()
-        value.fromXML(None, attrs, content, font)
-        return value
-
-
-class AATLookup(BaseConverter):
-    BIN_SEARCH_HEADER_SIZE = 10
-
-    def __init__(self, name, repeat, aux, tableClass, *, description=""):
-        BaseConverter.__init__(
-            self, name, repeat, aux, tableClass, description=description
-        )
-        if issubclass(self.tableClass, SimpleValue):
-            self.converter = self.tableClass(name="Value", repeat=None, aux=None)
-        else:
-            self.converter = Table(
-                name="Value", repeat=None, aux=None, tableClass=self.tableClass
-            )
-
-    def read(self, reader, font, tableDict):
-        format = reader.readUShort()
-        if format == 0:
-            return self.readFormat0(reader, font)
-        elif format == 2:
-            return self.readFormat2(reader, font)
-        elif format == 4:
-            return self.readFormat4(reader, font)
-        elif format == 6:
-            return self.readFormat6(reader, font)
-        elif format == 8:
-            return self.readFormat8(reader, font)
-        else:
-            assert False, "unsupported lookup format: %d" % format
-
-    def write(self, writer, font, tableDict, value, repeatIndex=None):
-        values = list(
-            sorted([(font.getGlyphID(glyph), val) for glyph, val in value.items()])
-        )
-        # TODO: Also implement format 4.
-        formats = list(
-            sorted(
-                filter(
-                    None,
-                    [
-                        self.buildFormat0(writer, font, values),
-                        self.buildFormat2(writer, font, values),
-                        self.buildFormat6(writer, font, values),
-                        self.buildFormat8(writer, font, values),
-                    ],
-                )
-            )
-        )
-        # We use the format ID as secondary sort key to make the output
-        # deterministic when multiple formats have same encoded size.
-        dataSize, lookupFormat, writeMethod = formats[0]
-        pos = writer.getDataLength()
-        writeMethod()
-        actualSize = writer.getDataLength() - pos
-        assert (
-            actualSize == dataSize
-        ), "AATLookup format %d claimed to write %d bytes, but wrote %d" % (
-            lookupFormat,
-            dataSize,
-            actualSize,
-        )
-
-    @staticmethod
-    def writeBinSearchHeader(writer, numUnits, unitSize):
-        writer.writeUShort(unitSize)
-        writer.writeUShort(numUnits)
-        searchRange, entrySelector, rangeShift = getSearchRange(
-            n=numUnits, itemSize=unitSize
-        )
-        writer.writeUShort(searchRange)
-        writer.writeUShort(entrySelector)
-        writer.writeUShort(rangeShift)
-
-    def buildFormat0(self, writer, font, values):
-        numGlyphs = len(font.getGlyphOrder())
-        if len(values) != numGlyphs:
-            return None
-        valueSize = self.converter.staticSize
-        return (
-            2 + numGlyphs * valueSize,
-            0,
-            lambda: self.writeFormat0(writer, font, values),
-        )
-
-    def writeFormat0(self, writer, font, values):
-        writer.writeUShort(0)
-        for glyphID_, value in values:
-            self.converter.write(
-                writer, font, tableDict=None, value=value, repeatIndex=None
-            )
-
-    def buildFormat2(self, writer, font, values):
-        segStart, segValue = values[0]
-        segEnd = segStart
-        segments = []
-        for glyphID, curValue in values[1:]:
-            if glyphID != segEnd + 1 or curValue != segValue:
-                segments.append((segStart, segEnd, segValue))
-                segStart = segEnd = glyphID
-                segValue = curValue
-            else:
-                segEnd = glyphID
-        segments.append((segStart, segEnd, segValue))
-        valueSize = self.converter.staticSize
-        numUnits, unitSize = len(segments) + 1, valueSize + 4
-        return (
-            2 + self.BIN_SEARCH_HEADER_SIZE + numUnits * unitSize,
-            2,
-            lambda: self.writeFormat2(writer, font, segments),
-        )
-
-    def writeFormat2(self, writer, font, segments):
-        writer.writeUShort(2)
-        valueSize = self.converter.staticSize
-        numUnits, unitSize = len(segments), valueSize + 4
-        self.writeBinSearchHeader(writer, numUnits, unitSize)
-        for firstGlyph, lastGlyph, value in segments:
-            writer.writeUShort(lastGlyph)
-            writer.writeUShort(firstGlyph)
-            self.converter.write(
-                writer, font, tableDict=None, value=value, repeatIndex=None
-            )
-        writer.writeUShort(0xFFFF)
-        writer.writeUShort(0xFFFF)
-        writer.writeData(b"\x00" * valueSize)
-
-    def buildFormat6(self, writer, font, values):
-        valueSize = self.converter.staticSize
-        numUnits, unitSize = len(values), valueSize + 2
-        return (
-            2 + self.BIN_SEARCH_HEADER_SIZE + (numUnits + 1) * unitSize,
-            6,
-            lambda: self.writeFormat6(writer, font, values),
-        )
-
-    def writeFormat6(self, writer, font, values):
-        writer.writeUShort(6)
-        valueSize = self.converter.staticSize
-        numUnits, unitSize = len(values), valueSize + 2
-        self.writeBinSearchHeader(writer, numUnits, unitSize)
-        for glyphID, value in values:
-            writer.writeUShort(glyphID)
-            self.converter.write(
-                writer, font, tableDict=None, value=value, repeatIndex=None
-            )
-        writer.writeUShort(0xFFFF)
-        writer.writeData(b"\x00" * valueSize)
-
-    def buildFormat8(self, writer, font, values):
-        minGlyphID, maxGlyphID = values[0][0], values[-1][0]
-        if len(values) != maxGlyphID - minGlyphID + 1:
-            return None
-        valueSize = self.converter.staticSize
-        return (
-            6 + len(values) * valueSize,
-            8,
-            lambda: self.writeFormat8(writer, font, values),
-        )
-
-    def writeFormat8(self, writer, font, values):
-        firstGlyphID = values[0][0]
-        writer.writeUShort(8)
-        writer.writeUShort(firstGlyphID)
-        writer.writeUShort(len(values))
-        for _, value in values:
-            self.converter.write(
-                writer, font, tableDict=None, value=value, repeatIndex=None
-            )
-
-    def readFormat0(self, reader, font):
-        numGlyphs = len(font.getGlyphOrder())
-        data = self.converter.readArray(reader, font, tableDict=None, count=numGlyphs)
-        return {font.getGlyphName(k): value for k, value in enumerate(data)}
-
-    def readFormat2(self, reader, font):
-        mapping = {}
-        pos = reader.pos - 2  # start of table is at UShort for format
-        unitSize, numUnits = reader.readUShort(), reader.readUShort()
-        assert unitSize >= 4 + self.converter.staticSize, unitSize
-        for i in range(numUnits):
-            reader.seek(pos + i * unitSize + 12)
-            last = reader.readUShort()
-            first = reader.readUShort()
-            value = self.converter.read(reader, font, tableDict=None)
-            if last != 0xFFFF:
-                for k in range(first, last + 1):
-                    mapping[font.getGlyphName(k)] = value
-        return mapping
-
-    def readFormat4(self, reader, font):
-        mapping = {}
-        pos = reader.pos - 2  # start of table is at UShort for format
-        unitSize = reader.readUShort()
-        assert unitSize >= 6, unitSize
-        for i in range(reader.readUShort()):
-            reader.seek(pos + i * unitSize + 12)
-            last = reader.readUShort()
-            first = reader.readUShort()
-            offset = reader.readUShort()
-            if last != 0xFFFF:
-                dataReader = reader.getSubReader(0)  # relative to current position
-                dataReader.seek(pos + offset)  # relative to start of table
-                data = self.converter.readArray(
-                    dataReader, font, tableDict=None, count=last - first + 1
-                )
-                for k, v in enumerate(data):
-                    mapping[font.getGlyphName(first + k)] = v
-        return mapping
-
-    def readFormat6(self, reader, font):
-        mapping = {}
-        pos = reader.pos - 2  # start of table is at UShort for format
-        unitSize = reader.readUShort()
-        assert unitSize >= 2 + self.converter.staticSize, unitSize
-        for i in range(reader.readUShort()):
-            reader.seek(pos + i * unitSize + 12)
-            glyphID = reader.readUShort()
-            value = self.converter.read(reader, font, tableDict=None)
-            if glyphID != 0xFFFF:
-                mapping[font.getGlyphName(glyphID)] = value
-        return mapping
-
-    def readFormat8(self, reader, font):
-        first = reader.readUShort()
-        count = reader.readUShort()
-        data = self.converter.readArray(reader, font, tableDict=None, count=count)
-        return {font.getGlyphName(first + k): value for (k, value) in enumerate(data)}
-
-    def xmlRead(self, attrs, content, font):
-        value = {}
-        for element in content:
-            if isinstance(element, tuple):
-                name, a, eltContent = element
-                if name == "Lookup":
-                    value[a["glyph"]] = self.converter.xmlRead(a, eltContent, font)
-        return value
-
-    def xmlWrite(self, xmlWriter, font, value, name, attrs):
-        xmlWriter.begintag(name, attrs)
-        xmlWriter.newline()
-        for glyph, value in sorted(value.items()):
-            self.converter.xmlWrite(
-                xmlWriter, font, value=value, name="Lookup", attrs=[("glyph", glyph)]
-            )
-        xmlWriter.endtag(name)
-        xmlWriter.newline()
-
-
-# The AAT 'ankr' table has an unusual structure: an offset to an AATLookup
-# followed by an offset to a glyph data table. Unusually, the offsets in
-# the AATLookup are not relative to the beginning of the 'ankr' table,
-# but relative to the glyph data table. So, to find the anchor data for a
-# glyph, one needs to add the offset to the data table to the offset found
-# in the AATLookup, and then use the sum of these two offsets to find the
-# actual data.
-class AATLookupWithDataOffset(BaseConverter):
-    def read(self, reader, font, tableDict):
-        lookupOffset = reader.readULong()
-        dataOffset = reader.readULong()
-        lookupReader = reader.getSubReader(lookupOffset)
-        lookup = AATLookup("DataOffsets", None, None, UShort)
-        offsets = lookup.read(lookupReader, font, tableDict)
-        result = {}
-        for glyph, offset in offsets.items():
-            dataReader = reader.getSubReader(offset + dataOffset)
-            item = self.tableClass()
-            item.decompile(dataReader, font)
-            result[glyph] = item
-        return result
-
-    def write(self, writer, font, tableDict, value, repeatIndex=None):
-        # We do not work with OTTableWriter sub-writers because
-        # the offsets in our AATLookup are relative to our data
-        # table, for which we need to provide an offset value itself.
-        # It might have been possible to somehow make a kludge for
-        # performing this indirect offset computation directly inside
-        # OTTableWriter. But this would have made the internal logic
-        # of OTTableWriter even more complex than it already is,
-        # so we decided to roll our own offset computation for the
-        # contents of the AATLookup and associated data table.
-        offsetByGlyph, offsetByData, dataLen = {}, {}, 0
-        compiledData = []
-        for glyph in sorted(value, key=font.getGlyphID):
-            subWriter = OTTableWriter()
-            value[glyph].compile(subWriter, font)
-            data = subWriter.getAllData()
-            offset = offsetByData.get(data, None)
-            if offset is None:
-                offset = dataLen
-                dataLen = dataLen + len(data)
-                offsetByData[data] = offset
-                compiledData.append(data)
-            offsetByGlyph[glyph] = offset
-        # For calculating the offsets to our AATLookup and data table,
-        # we can use the regular OTTableWriter infrastructure.
-        lookupWriter = writer.getSubWriter(offsetSize=4)
-        lookup = AATLookup("DataOffsets", None, None, UShort)
-        lookup.write(lookupWriter, font, tableDict, offsetByGlyph, None)
-
-        dataWriter = writer.getSubWriter(offsetSize=4)
-        writer.writeSubTable(lookupWriter)
-        writer.writeSubTable(dataWriter)
-        for d in compiledData:
-            dataWriter.writeData(d)
-
-    def xmlRead(self, attrs, content, font):
-        lookup = AATLookup("DataOffsets", None, None, self.tableClass)
-        return lookup.xmlRead(attrs, content, font)
-
-    def xmlWrite(self, xmlWriter, font, value, name, attrs):
-        lookup = AATLookup("DataOffsets", None, None, self.tableClass)
-        lookup.xmlWrite(xmlWriter, font, value, name, attrs)
-
-
-class MorxSubtableConverter(BaseConverter):
-    _PROCESSING_ORDERS = {
-        # bits 30 and 28 of morx.CoverageFlags; see morx spec
-        (False, False): "LayoutOrder",
-        (True, False): "ReversedLayoutOrder",
-        (False, True): "LogicalOrder",
-        (True, True): "ReversedLogicalOrder",
-    }
-
-    _PROCESSING_ORDERS_REVERSED = {val: key for key, val in _PROCESSING_ORDERS.items()}
-
-    def __init__(self, name, repeat, aux, tableClass=None, *, description=""):
-        BaseConverter.__init__(
-            self, name, repeat, aux, tableClass, description=description
-        )
-
-    def _setTextDirectionFromCoverageFlags(self, flags, subtable):
-        if (flags & 0x20) != 0:
-            subtable.TextDirection = "Any"
-        elif (flags & 0x80) != 0:
-            subtable.TextDirection = "Vertical"
-        else:
-            subtable.TextDirection = "Horizontal"
-
-    def read(self, reader, font, tableDict):
-        pos = reader.pos
-        m = MorxSubtable()
-        m.StructLength = reader.readULong()
-        flags = reader.readUInt8()
-        orderKey = ((flags & 0x40) != 0, (flags & 0x10) != 0)
-        m.ProcessingOrder = self._PROCESSING_ORDERS[orderKey]
-        self._setTextDirectionFromCoverageFlags(flags, m)
-        m.Reserved = reader.readUShort()
-        m.Reserved |= (flags & 0xF) << 16
-        m.MorphType = reader.readUInt8()
-        m.SubFeatureFlags = reader.readULong()
-        tableClass = lookupTypes["morx"].get(m.MorphType)
-        if tableClass is None:
-            assert False, "unsupported 'morx' lookup type %s" % m.MorphType
-        # To decode AAT ligatures, we need to know the subtable size.
-        # The easiest way to pass this along is to create a new reader
-        # that works on just the subtable as its data.
-        headerLength = reader.pos - pos
-        data = reader.data[reader.pos : reader.pos + m.StructLength - headerLength]
-        assert len(data) == m.StructLength - headerLength
-        subReader = OTTableReader(data=data, tableTag=reader.tableTag)
-        m.SubStruct = tableClass()
-        m.SubStruct.decompile(subReader, font)
-        reader.seek(pos + m.StructLength)
-        return m
-
-    def xmlWrite(self, xmlWriter, font, value, name, attrs):
-        xmlWriter.begintag(name, attrs)
-        xmlWriter.newline()
-        xmlWriter.comment("StructLength=%d" % value.StructLength)
-        xmlWriter.newline()
-        xmlWriter.simpletag("TextDirection", value=value.TextDirection)
-        xmlWriter.newline()
-        xmlWriter.simpletag("ProcessingOrder", value=value.ProcessingOrder)
-        xmlWriter.newline()
-        if value.Reserved != 0:
-            xmlWriter.simpletag("Reserved", value="0x%04x" % value.Reserved)
-            xmlWriter.newline()
-        xmlWriter.comment("MorphType=%d" % value.MorphType)
-        xmlWriter.newline()
-        xmlWriter.simpletag("SubFeatureFlags", value="0x%08x" % value.SubFeatureFlags)
-        xmlWriter.newline()
-        value.SubStruct.toXML(xmlWriter, font)
-        xmlWriter.endtag(name)
-        xmlWriter.newline()
-
-    def xmlRead(self, attrs, content, font):
-        m = MorxSubtable()
-        covFlags = 0
-        m.Reserved = 0
-        for eltName, eltAttrs, eltContent in filter(istuple, content):
-            if eltName == "CoverageFlags":
-                # Only in XML from old versions of fonttools.
-                covFlags = safeEval(eltAttrs["value"])
-                orderKey = ((covFlags & 0x40) != 0, (covFlags & 0x10) != 0)
-                m.ProcessingOrder = self._PROCESSING_ORDERS[orderKey]
-                self._setTextDirectionFromCoverageFlags(covFlags, m)
-            elif eltName == "ProcessingOrder":
-                m.ProcessingOrder = eltAttrs["value"]
-                assert m.ProcessingOrder in self._PROCESSING_ORDERS_REVERSED, (
-                    "unknown ProcessingOrder: %s" % m.ProcessingOrder
-                )
-            elif eltName == "TextDirection":
-                m.TextDirection = eltAttrs["value"]
-                assert m.TextDirection in {"Horizontal", "Vertical", "Any"}, (
-                    "unknown TextDirection %s" % m.TextDirection
-                )
-            elif eltName == "Reserved":
-                m.Reserved = safeEval(eltAttrs["value"])
-            elif eltName == "SubFeatureFlags":
-                m.SubFeatureFlags = safeEval(eltAttrs["value"])
-            elif eltName.endswith("Morph"):
-                m.fromXML(eltName, eltAttrs, eltContent, font)
-            else:
-                assert False, eltName
-        m.Reserved = (covFlags & 0xF) << 16 | m.Reserved
-        return m
-
-    def write(self, writer, font, tableDict, value, repeatIndex=None):
-        covFlags = (value.Reserved & 0x000F0000) >> 16
-        reverseOrder, logicalOrder = self._PROCESSING_ORDERS_REVERSED[
-            value.ProcessingOrder
-        ]
-        covFlags |= 0x80 if value.TextDirection == "Vertical" else 0
-        covFlags |= 0x40 if reverseOrder else 0
-        covFlags |= 0x20 if value.TextDirection == "Any" else 0
-        covFlags |= 0x10 if logicalOrder else 0
-        value.CoverageFlags = covFlags
-        lengthIndex = len(writer.items)
-        before = writer.getDataLength()
-        value.StructLength = 0xDEADBEEF
-        # The high nibble of value.Reserved is actually encoded
-        # into coverageFlags, so we need to clear it here.
-        origReserved = value.Reserved  # including high nibble
-        value.Reserved = value.Reserved & 0xFFFF  # without high nibble
-        value.compile(writer, font)
-        value.Reserved = origReserved  # restore original value
-        assert writer.items[lengthIndex] == b"\xde\xad\xbe\xef"
-        length = writer.getDataLength() - before
-        writer.items[lengthIndex] = struct.pack(">L", length)
-
-
-# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6Tables.html#ExtendedStateHeader
-# TODO: Untangle the implementation of the various lookup-specific formats.
-class STXHeader(BaseConverter):
-    def __init__(self, name, repeat, aux, tableClass, *, description=""):
-        BaseConverter.__init__(
-            self, name, repeat, aux, tableClass, description=description
-        )
-        assert issubclass(self.tableClass, AATAction)
-        self.classLookup = AATLookup("GlyphClasses", None, None, UShort)
-        if issubclass(self.tableClass, ContextualMorphAction):
-            self.perGlyphLookup = AATLookup("PerGlyphLookup", None, None, GlyphID)
-        else:
-            self.perGlyphLookup = None
-
-    def read(self, reader, font, tableDict):
-        table = AATStateTable()
-        pos = reader.pos
-        classTableReader = reader.getSubReader(0)
-        stateArrayReader = reader.getSubReader(0)
-        entryTableReader = reader.getSubReader(0)
-        actionReader = None
-        ligaturesReader = None
-        table.GlyphClassCount = reader.readULong()
-        classTableReader.seek(pos + reader.readULong())
-        stateArrayReader.seek(pos + reader.readULong())
-        entryTableReader.seek(pos + reader.readULong())
-        if self.perGlyphLookup is not None:
-            perGlyphTableReader = reader.getSubReader(0)
-            perGlyphTableReader.seek(pos + reader.readULong())
-        if issubclass(self.tableClass, LigatureMorphAction):
-            actionReader = reader.getSubReader(0)
-            actionReader.seek(pos + reader.readULong())
-            ligComponentReader = reader.getSubReader(0)
-            ligComponentReader.seek(pos + reader.readULong())
-            ligaturesReader = reader.getSubReader(0)
-            ligaturesReader.seek(pos + reader.readULong())
-            numLigComponents = (ligaturesReader.pos - ligComponentReader.pos) // 2
-            assert numLigComponents >= 0
-            table.LigComponents = ligComponentReader.readUShortArray(numLigComponents)
-            table.Ligatures = self._readLigatures(ligaturesReader, font)
-        elif issubclass(self.tableClass, InsertionMorphAction):
-            actionReader = reader.getSubReader(0)
-            actionReader.seek(pos + reader.readULong())
-        table.GlyphClasses = self.classLookup.read(classTableReader, font, tableDict)
-        numStates = int(
-            (entryTableReader.pos - stateArrayReader.pos) / (table.GlyphClassCount * 2)
-        )
-        for stateIndex in range(numStates):
-            state = AATState()
-            table.States.append(state)
-            for glyphClass in range(table.GlyphClassCount):
-                entryIndex = stateArrayReader.readUShort()
-                state.Transitions[glyphClass] = self._readTransition(
-                    entryTableReader, entryIndex, font, actionReader
-                )
-        if self.perGlyphLookup is not None:
-            table.PerGlyphLookups = self._readPerGlyphLookups(
-                table, perGlyphTableReader, font
-            )
-        return table
-
-    def _readTransition(self, reader, entryIndex, font, actionReader):
-        transition = self.tableClass()
-        entryReader = reader.getSubReader(
-            reader.pos + entryIndex * transition.staticSize
-        )
-        transition.decompile(entryReader, font, actionReader)
-        return transition
-
-    def _readLigatures(self, reader, font):
-        limit = len(reader.data)
-        numLigatureGlyphs = (limit - reader.pos) // 2
-        return font.getGlyphNameMany(reader.readUShortArray(numLigatureGlyphs))
-
-    def _countPerGlyphLookups(self, table):
-        # Somewhat annoyingly, the morx table does not encode
-        # the size of the per-glyph table. So we need to find
-        # the maximum value that MorphActions use as index
-        # into this table.
-        numLookups = 0
-        for state in table.States:
-            for t in state.Transitions.values():
-                if isinstance(t, ContextualMorphAction):
-                    if t.MarkIndex != 0xFFFF:
-                        numLookups = max(numLookups, t.MarkIndex + 1)
|
1381 |
-
if t.CurrentIndex != 0xFFFF:
|
1382 |
-
numLookups = max(numLookups, t.CurrentIndex + 1)
|
1383 |
-
return numLookups
|
1384 |
-
|
1385 |
-
def _readPerGlyphLookups(self, table, reader, font):
|
1386 |
-
pos = reader.pos
|
1387 |
-
lookups = []
|
1388 |
-
for _ in range(self._countPerGlyphLookups(table)):
|
1389 |
-
lookupReader = reader.getSubReader(0)
|
1390 |
-
lookupReader.seek(pos + reader.readULong())
|
1391 |
-
lookups.append(self.perGlyphLookup.read(lookupReader, font, {}))
|
1392 |
-
return lookups
|
1393 |
-
|
1394 |
-
def write(self, writer, font, tableDict, value, repeatIndex=None):
|
1395 |
-
glyphClassWriter = OTTableWriter()
|
1396 |
-
self.classLookup.write(
|
1397 |
-
glyphClassWriter, font, tableDict, value.GlyphClasses, repeatIndex=None
|
1398 |
-
)
|
1399 |
-
glyphClassData = pad(glyphClassWriter.getAllData(), 2)
|
1400 |
-
glyphClassCount = max(value.GlyphClasses.values()) + 1
|
1401 |
-
glyphClassTableOffset = 16 # size of STXHeader
|
1402 |
-
if self.perGlyphLookup is not None:
|
1403 |
-
glyphClassTableOffset += 4
|
1404 |
-
|
1405 |
-
glyphClassTableOffset += self.tableClass.actionHeaderSize
|
1406 |
-
actionData, actionIndex = self.tableClass.compileActions(font, value.States)
|
1407 |
-
stateArrayData, entryTableData = self._compileStates(
|
1408 |
-
font, value.States, glyphClassCount, actionIndex
|
1409 |
-
)
|
1410 |
-
stateArrayOffset = glyphClassTableOffset + len(glyphClassData)
|
1411 |
-
entryTableOffset = stateArrayOffset + len(stateArrayData)
|
1412 |
-
perGlyphOffset = entryTableOffset + len(entryTableData)
|
1413 |
-
perGlyphData = pad(self._compilePerGlyphLookups(value, font), 4)
|
1414 |
-
if actionData is not None:
|
1415 |
-
actionOffset = entryTableOffset + len(entryTableData)
|
1416 |
-
else:
|
1417 |
-
actionOffset = None
|
1418 |
-
|
1419 |
-
ligaturesOffset, ligComponentsOffset = None, None
|
1420 |
-
ligComponentsData = self._compileLigComponents(value, font)
|
1421 |
-
ligaturesData = self._compileLigatures(value, font)
|
1422 |
-
if ligComponentsData is not None:
|
1423 |
-
assert len(perGlyphData) == 0
|
1424 |
-
ligComponentsOffset = actionOffset + len(actionData)
|
1425 |
-
ligaturesOffset = ligComponentsOffset + len(ligComponentsData)
|
1426 |
-
|
1427 |
-
writer.writeULong(glyphClassCount)
|
1428 |
-
writer.writeULong(glyphClassTableOffset)
|
1429 |
-
writer.writeULong(stateArrayOffset)
|
1430 |
-
writer.writeULong(entryTableOffset)
|
1431 |
-
if self.perGlyphLookup is not None:
|
1432 |
-
writer.writeULong(perGlyphOffset)
|
1433 |
-
if actionOffset is not None:
|
1434 |
-
writer.writeULong(actionOffset)
|
1435 |
-
if ligComponentsOffset is not None:
|
1436 |
-
writer.writeULong(ligComponentsOffset)
|
1437 |
-
writer.writeULong(ligaturesOffset)
|
1438 |
-
writer.writeData(glyphClassData)
|
1439 |
-
writer.writeData(stateArrayData)
|
1440 |
-
writer.writeData(entryTableData)
|
1441 |
-
writer.writeData(perGlyphData)
|
1442 |
-
if actionData is not None:
|
1443 |
-
writer.writeData(actionData)
|
1444 |
-
if ligComponentsData is not None:
|
1445 |
-
writer.writeData(ligComponentsData)
|
1446 |
-
if ligaturesData is not None:
|
1447 |
-
writer.writeData(ligaturesData)
|
1448 |
-
|
1449 |
-
def _compileStates(self, font, states, glyphClassCount, actionIndex):
|
1450 |
-
stateArrayWriter = OTTableWriter()
|
1451 |
-
entries, entryIDs = [], {}
|
1452 |
-
for state in states:
|
1453 |
-
for glyphClass in range(glyphClassCount):
|
1454 |
-
transition = state.Transitions[glyphClass]
|
1455 |
-
entryWriter = OTTableWriter()
|
1456 |
-
transition.compile(entryWriter, font, actionIndex)
|
1457 |
-
entryData = entryWriter.getAllData()
|
1458 |
-
assert (
|
1459 |
-
len(entryData) == transition.staticSize
|
1460 |
-
), "%s has staticSize %d, " "but actually wrote %d bytes" % (
|
1461 |
-
repr(transition),
|
1462 |
-
transition.staticSize,
|
1463 |
-
len(entryData),
|
1464 |
-
)
|
1465 |
-
entryIndex = entryIDs.get(entryData)
|
1466 |
-
if entryIndex is None:
|
1467 |
-
entryIndex = len(entries)
|
1468 |
-
entryIDs[entryData] = entryIndex
|
1469 |
-
entries.append(entryData)
|
1470 |
-
stateArrayWriter.writeUShort(entryIndex)
|
1471 |
-
stateArrayData = pad(stateArrayWriter.getAllData(), 4)
|
1472 |
-
entryTableData = pad(bytesjoin(entries), 4)
|
1473 |
-
return stateArrayData, entryTableData
|
1474 |
-
|
1475 |
-
def _compilePerGlyphLookups(self, table, font):
|
1476 |
-
if self.perGlyphLookup is None:
|
1477 |
-
return b""
|
1478 |
-
numLookups = self._countPerGlyphLookups(table)
|
1479 |
-
assert len(table.PerGlyphLookups) == numLookups, (
|
1480 |
-
"len(AATStateTable.PerGlyphLookups) is %d, "
|
1481 |
-
"but the actions inside the table refer to %d"
|
1482 |
-
% (len(table.PerGlyphLookups), numLookups)
|
1483 |
-
)
|
1484 |
-
writer = OTTableWriter()
|
1485 |
-
for lookup in table.PerGlyphLookups:
|
1486 |
-
lookupWriter = writer.getSubWriter(offsetSize=4)
|
1487 |
-
self.perGlyphLookup.write(lookupWriter, font, {}, lookup, None)
|
1488 |
-
writer.writeSubTable(lookupWriter)
|
1489 |
-
return writer.getAllData()
|
1490 |
-
|
1491 |
-
def _compileLigComponents(self, table, font):
|
1492 |
-
if not hasattr(table, "LigComponents"):
|
1493 |
-
return None
|
1494 |
-
writer = OTTableWriter()
|
1495 |
-
for component in table.LigComponents:
|
1496 |
-
writer.writeUShort(component)
|
1497 |
-
return writer.getAllData()
|
1498 |
-
|
1499 |
-
def _compileLigatures(self, table, font):
|
1500 |
-
if not hasattr(table, "Ligatures"):
|
1501 |
-
return None
|
1502 |
-
writer = OTTableWriter()
|
1503 |
-
for glyphName in table.Ligatures:
|
1504 |
-
writer.writeUShort(font.getGlyphID(glyphName))
|
1505 |
-
return writer.getAllData()
|
1506 |
-
|
1507 |
-
def xmlWrite(self, xmlWriter, font, value, name, attrs):
|
1508 |
-
xmlWriter.begintag(name, attrs)
|
1509 |
-
xmlWriter.newline()
|
1510 |
-
xmlWriter.comment("GlyphClassCount=%s" % value.GlyphClassCount)
|
1511 |
-
xmlWriter.newline()
|
1512 |
-
for g, klass in sorted(value.GlyphClasses.items()):
|
1513 |
-
xmlWriter.simpletag("GlyphClass", glyph=g, value=klass)
|
1514 |
-
xmlWriter.newline()
|
1515 |
-
for stateIndex, state in enumerate(value.States):
|
1516 |
-
xmlWriter.begintag("State", index=stateIndex)
|
1517 |
-
xmlWriter.newline()
|
1518 |
-
for glyphClass, trans in sorted(state.Transitions.items()):
|
1519 |
-
trans.toXML(
|
1520 |
-
xmlWriter,
|
1521 |
-
font=font,
|
1522 |
-
attrs={"onGlyphClass": glyphClass},
|
1523 |
-
name="Transition",
|
1524 |
-
)
|
1525 |
-
xmlWriter.endtag("State")
|
1526 |
-
xmlWriter.newline()
|
1527 |
-
for i, lookup in enumerate(value.PerGlyphLookups):
|
1528 |
-
xmlWriter.begintag("PerGlyphLookup", index=i)
|
1529 |
-
xmlWriter.newline()
|
1530 |
-
for glyph, val in sorted(lookup.items()):
|
1531 |
-
xmlWriter.simpletag("Lookup", glyph=glyph, value=val)
|
1532 |
-
xmlWriter.newline()
|
1533 |
-
xmlWriter.endtag("PerGlyphLookup")
|
1534 |
-
xmlWriter.newline()
|
1535 |
-
if hasattr(value, "LigComponents"):
|
1536 |
-
xmlWriter.begintag("LigComponents")
|
1537 |
-
xmlWriter.newline()
|
1538 |
-
for i, val in enumerate(getattr(value, "LigComponents")):
|
1539 |
-
xmlWriter.simpletag("LigComponent", index=i, value=val)
|
1540 |
-
xmlWriter.newline()
|
1541 |
-
xmlWriter.endtag("LigComponents")
|
1542 |
-
xmlWriter.newline()
|
1543 |
-
self._xmlWriteLigatures(xmlWriter, font, value, name, attrs)
|
1544 |
-
xmlWriter.endtag(name)
|
1545 |
-
xmlWriter.newline()
|
1546 |
-
|
1547 |
-
def _xmlWriteLigatures(self, xmlWriter, font, value, name, attrs):
|
1548 |
-
if not hasattr(value, "Ligatures"):
|
1549 |
-
return
|
1550 |
-
xmlWriter.begintag("Ligatures")
|
1551 |
-
xmlWriter.newline()
|
1552 |
-
for i, g in enumerate(getattr(value, "Ligatures")):
|
1553 |
-
xmlWriter.simpletag("Ligature", index=i, glyph=g)
|
1554 |
-
xmlWriter.newline()
|
1555 |
-
xmlWriter.endtag("Ligatures")
|
1556 |
-
xmlWriter.newline()
|
1557 |
-
|
1558 |
-
def xmlRead(self, attrs, content, font):
|
1559 |
-
table = AATStateTable()
|
1560 |
-
for eltName, eltAttrs, eltContent in filter(istuple, content):
|
1561 |
-
if eltName == "GlyphClass":
|
1562 |
-
glyph = eltAttrs["glyph"]
|
1563 |
-
value = eltAttrs["value"]
|
1564 |
-
table.GlyphClasses[glyph] = safeEval(value)
|
1565 |
-
elif eltName == "State":
|
1566 |
-
state = self._xmlReadState(eltAttrs, eltContent, font)
|
1567 |
-
table.States.append(state)
|
1568 |
-
elif eltName == "PerGlyphLookup":
|
1569 |
-
lookup = self.perGlyphLookup.xmlRead(eltAttrs, eltContent, font)
|
1570 |
-
table.PerGlyphLookups.append(lookup)
|
1571 |
-
elif eltName == "LigComponents":
|
1572 |
-
table.LigComponents = self._xmlReadLigComponents(
|
1573 |
-
eltAttrs, eltContent, font
|
1574 |
-
)
|
1575 |
-
elif eltName == "Ligatures":
|
1576 |
-
table.Ligatures = self._xmlReadLigatures(eltAttrs, eltContent, font)
|
1577 |
-
table.GlyphClassCount = max(table.GlyphClasses.values()) + 1
|
1578 |
-
return table
|
1579 |
-
|
1580 |
-
def _xmlReadState(self, attrs, content, font):
|
1581 |
-
state = AATState()
|
1582 |
-
for eltName, eltAttrs, eltContent in filter(istuple, content):
|
1583 |
-
if eltName == "Transition":
|
1584 |
-
glyphClass = safeEval(eltAttrs["onGlyphClass"])
|
1585 |
-
transition = self.tableClass()
|
1586 |
-
transition.fromXML(eltName, eltAttrs, eltContent, font)
|
1587 |
-
state.Transitions[glyphClass] = transition
|
1588 |
-
return state
|
1589 |
-
|
1590 |
-
def _xmlReadLigComponents(self, attrs, content, font):
|
1591 |
-
ligComponents = []
|
1592 |
-
for eltName, eltAttrs, _eltContent in filter(istuple, content):
|
1593 |
-
if eltName == "LigComponent":
|
1594 |
-
ligComponents.append(safeEval(eltAttrs["value"]))
|
1595 |
-
return ligComponents
|
1596 |
-
|
1597 |
-
def _xmlReadLigatures(self, attrs, content, font):
|
1598 |
-
ligs = []
|
1599 |
-
for eltName, eltAttrs, _eltContent in filter(istuple, content):
|
1600 |
-
if eltName == "Ligature":
|
1601 |
-
ligs.append(eltAttrs["glyph"])
|
1602 |
-
return ligs
|
1603 |
-
|
1604 |
-
|
1605 |
-
class CIDGlyphMap(BaseConverter):
|
1606 |
-
def read(self, reader, font, tableDict):
|
1607 |
-
numCIDs = reader.readUShort()
|
1608 |
-
result = {}
|
1609 |
-
for cid, glyphID in enumerate(reader.readUShortArray(numCIDs)):
|
1610 |
-
if glyphID != 0xFFFF:
|
1611 |
-
result[cid] = font.getGlyphName(glyphID)
|
1612 |
-
return result
|
1613 |
-
|
1614 |
-
def write(self, writer, font, tableDict, value, repeatIndex=None):
|
1615 |
-
items = {cid: font.getGlyphID(glyph) for cid, glyph in value.items()}
|
1616 |
-
count = max(items) + 1 if items else 0
|
1617 |
-
writer.writeUShort(count)
|
1618 |
-
for cid in range(count):
|
1619 |
-
writer.writeUShort(items.get(cid, 0xFFFF))
|
1620 |
-
|
1621 |
-
def xmlRead(self, attrs, content, font):
|
1622 |
-
result = {}
|
1623 |
-
for eName, eAttrs, _eContent in filter(istuple, content):
|
1624 |
-
if eName == "CID":
|
1625 |
-
result[safeEval(eAttrs["cid"])] = eAttrs["glyph"].strip()
|
1626 |
-
return result
|
1627 |
-
|
1628 |
-
def xmlWrite(self, xmlWriter, font, value, name, attrs):
|
1629 |
-
xmlWriter.begintag(name, attrs)
|
1630 |
-
xmlWriter.newline()
|
1631 |
-
for cid, glyph in sorted(value.items()):
|
1632 |
-
if glyph is not None and glyph != 0xFFFF:
|
1633 |
-
xmlWriter.simpletag("CID", cid=cid, glyph=glyph)
|
1634 |
-
xmlWriter.newline()
|
1635 |
-
xmlWriter.endtag(name)
|
1636 |
-
xmlWriter.newline()
|
1637 |
-
|
1638 |
-
|
1639 |
-
class GlyphCIDMap(BaseConverter):
|
1640 |
-
def read(self, reader, font, tableDict):
|
1641 |
-
glyphOrder = font.getGlyphOrder()
|
1642 |
-
count = reader.readUShort()
|
1643 |
-
cids = reader.readUShortArray(count)
|
1644 |
-
if count > len(glyphOrder):
|
1645 |
-
log.warning(
|
1646 |
-
"GlyphCIDMap has %d elements, "
|
1647 |
-
"but the font has only %d glyphs; "
|
1648 |
-
"ignoring the rest" % (count, len(glyphOrder))
|
1649 |
-
)
|
1650 |
-
result = {}
|
1651 |
-
for glyphID in range(min(len(cids), len(glyphOrder))):
|
1652 |
-
cid = cids[glyphID]
|
1653 |
-
if cid != 0xFFFF:
|
1654 |
-
result[glyphOrder[glyphID]] = cid
|
1655 |
-
return result
|
1656 |
-
|
1657 |
-
def write(self, writer, font, tableDict, value, repeatIndex=None):
|
1658 |
-
items = {
|
1659 |
-
font.getGlyphID(g): cid
|
1660 |
-
for g, cid in value.items()
|
1661 |
-
if cid is not None and cid != 0xFFFF
|
1662 |
-
}
|
1663 |
-
count = max(items) + 1 if items else 0
|
1664 |
-
writer.writeUShort(count)
|
1665 |
-
for glyphID in range(count):
|
1666 |
-
writer.writeUShort(items.get(glyphID, 0xFFFF))
|
1667 |
-
|
1668 |
-
def xmlRead(self, attrs, content, font):
|
1669 |
-
result = {}
|
1670 |
-
for eName, eAttrs, _eContent in filter(istuple, content):
|
1671 |
-
if eName == "CID":
|
1672 |
-
result[eAttrs["glyph"]] = safeEval(eAttrs["value"])
|
1673 |
-
return result
|
1674 |
-
|
1675 |
-
def xmlWrite(self, xmlWriter, font, value, name, attrs):
|
1676 |
-
xmlWriter.begintag(name, attrs)
|
1677 |
-
xmlWriter.newline()
|
1678 |
-
for glyph, cid in sorted(value.items()):
|
1679 |
-
if cid is not None and cid != 0xFFFF:
|
1680 |
-
xmlWriter.simpletag("CID", glyph=glyph, value=cid)
|
1681 |
-
xmlWriter.newline()
|
1682 |
-
xmlWriter.endtag(name)
|
1683 |
-
xmlWriter.newline()
|
1684 |
-
|
1685 |
-
|
1686 |
-
class DeltaValue(BaseConverter):
|
1687 |
-
def read(self, reader, font, tableDict):
|
1688 |
-
StartSize = tableDict["StartSize"]
|
1689 |
-
EndSize = tableDict["EndSize"]
|
1690 |
-
DeltaFormat = tableDict["DeltaFormat"]
|
1691 |
-
assert DeltaFormat in (1, 2, 3), "illegal DeltaFormat"
|
1692 |
-
nItems = EndSize - StartSize + 1
|
1693 |
-
nBits = 1 << DeltaFormat
|
1694 |
-
minusOffset = 1 << nBits
|
1695 |
-
mask = (1 << nBits) - 1
|
1696 |
-
signMask = 1 << (nBits - 1)
|
1697 |
-
|
1698 |
-
DeltaValue = []
|
1699 |
-
tmp, shift = 0, 0
|
1700 |
-
for i in range(nItems):
|
1701 |
-
if shift == 0:
|
1702 |
-
tmp, shift = reader.readUShort(), 16
|
1703 |
-
shift = shift - nBits
|
1704 |
-
value = (tmp >> shift) & mask
|
1705 |
-
if value & signMask:
|
1706 |
-
value = value - minusOffset
|
1707 |
-
DeltaValue.append(value)
|
1708 |
-
return DeltaValue
|
1709 |
-
|
1710 |
-
def write(self, writer, font, tableDict, value, repeatIndex=None):
|
1711 |
-
StartSize = tableDict["StartSize"]
|
1712 |
-
EndSize = tableDict["EndSize"]
|
1713 |
-
DeltaFormat = tableDict["DeltaFormat"]
|
1714 |
-
DeltaValue = value
|
1715 |
-
assert DeltaFormat in (1, 2, 3), "illegal DeltaFormat"
|
1716 |
-
nItems = EndSize - StartSize + 1
|
1717 |
-
nBits = 1 << DeltaFormat
|
1718 |
-
assert len(DeltaValue) == nItems
|
1719 |
-
mask = (1 << nBits) - 1
|
1720 |
-
|
1721 |
-
tmp, shift = 0, 16
|
1722 |
-
for value in DeltaValue:
|
1723 |
-
shift = shift - nBits
|
1724 |
-
tmp = tmp | ((value & mask) << shift)
|
1725 |
-
if shift == 0:
|
1726 |
-
writer.writeUShort(tmp)
|
1727 |
-
tmp, shift = 0, 16
|
1728 |
-
if shift != 16:
|
1729 |
-
writer.writeUShort(tmp)
|
1730 |
-
|
1731 |
-
def xmlWrite(self, xmlWriter, font, value, name, attrs):
|
1732 |
-
xmlWriter.simpletag(name, attrs + [("value", value)])
|
1733 |
-
xmlWriter.newline()
|
1734 |
-
|
1735 |
-
def xmlRead(self, attrs, content, font):
|
1736 |
-
return safeEval(attrs["value"])
|
1737 |
-
|
1738 |
-
|
1739 |
-
class VarIdxMapValue(BaseConverter):
|
1740 |
-
def read(self, reader, font, tableDict):
|
1741 |
-
fmt = tableDict["EntryFormat"]
|
1742 |
-
nItems = tableDict["MappingCount"]
|
1743 |
-
|
1744 |
-
innerBits = 1 + (fmt & 0x000F)
|
1745 |
-
innerMask = (1 << innerBits) - 1
|
1746 |
-
outerMask = 0xFFFFFFFF - innerMask
|
1747 |
-
outerShift = 16 - innerBits
|
1748 |
-
|
1749 |
-
entrySize = 1 + ((fmt & 0x0030) >> 4)
|
1750 |
-
readArray = {
|
1751 |
-
1: reader.readUInt8Array,
|
1752 |
-
2: reader.readUShortArray,
|
1753 |
-
3: reader.readUInt24Array,
|
1754 |
-
4: reader.readULongArray,
|
1755 |
-
}[entrySize]
|
1756 |
-
|
1757 |
-
return [
|
1758 |
-
(((raw & outerMask) << outerShift) | (raw & innerMask))
|
1759 |
-
for raw in readArray(nItems)
|
1760 |
-
]
|
1761 |
-
|
1762 |
-
def write(self, writer, font, tableDict, value, repeatIndex=None):
|
1763 |
-
fmt = tableDict["EntryFormat"]
|
1764 |
-
mapping = value
|
1765 |
-
writer["MappingCount"].setValue(len(mapping))
|
1766 |
-
|
1767 |
-
innerBits = 1 + (fmt & 0x000F)
|
1768 |
-
innerMask = (1 << innerBits) - 1
|
1769 |
-
outerShift = 16 - innerBits
|
1770 |
-
|
1771 |
-
entrySize = 1 + ((fmt & 0x0030) >> 4)
|
1772 |
-
writeArray = {
|
1773 |
-
1: writer.writeUInt8Array,
|
1774 |
-
2: writer.writeUShortArray,
|
1775 |
-
3: writer.writeUInt24Array,
|
1776 |
-
4: writer.writeULongArray,
|
1777 |
-
}[entrySize]
|
1778 |
-
|
1779 |
-
writeArray(
|
1780 |
-
[
|
1781 |
-
(((idx & 0xFFFF0000) >> outerShift) | (idx & innerMask))
|
1782 |
-
for idx in mapping
|
1783 |
-
]
|
1784 |
-
)
|
1785 |
-
|
1786 |
-
|
1787 |
-
class VarDataValue(BaseConverter):
|
1788 |
-
def read(self, reader, font, tableDict):
|
1789 |
-
values = []
|
1790 |
-
|
1791 |
-
regionCount = tableDict["VarRegionCount"]
|
1792 |
-
wordCount = tableDict["NumShorts"]
|
1793 |
-
|
1794 |
-
# https://github.com/fonttools/fonttools/issues/2279
|
1795 |
-
longWords = bool(wordCount & 0x8000)
|
1796 |
-
wordCount = wordCount & 0x7FFF
|
1797 |
-
|
1798 |
-
if longWords:
|
1799 |
-
readBigArray, readSmallArray = reader.readLongArray, reader.readShortArray
|
1800 |
-
else:
|
1801 |
-
readBigArray, readSmallArray = reader.readShortArray, reader.readInt8Array
|
1802 |
-
|
1803 |
-
n1, n2 = min(regionCount, wordCount), max(regionCount, wordCount)
|
1804 |
-
values.extend(readBigArray(n1))
|
1805 |
-
values.extend(readSmallArray(n2 - n1))
|
1806 |
-
if n2 > regionCount: # Padding
|
1807 |
-
del values[regionCount:]
|
1808 |
-
|
1809 |
-
return values
|
1810 |
-
|
1811 |
-
def write(self, writer, font, tableDict, values, repeatIndex=None):
|
1812 |
-
regionCount = tableDict["VarRegionCount"]
|
1813 |
-
wordCount = tableDict["NumShorts"]
|
1814 |
-
|
1815 |
-
# https://github.com/fonttools/fonttools/issues/2279
|
1816 |
-
longWords = bool(wordCount & 0x8000)
|
1817 |
-
wordCount = wordCount & 0x7FFF
|
1818 |
-
|
1819 |
-
(writeBigArray, writeSmallArray) = {
|
1820 |
-
False: (writer.writeShortArray, writer.writeInt8Array),
|
1821 |
-
True: (writer.writeLongArray, writer.writeShortArray),
|
1822 |
-
}[longWords]
|
1823 |
-
|
1824 |
-
n1, n2 = min(regionCount, wordCount), max(regionCount, wordCount)
|
1825 |
-
writeBigArray(values[:n1])
|
1826 |
-
writeSmallArray(values[n1:regionCount])
|
1827 |
-
if n2 > regionCount: # Padding
|
1828 |
-
writer.writeSmallArray([0] * (n2 - regionCount))
|
1829 |
-
|
1830 |
-
def xmlWrite(self, xmlWriter, font, value, name, attrs):
|
1831 |
-
xmlWriter.simpletag(name, attrs + [("value", value)])
|
1832 |
-
xmlWriter.newline()
|
1833 |
-
|
1834 |
-
def xmlRead(self, attrs, content, font):
|
1835 |
-
return safeEval(attrs["value"])
|
1836 |
-
|
1837 |
-
|
1838 |
-
class LookupFlag(UShort):
|
1839 |
-
def xmlWrite(self, xmlWriter, font, value, name, attrs):
|
1840 |
-
xmlWriter.simpletag(name, attrs + [("value", value)])
|
1841 |
-
flags = []
|
1842 |
-
if value & 0x01:
|
1843 |
-
flags.append("rightToLeft")
|
1844 |
-
if value & 0x02:
|
1845 |
-
flags.append("ignoreBaseGlyphs")
|
1846 |
-
if value & 0x04:
|
1847 |
-
flags.append("ignoreLigatures")
|
1848 |
-
if value & 0x08:
|
1849 |
-
flags.append("ignoreMarks")
|
1850 |
-
if value & 0x10:
|
1851 |
-
flags.append("useMarkFilteringSet")
|
1852 |
-
if value & 0xFF00:
|
1853 |
-
flags.append("markAttachmentType[%i]" % (value >> 8))
|
1854 |
-
if flags:
|
1855 |
-
xmlWriter.comment(" ".join(flags))
|
1856 |
-
xmlWriter.newline()
|
1857 |
-
|
1858 |
-
|
1859 |
-
class _UInt8Enum(UInt8):
|
1860 |
-
enumClass = NotImplemented
|
1861 |
-
|
1862 |
-
def read(self, reader, font, tableDict):
|
1863 |
-
return self.enumClass(super().read(reader, font, tableDict))
|
1864 |
-
|
1865 |
-
@classmethod
|
1866 |
-
def fromString(cls, value):
|
1867 |
-
return getattr(cls.enumClass, value.upper())
|
1868 |
-
|
1869 |
-
@classmethod
|
1870 |
-
def toString(cls, value):
|
1871 |
-
return cls.enumClass(value).name.lower()
|
1872 |
-
|
1873 |
-
|
1874 |
-
class ExtendMode(_UInt8Enum):
|
1875 |
-
enumClass = _ExtendMode
|
1876 |
-
|
1877 |
-
|
1878 |
-
class CompositeMode(_UInt8Enum):
|
1879 |
-
enumClass = _CompositeMode
|
1880 |
-
|
1881 |
-
|
1882 |
-
converterMapping = {
|
1883 |
-
# type class
|
1884 |
-
"int8": Int8,
|
1885 |
-
"int16": Short,
|
1886 |
-
"uint8": UInt8,
|
1887 |
-
"uint16": UShort,
|
1888 |
-
"uint24": UInt24,
|
1889 |
-
"uint32": ULong,
|
1890 |
-
"char64": Char64,
|
1891 |
-
"Flags32": Flags32,
|
1892 |
-
"VarIndex": VarIndex,
|
1893 |
-
"Version": Version,
|
1894 |
-
"Tag": Tag,
|
1895 |
-
"GlyphID": GlyphID,
|
1896 |
-
"GlyphID32": GlyphID32,
|
1897 |
-
"NameID": NameID,
|
1898 |
-
"DeciPoints": DeciPoints,
|
1899 |
-
"Fixed": Fixed,
|
1900 |
-
"F2Dot14": F2Dot14,
|
1901 |
-
"Angle": Angle,
|
1902 |
-
"BiasedAngle": BiasedAngle,
|
1903 |
-
"struct": Struct,
|
1904 |
-
"Offset": Table,
|
1905 |
-
"LOffset": LTable,
|
1906 |
-
"Offset24": Table24,
|
1907 |
-
"ValueRecord": ValueRecord,
|
1908 |
-
"DeltaValue": DeltaValue,
|
1909 |
-
"VarIdxMapValue": VarIdxMapValue,
|
1910 |
-
"VarDataValue": VarDataValue,
|
1911 |
-
"LookupFlag": LookupFlag,
|
1912 |
-
"ExtendMode": ExtendMode,
|
1913 |
-
"CompositeMode": CompositeMode,
|
1914 |
-
"STATFlags": STATFlags,
|
1915 |
-
# AAT
|
1916 |
-
"CIDGlyphMap": CIDGlyphMap,
|
1917 |
-
"GlyphCIDMap": GlyphCIDMap,
|
1918 |
-
"MortChain": StructWithLength,
|
1919 |
-
"MortSubtable": StructWithLength,
|
1920 |
-
"MorxChain": StructWithLength,
|
1921 |
-
"MorxSubtable": MorxSubtableConverter,
|
1922 |
-
# "Template" types
|
1923 |
-
"AATLookup": lambda C: partial(AATLookup, tableClass=C),
|
1924 |
-
"AATLookupWithDataOffset": lambda C: partial(AATLookupWithDataOffset, tableClass=C),
|
1925 |
-
"STXHeader": lambda C: partial(STXHeader, tableClass=C),
|
1926 |
-
"OffsetTo": lambda C: partial(Table, tableClass=C),
|
1927 |
-
"LOffsetTo": lambda C: partial(LTable, tableClass=C),
|
1928 |
-
"LOffset24To": lambda C: partial(Table24, tableClass=C),
|
1929 |
-
}
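
The bit-packing that `DeltaValue.read`/`write` perform can be illustrated standalone. The sketch below reimplements only the packing logic, outside any fontTools class; the function names `pack_deltas`/`unpack_deltas` are illustrative, not fontTools API. DeltaFormat 1, 2, and 3 store each signed delta in 2, 4, or 8 bits respectively, packed big-endian into 16-bit words, with a partially filled final word flushed as-is.

```python
def pack_deltas(deltas, delta_format):
    """Pack signed deltas into 16-bit words (DeltaFormat 1, 2 or 3)."""
    assert delta_format in (1, 2, 3)
    n_bits = 1 << delta_format          # 2, 4 or 8 bits per delta
    mask = (1 << n_bits) - 1
    words = []
    tmp, shift = 0, 16
    for value in deltas:
        shift -= n_bits
        tmp |= (value & mask) << shift  # place field from the high end down
        if shift == 0:
            words.append(tmp)
            tmp, shift = 0, 16
    if shift != 16:                     # flush a partially filled word
        words.append(tmp)
    return words


def unpack_deltas(words, n_items, delta_format):
    """Inverse of pack_deltas: sign-extend each field back to an int."""
    n_bits = 1 << delta_format
    mask = (1 << n_bits) - 1
    sign_mask = 1 << (n_bits - 1)
    minus_offset = 1 << n_bits
    deltas = []
    tmp, shift = 0, 0
    it = iter(words)
    for _ in range(n_items):
        if shift == 0:
            tmp, shift = next(it), 16
        shift -= n_bits
        value = (tmp >> shift) & mask
        if value & sign_mask:
            value -= minus_offset       # two's-complement sign extension
        deltas.append(value)
    return deltas


deltas = [1, -2, 0, 3, -8, 7]
words = pack_deltas(deltas, 2)          # DeltaFormat 2: 4 bits per value
assert unpack_deltas(words, len(deltas), 2) == deltas
```

The round trip mirrors the converter pair above: `write` corresponds to `pack_deltas` (accumulate into `tmp`, emit on `shift == 0`, flush the remainder), and `read` corresponds to `unpack_deltas`.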
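
Similarly, the entry packing that `VarIdxMapValue` decodes can be sketched in isolation. A DeltaSetIndexMap entry format byte encodes the inner-index bit count in its low nibble and the map entry size in bits 4-5; each stored entry holds the outer index in its high bits and the inner index in its low `innerBits` bits, which `read` expands to a 32-bit `(outer << 16) | inner` value. The helper names below are made up for illustration and do not exist in fontTools.

```python
def split_entry_format(fmt):
    """Decode an EntryFormat byte into (inner_bits, entry_size_bytes)."""
    inner_bits = 1 + (fmt & 0x000F)          # INNER_INDEX_BIT_COUNT field
    entry_size = 1 + ((fmt & 0x0030) >> 4)   # MAP_ENTRY_SIZE field
    return inner_bits, entry_size


def raw_to_varidx(raw, inner_bits):
    """Expand a packed map entry into a 32-bit (outer << 16 | inner) index."""
    inner_mask = (1 << inner_bits) - 1
    outer_shift = 16 - inner_bits
    outer_mask = 0xFFFFFFFF - inner_mask
    return ((raw & outer_mask) << outer_shift) | (raw & inner_mask)


def varidx_to_raw(varidx, inner_bits):
    """Inverse: repack outer/inner halves into the on-disk entry value."""
    inner_mask = (1 << inner_bits) - 1
    outer_shift = 16 - inner_bits
    return ((varidx & 0xFFFF0000) >> outer_shift) | (varidx & inner_mask)


fmt = 0x16                    # low nibble 6 -> 7 inner bits; 2-byte entries
inner_bits, entry_size = split_entry_format(fmt)
assert (inner_bits, entry_size) == (7, 2)

varidx = (3 << 16) | 5        # outer index 3, inner index 5
raw = varidx_to_raw(varidx, inner_bits)
assert raw_to_varidx(raw, inner_bits) == varidx
```

`raw_to_varidx` is the list-comprehension body of `VarIdxMapValue.read`, and `varidx_to_raw` is the body of its `write`; the `entrySize` lookup in the converter merely selects which fixed-width array reader/writer to use for the raw entries.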