Commit cd06f38 · Parent: 439ab68
Update parquet files (step 32 of 476)
This view is limited to 50 files because it contains too many changes.
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (american pie 1 720p download 867) - See the comedy that started it all in 720p.md +0 -138
- spaces/1gistliPinn/ChatGPT4/Examples/4 Maras La Pelicula Completal.md +0 -22
- spaces/1gistliPinn/ChatGPT4/Examples/Commandos 3 Full Game Download.md +0 -9
- spaces/1gistliPinn/ChatGPT4/Examples/Freedownloadharrypotter7fullmovieinenglish.md +0 -6
- spaces/1line/AutoGPT/ui/utils.py +0 -31
- spaces/1phancelerku/anime-remove-background/CapCut for Android - Download the APK from Uptodown.md +0 -124
- spaces/1phancelerku/anime-remove-background/Download Monoposto Mod APK 3.75 - Unlimited Racing Fun.md +0 -169
- spaces/1phancelerku/anime-remove-background/Download and Activate Microsoft 365 or Office 2021 in Minutes.md +0 -140
- spaces/1phancelerku/anime-remove-background/Farm Heroes Saga MOD APK How to Download and Install the Latest Version with Unlimited Features.md +0 -112
- spaces/2023Liu2023/bingo/src/lib/isomorphic/browser.ts +0 -11
- spaces/A00001/bingothoo/src/lib/isomorphic/browser.ts +0 -11
- spaces/AI-DHD/Youtube-Whisperer/app.py +0 -93
- spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/feature_fusion.py +0 -193
- spaces/AIGText/GlyphControl/README.md +0 -15
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov7/yolov7_tiny_fast_1xb12-40e_cat.py +0 -56
- spaces/Abeer123/Pokemon_Digimon/app.py +0 -19
- spaces/Adapter/CoAdapter/models/README.md +0 -6
- spaces/Aditya9790/yolo7-object-tracking/deploy/triton-inference-server/boundingbox.py +0 -33
- spaces/Aditya9790/yolo7-object-tracking/deploy/triton-inference-server/client.py +0 -334
- spaces/AgentVerse/agentVerse/ui/dist/assets/tilemaps/tiles/town.tsx +0 -4
- spaces/Aki004/herta-so-vits/preprocess_flist_config.py +0 -75
- spaces/AlexZou/SCUTAUTO210b/README.md +0 -13
- spaces/Alichuan/VITS-Umamusume-voice-synthesizer/monotonic_align/setup.py +0 -9
- spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/cppipc/pool_alloc.cpp +0 -17
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/loaders.py +0 -0
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/__init__.py +0 -0
- spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/README.md +0 -61
- spaces/Andy1621/uniformer_image_detection/configs/nas_fcos/nas_fcos_fcoshead_r50_caffe_fpn_gn-head_4x4_1x_coco.py +0 -98
- spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/kd_one_stage.py +0 -100
- spaces/Anish13/characterGPT/README.md +0 -13
- spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/System-requirements.md +0 -42
- spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/utils.py +0 -16
- spaces/AnthonyTruchetPoC/persistent-docker/start_server.sh +0 -3
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/urls.py +0 -62
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/_securetransport/low_level.py +0 -397
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/packaging/markers.py +0 -304
- spaces/AzumaSeren100/XuanShen-Bert-VITS2/text/english_bert_mock.py +0 -5
- spaces/Bart92/RVC_HF/infer/lib/infer_pack/modules/F0Predictor/__init__.py +0 -0
- spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/model_param_init.py +0 -69
- spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/nets_537238KB.py +0 -123
- spaces/Benson/text-generation/Examples/Descarga De Vdeo 5 Seg.md +0 -64
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/status_codes.py +0 -128
- spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/config.py +0 -139
- spaces/CVPR/Dual-Key_Backdoor_Attacks/bottom-up-attention-vqa/extract.py +0 -129
- spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/scan.h +0 -44
- spaces/CVPR/lama-example/bin/gen_debug_mask_dataset.py +0 -61
- spaces/Clebersla/RVC_V2_Huggingface_Version/lib/infer_pack/modules/F0Predictor/F0Predictor.py +0 -16
- spaces/DJQmUKV/rvc-inference/vc_infer_pipeline.py +0 -363
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ufoLib/__init__.py +0 -2464
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-928645ac.css +0 -1
spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (american pie 1 720p download 867) - See the comedy that started it all in 720p.md
DELETED
@@ -1,138 +0,0 @@
-
-<h1>HD Online Player (american pie 1 720p download 867)</h1>
-<p>Are you a fan of the classic teen comedy movie American Pie? Do you want to relive the hilarious and raunchy adventures of Jim, Kevin, Oz, Finch, and Stifler as they try to lose their virginity before graduation? If so, you might be interested in watching American Pie in high-definition (HD) online. In this article, we will tell you everything you need to know about this movie, why you should watch it in HD online, and how to download it in 720p.</p>
-<h2>HD Online Player (american pie 1 720p download 867)</h2><br /><p><b><b>Download File</b> ✸ <a href="https://byltly.com/2uKvsh">https://byltly.com/2uKvsh</a></b></p><br /><br />
-<h2>What is American Pie?</h2>
-<p>American Pie is a 1999 American coming-of-age teen sex comedy film directed by Paul Weitz and written by Adam Herz. It is the first film in the American Pie theatrical series and stars an ensemble cast that includes Jason Biggs, Chris Klein, Alyson Hannigan, Natasha Lyonne, Thomas Ian Nicholas, Tara Reid, Mena Suvari, Eddie Kaye Thomas, Seann William Scott, Eugene Levy, Shannon Elizabeth and Jennifer Coolidge. </p>
-<h3>A brief summary of the plot</h3>
-<p>The plot centers on five classmates (Jim, Kevin, Oz, Finch, and Stifler) who attend East Great Falls High. With the sole exception of Stifler, who has already lost his virginity, the youths make a pact to lose their virginity before their high school graduation. The title refers to a scene in which the protagonist is caught masturbating with a pie after being told that third base feels like "warm apple pie". Writer Adam Herz has stated that the title also refers to the quest of losing one's virginity in high school, which is as "American as apple pie." </p>
-<h3>The cast and characters</h3>
-<p>The film features a talented and charismatic cast that brings the characters to life. Here are some of the main characters and their actors:</p>
-<ul>
-<li>Jim Levenstein (Jason Biggs): An awkward and sexually naïve nerd whose dad offers him pornography and awkward sexual advice.</li>
-<li>Kevin Myers (Thomas Ian Nicholas): The calm leader of the group seeking to lose his virginity with his girlfriend Vicky.</li>
-<li>Chris "Oz" Ostreicher (Chris Klein): Overconfident star of the lacrosse team who joins the school choir to impress a girl.</li>
-<li>Paul Finch (Eddie Kaye Thomas): A mochaccino-drinking sophisticate who has a crush on Stifler's mom.</li>
-<li>Steve Stifler (Seann William Scott): A popular but raucous jock who often throws wild parties and is the only one of the five who is not a virgin.</li>
-<li>Michelle Flaherty (Alyson Hannigan): A band geek who turns out to be sexually experienced and kinky.</li>
-<li>Nadia (Shannon Elizabeth): A beautiful exchange student from Slovakia who becomes Jim's love interest.</li>
-<li>Noah Levenstein (Eugene Levy): Jim's dad who tries to help his son with his sexual problems.</li>
-<li>Jeanine Stifler (Jennifer Coolidge): Stifler's mom who seduces Finch.</li>
-</ul>
-<h3>The cultural impact and legacy</h3>
-<p>American Pie became a worldwide pop culture phenomenon and gained a cult following among young people. It was praised for its humor, honesty, and relatability. It also spawned three direct sequels: American Pie 2 (2001), American Wedding (2003), and American Reunion (2012). In addition to the primary American Pie saga, there are five direct-to-DVD spin-off films bearing the title American Pie Presents: Band Camp (2005), The Naked Mile (2006), Beta House (2007), The Book of Love (2009), and Girls' Rules (2020). </p>
-<p>The film also introduced several memorable catchphrases and slang terms into popular culture, such as "one time at band camp", "this one time", "MILF", "the Shermanator", "the flute incident", "the rule of three", and "the pale ale". It also popularized the use of pies as sexual metaphors. </p>
-<h2>Why watch American Pie in HD online?</h2>
-<p>If you are a fan of American Pie or want to watch it for the first time, you might wonder why you should watch it in HD online. Here are some reasons why:</p>
-<p>American Pie 2020 free download<br />
-American Pie 1999 BluRay full HD movie<br />
-American Pie movie download google drive<br />
-American Pie movie download in hindi<br />
-American Pie movie download sub indo<br />
-American Pie movie download 480p<br />
-American Pie movie download 720p<br />
-American Pie movie download 300mb<br />
-American Pie movie download filmapik<br />
-American Pie movie download rebahin<br />
-American Pie movie download pahe<br />
-American Pie movie download telegram<br />
-American Pie movie watch online free<br />
-American Pie movie watch online streaming<br />
-American Pie movie watch online eng sub<br />
-American Pie movie watch online hindi dubbed<br />
-American Pie movie watch online tamilrockers<br />
-American Pie movie watch online mkvcage<br />
-American Pie movie watch online erosnow<br />
-American Pie full movie free download<br />
-American Pie full movie online free<br />
-American Pie full movie in hindi<br />
-American Pie full movie in tamil<br />
-American Pie full movie sub indo<br />
-American Pie full movie bluray<br />
-American Pie full movie google drive<br />
-American Pie full movie telegram<br />
-Watch American Pie online free<br />
-Watch American Pie online streaming<br />
-Watch American Pie online eng sub<br />
-Watch American Pie online hindi dubbed<br />
-Watch American Pie online tamilrockers<br />
-Watch American Pie online mkvcage<br />
-Watch American Pie online erosnow<br />
-Download film american pie gratis<br />
-Download film american pie terbaru 2020<br />
-Download film american pie subtitle indonesia<br />
-Download film american pie bahasa indonesia<br />
-Download film american pie kualitas hd<br />
-Download film american pie lewat google drive<br />
-Download film american pie lewat telegram<br />
-Nonton film american pie gratis<br />
-Nonton film american pie terbaru 2020<br />
-Nonton film american pie subtitle indonesia<br />
-Nonton film american pie bahasa indonesia<br />
-Nonton film american pie kualitas hd<br />
-Nonton film american pie lewat google drive<br />
-Nonton film american pie lewat telegram</p>
-<h3>The benefits of HD quality</h3>
-<p>Watching American Pie in HD quality means that you can enjoy the movie with better clarity, sharpness, color, and contrast. You can see more details and nuances that might be missed in lower resolutions. You can also appreciate the cinematography, editing, and special effects more. HD quality also enhances the audio quality, making the dialogue, music, and sound effects more crisp and clear. You can hear every joke, scream, moan, and laugh better.</p>
-<h3>The convenience of online streaming</h3>
-<p>Watching American Pie online means that you can stream it anytime and anywhere you want. You don't have to worry about finding a DVD player or a physical copy of the movie. You can watch it on your laptop, tablet, smartphone, or smart TV with an internet connection. You can also pause, rewind, fast-forward, or skip scenes as you please. You can also choose from different subtitles and audio options if available.</p>
-<h3>The best platforms to watch American Pie online</h3>
-<p>There are many platforms that offer online streaming services for movies and TV shows. Some of them are free while others require a subscription or a rental fee. Here are some of the best platforms to watch American Pie online:</p>
-<ul>
-<li>Netflix: Netflix is one of the most popular and widely used streaming platforms in the world. It offers a vast library of movies and TV shows across different genres and languages. You can watch American Pie on Netflix with a monthly subscription fee that varies depending on your plan and region. You can also download movies and shows for offline viewing on some devices.</li>
-<li>Hulu: Hulu is another popular streaming platform that offers movies and TV shows as well as live TV channels. You can watch American Pie on Hulu with a monthly subscription fee that also varies depending on your plan and region. You can also add premium channels like HBO Max or Showtime for an extra fee.</li>
-<li>Amazon Prime Video: Amazon Prime Video is a streaming platform that is included with an Amazon Prime membership. It offers movies and TV shows as well as original content produced by Amazon Studios. You can watch American Pie on Amazon Prime Video with an annual or monthly subscription fee that also gives you access to other benefits like free shipping, music streaming, e-books, etc.</li>
-<li>YouTube: YouTube is a video-sharing platform that allows users to upload, watch, comment on, and share videos. It offers a variety of content ranging from music videos to documentaries to tutorials to vlogs. You can watch American Pie on YouTube by renting or buying it for a one-time fee that depends on your region and resolution.</li>
-</ul>
-<h2>How to download American Pie in 720p?</h2>
-<p>If you prefer to download American Pie instead of streaming it online, you might wonder how to do it in 720p resolution. Here are some things you should know before downloading movies:</p>
-<h3>The advantages of downloading movies</h3>
-<p>Downloading movies has some advantages over streaming them online. Some of them are:</p>
-<ul>
-<li>You can watch movies offline without an internet connection.</li>
-<li>You can save data usage if you have a limited or expensive plan.</li>
-<li>You can avoid buffering or loading issues if you have a slow or unstable connection.</li>
-<li>You can keep movies on your device for as long as you want without worrying about expiration dates or removals.</li>
-</ul>
-<h3>The legal and ethical issues of downloading movies</h3>
-<h3>The steps to download American Pie in 720p</h3>
-<p>If you want to download American Pie in 720p resolution, you need to follow these steps:</p>
-<ol>
-<li>Go to a free movie download website or streaming service site you subscribe to.</li>
-<li>Browse movies or search for a movie by name.</li>
-<li>Check if the movie is available for download.</li>
-<li>Decide if you want to download the SD, HD, or 4K version of the movie.</li>
-<li>Decide which file format you want to download (if multiple format types are available).</li>
-<li>Click on the download button or link and wait for the movie to be downloaded to your device.</li>
-<li>Enjoy watching American Pie offline.</li>
-</ol>
-<p>Some of the free movie download websites that offer American Pie in 720p are:</p>
-<ul>
-<li>Public Domain Torrents: This website has only legal movies that you can download using BitTorrent. It offers American Pie in MP4 format with a file size of 867 MB. </li>
-<li>The Internet Archive: This website is a digital library that hosts millions of free books, music, videos, and movies. It offers American Pie in MPEG4 format with a file size of 1.3 GB. </li>
-<li>YouTube: This website is a video-sharing platform that allows users to upload, watch, comment on, and share videos. It offers American Pie in MP4 format with a file size of 1.1 GB. You need to rent or buy it for a one-time fee that depends on your region and resolution. </li>
-</ul>
-<h2>Conclusion</h2>
-<p>American Pie is a hilarious and iconic movie that you can watch online or offline. You can stream it online in HD quality on various platforms like Netflix, Hulu, Amazon Prime Video, and YouTube. You can also download it in 720p resolution on some free movie download websites like Public Domain Torrents, The Internet Archive, and YouTube. However, you should be aware of the legal and ethical issues of downloading movies and use a trusted VPN to protect your privacy and security.</p>
-<h2>FAQs</h2>
-<p>Here are some frequently asked questions about watching and downloading American Pie:</p>
-<ol>
-<li>Is American Pie based on a true story?</li>
-<p>No, American Pie is not based on a true story. It is a fictional comedy that was inspired by the writer's own experiences and observations of teenage life in the 1990s. </p>
-<li>How many American Pie movies are there?</li>
-<p>There are four main movies in the American Pie series: American Pie (1999), American Pie 2 (2001), American Wedding (2003), and American Reunion (2012). There are also five spin-off movies: American Pie Presents: Band Camp (2005), American Pie Presents: The Naked Mile (2006), American Pie Presents: Beta House (2007), American Pie Presents: The Book of Love (2009), and American Pie Presents: Girls' Rules (2020). </p>
-<li>Who sings the song "American Pie"?</li>
-<p>The song "American Pie" was written and sung by Don McLean in 1971. It is a folk rock song that tells the story of the cultural changes in America from the 1950s to the 1970s. It is not related to the movie series of the same name. </p>
-<li>What does "warm apple pie" mean?</li>
-<p>"Warm apple pie" is a sexual metaphor that was popularized by the movie American Pie. It refers to the sensation of having sex with a woman's vagina. In the movie, Jim's friend tells him that third base feels like "warm apple pie", which leads him to experiment with an actual pie in his kitchen. </p>
-<li>What is the moral of American Pie?</li>
-<p>American Pie is a comedy that does not have a clear moral message. However, some possible themes that can be derived from the movie are:</p>
-<ul>
-<li>The importance of friendship and loyalty.</li>
-<li>The value of honesty and communication in relationships.</li>
-<li>The consequences of peer pressure and social expectations.</li>
-<li>The joys and challenges of growing up and discovering oneself.</li>
-</ul>
-</p> 0a6ba089eb<br />
-<br />
-<br />
spaces/1gistliPinn/ChatGPT4/Examples/4 Maras La Pelicula Completal.md
DELETED
@@ -1,22 +0,0 @@
-<h2>4 Maras La Pelicula Completal</h2><br /><p><b><b>Download Zip</b> ☆☆☆ <a href="https://imgfil.com/2uxZGq">https://imgfil.com/2uxZGq</a></b></p><br /><br />
-
-Take advantage of our eReaders, translation tools, and professional services! With Accessible and Localized eBooks, iBooks, eJournal, and Translations (eReaders, audiobooks, and translated eBooks) through our Translation Services, you will be able to reach a worldwide audience. Learn more about our technical and localization services at:
-
-6. General Ebooks (9.00 USD / month)
-
-$9.00/month
-
-All-Inclusive, Accessible and Localized eBook Solution! General eBooks are all-inclusive of eBooks, Translations, and eReaders. They are offered for one fee and are perfect for international users, translators, and eReaders. The General eBooks bundle includes access to our accessible and translated ebooks (eReaders, audiobooks, and eBook translations), as well as our eJournal service, and our professional and technical services. We also offer eBooks formatted for various eReaders, including Amazon Kindle, Nook, Kobo, iPad, and Apple iPad.
-
-1. All-Inclusive Service
-
-2. Includes eBooks, Translations, eReaders
-
-3. Professional Services and Technical Support
-
-4. eJournal Service and Publication Schedule
-
-All-Inclusive, Accessible and Localized eBook Solution! General eBooks are all-inclusive of eBooks, Translations, and eReaders. They are offered for one fee and are perfect for international users, translators, and 4fefd39f24<br />
-<br />
-<br />
-<p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Commandos 3 Full Game Download.md
DELETED
@@ -1,9 +0,0 @@
-
-<p>it can be a little difficult to grasp for newcomers, but while commandos 3 does not adopt a tutorial-like method of introduction, the title makes it clear what's going on. what the missions are about, what to expect, what to do and what not to do, the basics of the game are all hammered home. the difficulty is in fitting that information into a game that's as addictive as they come. </p>
-<h2>Commandos 3 Full Game Download</h2><br /><p><b><b>Download Zip</b> ⇒ <a href="https://imgfil.com/2uxY23">https://imgfil.com/2uxY23</a></b></p><br /><br />
-<p>the first thing to get used to is that the game is not necessarily easy. while some missions have you outnumbered by a large margin, others are a struggle to complete.the first mission, for example, starts with two other squads already on the map. you're thrown into a mission that's immediately under heavy fire, and the mission briefing is all but useless to the player. the difficulty ramps up quickly, and the lack of any kind of tutorial is a bummer. </p>
-<p>when you get past this initial hurdle, the game really starts to shine.as the game progresses, you'll find that you have access to more and more options, and you'll get used to the game mechanics. if you've been playing a game like this in the past, you'll find that the game has many of the same ideas and can be played the same way. there are a few differences from previous games, but the interface is almost identical, and none of the gameplay has been altered. </p>
-<p>throughout the game, you will need to plan your movements to the second to avoid getting into a firefight. there is also a built-in map editor that lets you place items on the map and manage your squad's positions. more importantly, the game is extremely accessible. you have a lot of options to work with, and each of the commando's moves is explained in great detail. </p>
-<p></p> 899543212b<br />
-<br />
-<br />
spaces/1gistliPinn/ChatGPT4/Examples/Freedownloadharrypotter7fullmovieinenglish.md
DELETED
@@ -1,6 +0,0 @@
-<h2>freedownloadharrypotter7fullmovieinenglish</h2><br /><p><b><b>DOWNLOAD</b> ○○○ <a href="https://imgfil.com/2uxYjh">https://imgfil.com/2uxYjh</a></b></p><br /><br />
-
-d5da3c52bf<br />
-<br />
-<br />
-<p></p>
spaces/1line/AutoGPT/ui/utils.py
DELETED
@@ -1,31 +0,0 @@
-import os
-import re
-
-def format_directory(directory):
-    output = []
-    def helper(directory, level, output):
-        files = os.listdir(directory)
-        for i, item in enumerate(files):
-            is_folder = os.path.isdir(os.path.join(directory, item))
-            joiner = "├── " if i < len(files) - 1 else "└── "
-            item_html = item + "/" if is_folder else f"<a href='file={os.path.join(directory, item)}'>{item}</a>"
-            output.append("│ " * level + joiner + item_html)
-            if is_folder:
-                helper(os.path.join(directory, item), level + 1, output)
-    output.append(os.path.basename(directory) + "/")
-    helper(directory, 1, output)
-    return "\n".join(output)
-
-DOWNLOAD_OUTPUTS_JS = """
-() => {
-    const a = document.createElement('a');
-    a.href = 'file=outputs.zip';
-    a.download = 'outputs.zip';
-    document.body.appendChild(a);
-    a.click();
-    document.body.removeChild(a);
-}"""
-
-def remove_color(text):
-    ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
-    return ansi_escape.sub('', text)
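The deleted `remove_color` helper strips ANSI escape sequences (terminal colors and cursor controls) from text. A minimal standalone sketch of its behavior, with the regex copied verbatim from the deleted file:

```python
import re

# Regex copied from the deleted ui/utils.py: matches an ESC byte followed by
# either a single control character or a CSI sequence like "[31m".
ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')

def remove_color(text):
    """Strip ANSI escape sequences from terminal output."""
    return ansi_escape.sub('', text)

print(remove_color('\x1b[31mred\x1b[0m text'))  # prints "red text"
```

This kind of stripping is commonly applied before displaying subprocess logs in a web UI, where raw ANSI codes would otherwise show up as garbage characters.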
spaces/1phancelerku/anime-remove-background/CapCut for Android - Download the APK from Uptodown.md
DELETED
@@ -1,124 +0,0 @@
-
-<h1>CapCut 2020 APK: A Powerful and Easy-to-Use Video Editor for Android</h1>
-<p>If you are looking for a video editing app that is versatile, powerful, and easy to use, then you should check out CapCut 2020 APK. This app is the official video editing app of TikTok, one of the most popular social media platforms in the world. With CapCut, you can create amazing videos for TikTok, Instagram, YouTube, or any other platform you like. In this article, we will tell you what CapCut is, why you should use it, how to download and install it on your Android device, and how to use it to edit your videos like a pro.</p>
-<h2>capcut 2020 apk</h2><br /><p><b><b>Download File</b> ✫✫✫ <a href="https://jinyurl.com/2uNM1Q">https://jinyurl.com/2uNM1Q</a></b></p><br /><br />
-<h2>What is CapCut and why you should use it</h2>
-<h3>CapCut is a free video editing app from the creators of TikTok</h3>
-<p>CapCut is an app developed by Bytedance Pte. Ltd., the same company that created TikTok, one of the most popular social media platforms in the world. CapCut was formerly known as Viamaker, but it was rebranded in 2020 to match its integration with TikTok. With CapCut, you can easily create videos for TikTok or any other platform you like. You can also link your TikTok account to CapCut and upload your creations directly to this social network.</p>
-<h3>CapCut offers a wide range of features to create amazing videos</h3>
-<p>CapCut is not just a simple video editor. It is a powerful tool that offers a wide range of features to help you create amazing videos. Some of these features are:</p>
-<ul>
-<li>Video editing: You can cut, copy, paste, crop, rotate, reverse, speed up or slow down your videos with ease.</li>
-<li>Video enhancement: You can apply filters, stickers, text, music, effects, transitions, and more to make your videos more attractive and engaging.</li>
-<li>Video templates: You can choose from hundreds of templates created by the community or by professional designers. These templates are categorized by themes, such as fitness, velocity, memes, retro, fandom, etc.</li>
-<li>Video export: You can export your videos in high quality (up to 2K resolution) and choose the format and frame rate that suits your needs.</li>
-<li>Video storage: You can save your videos on your device or upload them to the cloud to keep them safe and accessible at all times.</li>
-</ul>
-<h3>CapCut is easy to use and has a user-friendly interface</h3>
-<p>One of the best things about CapCut is that it is very easy to use. You don't need to have any prior experience or knowledge in video editing to use this app. The interface is user-friendly and intuitive. You can access all the features from three tabs: Editing, Templates, and Tutorials. The Editing tab is where you can create your new projects and edit your videos with various tools. The Templates tab is where you can browse and use different templates for your videos. The Tutorials tab is where you can learn how to use the app and get tips and tricks from experts. You can also access the settings, feedback, and help options from the menu icon on the top right corner of the screen.</p>
-<p>capcut 2020 apk download<br />
-capcut 2020 apk free<br />
-capcut 2020 apk mod<br />
-capcut 2020 apk pro<br />
-capcut 2020 apk latest version<br />
-capcut 2020 apk uptodown<br />
-capcut 2020 apk for android<br />
-capcut 2020 apk for pc<br />
-capcut 2020 apk for ios<br />
-capcut 2020 apk for windows<br />
-capcut 2020 apk video editor<br />
-capcut 2020 apk video maker<br />
-capcut 2020 apk tiktok<br />
-capcut 2020 apk no watermark<br />
-capcut 2020 apk premium<br />
-capcut 2020 apk unlocked<br />
-capcut 2020 apk full version<br />
-capcut 2020 apk old version<br />
-capcut 2020 apk new version<br />
-capcut 2020 apk update<br />
-capcut 2020 apk offline<br />
-capcut 2020 apk online<br />
-capcut 2020 apk hack<br />
-capcut 2020 apk cracked<br />
-capcut 2020 apk review<br />
-capcut 2020 apk tutorial<br />
-capcut 2020 apk features<br />
-capcut 2020 apk tips and tricks<br />
-capcut 2020 apk alternatives<br />
-capcut 2020 apk comparison<br />
-capcut 2020 apk vs kinemaster<br />
-capcut 2020 apk vs inshot<br />
-capcut 2020 apk vs alight motion<br />
-capcut 2020 apk vs powerdirector<br />
-capcut 2020 apk vs filmora go<br />
-capcut 2020 apk vs viva video<br />
-capcut 2020 apk vs funimate<br />
-capcut 2020 apk vs vllo<br />
-capcut 2020 apk vs quik<br />
-capcut 2020 apk vs splice</p>
-<h2>How to download and install CapCut 2020 APK on your Android device</h2>
-<h3>Download the CapCut 2020 APK file from a trusted source</h3>
-<p>CapCut is available on the Google Play Store, but if you want to download the 2020 version of the app, you will need to get the APK file from a trusted source. APK stands for Android Package Kit, and it is a file format that contains all the components of an Android app. You can find many websites that offer APK files for various apps, but you need to be careful and avoid downloading from unverified or malicious sources. Some of the trusted sources where you can download the CapCut 2020 APK file are:</p>
-<ul>
-<li><a href="">APKPure</a></li>
-<li><a href="">Uptodown</a></li>
-<li><a href="">APKMirror</a></li>
-</ul>
-<p>Once you find the CapCut 2020 APK file, you need to download it to your device. You can do this by tapping on the download button or scanning the QR code on the website.</p>
-<h3>Enable the installation of apps from unknown sources on your device settings</h3>
-<p>Before you can install the CapCut 2020 APK file on your device, you need to enable the installation of apps from unknown sources. This is a security feature that prevents unauthorized or harmful apps from being installed on your device. To enable this feature, you need to follow these steps:</p>
-<ol>
-<li>Go to your device settings and tap on Security or Privacy.</li>
-<li>Find and tap on the option that says Unknown Sources or Install Unknown Apps.</li>
-<li>Toggle on the switch or check the box that allows the installation of apps from unknown sources.</li>
-<li>Confirm your choice by tapping on OK or Allow.</li>
-</ol>
-<p>Note: The exact steps may vary depending on your device model and Android version. You can also enable this feature for specific apps, such as your browser or file manager, instead of allowing it for all apps.</p>
-<h3>Locate and tap on the downloaded APK file to start the installation process</h3>
-<p>After you have enabled the installation of apps from unknown sources, you can proceed to install the CapCut 2020 APK file on your device. To do this, you need to locate and tap on the downloaded APK file. You can find it in your Downloads folder or in the notification bar. Alternatively, you can use a file manager app to browse and find the APK file on your device storage. Once you tap on the APK file, you will see a pop-up window that asks you to confirm the installation. Tap on Install and wait for a few seconds until the installation is complete.</p>
-<h3>Follow the instructions on the screen and grant the necessary permissions to the app</h3>
-<p>Once the installation is complete, you can open the app by tapping on Open or by finding it in your app drawer. The first time you open the app, you will see some instructions on how to use it and what features it offers. You will also be asked to grant some permissions to the app, such as access to your camera, microphone, storage, etc. These permissions are necessary for the app to function properly and access your media files. You can grant these permissions by tapping on Allow or Deny. If you deny any permission, you may not be able to use some features of the app.</p> <h2>How to use CapCut to edit your videos like a pro</h2>
80 |
-
<p>Once the installation is complete, you can open the app by tapping on Open or by finding it in your app drawer. The first time you open the app, you will see a short introduction to its features. You will also be asked to grant some permissions, such as access to your camera, microphone, and storage. These permissions are necessary for the app to function properly and access your media files. You can respond to each request by tapping on Allow or Deny; if you deny a permission, some features of the app may not work.</p> <h2>How to use CapCut to edit your videos like a pro</h2>
|
81 |
-
<h3>Create a new project and add videos from your device or from the app's templates</h3>
|
82 |
-
<p>To start editing your videos with CapCut, you need to create a new project. You can do this by tapping on the plus icon on the Editing tab. You will then see two options: New Project and Templates. If you choose New Project, you can add videos from your device gallery or record a new video with the app's camera. You can also add multiple videos and merge them into one project. If you choose Templates, you can browse and use different templates for your videos. You can also customize the templates by changing the text, music, effects, etc.</p>
|
83 |
-
<h3>Edit your videos with various tools, such as cutting, cropping, speeding, reversing, etc.</h3>
|
84 |
-
<p>Once you have added your videos to your project, you can edit them with various tools that are available on the bottom toolbar. You can tap on any tool to access its options and settings. Some of the tools that you can use are:</p>
|
85 |
-
<ul>
|
86 |
-
<li>Cut: You can trim or split your videos by dragging the sliders or tapping on the scissors icon.</li>
|
87 |
-
<li>Crop: You can crop your videos by pinching the screen or choosing a preset ratio.</li>
|
88 |
-
<li>Speed: You can adjust the speed of your videos by sliding the bar or choosing a preset value.</li>
|
89 |
-
<li>Reverse: You can reverse your videos by tapping on the reverse icon.</li>
|
90 |
-
<li>Volume: You can adjust the volume of your videos by sliding the bar or muting the sound.</li>
|
91 |
-
<li>Rotate: You can rotate your videos by tapping on the rotate icon or choosing a preset angle.</li>
|
92 |
-
<li>Mirror: You can mirror your videos by tapping on the mirror icon.</li>
|
93 |
-
</ul>
|
94 |
-
<p>You can also use other tools, such as adjust, beauty, freeze frame, mix audio, etc. by tapping on the more icon.</p>
|
95 |
-
<h3>Enhance your videos with filters, stickers, text, music, effects, etc.</h3>
|
96 |
-
<p>To make your videos more attractive and engaging, you can enhance them with various elements that are available on the top toolbar. You can tap on any element to access its options and settings. Some of the elements that you can use are:</p>
|
97 |
-
<ul>
|
98 |
-
<li>Filters: You can apply different filters to your videos by swiping left or right or choosing a preset category.</li>
|
99 |
-
<li>Stickers: You can add different stickers to your videos by browsing or searching for them. You can also adjust their size, position, rotation, opacity, etc.</li>
|
100 |
-
<li>Text: You can add text to your videos by typing or choosing a preset style. You can also adjust its font, color, size, position, rotation, opacity, etc.</li>
|
101 |
-
<li>Music: You can add music to your videos by choosing from the app's library or importing from your device. You can also adjust its volume, duration, fade in/out, etc.</li>
|
102 |
-
<li>Effects: You can add different effects to your videos by swiping left or right or choosing a preset category. You can also adjust their intensity, duration, etc.</li>
|
103 |
-
<li>Transitions: You can add different transitions between your videos by swiping left or right or choosing a preset category. You can also adjust their duration, direction, etc.</li>
|
104 |
-
</ul>
|
105 |
-
<p>You can also use other elements, such as canvas, animation, subtitle, voiceover, etc. by tapping on the more icon.</p> <h2>Conclusion and FAQs</h2>
|
106 |
-
<p>CapCut 2020 APK is a powerful and easy-to-use video editing app for Android devices, and the official video editor of TikTok, one of the most popular social media platforms in the world. With CapCut, you can create videos for TikTok or any other platform, link your TikTok account, and upload your creations directly to that network. The app offers a wide range of tools to edit, enhance, and export your videos in high quality, along with templates, filters, stickers, text, music, effects, and transitions to make them more attractive and engaging. Its user-friendly interface organizes everything into three tabs: Editing, Templates, and Tutorials. In this article, we have shown you what CapCut is, why you should use it, how to download and install the CapCut 2020 APK file from a trusted source on your Android device, and how to use it to edit your videos like a pro.</p>
|
107 |
-
<p>If you have any questions about CapCut 2020 APK, you can check out the following FAQs:</p>
|
108 |
-
<h4>Q: Is CapCut 2020 APK safe to download and install?</h4>
|
109 |
-
<p>A: Yes, CapCut 2020 APK is safe to download and install as long as you get it from a trusted source. However, you should always be careful when downloading and installing apps from unknown sources and scan them with an antivirus app before opening them.</p>
|
110 |
-
<h4>Q: Is CapCut 2020 APK compatible with all Android devices?</h4>
|
111 |
-
<p>A: CapCut 2020 APK is compatible with most Android devices that run on Android 5.0 or higher. However, some features may not work properly on some devices or Android versions.</p>
|
112 |
-
<h4>Q: How can I update CapCut 2020 APK to the latest version?</h4>
|
113 |
-
<p>A: You can update CapCut 2020 APK to the latest version by downloading and installing the new APK file from a trusted source. Alternatively, you can check for updates in the app's settings or on the Google Play Store.</p>
|
114 |
-
<h4>Q: How can I delete CapCut 2020 APK from my device?</h4>
|
115 |
-
<p>A: You can delete CapCut 2020 APK from your device by following these steps:</p>
|
116 |
-
<ol>
|
117 |
-
<li>Go to your device settings and tap on Apps or Applications.</li>
|
118 |
-
<li>Find and tap on CapCut 2020 APK.</li>
|
119 |
-
<li>Tap on Uninstall and confirm your choice by tapping on OK or Uninstall.</li>
|
120 |
-
</ol>
|
121 |
-
<h4>Q: How can I contact the developers of CapCut 2020 APK?</h4>
|
122 |
-
<p>A: You can contact the developers of CapCut 2020 APK by sending an email to [email protected] or by using the feedback option in the app's settings.</p>
|
123 |
-
<br />
|
124 |
-
<br />
|
spaces/1phancelerku/anime-remove-background/Download Monoposto Mod APK 3.75 - Unlimited Racing Fun.md
DELETED
@@ -1,169 +0,0 @@
|
|
1 |
-
|
2 |
-
<h1>Monoposto: A Formula Racing Game with Single Seater Open-Wheel Cars</h1> <p>If you are a fan of racing games, you might want to check out Monoposto, an amazing independent racing game with single seater open-wheel cars. This game is designed to provide an unmatched level of realism and authenticity, allowing you to experience the thrill of competitive racing firsthand. In this article, we will tell you everything you need to know about Monoposto, including what it is, how to download and install it, how to play it, and some tips and tricks to help you win. We will also suggest some alternatives to Monoposto in case you want to try something different.</p>
|
3 |
-
<h2>monoposto mod apk an1</h2><br /><p><b><b>Download File</b> ··· <a href="https://jinyurl.com/2uNRoX">https://jinyurl.com/2uNRoX</a></b></p><br /><br />
|
4 |
-
<h2>What is Monoposto?</h2>
|
5 |
-
<p>Monoposto is a racing game that simulates the formula racing series, where drivers compete in single seater open-wheel cars on various tracks around the world. The game is developed by Marco Pesce, an independent developer who has a passion for racing games. The game was first released in 2017 and has been updated regularly since then. The latest version of the game is 3.73, which was released in June 2023.</p>
|
6 |
-
<h3>Features of Monoposto</h3>
|
7 |
-
<p>Monoposto has many features that make it stand out from other racing games. Here are some of them:</p>
|
8 |
-
<ul>
|
9 |
-
<li><b>Full unlocked game</b>: You can enjoy all the features and benefits of the game without any limitations or ads.</li>
|
10 |
-
<li><b>24 realistic tracks</b>: You can compete in the new 2023 season, which includes 24 racing tracks from different countries and continents.</li>
|
11 |
-
<li><b>Online multiplayer duel</b>: You can challenge other players online and race against them in real time.</li>
|
12 |
-
<li><b>Quick race, Single race and Championship mode</b>: You can choose from different modes of play, depending on your preference and skill level.</li>
|
13 |
-
<li><b>Qualifying session</b>: You can try to get the best lap time and secure a good position on the starting grid.</li>
|
14 |
-
<li><b>Race session with up to 22 cars</b>: You can race against up to 22 AI opponents or human players in a realistic and dynamic race environment.</li>
|
15 |
-
<li><b>Pit stop during qualify and race</b>: You can make strategic decisions and adjust your car settings during the pit stop.</li>
|
16 |
-
<li><b>Car repair and setup during pit stop</b>: You can repair any damage to your car and change your tires during the pit stop.</li>
|
17 |
-
<li><b>Car setup before the race</b>: You can customize your car settings before the race, such as suspension, brakes, aerodynamics, engine, gearbox, etc.</li>
|
18 |
-
<li><b>Customization of cars and drivers</b>: You can change the colors and names of your cars and drivers.</li>
|
19 |
-
<li><b>Create your livery</b>: You can design your own livery for your car using different stickers and logos.</li>
|
20 |
-
<li><b>Choose your driver</b>: You can select from different drivers with different skills and personalities.</li>
|
21 |
-
<li><b>5 different camera view</b>: You can switch between different camera angles during the race, such as cockpit, chase, front wing, rear wing, etc.</li>
|
22 |
-
<li><b>Spectator TV mode race view</b>: You can watch the race from a TV-like perspective, with different camera shots and commentary.</li>
|
23 |
-
<li><b>Many options to customize your driving experience</b>: You can adjust various options to suit your preferences, such as difficulty level, steering sensitivity, traction control, brake assist, etc.</li>
|
24 |
-
<li><b>External and MFi game controller support</b>: You can use an external or MFi game controller to play the game more comfortably.</li>
|
25 |
-
</ul>
|
26 |
-
<h3>How to download and install Monoposto</h3>
|
27 |
-
<p>If you want to play Monoposto, you need to download and install the game on your device. The game is available for both Android and iOS devices, and you can download it from the official app stores or from third-party sources. Here are the steps to download and install Monoposto on your device:</p>
|
28 |
-
<h4>For Android devices</h4>
|
29 |
-
<p>If you want to download Monoposto from the Google Play Store, you need to follow these steps:</p>
|
30 |
-
<ol>
|
31 |
-
<li>Open the Google Play Store app on your device.</li>
|
32 |
-
<li>Search for Monoposto in the search bar.</li>
|
33 |
-
<li>Select the game from the search results and tap on Install.</li>
|
34 |
-
<li>Wait for the game to download and install on your device.</li>
|
35 |
-
<li>Launch the game and enjoy.</li>
|
36 |
-
</ol>
|
37 |
-
<p>If you want to download Monoposto from a third-party source, such as APKMB.com, you need to follow these steps:</p>
|
38 |
-
<p>monoposto mod apk download<br />
|
39 |
-
monoposto mod apk unlocked<br />
|
40 |
-
monoposto mod apk premium<br />
|
41 |
-
monoposto mod apk latest version<br />
|
42 |
-
monoposto mod apk free<br />
|
43 |
-
monoposto mod apk online multiplayer<br />
|
44 |
-
monoposto mod apk happymod<br />
|
45 |
-
monoposto mod apk apkmb<br />
|
46 |
-
monoposto mod apk android 1<br />
|
47 |
-
monoposto mod apk unlimited money<br />
|
48 |
-
monoposto mod apk 2023 season<br />
|
49 |
-
monoposto mod apk formula racing game<br />
|
50 |
-
monoposto mod apk single seater cars<br />
|
51 |
-
monoposto mod apk 22 racing tracks<br />
|
52 |
-
monoposto mod apk new cars and tires<br />
|
53 |
-
monoposto mod apk quick formula race<br />
|
54 |
-
monoposto mod apk single race mode<br />
|
55 |
-
monoposto mod apk championship mode<br />
|
56 |
-
monoposto mod apk no ads<br />
|
57 |
-
monoposto mod apk offline<br />
|
58 |
-
monoposto mod apk 3.70<br />
|
59 |
-
monoposto mod apk 3.73<br />
|
60 |
-
monoposto mod apk update<br />
|
61 |
-
monoposto mod apk revdl<br />
|
62 |
-
monoposto mod apk rexdl<br />
|
63 |
-
monoposto mod apk hack<br />
|
64 |
-
monoposto mod apk cheat<br />
|
65 |
-
monoposto mod apk full version<br />
|
66 |
-
monoposto mod apk pro<br />
|
67 |
-
monoposto mod apk vip<br />
|
68 |
-
monoposto mod apk mega mod<br />
|
69 |
-
monoposto mod apk data obb<br />
|
70 |
-
monoposto mod apk unlimited coins<br />
|
71 |
-
monoposto mod apk unlimited gems<br />
|
72 |
-
monoposto mod apk unlimited fuel<br />
|
73 |
-
monoposto mod apk all tracks unlocked<br />
|
74 |
-
monoposto mod apk all cars unlocked<br />
|
75 |
-
monoposto mod apk realistic physics<br />
|
76 |
-
monoposto mod apk high graphics<br />
|
77 |
-
monoposto mod apk low mb size<br />
|
78 |
-
monoposto mod apk easy controls<br />
|
79 |
-
monoposto mod apk best settings<br />
|
80 |
-
monoposto mod apk tips and tricks<br />
|
81 |
-
monoposto mod apk gameplay video<br />
|
82 |
-
monoposto mod apk review and rating<br />
|
83 |
-
monoposto mod apk download link</p>
|
84 |
-
<ol>
|
85 |
-
<li>Open your browser and go to APKMB.com.</li>
|
86 |
-
<li>Search for Monoposto in the search bar.</li>
|
87 |
-
<li>Select the game from the search results and tap on Download APK.</li>
|
88 |
-
<li>Wait for the game to download on your device.</li>
|
89 |
-
<li>Before installing the game, you need to enable Unknown Sources in your device settings. To do this, go to Settings > Security > Unknown Sources and toggle it on.</li>
|
90 |
-
<li>Locate the downloaded APK file in your file manager and tap on it.</li>
|
91 |
-
<li>Follow the instructions on the screen to install the game.</li>
|
92 |
-
<li>Launch the game and enjoy.</li>
|
93 |
-
</ol>
|
94 |
-
<h4>For iOS devices</h4>
|
95 |
-
<p>If you want to download Monoposto from the App Store, you need to follow these steps:</p>
|
96 |
-
<ol>
|
97 |
-
<li>Open the App Store app on your device.</li>
|
98 |
-
<li>Search for Monoposto in the search bar.</li>
|
99 |
-
<li>Select the game from the search results and tap on Get.</li>
|
100 |
-
<li>Wait for the game to download and install on your device.</li>
|
101 |
-
<li>Launch the game and enjoy.</li>
|
102 |
-
</ol>
|
103 |
-
<p>If you want to download Monoposto from a third-party source, such as Panda Helper, you need to follow these steps:</p>
|
104 |
-
<ol>
|
105 |
-
<li>Open your browser and go to Panda Helper.</li>
|
106 |
-
<li>Tap on Download Now and follow the instructions on the screen to install Panda Helper on your device.</li>
|
107 |
-
<li>Launch Panda Helper and search for Monoposto in the search bar.</li>
|
108 |
-
<li>Select the game from the search results and tap on Install.</li>
|
109 |
-
<li>Wait for the game to download and install on your device.</li>
|
110 |
-
<li>Launch the game and enjoy.</li>
|
111 |
-
</ol>
|
112 |
-
<h2>How to play Monoposto</h2>
|
113 |
-
<p>Now that you have downloaded and installed Monoposto on your device, you are ready to play. The sections below will help you get the most out of the game.</p>
|
114 |
-
<h3>Tips and tricks for Monoposto</h3>
|
115 |
-
<p>To help you improve your performance and win more races, here are some tips and tricks for Monoposto:</p>
|
116 |
-
<ul>
|
117 |
-
<li><b>Practice before racing</b>: Before you enter a race, it is advisable to practice on the track first. This will help you familiarize yourself with the layout, curves, turns, and obstacles of the track. You can also test different car settings and find out what works best for you.</li>
|
118 |
-
<li><b>Use the qualifying session wisely</b>: The qualifying session is important because it determines your position on the starting grid. The higher your position, the better your chances of winning. Therefore, you should try to get the best lap time possible during the qualifying session. You can also use this session to make any adjustments to your car or pit stop strategy.</li>
|
119 |
-
<li><b>Avoid collisions and penalties</b>: During the race, you should avoid colliding with other cars or objects, as this will damage your car and slow you down. You should also avoid cutting corners or overtaking illegally, as this will result in penalties that will affect your final position. You can check your damage level and penalty status on the top left corner of the screen.</li>
|
120 |
-
<li><b>Use the pit stop strategically</b>: The pit stop is a crucial part of any race, as it allows you to repair your car, change your tires, or modify your car settings. However, it also costs you time, so you should use it wisely. You can decide when to enter or exit the pit stop by tapping on the pit stop button on the bottom right corner of the screen. You can also see the recommended pit stop strategy on the top right corner of the screen.</li>
|
121 |
-
<li><b>Use the camera views to your advantage</b>: You can switch between different camera views during the race by tapping on the camera button on the bottom left corner of the screen. You can choose the view that suits your preference and style, such as cockpit, chase, front wing, rear wing, etc. You can also use the spectator TV mode to watch the race from a different perspective.</li>
|
122 |
-
<li><b>Use the game controller for better control</b>: If you have an external or MFi game controller, you can use it to play Monoposto more comfortably and accurately. You can connect your game controller to your device via Bluetooth or USB and configure the buttons and settings in the game options.</li>
|
123 |
-
</ul>
|
124 |
-
<h3>Alternatives to Monoposto</h3>
|
125 |
-
<p>If you want to try some other racing games that are similar to Monoposto, here are some alternatives that you might like:</p>
|
126 |
-
<table>
|
127 |
-
<tr>
|
128 |
-
<th>Game</th>
|
129 |
-
<th>Description</th>
|
130 |
-
</tr>
|
131 |
-
<tr>
|
132 |
-
<td>F1 Mobile Racing</td>
|
133 |
-
<td>A racing game that lets you compete in the official Formula 1 World Championship, with real teams, drivers, and tracks. You can also create your own custom car and challenge other players online.</td>
|
134 |
-
</tr>
|
135 |
-
<tr>
|
136 |
-
<td>Real Racing 3</td>
|
137 |
-
<td>A racing game that features realistic graphics, physics, and sound effects. You can race in over 250 cars from various manufacturers and categories, on over 40 tracks from around the world.</td>
|
138 |
-
</tr>
|
139 |
-
<tr>
|
140 |
-
<td>Asphalt 9: Legends</td>
|
141 |
-
<td>A racing game that focuses on arcade-style gameplay, with stunning visuals, fast-paced action, and stunts. You can race in over 60 cars from top brands and customize them with various parts and colors.</td>
|
142 |
-
</tr>
|
143 |
-
<tr>
|
144 |
-
<td>GRID Autosport</td>
|
145 |
-
<td>A racing game that offers a premium and authentic racing experience, with over 100 cars and 100 circuits to choose from. You can race in various disciplines, such as touring, endurance, open wheel, etc.</td>
|
146 |
-
</tr>
|
147 |
-
<tr>
|
148 |
-
<td>GT Racing 2: The Real Car Experience</td>
|
149 |
-
<td>A racing game that claims to be the most realistic car simulation ever made. You can race in over 70 cars from 30 manufacturers, on 13 tracks with different weather and time conditions.</td>
|
150 |
-
</tr>
|
151 |
-
</table>
|
152 |
-
<h2>Conclusion</h2>
|
153 |
-
<p>Monoposto is a formula racing game with single seater open-wheel cars that offers a realistic and immersive racing experience. You can download and install the game on your Android or iOS device and enjoy all its features and benefits. You can also follow some tips and tricks to improve your performance and win more races. If you are looking for some alternatives to Monoposto, you can try some other racing games that are similar or different in style and gameplay.</p>
|
154 |
-
<h3>FAQs</h3>
|
155 |
-
<p>Here are some frequently asked questions about Monoposto:</p>
|
156 |
-
<ol>
|
157 |
-
<li><b>How much does Monoposto cost?</b></li>
|
158 |
-
<p>Monoposto is a free-to-play game that does not require any in-app purchases or subscriptions. However, you can support the developer by making a voluntary donation via PayPal or Patreon.</p>
|
159 |
-
<li><b>Is Monoposto compatible with my device?</b></li>
|
160 |
-
<p>Monoposto is compatible with most Android and iOS devices that have at least 2 GB of RAM and a decent processor. However, some older or low-end devices may not run the game smoothly or at all.</p>
|
161 |
-
<li><b>How do I update Monoposto?</b></li>
|
162 |
-
<p>If you downloaded Monoposto from the official app stores, you will receive notifications when there is a new update available. You can then update the game by following the instructions on the screen. If you downloaded Monoposto from a third-party source, you will have to check the source website for any new updates and download them manually.</p>
|
163 |
-
<li><b>How do I contact the developer of Monoposto?</b></li>
|
164 |
-
<p>If you have any questions, feedback, suggestions, or issues regarding Monoposto, you can contact the developer by sending an email to [email protected] or by visiting his website at www.monopostogame.com.</p>
|
165 |
-
<li><b>How do I rate and review Monoposto?</b></li>
|
166 |
-
<p>If you enjoyed playing Monoposto, you can rate and review it on the app stores or on third-party websites. This will help other users discover the game and also show your appreciation to the developer.</p>
|
167 |
-
</ol>
|
168 |
-
<br />
|
169 |
-
<br />
|
spaces/1phancelerku/anime-remove-background/Download and Activate Microsoft 365 or Office 2021 in Minutes.md
DELETED
@@ -1,140 +0,0 @@
|
|
1 |
-
|
2 |
-
<h1>How to Download Office 365: A Complete Guide</h1>
|
3 |
-
<p>If you are looking for a productivity suite that can help you work from anywhere, on any device, and access your email, files, and Office programs online or offline, then you might want to consider Office 365. In this article, we will explain what Office 365 is, why you need it, how much it costs, and how to download and install it on your device. We will also show you how to activate and update Office 365 to get the most out of it.</p>
|
4 |
-
<h2>What is Office 365 and why you need it</h2>
|
5 |
-
<p>Office 365 is a cloud-based subscription service that provides premium apps, 1 TB of cloud storage, and collaboration, productivity, and security benefits. With Office 365, you can work from anywhere, on any device, and access your email, files, and Office programs (Word, PowerPoint, Excel) online or offline. You can also use a growing catalog of templates, photos, 3D models, icons, and fonts to create professional and engaging documents and presentations. Office 365 keeps you up to date with the latest features and patches, and lets you securely connect your financial accounts in Excel.</p>
|
6 |
-
<h2>download office 365</h2><br /><p><b><b>DOWNLOAD</b> ✵✵✵ <a href="https://jinyurl.com/2uNPQO">https://jinyurl.com/2uNPQO</a></b></p><br /><br />
|
7 |
-
<h3>Office 365 features and benefits</h3>
|
8 |
-
<p>Some of the main features and benefits of Office 365 are:</p>
|
9 |
-
<ul>
|
10 |
-
<li>Access to premium apps such as Word, Excel, PowerPoint, Outlook, OneNote, OneDrive, Microsoft Defender, Microsoft Editor, Clipchamp, Microsoft Family Safety.</li>
|
11 |
-
<li>Ability to collaborate online and see changes your team makes to shared documents on a real-time basis.</li>
|
12 |
-
<li>1 TB of OneDrive cloud storage per user to back up and access your files and photos across all your devices.</li>
|
13 |
-
<li>Advanced security features such as ransomware detection and recovery in OneDrive, two-step identity verification in your Personal Vault, data encryption and automatic deactivation of unsafe links in Outlook.</li>
|
14 |
-
<li>Ongoing technical support by chat or on the phone.</li>
|
15 |
-
<li>New features such as Microsoft Defender for identity theft monitoring, Microsoft Editor for intelligent writing assistance, Microsoft Premium templates for more design options, Microsoft Teams for family and friends communication, Microsoft Family Safety for digital and physical safety.</li>
|
16 |
-
</ul>
|
17 |
-
<h3>Office 365 subscription plans and prices</h3>
|
18 |
-
<p>Office 365 subscription prices vary depending on the plan, the number of users, and the payment frequency. For individual users, Office 365 Personal is $69.99 per year or $6.99 per month, and allows up to five devices simultaneously. For business users, Microsoft 365 Business Basic is $6 per user per month, and Microsoft 365 Business Premium is $22 per user per month, both with annual subscriptions. For enterprise users, Microsoft 365 Apps for enterprise is $12 per user per month, and Office 365 E1 is $10 per user per month, both with annual subscriptions. The prices are based on the United States market and may change in the future.</p>
|
19 |
-
<p>How to download office 365 for free<br />
|
20 |
-
Download office 365 offline installer<br />
|
21 |
-
Download office 365 home premium<br />
|
22 |
-
Download office 365 pro plus iso<br />
|
23 |
-
Download office 365 backup and restore<br />
|
24 |
-
Download office 365 email attachments<br />
|
25 |
-
Download office 365 with product key<br />
|
26 |
-
Download office 365 business essentials<br />
|
27 |
-
Download office 365 personal subscription<br />
|
28 |
-
Download office 365 outlook app<br />
|
29 |
-
Download office 365 on chromebook<br />
|
30 |
-
Download office 365 deployment tool<br />
|
31 |
-
Download office 365 for macbook air<br />
|
32 |
-
Download office 365 education for students<br />
|
33 |
-
Download office 365 admin center<br />
|
34 |
-
Download office 365 visio professional<br />
|
35 |
-
Download office 365 project online desktop client<br />
|
36 |
-
Download office 365 shared mailbox<br />
|
37 |
-
Download office 365 teams app<br />
|
38 |
-
Download office 365 word templates<br />
|
39 |
-
Download office 365 excel add ins<br />
|
40 |
-
Download office 365 powerpoint themes<br />
|
41 |
-
Download office 365 access database engine<br />
|
42 |
-
Download office 365 publisher trial<br />
|
43 |
-
Download office 365 onedrive for business sync client<br />
|
44 |
-
Download office 365 calendar to iphone<br />
|
45 |
-
Download office 365 contacts to android<br />
|
46 |
-
Download office 365 group policy templates<br />
|
47 |
-
Download office 365 language pack<br />
|
48 |
-
Download office 365 update assistant<br />
|
49 |
-
Download office 365 audit log reports<br />
|
50 |
-
Download office 365 mailbox content search results<br />
|
51 |
-
Download office 365 sharepoint designer<br />
|
52 |
-
Download office 365 forms app<br />
|
53 |
-
Download office 365 planner desktop app<br />
|
54 |
-
Download office 365 sway app<br />
|
55 |
-
Download office 365 yammer app<br />
|
56 |
-
Download office 365 stream app<br />
|
57 |
-
Download office 365 to do app<br />
|
58 |
-
Download office 365 whiteboard app</p>
|
59 |
-
<h2>How to download and install Office 365 on your device</h2>
|
60 |
-
<p>Before you download and install Office 365 on your device, make sure you have a valid subscription or product key. You also need to check the system requirements for Office 365 to ensure compatibility with your device.</p>
|
61 |
-
<h3>System requirements for Office 365</h3>
|
62 |
-
<p>The system requirements for Office 365 depend on the device and the operating system you are using. For Windows devices, you need Windows 10, Windows 8.1, or Windows 7 Service Pack 1. For Mac devices, you need macOS 10.14 Mojave or later. For iOS devices, you need iOS 13.0 or later. For Android devices, you need Android 6.0 or later. You also need a processor speed of at least 1 GHz, a memory of at least 2 GB, and a disk space of at least 4 GB.</p>
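<p>For readers who prefer to verify the disk-space requirement from a terminal before installing, a rough check like the one below works on most Unix-like systems. This is a sketch, not part of the Office installer: the 4 GB threshold simply mirrors the figure above, and the <code>df</code> column layout assumed here can vary slightly between platforms.</p>

```shell
# Rough pre-install check: is at least 4 GB available on this filesystem?
need_kb=$((4 * 1024 * 1024))                 # 4 GB expressed in kilobytes
free_kb=$(df -k . | awk 'NR==2 {print $4}')  # "available" column for the current directory
if [ "$free_kb" -ge "$need_kb" ]; then
  echo "enough free space to install Office"
else
  echo "free up disk space before installing"
fi
```

<p>The same idea extends to checking memory on Linux with <code>free</code>, though the simplest route is still to let the installer report any shortfall itself.</p>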
|
63 |
-
<h3>Steps to download and install Office 365 on a PC or Mac</h3>
|
64 |
-
<p>To download and install Office 365 on a PC or Mac, follow these steps:</p>
|
65 |
-
<ol>
|
66 |
-
<li>Go to the Microsoft 365 website and sign in with your Microsoft account or create one if you don't have one.</li>
|
67 |
-
<li>Select the Office 365 plan that suits your needs and click on Buy now or Try for free.</li>
|
68 |
-
<li>Enter your payment details and confirm your purchase or start your free trial.</li>
|
69 |
-
<li>Go to the Services & subscriptions page and find your Office 365 subscription.</li>
|
70 |
-
<li>Click on Install and follow the instructions on the screen to download the setup file.</li>
|
71 |
-
<li>Run the setup file and wait for the installation to complete.</li>
|
72 |
-
<li>Launch any Office app and sign in with your Microsoft account to activate your subscription.</li>
|
73 |
-
</ol>
|
74 |
-
<h3>Steps to download and install Office 365 on a mobile device</h3>
|
75 |
-
<p>To download and install Office 365 on a mobile device, follow these steps:</p>
|
76 |
-
<ol>
|
77 |
-
<li>Go to the App Store (for iOS devices) or Google Play Store (for Android devices) and search for Microsoft Office: Word, Excel, PowerPoint & More.</li>
|
78 |
-
<li>Download and install the app on your device.</li>
|
79 |
-
<li>Open the app and tap on Sign in with an account used for Office.</li>
|
80 |
-
<li>Enter your Microsoft account credentials and sign in to activate your subscription.</li>
|
81 |
-
<li>You can also access individual Office apps such as Word, Excel, PowerPoint, Outlook, OneNote, OneDrive by downloading them separately from the App Store or Google Play Store.</li>
|
82 |
-
</ol>
|
83 |
-
<h2>How to activate and update Office 365</h2>
|
84 |
-
<p>After you download and install Office 365 on your device, you need to activate it with your Microsoft account to access all the features and benefits. You also need to update Office 365 regularly to get the latest security patches and improvements.</p>
|
85 |
-
<h3>How to sign in and activate Office 365</h3>
|
86 |
-
<p>To sign in and activate Office 365, follow these steps:</p>
|
87 |
-
<ol>
|
88 |
-
<li>Open any Office app such as Word, Excel, PowerPoint, Outlook, OneNote, OneDrive.</li>
|
89 |
-
<li>If prompted, enter your Microsoft account email and password and click on Sign in.</li>
|
90 |
-
<li>If you have multiple Office subscriptions associated with your account, choose the one you want to activate and click on Next.</li>
|
91 |
-
<li>You will see a message that says "You're all set!" This means that your Office 365 subscription is activated on your device.</li>
|
92 |
-
</ol>
|
93 |
-
<h3>How to check for updates and keep Office 365 up to date</h3>
|
94 |
-
<p>To check for updates and keep Office 365 up to date, follow these steps:</p>
|
95 |
-
<ol>
|
96 |
-
<li>Open any Office app such as Word, Excel, PowerPoint, Outlook, OneNote, OneDrive.</li>
|
97 |
-
<li>Click on File > Account > Update Options > Update Now.</li>
|
98 |
-
<li>If there are any updates available, they will be downloaded and installed automatically.</li>
|
99 |
-
<li>You can also enable automatic updates by clicking on File > Account > Update Options > Enable Updates.</li>
|
100 |
-
</ol>
|
101 |
-
<h2>Conclusion</h2>
|
102 |
-
<p>In this article, we have explained how to download Office 365 on your device and enjoy its features and benefits. We have also shown you how to activate and update Office 365 to get the most out of it. With Office 365, you can work from anywhere, on any device, and access your email, files, and Office programs online or offline. You can also collaborate online with your team members and create professional and engaging documents and presentations. If you are ready to get started with Office 365, visit the Microsoft 365 website today!</p>
|
103 |
-
<h3>Summary of the main points</h3>
|
104 |
-
<ul>
|
105 |
-
<li>Office 365 is a cloud-based subscription service that provides premium apps, 1 TB of cloud storage, collaboration tools, productivity tools, security features, technical support, new features etc.</li>
|
106 |
-
<li>Office 365 subscription plans vary depending on the number of users and the payment frequency. For individual users, it costs $69.99 per year or $6.99 per month, and allows up to five devices simultaneously. For business users, it costs from $6 to $22 per user per month, depending on the plan. For enterprise users, it costs from $10 to $12 per user per month, depending on the plan.</li>
|
107 |
-
<li>To download and install Office 365 on your device, you need to have a valid subscription or product key, and check the system requirements for compatibility. You also need to go to the Microsoft 365 website and sign in with your Microsoft account or create one if you don't have one. Then, you need to select the Office 365 plan that suits your needs and follow the instructions on the screen to download and install the setup file. For mobile devices, you need to download and install the Microsoft Office app or individual Office apps from the App Store or Google Play Store.</li>
|
108 |
-
<li>To activate and update Office 365 on your device, you need to open any Office app and sign in with your Microsoft account. You also need to check for updates regularly and enable automatic updates to get the latest security patches and improvements.</li>
|
109 |
-
</ul>
|
110 |
-
<h3>Call to action and link to Microsoft 365 website</h3>
|
111 |
-
<p>If you want to learn more about Office 365 and its features and benefits, visit the Microsoft 365 website. There you can compare the different Office 365 plans and prices and choose the one that best fits your needs. Don't miss this opportunity to boost your productivity and creativity with Office 365!</p>
|
112 |
-
<h2>FAQs</h2>
|
113 |
-
<p>Here are some frequently asked questions about Office 365:</p>
|
114 |
-
<ul>
|
115 |
-
<li><b>What is the difference between Office 365 and Microsoft 365?</b></li>
|
116 |
-
<p>Office 365 is a part of Microsoft 365, which is a broader bundle of services that includes Office 365, Windows 10, and Enterprise Mobility + Security. Microsoft 365 offers more features and benefits than Office 365, such as advanced security, device management, and Windows Virtual Desktop.</p>
|
117 |
-
<li><b>Can I use Office 365 offline?</b></li>
|
118 |
-
<p>Yes, you can use Office 365 offline by installing the desktop versions of the Office apps on your device. You can access your files and documents offline by syncing them with OneDrive or saving them locally on your device. However, some features and functions may not be available or work properly offline.</p>
|
119 |
-
<li><b>How many devices can I use Office 365 on?</b></li>
|
120 |
-
<p>The number of devices you can use Office 365 on depends on your subscription plan. For individual users, you can use Office 365 on up to five devices simultaneously with one subscription. For business users, you can use Office 365 on up to five devices per user with one subscription. For enterprise users, you can use Office 365 on up to five devices per user with one subscription.</p>
|
121 |
-
<li><b>How do I cancel my Office 365 subscription?</b></li>
|
122 |
-
<p>To cancel your Office 365 subscription, follow these steps:</p>
|
123 |
-
<ol>
|
124 |
-
<li>Go to the Services & subscriptions page and sign in with your Microsoft account.</li>
|
125 |
-
<li>Find your Office 365 subscription and click on Manage.</li>
|
126 |
-
<li>Click on Cancel or Turn off recurring billing.</li>
|
127 |
-
<li>Follow the instructions on the screen to confirm your cancellation.</li>
|
128 |
-
</ol>
|
129 |
-
<p>Note that if you cancel your subscription before it expires, you will lose access to all the features and benefits of Office 365. You will also lose any unused time left in your subscription period.</p>
|
130 |
-
<li><b>How do I renew my Office 365 subscription?</b></li>
|
131 |
-
<p>To renew your Office 365 subscription, follow these steps:</p>
|
132 |
-
<ol>
|
133 |
-
<li>Go to the Services & subscriptions page and sign in with your Microsoft account.</li>
|
134 |
-
<li>Find your Office 365 subscription and click on Renew.</li>
|
135 |
-
<li>Select the plan that suits your needs and click on Buy now or Try for free.</li>
|
136 |
-
<li>Enter your payment details and confirm your purchase or start your free trial.</li>
|
137 |
-
</ol>
|
138 |
-
<p>Note that if you renew your subscription before it expires, you will keep all the features and benefits of Office 365. You will also extend your subscription period by one year from the original expiration date.</p>
|
139 |
-
<br />
|
140 |
-
<br />
|
spaces/1phancelerku/anime-remove-background/Farm Heroes Saga MOD APK How to Download and Install the Latest Version with Unlimited Features.md
DELETED
@@ -1,112 +0,0 @@
|
|
1 |
-
|
2 |
-
<h1>Farm Heroes Saga Mod APK: A Fun and Addictive Farm-Themed Game</h1>
|
3 |
-
<p>If you are looking for a casual and relaxing game that can keep you entertained for hours, you might want to try Farm Heroes Saga. This is a popular puzzle game that challenges you to match cropsies and save the farm from the evil Rancid the Racoon. But what if you want to enjoy the game without any limitations or interruptions? That's where Farm Heroes Saga Mod APK comes in. In this article, we will tell you everything you need to know about this modified version of the game, including its features, benefits, drawbacks, and how to download and install it on your device.</p>
|
4 |
-
<h2>What is Farm Heroes Saga?</h2>
|
5 |
-
<p>Farm Heroes Saga is a fascinating farm-themed game developed by King, the same company behind Candy Crush Saga, Pet Rescue Saga, and other popular games. It is the last Saga game in the Saga Series, and it has over 100 million downloads on Google Play Store. </p>
|
6 |
-
<h2>farm heroes saga mod apk</h2><br /><p><b><b>Download Zip</b> ✦✦✦ <a href="https://jinyurl.com/2uNU9F">https://jinyurl.com/2uNU9F</a></b></p><br /><br />
|
7 |
-
<h3>The gameplay of Farm Heroes Saga</h3>
|
8 |
-
<p>The gameplay of Farm Heroes Saga is similar to other match-3 games, but with a twist. Instead of matching candies or jewels, you have to match cropsies, which are cute fruits and vegetables that grow on the farm. You have to match at least three cropsies of the same type to collect them and complete the level objectives. Some levels require you to collect a certain number of cropsies, while others require you to clear mud, ice, or fire from the board. You also have to deal with Rancid the Racoon, who tries to ruin your farm by throwing junk at you. You can use boosters and power-ups to help you overcome the challenges and earn more stars.</p>
|
9 |
-
<h3>The features of Farm Heroes Saga</h3>
|
10 |
-
<p>Farm Heroes Saga has many features that make it fun and addictive. Some of them are:</p>
|
11 |
-
<ul>
|
12 |
-
<li>Thousands of levels with different difficulties and objectives.</li>
|
13 |
-
<li>A variety of cropsies with different abilities and effects.</li>
|
14 |
-
<li>A colorful and charming graphics and sound design.</li>
|
15 |
-
<li>A social element that allows you to connect with your Facebook friends and compete with them on the leaderboards.</li>
|
16 |
-
<li>A farm club that lets you collect animals and rewards as you progress through the game.</li>
|
17 |
-
<li>A daily bonus wheel that gives you a chance to win free boosters and lives.</li>
|
18 |
-
</ul>
|
19 |
-
<h2>What is Farm Heroes Saga Mod APK?</h2>
|
20 |
-
<p>Farm Heroes Saga Mod APK is a modified version of the original game that gives you some advantages and extra features. It is not an official app from King, but rather a third-party app created by some developers who want to enhance the gaming experience for the players.</p>
|
21 |
-
<p>farm heroes saga unlimited lives and boosters apk<br />
|
22 |
-
farm heroes saga hack apk download<br />
|
23 |
-
farm heroes saga mod apk latest version<br />
|
24 |
-
farm heroes saga mod apk android 1<br />
|
25 |
-
farm heroes saga mod apk unlimited gold bars<br />
|
26 |
-
farm heroes saga mod apk revdl<br />
|
27 |
-
farm heroes saga mod apk offline<br />
|
28 |
-
farm heroes saga mod apk no root<br />
|
29 |
-
farm heroes saga mod apk free download<br />
|
30 |
-
farm heroes saga mod apk unlimited everything<br />
|
31 |
-
farm heroes saga mod apk unlimited moves<br />
|
32 |
-
farm heroes saga mod apk 2023<br />
|
33 |
-
farm heroes saga mod apk rexdl<br />
|
34 |
-
farm heroes saga mod apk happymod<br />
|
35 |
-
farm heroes saga mod apk online<br />
|
36 |
-
farm heroes saga mod apk unlimited beans<br />
|
37 |
-
farm heroes saga mod apk 6.15.3<br />
|
38 |
-
farm heroes saga mod apk for pc<br />
|
39 |
-
farm heroes saga mod apk pure<br />
|
40 |
-
farm heroes saga mod apk old version<br />
|
41 |
-
farm heroes saga premium mod apk<br />
|
42 |
-
farm heroes saga mega mod apk<br />
|
43 |
-
farm heroes saga super mod apk<br />
|
44 |
-
farm heroes saga pro mod apk<br />
|
45 |
-
farm heroes saga full mod apk<br />
|
46 |
-
farm heroes saga cracked mod apk<br />
|
47 |
-
farm heroes saga cheat mod apk<br />
|
48 |
-
farm heroes saga vip mod apk<br />
|
49 |
-
farm heroes saga unlocked mod apk<br />
|
50 |
-
farm heroes saga updated mod apk<br />
|
51 |
-
farm heroes saga new mod apk<br />
|
52 |
-
farm heroes saga best mod apk<br />
|
53 |
-
farm heroes saga easy mod apk<br />
|
54 |
-
farm heroes saga original mod apk<br />
|
55 |
-
farm heroes saga latest hack apk<br />
|
56 |
-
download game farm heroes saga mod apk<br />
|
57 |
-
how to install farm heroes saga mod apk<br />
|
58 |
-
how to play farm heroes saga mod apk<br />
|
59 |
-
how to get farm heroes saga mod apk<br />
|
60 |
-
how to update farm heroes saga mod apk</p>
|
61 |
-
<h3>The benefits of Farm Heroes Saga Mod APK</h3>
|
62 |
-
<p>Some of the benefits of using Farm Heroes Saga Mod APK are:</p>
|
63 |
-
<ul>
|
64 |
-
<li>You get unlimited lives, so you don't have to wait for them to refill or buy them with real money.</li>
|
65 |
-
<li>You get unlimited boosters, so you can use them as much as you want without running out or spending money.</li>
|
66 |
-
<li>You get unlimited gold bars, so you can buy more boosters, power-ups, or extra moves whenever you need them.</li>
|
67 |
-
<li>You get unlimited magic beans, so you can unlock more animals and rewards in the farm club.</li>
|
68 |
-
<li>You get all levels unlocked, so you can play any level you want without having to complete the previous ones.</li>
|
69 |
-
</ul>
|
70 |
-
<h3>The drawbacks of Farm Heroes Saga Mod APK</h3>
|
71 |
-
<p>However, there are also some drawbacks of using Farm Heroes Saga Mod APK that you should be aware of before downloading and installing it. Some of them are:</p>
|
72 |
-
<ul>
|
73 |
-
<li>You may face some compatibility issues with your device or the game version, as the mod APK may not be updated regularly or may not support all devices.</li>
|
74 |
-
<li>You may encounter some bugs or glitches in the game, as the mod APK may not be tested thoroughly or may interfere with the game's functionality.</li>
|
75 |
-
<li>You may risk losing your game progress or data, as the mod APK may not sync with your Facebook account or the game's server.</li>
|
76 |
-
<li>You may violate the game's terms of service or privacy policy, as the mod APK may modify the game's code or data without permission from the developer.</li>
|
77 |
-
<li>You may expose your device to malware or viruses, as the mod APK may contain harmful or malicious files or links that can harm your device or steal your information.</li>
|
78 |
-
</ul>
|
79 |
-
<h2>How to download and install Farm Heroes Saga Mod APK?</h2>
|
80 |
-
<p>If you still want to try Farm Heroes Saga Mod APK despite the drawbacks, you need to follow some steps to download and install it on your device. Here are the steps:</p>
|
81 |
-
<h3>The steps to download and install Farm Heroes Saga Mod APK</h3>
|
82 |
-
<ol>
|
83 |
-
<li>First, you need to find a reliable and trustworthy source that provides the download link for Farm Heroes Saga Mod APK. You can search online for some reviews or recommendations from other users who have tried it before.</li>
|
84 |
-
<li>Next, you need to enable the unknown sources option on your device's settings. This will allow you to install apps from sources other than Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.</li>
|
85 |
-
<li>Then, you need to download the Farm Heroes Saga Mod APK file from the source you have chosen. Make sure you have enough storage space on your device and a stable internet connection.</li>
|
86 |
-
<li>After that, you need to locate the downloaded file on your device's file manager and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.</li>
|
87 |
-
<li>Finally, you need to launch the game and enjoy playing Farm Heroes Saga Mod APK with unlimited resources and features.</li>
|
88 |
-
</ol>
|
89 |
-
<h3>The tips to play Farm Heroes Saga Mod APK safely and smoothly</h3>
|
90 |
-
<p>To avoid any problems or issues while playing Farm Heroes Saga Mod APK, here are some tips you can follow:</p>
|
91 |
-
<ul>
|
92 |
-
<li>Make sure you have a backup of your original game data before installing the mod APK, in case you want to switch back to the official version or restore your progress.</li>
|
93 |
-
<li>Make sure you update the mod APK whenever there is a new version available, to avoid any compatibility issues or bugs.</li>
|
94 |
-
<li>Make sure you scan the mod APK file with an antivirus software before installing it, to prevent any malware or viruses from infecting your device.</li>
|
95 |
-
<li>Make sure you play the game offline or with a VPN, to avoid any detection or ban from the game's server or developer.</li>
|
96 |
-
<li>Make sure you enjoy the game responsibly and moderately, and do not use it for any illegal or unethical purposes.</li>
|
97 |
-
</ul>
|
98 |
-
<h2>Conclusion</h2>
|
99 |
-
<p>Farm Heroes Saga is a fun and addictive farm-themed game that can keep you entertained for hours. However, if you want to enjoy the game without any limitations or interruptions, you can try Farm Heroes Saga Mod APK. This is a modified version of the game that gives you unlimited resources and features, but also comes with some drawbacks and risks. Therefore, you need to be careful and cautious when downloading and installing it on your device. We hope this article has given you some useful information and tips about Farm Heroes Saga Mod APK. If you have any questions or feedback, feel free to leave a comment below.</p>
|
100 |
-
<h3>A summary of the main points</h3>
|
101 |
-
<p>In this article, we have discussed:</p>
|
102 |
-
<ul>
|
103 |
-
<li>What is Farm Heroes Saga and what are its features?</li>
|
104 |
-
<li>What is Farm Heroes Saga Mod APK and what are its benefits and drawbacks?</li>
|
105 |
-
<li>How to download and install Farm Heroes Saga Mod APK on your device?</li>
|
106 |
-
<li>How to play Farm Heroes Saga Mod APK safely and smoothly?</li>
|
107 |
-
</ul>
|
108 |
-
<h3>A call to action for the readers</h3>
|
109 |
-
<p>If you are interested in trying Farm Heroes Saga Mod APK, you can follow the steps we have provided above. However, make sure you are aware of the drawbacks and risks involved in using it. Also, make sure you do not use it for any illegal or unethical purposes. If you like this article, please share it with your friends who might also enjoy playing Farm Heroes Saga Mod APK. Thank you for reading!</p>
|
110 |
-
<h2>FAQs</h2>
<ul>
<li><b>Is Farm Heroes Saga Mod APK safe to use?</b></li>
<p>Farm Heroes Saga Mod APK is not an official app from King, but a third-party app created by developers who want to enhance the gaming experience. It is therefore not guaranteed to be safe or secure, and it may contain harmful or malicious files or links that can harm your device or steal your information. Always scan the mod APK file with antivirus software before installing it, and play the game offline or with a VPN to avoid detection or a ban from the game's server or developer.</p>
<li><b>Is Farm Heroes Saga Mod APK legal to use?</b></li>
<p>No. It violates the game's terms of service and privacy policy, and it infringes King's intellectual property rights by modifying the game's code or data without permission. Using it may result in legal action or penalties from King or other authorities. Always respect the rights and rules of the original game and its developer, and do not use the mod APK for any illegal or unethical purposes.</p>
<li><b>How can I update Farm Heroes Saga Mod APK?</b></li>
<p>The mod APK may not be updated regularly or support all devices or game versions, so you may face compatibility issues or bugs. To update it, find a reliable and trustworthy source that provides the latest version of the mod APK file, then download and install the new version on your device following the same steps provided above.</p>
<li><b>How can I restore my original game data after using Farm Heroes Saga Mod APK?</b></li>
<p>The mod APK may not sync with your Facebook account or the game's server, so you risk losing your game progress or data. Always keep a backup of your original game data before installing the mod APK. To restore it, uninstall the mod APK from your device, reinstall the official version of Farm Heroes Saga from Google Play Store, then log in with your Facebook account and sync your game data with the game's server.</p>
<li><b>How can I contact the developer of Farm Heroes Saga Mod APK?</b></li>
<p>The developers of the mod APK are unknown, and no contact information is available for them. If you have questions or feedback, you can try to find them online or leave a comment on their website or social media platforms, but there is no guarantee they will respond or provide any support for their app.</p>
</ul>
|
111 |
-
<br />
|
112 |
-
<br />
|
spaces/2023Liu2023/bingo/src/lib/isomorphic/browser.ts
DELETED
@@ -1,11 +0,0 @@
|
|
1 |
-
'use client'
|
2 |
-
|
3 |
-
const debug = console.info.bind(console)
|
4 |
-
|
5 |
-
class WebSocketAlias extends WebSocket {
|
6 |
-
constructor(address: string | URL, ...args: any) {
|
7 |
-
super(address)
|
8 |
-
}
|
9 |
-
}
|
10 |
-
|
11 |
-
export default { fetch, WebSocket: WebSocketAlias, debug }
|
spaces/A00001/bingothoo/src/lib/isomorphic/browser.ts
DELETED
@@ -1,11 +0,0 @@
|
|
1 |
-
'use client'
|
2 |
-
|
3 |
-
const debug = console.info.bind(console)
|
4 |
-
|
5 |
-
class WebSocketAlias extends WebSocket {
|
6 |
-
constructor(address: string | URL, ...args: any) {
|
7 |
-
super(address)
|
8 |
-
}
|
9 |
-
}
|
10 |
-
|
11 |
-
export default { fetch, WebSocket: WebSocketAlias, debug }
|
spaces/AI-DHD/Youtube-Whisperer/app.py
DELETED
@@ -1,93 +0,0 @@
|
|
1 |
-
import gradio as gr
|
2 |
-
import whisper
|
3 |
-
from pytube import YouTube
|
4 |
-
#Please modify this code to allow multiple links to be uploaded for batch editing and change the output to downloadable.txt files
|
5 |
-
|
6 |
-
class GradioInference():
|
7 |
-
def __init__(self):
|
8 |
-
self.sizes = list(whisper._MODELS.keys())
|
9 |
-
self.langs = ["none"] + sorted(list(whisper.tokenizer.LANGUAGES.values()))
|
10 |
-
self.current_size = "base"
|
11 |
-
self.loaded_model = whisper.load_model(self.current_size)
|
12 |
-
self.yt = None
|
13 |
-
|
14 |
-
def __call__(self, link, lang, size, subs):
|
15 |
-
if self.yt is None:
|
16 |
-
self.yt = YouTube(link)
|
17 |
-
path = self.yt.streams.filter(only_audio=True)[0].download(filename="tmp.mp4")
|
18 |
-
|
19 |
-
if lang == "none":
|
20 |
-
lang = None
|
21 |
-
|
22 |
-
if size != self.current_size:
|
23 |
-
self.loaded_model = whisper.load_model(size)
|
24 |
-
self.current_size = size
|
25 |
-
results = self.loaded_model.transcribe(path, language=lang)
|
26 |
-
|
27 |
-
if subs == "None":
|
28 |
-
return results["text"]
|
29 |
-
elif subs == ".srt":
|
30 |
-
return self.srt(results["segments"])
|
31 |
-
elif ".csv" == ".csv":
|
32 |
-
return self.csv(results["segments"])
|
33 |
-
|
34 |
-
def srt(self, segments):
|
35 |
-
output = ""
|
36 |
-
for i, segment in enumerate(segments):
|
37 |
-
output += f"{i+1}\n"
|
38 |
-
output += f"{self.format_time(segment['start'])} --> {self.format_time(segment['end'])}\n"
|
39 |
-
output += f"{segment['text']}\n\n"
|
40 |
-
return output
|
41 |
-
|
42 |
-
def csv(self, segments):
|
43 |
-
output = ""
|
44 |
-
for segment in segments:
|
45 |
-
output += f"{segment['start']},{segment['end']},{segment['text']}\n"
|
46 |
-
return output
|
47 |
-
|
48 |
-
def format_time(self, time):
|
49 |
-
hours = time//3600
|
50 |
-
minutes = (time - hours*3600)//60
|
51 |
-
seconds = time - hours*3600 - minutes*60
|
52 |
-
milliseconds = (time - int(time))*1000
|
53 |
-
return f"{int(hours):02d}:{int(minutes):02d}:{int(seconds):02d},{int(milliseconds):03d}"
|
54 |
-
|
55 |
-
def populate_metadata(self, link):
|
56 |
-
self.yt = YouTube(link)
|
57 |
-
return self.yt.thumbnail_url, self.yt.title
|
58 |
-
|
59 |
-
gio = GradioInference()
|
60 |
-
title="Youtube Whisperer"
|
61 |
-
description="Speech to text transcription of Youtube videos using OpenAI's Whisper"
|
62 |
-
|
63 |
-
block = gr.Blocks()
|
64 |
-
with block:
|
65 |
-
gr.HTML(
|
66 |
-
"""
|
67 |
-
<div style="text-align: center; max-width: 500px; margin: 0 auto;">
|
68 |
-
<div>
|
69 |
-
<h1>Youtube Whisperer</h1>
|
70 |
-
</div>
|
71 |
-
<p style="margin-bottom: 10px; font-size: 94%">
|
72 |
-
Speech to text transcription of Youtube videos using OpenAI's Whisper
|
73 |
-
</p>
|
74 |
-
</div>
|
75 |
-
"""
|
76 |
-
)
|
77 |
-
with gr.Group():
|
78 |
-
with gr.Box():
|
79 |
-
with gr.Row().style(equal_height=True):
|
80 |
-
sz = gr.Dropdown(label="Model Size", choices=gio.sizes, value='base')
|
81 |
-
lang = gr.Dropdown(label="Language (Optional)", choices=gio.langs, value="none")
|
82 |
-
with gr.Row().style(equal_height=True):
|
83 |
-
wt = gr.Radio(["None", ".srt", ".csv"], label="With Timestamps?")
|
84 |
-
link = gr.Textbox(label="YouTube Link")
|
85 |
-
title = gr.Label(label="Video Title")
|
86 |
-
with gr.Row().style(equal_height=True):
|
87 |
-
img = gr.Image(label="Thumbnail")
|
88 |
-
text = gr.Textbox(label="Transcription", placeholder="Transcription Output", lines=10)
|
89 |
-
with gr.Row().style(equal_height=True):
|
90 |
-
btn = gr.Button("Transcribe")
|
91 |
-
btn.click(gio, inputs=[link, lang, sz, wt], outputs=[text])
|
92 |
-
link.change(gio.populate_metadata, inputs=[link], outputs=[img, title])
|
93 |
-
block.launch()
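The deleted app.py above converts Whisper segment timestamps into SRT's `HH:MM:SS,mmm` notation. A minimal standalone sketch of that formatting logic, mirroring the original method names (the sample times are illustrative, not from the original file):

```python
def format_time(t: float) -> str:
    """Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm."""
    hours = int(t // 3600)
    minutes = int((t - hours * 3600) // 60)
    seconds = int(t - hours * 3600 - minutes * 60)
    milliseconds = int((t - int(t)) * 1000)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d},{milliseconds:03d}"

def to_srt(segments) -> str:
    """Render Whisper-style segments (dicts with start/end/text) as an SRT block."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n{format_time(seg['start'])} --> {format_time(seg['end'])}\n{seg['text']}\n"
        )
    return "\n".join(blocks)

print(format_time(3661.5))  # → 01:01:01,500
```

Note the comma (not a period) before the milliseconds field — that detail is what distinguishes SRT timestamps from most other timecode formats.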
|
spaces/AIGC-Audio/Make_An_Audio_inpaint/ldm/modules/encoders/open_clap/feature_fusion.py
DELETED
@@ -1,193 +0,0 @@
|
|
1 |
-
'''
|
2 |
-
Feature Fusion for Varible-Length Data Processing
|
3 |
-
AFF/iAFF is referred and modified from https://github.com/YimianDai/open-aff/blob/master/aff_pytorch/aff_net/fusion.py
|
4 |
-
According to the paper: Yimian Dai et al, Attentional Feature Fusion, IEEE Winter Conference on Applications of Computer Vision, WACV 2021
|
5 |
-
'''
|
6 |
-
|
7 |
-
import torch
|
8 |
-
import torch.nn as nn
|
9 |
-
|
10 |
-
|
11 |
-
class DAF(nn.Module):
|
12 |
-
'''
|
13 |
-
直接相加 DirectAddFuse
|
14 |
-
'''
|
15 |
-
|
16 |
-
def __init__(self):
|
17 |
-
super(DAF, self).__init__()
|
18 |
-
|
19 |
-
def forward(self, x, residual):
|
20 |
-
return x + residual
|
21 |
-
|
22 |
-
|
23 |
-
class iAFF(nn.Module):
|
24 |
-
'''
|
25 |
-
多特征融合 iAFF
|
26 |
-
'''
|
27 |
-
|
28 |
-
def __init__(self, channels=64, r=4, type='2D'):
|
29 |
-
super(iAFF, self).__init__()
|
30 |
-
inter_channels = int(channels // r)
|
31 |
-
|
32 |
-
if type == '1D':
|
33 |
-
# 本地注意力
|
34 |
-
self.local_att = nn.Sequential(
|
35 |
-
nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
|
36 |
-
nn.BatchNorm1d(inter_channels),
|
37 |
-
nn.ReLU(inplace=True),
|
38 |
-
nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
|
39 |
-
nn.BatchNorm1d(channels),
|
40 |
-
)
|
41 |
-
|
42 |
-
# 全局注意力
|
43 |
-
            self.global_att = nn.Sequential(
                nn.AdaptiveAvgPool1d(1),
                nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm1d(inter_channels),
                nn.ReLU(inplace=True),
                nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm1d(channels),
            )

            # second local attention
            self.local_att2 = nn.Sequential(
                nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm1d(inter_channels),
                nn.ReLU(inplace=True),
                nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm1d(channels),
            )
            # second global attention
            self.global_att2 = nn.Sequential(
                nn.AdaptiveAvgPool1d(1),
                nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm1d(inter_channels),
                nn.ReLU(inplace=True),
                nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm1d(channels),
            )
        elif type == '2D':
            # local attention
            self.local_att = nn.Sequential(
                nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm2d(inter_channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm2d(channels),
            )

            # global attention
            self.global_att = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm2d(inter_channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm2d(channels),
            )

            # second local attention
            self.local_att2 = nn.Sequential(
                nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm2d(inter_channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm2d(channels),
            )
            # second global attention
            self.global_att2 = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm2d(inter_channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm2d(channels),
            )
        else:
            # raising a plain f-string is a TypeError in Python 3; raise a real exception
            raise ValueError('the type is not supported')

        self.sigmoid = nn.Sigmoid()

    def forward(self, x, residual):
        flag = False
        xa = x + residual
        if xa.size(0) == 1:
            xa = torch.cat([xa, xa], dim=0)
            flag = True
        xl = self.local_att(xa)
        xg = self.global_att(xa)
        xlg = xl + xg
        wei = self.sigmoid(xlg)
        xi = x * wei + residual * (1 - wei)

        xl2 = self.local_att2(xi)
        xg2 = self.global_att2(xi)  # was self.global_att(xi); the second global branch was defined but unused
        xlg2 = xl2 + xg2
        wei2 = self.sigmoid(xlg2)
        xo = x * wei2 + residual * (1 - wei2)
        if flag:
            xo = xo[0].unsqueeze(0)
        return xo


class AFF(nn.Module):
    '''
    Multi-feature fusion (AFF)
    '''

    def __init__(self, channels=64, r=4, type='2D'):
        super(AFF, self).__init__()
        inter_channels = int(channels // r)

        if type == '1D':
            self.local_att = nn.Sequential(
                nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm1d(inter_channels),
                nn.ReLU(inplace=True),
                nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm1d(channels),
            )
            self.global_att = nn.Sequential(
                nn.AdaptiveAvgPool1d(1),
                nn.Conv1d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm1d(inter_channels),
                nn.ReLU(inplace=True),
                nn.Conv1d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm1d(channels),
            )
        elif type == '2D':
            self.local_att = nn.Sequential(
                nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm2d(inter_channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm2d(channels),
            )
            self.global_att = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, inter_channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm2d(inter_channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(inter_channels, channels, kernel_size=1, stride=1, padding=0),
                nn.BatchNorm2d(channels),
            )
        else:
            raise ValueError('the type is not supported.')

        self.sigmoid = nn.Sigmoid()

    def forward(self, x, residual):
        flag = False
        xa = x + residual
        if xa.size(0) == 1:
            xa = torch.cat([xa, xa], dim=0)
            flag = True
        xl = self.local_att(xa)
        xg = self.global_att(xa)
        xlg = xl + xg
        wei = self.sigmoid(xlg)
        xo = 2 * x * wei + 2 * residual * (1 - wei)
        if flag:
            xo = xo[0].unsqueeze(0)
        return xo
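Per element, the fusion step in `AFF.forward` reduces to `xo = 2*x*w + 2*residual*(1 - w)` with `w = sigmoid(local + global logits)`. A minimal scalar sketch of that rule (plain Python, no torch; the single `logit` stands in for the summed attention logits):

```python
import math

def aff_fuse(x, residual, logit):
    # AFF fusion rule: w = sigmoid(attention logit),
    # output = 2*x*w + 2*residual*(1 - w)
    w = 1.0 / (1.0 + math.exp(-logit))
    return 2.0 * x * w + 2.0 * residual * (1.0 - w)

# with logit 0, w = 0.5 and the fusion reduces to x + residual
print(aff_fuse(3.0, 1.0, 0.0))  # → 4.0
```

As the logit grows, the fused output converges to `2*x`, i.e. the attention weight fully favors the first input.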
spaces/AIGText/GlyphControl/README.md
DELETED
@@ -1,15 +0,0 @@
---
title: GlyphControl
emoji: 🏢
colorFrom: pink
colorTo: blue
sdk: gradio
# sdk_version: 3.29.0
sdk_version: 3.36.1
# python_version: 3.9.17
app_file: app.py
pinned: false
license: mit
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/yolov7/yolov7_tiny_fast_1xb12-40e_cat.py
DELETED
@@ -1,56 +0,0 @@
_base_ = 'yolov7_tiny_syncbn_fast_8x16b-300e_coco.py'

data_root = './data/cat/'
class_name = ('cat', )
num_classes = len(class_name)
metainfo = dict(classes=class_name, palette=[(20, 220, 60)])

anchors = [
    [(68, 69), (154, 91), (143, 162)],  # P3/8
    [(242, 160), (189, 287), (391, 207)],  # P4/16
    [(353, 337), (539, 341), (443, 432)]  # P5/32
]

max_epochs = 40
train_batch_size_per_gpu = 12
train_num_workers = 4

load_from = 'https://download.openmmlab.com/mmyolo/v0/yolov7/yolov7_tiny_syncbn_fast_8x16b-300e_coco/yolov7_tiny_syncbn_fast_8x16b-300e_coco_20221126_102719-0ee5bbdf.pth'  # noqa

model = dict(
    backbone=dict(frozen_stages=4),
    bbox_head=dict(
        head_module=dict(num_classes=num_classes),
        prior_generator=dict(base_sizes=anchors)))

train_dataloader = dict(
    batch_size=train_batch_size_per_gpu,
    num_workers=train_num_workers,
    dataset=dict(
        data_root=data_root,
        metainfo=metainfo,
        ann_file='annotations/trainval.json',
        data_prefix=dict(img='images/')))

val_dataloader = dict(
    dataset=dict(
        metainfo=metainfo,
        data_root=data_root,
        ann_file='annotations/test.json',
        data_prefix=dict(img='images/')))

test_dataloader = val_dataloader

_base_.optim_wrapper.optimizer.batch_size_per_gpu = train_batch_size_per_gpu

val_evaluator = dict(ann_file=data_root + 'annotations/test.json')
test_evaluator = val_evaluator

default_hooks = dict(
    checkpoint=dict(interval=10, max_keep_ckpts=2, save_best='auto'),
    # The warmup_mim_iter parameter is critical.
    # The default value is 1000 which is not suitable for cat datasets.
    param_scheduler=dict(max_epochs=max_epochs, warmup_mim_iter=10),
    logger=dict(type='LoggerHook', interval=5))
train_cfg = dict(max_epochs=max_epochs, val_interval=10)
# visualizer = dict(vis_backends = [dict(type='LocalVisBackend'), dict(type='WandbVisBackend')]) # noqa
spaces/Abeer123/Pokemon_Digimon/app.py
DELETED
@@ -1,19 +0,0 @@
from fastai.vision.all import *
import gradio as gr


learner = load_learner('export.pkl')

categories = ('Digimon', 'Pokemon')

def classify_image(img):
    pred, idx, probs = learner.predict(img)
    return dict(zip(categories, map(float, probs)))


image = gr.inputs.Image(shape=(192, 192))
label = gr.outputs.Label()
examples = ['dedenne.jpg', 'agumon.jpg', 'genesect.jpg']

intf = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples)
intf.launch(inline=False)
spaces/Adapter/CoAdapter/models/README.md
DELETED
@@ -1,6 +0,0 @@
You can manually download the models from
- [T2I-Adapter v1](https://huggingface.co/TencentARC/T2I-Adapter/tree/main/models)
- [CoAdapter Preview version](https://huggingface.co/TencentARC/T2I-Adapter/tree/main/models)
- [third-party-models](https://huggingface.co/TencentARC/T2I-Adapter/tree/main/third-party-models)

and put them into the `models` folder.
spaces/Aditya9790/yolo7-object-tracking/deploy/triton-inference-server/boundingbox.py
DELETED
@@ -1,33 +0,0 @@
class BoundingBox:
    def __init__(self, classID, confidence, x1, x2, y1, y2, image_width, image_height):
        self.classID = classID
        self.confidence = confidence
        self.x1 = x1
        self.x2 = x2
        self.y1 = y1
        self.y2 = y2
        self.u1 = x1 / image_width
        self.u2 = x2 / image_width
        self.v1 = y1 / image_height
        self.v2 = y2 / image_height

    def box(self):
        return (self.x1, self.y1, self.x2, self.y2)

    def width(self):
        return self.x2 - self.x1

    def height(self):
        return self.y2 - self.y1

    def center_absolute(self):
        return (0.5 * (self.x1 + self.x2), 0.5 * (self.y1 + self.y2))

    def center_normalized(self):
        return (0.5 * (self.u1 + self.u2), 0.5 * (self.v1 + self.v2))

    def size_absolute(self):
        return (self.x2 - self.x1, self.y2 - self.y1)

    def size_normalized(self):
        return (self.u2 - self.u1, self.v2 - self.v1)
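A quick usage sketch of the helpers above. This is a trimmed copy of the class with hypothetical pixel coordinates, just to show absolute vs. image-normalized accessors:

```python
class BoundingBox:
    # trimmed copy of the class above, enough to demo the helpers
    def __init__(self, classID, confidence, x1, x2, y1, y2, image_width, image_height):
        self.x1, self.x2, self.y1, self.y2 = x1, x2, y1, y2
        self.u1, self.u2 = x1 / image_width, x2 / image_width
        self.v1, self.v2 = y1 / image_height, y2 / image_height

    def size_absolute(self):
        return (self.x2 - self.x1, self.y2 - self.y1)

    def center_normalized(self):
        return (0.5 * (self.u1 + self.u2), 0.5 * (self.v1 + self.v2))

# a 100x100 px box with top-left corner at (100, 200) in a 640x480 image
box = BoundingBox(0, 0.9, 100, 200, 200, 300, 640, 480)
print(box.size_absolute())      # → (100, 100)
print(box.center_normalized())  # roughly (0.23, 0.52)
```

Normalized coordinates are handy because they survive any later resize of the image.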
spaces/Aditya9790/yolo7-object-tracking/deploy/triton-inference-server/client.py
DELETED
@@ -1,334 +0,0 @@
#!/usr/bin/env python

import argparse
import numpy as np
import sys
import cv2

import tritonclient.grpc as grpcclient
from tritonclient.utils import InferenceServerException

from processing import preprocess, postprocess
from render import render_box, render_filled_box, get_text_size, render_text, RAND_COLORS
from labels import COCOLabels

INPUT_NAMES = ["images"]
OUTPUT_NAMES = ["num_dets", "det_boxes", "det_scores", "det_classes"]

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('mode',
                        choices=['dummy', 'image', 'video'],
                        default='dummy',
                        help='Run mode. \'dummy\' will send an empty buffer to the server to test if inference works. \'image\' will process an image. \'video\' will process a video.')
    parser.add_argument('input',
                        type=str,
                        nargs='?',
                        help='Input file to load from in image or video mode')
    parser.add_argument('-m',
                        '--model',
                        type=str,
                        required=False,
                        default='yolov7',
                        help='Inference model name, default yolov7')
    parser.add_argument('--width',
                        type=int,
                        required=False,
                        default=640,
                        help='Inference model input width, default 640')
    parser.add_argument('--height',
                        type=int,
                        required=False,
                        default=640,
                        help='Inference model input height, default 640')
    parser.add_argument('-u',
                        '--url',
                        type=str,
                        required=False,
                        default='localhost:8001',
                        help='Inference server URL, default localhost:8001')
    parser.add_argument('-o',
                        '--out',
                        type=str,
                        required=False,
                        default='',
                        help='Write output into file instead of displaying it')
    parser.add_argument('-f',
                        '--fps',
                        type=float,
                        required=False,
                        default=24.0,
                        help='Video output fps, default 24.0 FPS')
    parser.add_argument('-i',
                        '--model-info',
                        action="store_true",
                        required=False,
                        default=False,
                        help='Print model status, configuration and statistics')
    parser.add_argument('-v',
                        '--verbose',
                        action="store_true",
                        required=False,
                        default=False,
                        help='Enable verbose client output')
    parser.add_argument('-t',
                        '--client-timeout',
                        type=float,
                        required=False,
                        default=None,
                        help='Client timeout in seconds, default no timeout')
    parser.add_argument('-s',
                        '--ssl',
                        action="store_true",
                        required=False,
                        default=False,
                        help='Enable SSL encrypted channel to the server')
    parser.add_argument('-r',
                        '--root-certificates',
                        type=str,
                        required=False,
                        default=None,
                        help='File holding PEM-encoded root certificates, default none')
    parser.add_argument('-p',
                        '--private-key',
                        type=str,
                        required=False,
                        default=None,
                        help='File holding PEM-encoded private key, default is none')
    parser.add_argument('-x',
                        '--certificate-chain',
                        type=str,
                        required=False,
                        default=None,
                        help='File holding PEM-encoded certificate chain, default is none')

    FLAGS = parser.parse_args()

    # Create server context
    try:
        triton_client = grpcclient.InferenceServerClient(
            url=FLAGS.url,
            verbose=FLAGS.verbose,
            ssl=FLAGS.ssl,
            root_certificates=FLAGS.root_certificates,
            private_key=FLAGS.private_key,
            certificate_chain=FLAGS.certificate_chain)
    except Exception as e:
        print("context creation failed: " + str(e))
        sys.exit()

    # Health check
    if not triton_client.is_server_live():
        print("FAILED : is_server_live")
        sys.exit(1)

    if not triton_client.is_server_ready():
        print("FAILED : is_server_ready")
        sys.exit(1)

    if not triton_client.is_model_ready(FLAGS.model):
        print("FAILED : is_model_ready")
        sys.exit(1)

    if FLAGS.model_info:
        # Model metadata
        try:
            metadata = triton_client.get_model_metadata(FLAGS.model)
            print(metadata)
        except InferenceServerException as ex:
            if "Request for unknown model" not in ex.message():
                print("FAILED : get_model_metadata")
                print("Got: {}".format(ex.message()))
                sys.exit(1)
            else:
                print("FAILED : get_model_metadata")
                sys.exit(1)

        # Model configuration
        try:
            config = triton_client.get_model_config(FLAGS.model)
            if not (config.config.name == FLAGS.model):
                print("FAILED: get_model_config")
                sys.exit(1)
            print(config)
        except InferenceServerException as ex:
            print("FAILED : get_model_config")
            print("Got: {}".format(ex.message()))
            sys.exit(1)

    # DUMMY MODE
    if FLAGS.mode == 'dummy':
        print("Running in 'dummy' mode")
        print("Creating empty buffer filled with ones...")
        inputs = []
        outputs = []
        inputs.append(grpcclient.InferInput(INPUT_NAMES[0], [1, 3, FLAGS.width, FLAGS.height], "FP32"))
        inputs[0].set_data_from_numpy(np.ones(shape=(1, 3, FLAGS.width, FLAGS.height), dtype=np.float32))
        outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[0]))
        outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[1]))
        outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[2]))
        outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[3]))

        print("Invoking inference...")
        results = triton_client.infer(model_name=FLAGS.model,
                                      inputs=inputs,
                                      outputs=outputs,
                                      client_timeout=FLAGS.client_timeout)
        if FLAGS.model_info:
            statistics = triton_client.get_inference_statistics(model_name=FLAGS.model)
            if len(statistics.model_stats) != 1:
                print("FAILED: get_inference_statistics")
                sys.exit(1)
            print(statistics)
        print("Done")

        for output in OUTPUT_NAMES:
            result = results.as_numpy(output)
            print(f"Received result buffer \"{output}\" of size {result.shape}")
            print(f"Naive buffer sum: {np.sum(result)}")

    # IMAGE MODE
    if FLAGS.mode == 'image':
        print("Running in 'image' mode")
        if not FLAGS.input:
            print("FAILED: no input image")
            sys.exit(1)

        inputs = []
        outputs = []
        inputs.append(grpcclient.InferInput(INPUT_NAMES[0], [1, 3, FLAGS.width, FLAGS.height], "FP32"))
        outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[0]))
        outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[1]))
        outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[2]))
        outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[3]))

        print("Creating buffer from image file...")
        input_image = cv2.imread(str(FLAGS.input))
        if input_image is None:
            print(f"FAILED: could not load input image {str(FLAGS.input)}")
            sys.exit(1)
        input_image_buffer = preprocess(input_image, [FLAGS.width, FLAGS.height])
        input_image_buffer = np.expand_dims(input_image_buffer, axis=0)

        inputs[0].set_data_from_numpy(input_image_buffer)

        print("Invoking inference...")
        results = triton_client.infer(model_name=FLAGS.model,
                                      inputs=inputs,
                                      outputs=outputs,
                                      client_timeout=FLAGS.client_timeout)
        if FLAGS.model_info:
            statistics = triton_client.get_inference_statistics(model_name=FLAGS.model)
            if len(statistics.model_stats) != 1:
                print("FAILED: get_inference_statistics")
                sys.exit(1)
            print(statistics)
        print("Done")

        for output in OUTPUT_NAMES:
            result = results.as_numpy(output)
            print(f"Received result buffer \"{output}\" of size {result.shape}")
            print(f"Naive buffer sum: {np.sum(result)}")

        num_dets = results.as_numpy(OUTPUT_NAMES[0])
        det_boxes = results.as_numpy(OUTPUT_NAMES[1])
        det_scores = results.as_numpy(OUTPUT_NAMES[2])
        det_classes = results.as_numpy(OUTPUT_NAMES[3])
        detected_objects = postprocess(num_dets, det_boxes, det_scores, det_classes, input_image.shape[1], input_image.shape[0], [FLAGS.width, FLAGS.height])
        print(f"Detected objects: {len(detected_objects)}")

        for box in detected_objects:
            print(f"{COCOLabels(box.classID).name}: {box.confidence}")
            input_image = render_box(input_image, box.box(), color=tuple(RAND_COLORS[box.classID % 64].tolist()))
            size = get_text_size(input_image, f"{COCOLabels(box.classID).name}: {box.confidence:.2f}", normalised_scaling=0.6)
            input_image = render_filled_box(input_image, (box.x1 - 3, box.y1 - 3, box.x1 + size[0], box.y1 + size[1]), color=(220, 220, 220))
            input_image = render_text(input_image, f"{COCOLabels(box.classID).name}: {box.confidence:.2f}", (box.x1, box.y1), color=(30, 30, 30), normalised_scaling=0.5)

        if FLAGS.out:
            cv2.imwrite(FLAGS.out, input_image)
            print(f"Saved result to {FLAGS.out}")
        else:
            cv2.imshow('image', input_image)
            cv2.waitKey(0)
            cv2.destroyAllWindows()

    # VIDEO MODE
    if FLAGS.mode == 'video':
        print("Running in 'video' mode")
        if not FLAGS.input:
            print("FAILED: no input video")
            sys.exit(1)

        inputs = []
        outputs = []
        inputs.append(grpcclient.InferInput(INPUT_NAMES[0], [1, 3, FLAGS.width, FLAGS.height], "FP32"))
        outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[0]))
        outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[1]))
        outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[2]))
        outputs.append(grpcclient.InferRequestedOutput(OUTPUT_NAMES[3]))

        print("Opening input video stream...")
        cap = cv2.VideoCapture(FLAGS.input)
        if not cap.isOpened():
            print(f"FAILED: cannot open video {FLAGS.input}")
            sys.exit(1)

        counter = 0
        out = None
        print("Invoking inference...")
        while True:
            ret, frame = cap.read()
            if not ret:
                print("failed to fetch next frame")
                break

            if counter == 0 and FLAGS.out:
                print("Opening output video stream...")
                fourcc = cv2.VideoWriter_fourcc('M', 'P', '4', 'V')
                out = cv2.VideoWriter(FLAGS.out, fourcc, FLAGS.fps, (frame.shape[1], frame.shape[0]))

            input_image_buffer = preprocess(frame, [FLAGS.width, FLAGS.height])
            input_image_buffer = np.expand_dims(input_image_buffer, axis=0)

            inputs[0].set_data_from_numpy(input_image_buffer)

            results = triton_client.infer(model_name=FLAGS.model,
                                          inputs=inputs,
                                          outputs=outputs,
                                          client_timeout=FLAGS.client_timeout)

            num_dets = results.as_numpy("num_dets")
            det_boxes = results.as_numpy("det_boxes")
            det_scores = results.as_numpy("det_scores")
            det_classes = results.as_numpy("det_classes")
            detected_objects = postprocess(num_dets, det_boxes, det_scores, det_classes, frame.shape[1], frame.shape[0], [FLAGS.width, FLAGS.height])
            print(f"Frame {counter}: {len(detected_objects)} objects")
            counter += 1

            for box in detected_objects:
                print(f"{COCOLabels(box.classID).name}: {box.confidence}")
                frame = render_box(frame, box.box(), color=tuple(RAND_COLORS[box.classID % 64].tolist()))
                size = get_text_size(frame, f"{COCOLabels(box.classID).name}: {box.confidence:.2f}", normalised_scaling=0.6)
                frame = render_filled_box(frame, (box.x1 - 3, box.y1 - 3, box.x1 + size[0], box.y1 + size[1]), color=(220, 220, 220))
                frame = render_text(frame, f"{COCOLabels(box.classID).name}: {box.confidence:.2f}", (box.x1, box.y1), color=(30, 30, 30), normalised_scaling=0.5)

            if FLAGS.out:
                out.write(frame)
            else:
                cv2.imshow('image', frame)
                if cv2.waitKey(1) == ord('q'):
                    break

        if FLAGS.model_info:
            statistics = triton_client.get_inference_statistics(model_name=FLAGS.model)
            if len(statistics.model_stats) != 1:
                print("FAILED: get_inference_statistics")
                sys.exit(1)
            print(statistics)
        print("Done")

        cap.release()
        if FLAGS.out:
            out.release()
        else:
            cv2.destroyAllWindows()
spaces/AgentVerse/agentVerse/ui/dist/assets/tilemaps/tiles/town.tsx
DELETED
@@ -1,4 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<tileset version="1.10" tiledversion="1.10.1" name="tileset" tilewidth="16" tileheight="16" tilecount="7984" columns="8">
 <image source="tileset.png" width="128" height="15968"/>
</tileset>
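The tileset attributes above are mutually consistent: a 128x15968 px image of 16x16 tiles gives 8 columns and 7984 tiles. A quick arithmetic check:

```python
# values copied from the <tileset> element above
tilewidth, tileheight, columns, tilecount = 16, 16, 8, 7984
img_w, img_h = 128, 15968

assert img_w // tilewidth == columns
assert (img_w // tilewidth) * (img_h // tileheight) == tilecount
print("tileset metadata consistent")  # → tileset metadata consistent
```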
spaces/Aki004/herta-so-vits/preprocess_flist_config.py
DELETED
@@ -1,75 +0,0 @@
import os
import argparse
import re

from tqdm import tqdm
from random import shuffle
import json
import wave

config_template = json.load(open("configs_template/config_template.json"))

pattern = re.compile(r'^[\.a-zA-Z0-9_\/]+$')

def get_wav_duration(file_path):
    with wave.open(file_path, 'rb') as wav_file:
        # get audio frames
        n_frames = wav_file.getnframes()
        # get sampling rate
        framerate = wav_file.getframerate()
        # calculate duration in seconds
        duration = n_frames / float(framerate)
    return duration

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--train_list", type=str, default="./filelists/train.txt", help="path to train list")
    parser.add_argument("--val_list", type=str, default="./filelists/val.txt", help="path to val list")
    parser.add_argument("--source_dir", type=str, default="./dataset/44k", help="path to source dir")
    args = parser.parse_args()

    train = []
    val = []
    idx = 0
    spk_dict = {}
    spk_id = 0
    for speaker in tqdm(os.listdir(args.source_dir)):
        spk_dict[speaker] = spk_id
        spk_id += 1
        wavs = ["/".join([args.source_dir, speaker, i]) for i in os.listdir(os.path.join(args.source_dir, speaker))]
        new_wavs = []
        for file in wavs:
            if not file.endswith("wav"):
                continue
            if not pattern.match(file):
                print(f"Warning: the file name of {file} contains characters other than letters, digits, dots, underscores and slashes, which may cause issues. (or maybe not)")
            if get_wav_duration(file) < 0.3:
                print("skip too short audio:", file)
                continue
            new_wavs.append(file)
        wavs = new_wavs
        shuffle(wavs)
        train += wavs[2:]
        val += wavs[:2]

    shuffle(train)
    shuffle(val)

    print("Writing", args.train_list)
    with open(args.train_list, "w") as f:
        for fname in tqdm(train):
            wavpath = fname
            f.write(wavpath + "\n")

    print("Writing", args.val_list)
    with open(args.val_list, "w") as f:
        for fname in tqdm(val):
            wavpath = fname
            f.write(wavpath + "\n")

    config_template["spk"] = spk_dict
    config_template["model"]["n_speakers"] = spk_id

    print("Writing configs/config.json")
    with open("configs/config.json", "w") as f:
        json.dump(config_template, f, indent=2)
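The per-speaker split in the script above (shuffle, first two files to validation, the rest to training) can be sketched in isolation, with hypothetical file names:

```python
from random import shuffle, seed

def split_train_val(wavs, n_val=2):
    # mirrors the script above: shuffle in place, first n_val files
    # go to validation, the rest to training
    wavs = list(wavs)
    shuffle(wavs)
    return wavs[n_val:], wavs[:n_val]

seed(0)  # deterministic for the demo
train, val = split_train_val([f"spk/{i}.wav" for i in range(6)])
print(len(train), len(val))  # → 4 2
```

Because the split is per speaker, every speaker contributes exactly two validation clips regardless of how many files they have.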
spaces/AlexZou/SCUTAUTO210b/README.md
DELETED
@@ -1,13 +0,0 @@
---
title: SCUTAUTO210b
emoji: 🐠
colorFrom: green
colorTo: blue
sdk: gradio
sdk_version: 3.9.1
app_file: app.py
pinned: false
license: openrail
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Alichuan/VITS-Umamusume-voice-synthesizer/monotonic_align/setup.py
DELETED
@@ -1,9 +0,0 @@
from distutils.core import setup
from Cython.Build import cythonize
import numpy

setup(
    name='monotonic_align',
    ext_modules=cythonize("core.pyx"),
    include_dirs=[numpy.get_include()]
)
spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/cppipc/pool_alloc.cpp
DELETED
@@ -1,17 +0,0 @@
#include "libipc/pool_alloc.h"

#include "libipc/memory/resource.h"

namespace ipc {
namespace mem {

void* pool_alloc::alloc(std::size_t size) {
    return async_pool_alloc::alloc(size);
}

void pool_alloc::free(void* p, std::size_t size) {
    async_pool_alloc::free(p, size);
}

} // namespace mem
} // namespace ipc
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/loaders.py
DELETED
The diff for this file is too large to render. See raw diff.

spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/__init__.py
DELETED
File without changes
spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/README.md
DELETED
@@ -1,61 +0,0 @@
-# Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
-
-## Introduction
-
-[ALGORITHM]
-
-```latex
-@article{Ren_2017,
-  title={Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks},
-  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
-  publisher={Institute of Electrical and Electronics Engineers (IEEE)},
-  author={Ren, Shaoqing and He, Kaiming and Girshick, Ross and Sun, Jian},
-  year={2017},
-  month={Jun},
-}
-```
-
-## Results and models
-
-| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
-| :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :------: | :--------: |
-| R-50-DC5 | caffe | 1x | - | - | 37.2 | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/faster_rcnn/faster_rcnn_r50_caffe_dc5_1x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_dc5_1x_coco/faster_rcnn_r50_caffe_dc5_1x_coco_20201030_151909-531f0f43.pth) | [log](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_dc5_1x_coco/faster_rcnn_r50_caffe_dc5_1x_coco_20201030_151909.log.json) |
-| R-50-FPN | caffe | 1x | 3.8 | | 37.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_fpn_1x_coco/faster_rcnn_r50_caffe_fpn_1x_coco_bbox_mAP-0.378_20200504_180032-c5925ee5.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_fpn_1x_coco/faster_rcnn_r50_caffe_fpn_1x_coco_20200504_180032.log.json) |
-| R-50-FPN | pytorch | 1x | 4.0 | 21.4 | 37.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130_204655.log.json) |
-| R-50-FPN | pytorch | 2x | - | - | 38.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r50_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_2x_coco/faster_rcnn_r50_fpn_2x_coco_bbox_mAP-0.384_20200504_210434-a5d8aa15.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_2x_coco/faster_rcnn_r50_fpn_2x_coco_20200504_210434.log.json) |
-| R-101-FPN | caffe | 1x | 5.7 | | 39.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r101_caffe_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r101_caffe_fpn_1x_coco/faster_rcnn_r101_caffe_fpn_1x_coco_bbox_mAP-0.398_20200504_180057-b269e9dd.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r101_caffe_fpn_1x_coco/faster_rcnn_r101_caffe_fpn_1x_coco_20200504_180057.log.json) |
-| R-101-FPN | pytorch | 1x | 6.0 | 15.6 | 39.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r101_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r101_fpn_1x_coco/faster_rcnn_r101_fpn_1x_coco_20200130-f513f705.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r101_fpn_1x_coco/faster_rcnn_r101_fpn_1x_coco_20200130_204655.log.json) |
-| R-101-FPN | pytorch | 2x | - | - | 39.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r101_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r101_fpn_2x_coco/faster_rcnn_r101_fpn_2x_coco_bbox_mAP-0.398_20200504_210455-1d2dac9c.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r101_fpn_2x_coco/faster_rcnn_r101_fpn_2x_coco_20200504_210455.log.json) |
-| X-101-32x4d-FPN | pytorch | 1x | 7.2 | 13.8 | 41.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_x101_32x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_x101_32x4d_fpn_1x_coco/faster_rcnn_x101_32x4d_fpn_1x_coco_20200203-cff10310.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_x101_32x4d_fpn_1x_coco/faster_rcnn_x101_32x4d_fpn_1x_coco_20200203_000520.log.json) |
-| X-101-32x4d-FPN | pytorch | 2x | - | - | 41.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_x101_32x4d_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_x101_32x4d_fpn_2x_coco/faster_rcnn_x101_32x4d_fpn_2x_coco_bbox_mAP-0.412_20200506_041400-64a12c0b.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_x101_32x4d_fpn_2x_coco/faster_rcnn_x101_32x4d_fpn_2x_coco_20200506_041400.log.json) |
-| X-101-64x4d-FPN | pytorch | 1x | 10.3 | 9.4 | 42.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_x101_64x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_x101_64x4d_fpn_1x_coco/faster_rcnn_x101_64x4d_fpn_1x_coco_20200204-833ee192.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_x101_64x4d_fpn_1x_coco/faster_rcnn_x101_64x4d_fpn_1x_coco_20200204_134340.log.json) |
-| X-101-64x4d-FPN | pytorch | 2x | - | - | 41.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_x101_64x4d_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_x101_64x4d_fpn_2x_coco/faster_rcnn_x101_64x4d_fpn_2x_coco_20200512_161033-5961fa95.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_x101_64x4d_fpn_2x_coco/faster_rcnn_x101_64x4d_fpn_2x_coco_20200512_161033.log.json) |
-
-## Different regression loss
-
-We trained with the R-50-FPN PyTorch-style backbone under the 1x schedule.
-
-| Backbone | Loss type | Mem (GB) | Inf time (fps) | box AP | Config | Download |
-| :-------------: | :-------: | :------: | :------------: | :----: | :------: | :--------: |
-| R-50-FPN | L1Loss | 4.0 | 21.4 | 37.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130_204655.log.json) |
-| R-50-FPN | IoULoss | | | 37.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_iou_1x_coco-fdd207f3.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_iou_1x_coco_20200506_095954.log.json) |
-| R-50-FPN | GIoULoss | | | 37.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_giou_1x_coco-0eada910.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_giou_1x_coco_20200505_161120.log.json) |
-| R-50-FPN | BoundedIoULoss | | | 37.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_bounded_iou_1x_coco-98ad993b.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_bounded_iou_1x_coco_20200505_160738.log.json) |
-
-## Pre-trained Models
-
-We also train some models with longer schedules and multi-scale training. Users can fine-tune them for downstream tasks.
-
-| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
-| :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :------: | :--------: |
-| [R-50-DC5](./faster_rcnn_r50_caffe_dc5_mstrain_1x_coco.py) | caffe | 1x | - | | 37.4 | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/faster_rcnn/faster_rcnn_r50_caffe_dc5_mstrain_1x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_dc5_mstrain_1x_coco/faster_rcnn_r50_caffe_dc5_mstrain_1x_coco_20201028_233851-b33d21b9.pth) | [log](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_dc5_mstrain_1x_coco/faster_rcnn_r50_caffe_dc5_mstrain_1x_coco_20201028_233851.log.json) |
-| [R-50-DC5](./faster_rcnn_r50_caffe_dc5_mstrain_3x_coco.py) | caffe | 3x | - | | 38.7 | [config](https://github.com/open-mmlab/mmdetection/blob/master/configs/faster_rcnn/faster_rcnn_r50_caffe_dc5_mstrain_3x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_dc5_mstrain_3x_coco/faster_rcnn_r50_caffe_dc5_mstrain_3x_coco_20201028_002107-34a53b2c.pth) | [log](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_dc5_mstrain_3x_coco/faster_rcnn_r50_caffe_dc5_mstrain_3x_coco_20201028_002107.log.json) |
-| [R-50-FPN](./faster_rcnn_r50_caffe_fpn_mstrain_2x_coco.py) | caffe | 2x | 4.3 | | 39.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_2x_coco/faster_rcnn_r50_caffe_fpn_mstrain_2x_coco_bbox_mAP-0.397_20200504_231813-10b2de58.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_2x_coco/faster_rcnn_r50_caffe_fpn_mstrain_2x_coco_20200504_231813.log.json) |
-| [R-50-FPN](./faster_rcnn_r50_caffe_fpn_mstrain_3x_coco.py) | caffe | 3x | 4.3 | | 40.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco_bbox_mAP-0.398_20200504_163323-30042637.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco/faster_rcnn_r50_caffe_fpn_mstrain_3x_coco_20200504_163323.log.json) |
-
-We further fine-tune some pre-trained models on COCO subsets that contain only a few of the 80 categories.
-
-| Backbone | Style | Class name | Pre-trained model | Mem (GB) | box AP | Config | Download |
-| --- | --- | --- | --- | --- | --- | --- | --- |
-| [R-50-FPN](./faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-person.py) | caffe | person | [R-50-FPN-Caffe-3x](./faster_rcnn_r50_caffe_fpn_mstrain_3x_coco.py) | 3.7 | 55.8 | [config](./faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-person.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco-person/faster_rcnn_r50_fpn_1x_coco-person_20201216_175929-d022e227.pth) | [log](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco-person/faster_rcnn_r50_fpn_1x_coco-person_20201216_175929.log.json) |
-| [R-50-FPN](./faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-person-bicycle-car.py) | caffe | person-bicycle-car | [R-50-FPN-Caffe-3x](./faster_rcnn_r50_caffe_fpn_mstrain_3x_coco.py) | 3.7 | 44.1 | [config](./faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-person-bicycle-car.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco-person-bicycle-car/faster_rcnn_r50_fpn_1x_coco-person-bicycle-car_20201216_173117-6eda6d92.pth) | [log](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco-person-bicycle-car/faster_rcnn_r50_fpn_1x_coco-person-bicycle-car_20201216_173117.log.json) |
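An aside on the "Different regression loss" section of the README deleted above: IoULoss and GIoULoss are built on box-overlap quantities. This is a minimal plain-Python sketch of the linear IoU loss (1 − IoU) and the GIoU loss, not mmdetection's implementation (mmdetection also supports a log-based IoU loss variant).

```python
def _inter_union(a, b):
    """Intersection and union areas of two boxes in [x1, y1, x2, y2] format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter, area_a + area_b - inter


def iou_loss(a, b):
    """Linear IoU loss: 1 - IoU (assumes the union is non-degenerate)."""
    inter, union = _inter_union(a, b)
    return 1.0 - inter / union


def giou_loss(a, b):
    """GIoU loss: 1 - (IoU - (C - U) / C), where C is the enclosing box area."""
    inter, union = _inter_union(a, b)
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    c = (cx2 - cx1) * (cy2 - cy1)
    return 1.0 - (inter / union - (c - union) / c)
```

GIoU differs from IoU only for non-overlapping or loosely overlapping boxes, where the enclosing-box penalty keeps the gradient informative.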
spaces/Andy1621/uniformer_image_detection/configs/nas_fcos/nas_fcos_fcoshead_r50_caffe_fpn_gn-head_4x4_1x_coco.py
DELETED
@@ -1,98 +0,0 @@
-_base_ = [
-    '../_base_/datasets/coco_detection.py',
-    '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-
-model = dict(
-    type='NASFCOS',
-    pretrained='open-mmlab://detectron2/resnet50_caffe',
-    backbone=dict(
-        type='ResNet',
-        depth=50,
-        num_stages=4,
-        out_indices=(0, 1, 2, 3),
-        frozen_stages=1,
-        norm_cfg=dict(type='BN', requires_grad=False, eps=0),
-        style='caffe'),
-    neck=dict(
-        type='NASFCOS_FPN',
-        in_channels=[256, 512, 1024, 2048],
-        out_channels=256,
-        start_level=1,
-        add_extra_convs=True,
-        num_outs=5,
-        norm_cfg=dict(type='BN'),
-        conv_cfg=dict(type='DCNv2', deform_groups=2)),
-    bbox_head=dict(
-        type='FCOSHead',
-        num_classes=80,
-        in_channels=256,
-        stacked_convs=4,
-        feat_channels=256,
-        strides=[8, 16, 32, 64, 128],
-        norm_cfg=dict(type='GN', num_groups=32),
-        loss_cls=dict(
-            type='FocalLoss',
-            use_sigmoid=True,
-            gamma=2.0,
-            alpha=0.25,
-            loss_weight=1.0),
-        loss_bbox=dict(type='IoULoss', loss_weight=1.0),
-        loss_centerness=dict(
-            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)),
-    train_cfg=dict(
-        assigner=dict(
-            type='MaxIoUAssigner',
-            pos_iou_thr=0.5,
-            neg_iou_thr=0.4,
-            min_pos_iou=0,
-            ignore_iof_thr=-1),
-        allowed_border=-1,
-        pos_weight=-1,
-        debug=False),
-    test_cfg=dict(
-        nms_pre=1000,
-        min_bbox_size=0,
-        score_thr=0.05,
-        nms=dict(type='nms', iou_threshold=0.6),
-        max_per_img=100))
-
-img_norm_cfg = dict(
-    mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
-
-train_pipeline = [
-    dict(type='LoadImageFromFile'),
-    dict(type='LoadAnnotations', with_bbox=True),
-    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
-    dict(type='RandomFlip', flip_ratio=0.5),
-    dict(type='Normalize', **img_norm_cfg),
-    dict(type='Pad', size_divisor=32),
-    dict(type='DefaultFormatBundle'),
-    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
-]
-
-test_pipeline = [
-    dict(type='LoadImageFromFile'),
-    dict(
-        type='MultiScaleFlipAug',
-        img_scale=(1333, 800),
-        flip=False,
-        transforms=[
-            dict(type='Resize', keep_ratio=True),
-            dict(type='RandomFlip'),
-            dict(type='Normalize', **img_norm_cfg),
-            dict(type='Pad', size_divisor=32),
-            dict(type='ImageToTensor', keys=['img']),
-            dict(type='Collect', keys=['img']),
-        ])
-]
-
-data = dict(
-    samples_per_gpu=4,
-    workers_per_gpu=2,
-    train=dict(pipeline=train_pipeline),
-    val=dict(pipeline=test_pipeline),
-    test=dict(pipeline=test_pipeline))
-
-optimizer = dict(
-    lr=0.01, paramwise_cfg=dict(bias_lr_mult=2., bias_decay_mult=0.))
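The `_base_` list at the top of the deleted config relies on mmcv-style config inheritance: each base file is loaded, and the child config is merged over it, with child values winning on conflicts. A simplified sketch of that merge rule (a hypothetical `merge_cfg` on plain dicts, not mmcv's actual code):

```python
def merge_cfg(base, override):
    """Recursively merge `override` into `base`; override values win.
    Nested dicts are merged key-by-key instead of being replaced wholesale."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge_cfg(out[key], value)
        else:
            out[key] = value
    return out


# A child config overriding one backbone field from a base config.
base = {'model': {'backbone': {'depth': 50, 'style': 'pytorch'},
                  'neck': {'num_outs': 5}}}
child = {'model': {'backbone': {'style': 'caffe'}}}
merged = merge_cfg(base, child)
```

The deep merge is what lets the NAS-FCOS config above override only `conv_cfg` or `norm_cfg` while inheriting the rest of the dataset and schedule settings unchanged.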
spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/kd_one_stage.py
DELETED
@@ -1,100 +0,0 @@
-import mmcv
-import torch
-from mmcv.runner import load_checkpoint
-
-from .. import build_detector
-from ..builder import DETECTORS
-from .single_stage import SingleStageDetector
-
-
-@DETECTORS.register_module()
-class KnowledgeDistillationSingleStageDetector(SingleStageDetector):
-    r"""Implementation of `Distilling the Knowledge in a Neural Network.
-    <https://arxiv.org/abs/1503.02531>`_.
-
-    Args:
-        teacher_config (str | dict): Config file path
-            or the config object of teacher model.
-        teacher_ckpt (str, optional): Checkpoint path of teacher model.
-            If left as None, the model will not load any weights.
-    """
-
-    def __init__(self,
-                 backbone,
-                 neck,
-                 bbox_head,
-                 teacher_config,
-                 teacher_ckpt=None,
-                 eval_teacher=True,
-                 train_cfg=None,
-                 test_cfg=None,
-                 pretrained=None):
-        super().__init__(backbone, neck, bbox_head, train_cfg, test_cfg,
-                         pretrained)
-        self.eval_teacher = eval_teacher
-        # Build teacher model
-        if isinstance(teacher_config, str):
-            teacher_config = mmcv.Config.fromfile(teacher_config)
-        self.teacher_model = build_detector(teacher_config['model'])
-        if teacher_ckpt is not None:
-            load_checkpoint(
-                self.teacher_model, teacher_ckpt, map_location='cpu')
-
-    def forward_train(self,
-                      img,
-                      img_metas,
-                      gt_bboxes,
-                      gt_labels,
-                      gt_bboxes_ignore=None):
-        """
-        Args:
-            img (Tensor): Input images of shape (N, C, H, W).
-                Typically these should be mean centered and std scaled.
-            img_metas (list[dict]): A List of image info dict where each dict
-                has: 'img_shape', 'scale_factor', 'flip', and may also contain
-                'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'.
-                For details on the values of these keys see
-                :class:`mmdet.datasets.pipelines.Collect`.
-            gt_bboxes (list[Tensor]): Each item are the truth boxes for each
-                image in [tl_x, tl_y, br_x, br_y] format.
-            gt_labels (list[Tensor]): Class indices corresponding to each box
-            gt_bboxes_ignore (None | list[Tensor]): Specify which bounding
-                boxes can be ignored when computing the loss.
-        Returns:
-            dict[str, Tensor]: A dictionary of loss components.
-        """
-        x = self.extract_feat(img)
-        with torch.no_grad():
-            teacher_x = self.teacher_model.extract_feat(img)
-            out_teacher = self.teacher_model.bbox_head(teacher_x)
-        losses = self.bbox_head.forward_train(x, out_teacher, img_metas,
-                                              gt_bboxes, gt_labels,
-                                              gt_bboxes_ignore)
-        return losses
-
-    def cuda(self, device=None):
-        """Since teacher_model is registered as a plain object, it is necessary
-        to put the teacher model to cuda when calling cuda function."""
-        self.teacher_model.cuda(device=device)
-        return super().cuda(device=device)
-
-    def train(self, mode=True):
-        """Set the same train mode for teacher and student model."""
-        if self.eval_teacher:
-            self.teacher_model.train(False)
-        else:
-            self.teacher_model.train(mode)
-        super().train(mode)
-
-    def __setattr__(self, name, value):
-        """Set attribute, i.e. self.name = value
-
-        This reloading prevents the teacher model from being registered as a
-        nn.Module. The teacher module is registered as a plain object, so that
-        the teacher parameters will not show up when calling
-        ``self.parameters``, ``self.modules``, ``self.children`` methods.
-        """
-        if name == 'teacher_model':
-            object.__setattr__(self, name, value)
-        else:
-            super().__setattr__(name, value)
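The `__setattr__` override in the deleted detector above is the interesting part: it hides the teacher from PyTorch's module registration so its parameters never reach the optimizer. The same trick can be shown without torch installed; `Module` here is a toy stand-in for `nn.Module`'s attribute-registration behaviour, not the real class:

```python
class Module:
    """Toy stand-in for nn.Module: records sub-objects assigned as attributes."""

    def __init__(self):
        object.__setattr__(self, '_children', {})

    def __setattr__(self, name, value):
        if isinstance(value, Module):
            # Registered children are what .parameters()/.children() would walk.
            self._children[name] = value
        object.__setattr__(self, name, value)


class Student(Module):
    def __setattr__(self, name, value):
        # Same idea as the detector's __setattr__: store the teacher with
        # object.__setattr__ so it bypasses registration entirely.
        if name == 'teacher_model':
            object.__setattr__(self, name, value)
        else:
            super().__setattr__(name, value)


s = Student()
s.head = Module()           # registered as a child
s.teacher_model = Module()  # usable via the attribute, invisible to the registry
```

Because the teacher is a plain attribute, the real class must also forward `.cuda()` and `.train()` calls to it by hand, which is exactly what the two overridden methods above do.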
spaces/Anish13/characterGPT/README.md
DELETED
@@ -1,13 +0,0 @@
----
-title: CharacterGPT
-emoji: 🌍
-colorFrom: yellow
-colorTo: blue
-sdk: gradio
-sdk_version: 3.34.0
-app_file: app.py
-pinned: false
-license: artistic-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/docs/System-requirements.md
DELETED
@@ -1,42 +0,0 @@
-These are the VRAM and RAM requirements (in MiB) to run some examples of models **in 16-bit (default) precision**:
-
-| model | VRAM (GPU) | RAM |
-|:-----------------------|-------------:|--------:|
-| arxiv_ai_gpt2 | 1512.37 | 5824.2 |
-| blenderbot-1B-distill | 2441.75 | 4425.91 |
-| opt-1.3b | 2509.61 | 4427.79 |
-| gpt-neo-1.3b | 2605.27 | 5851.58 |
-| opt-2.7b | 5058.05 | 4863.95 |
-| gpt4chan_model_float16 | 11653.7 | 4437.71 |
-| gpt-j-6B | 11653.7 | 5633.79 |
-| galactica-6.7b | 12697.9 | 4429.89 |
-| opt-6.7b | 12700 | 4368.66 |
-| bloomz-7b1-p3 | 13483.1 | 4470.34 |
-
-#### GPU mode with 8-bit precision
-
-Allows you to load models that would not normally fit into your GPU. Enabled by default for 13b and 20b models in this web UI.
-
-| model | VRAM (GPU) | RAM |
-|:---------------|-------------:|--------:|
-| opt-13b | 12528.1 | 1152.39 |
-| gpt-neox-20b | 20384 | 2291.7 |
-
-#### CPU mode (32-bit precision)
-
-A lot slower, but does not require a GPU.
-
-On my i5-12400F, 6B models take around 10-20 seconds to respond in chat mode, and around 5 minutes to generate a 200 tokens completion.
-
-| model | RAM |
-|:-----------------------|---------:|
-| arxiv_ai_gpt2 | 4430.82 |
-| gpt-neo-1.3b | 6089.31 |
-| opt-1.3b | 8411.12 |
-| blenderbot-1B-distill | 8508.16 |
-| opt-2.7b | 14969.3 |
-| bloomz-7b1-p3 | 21371.2 |
-| gpt-j-6B | 24200.3 |
-| gpt4chan_model | 24246.3 |
-| galactica-6.7b | 26561.4 |
-| opt-6.7b | 29596.6 |
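A rule of thumb behind the 16-bit VRAM numbers in the deleted table: the weights alone take about 2 bytes per parameter, so a 6-billion-parameter model needs on the order of 11-12 GiB before activations and overhead, consistent with the ~11.6 GiB the table reports for gpt-j-6B. A sketch of that estimate:

```python
def fp16_weight_mib(n_params: int) -> float:
    """Lower bound on memory for model weights alone at 16-bit precision:
    2 bytes per parameter, converted to MiB. Activations, attention caches
    and framework overhead come on top of this."""
    return n_params * 2 / 2**20


# A 6-billion-parameter model needs at least this much just for its weights:
weights_mib = fp16_weight_mib(6_000_000_000)  # roughly 11.4 thousand MiB
```

The same arithmetic explains the other columns: 8-bit loading halves the weight footprint, and 32-bit CPU mode doubles it.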
spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/superboogav2/utils.py
DELETED
@@ -1,16 +0,0 @@
-"""
-This module contains common functions across multiple other modules.
-"""
-
-import extensions.superboogav2.parameters as parameters
-
-
-# Create the context using the prefix + data_separator + postfix from parameters.
-def create_context_text(results):
-    context = parameters.get_prefix() + parameters.get_data_separator().join(results) + parameters.get_postfix()
-
-    return context
-
-
-# Create metadata with the specified source
-def create_metadata_source(source: str):
-    return {'source': source}
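For illustration, here is what the deleted `create_context_text` produces, with hypothetical stand-in values for `parameters.get_prefix()`, `get_data_separator()`, and `get_postfix()` (the real values are user-configurable):

```python
# Hypothetical stand-ins for the values returned by the parameters module.
PREFIX = "Context:\n"
SEPARATOR = "\n---\n"
POSTFIX = "\nAnswer based on the context."


def create_context_text(results):
    # Mirrors the deleted helper: prefix + separator-joined chunks + postfix.
    return PREFIX + SEPARATOR.join(results) + POSTFIX


ctx = create_context_text(["chunk one", "chunk two"])
```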
spaces/AnthonyTruchetPoC/persistent-docker/start_server.sh
DELETED
@@ -1,3 +0,0 @@
-#!/bin/bash
-
-source ./start_jupyter.sh
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/urls.py
DELETED
@@ -1,62 +0,0 @@
-import os
-import string
-import urllib.parse
-import urllib.request
-from typing import Optional
-
-from .compat import WINDOWS
-
-
-def get_url_scheme(url: str) -> Optional[str]:
-    if ":" not in url:
-        return None
-    return url.split(":", 1)[0].lower()
-
-
-def path_to_url(path: str) -> str:
-    """
-    Convert a path to a file: URL. The path will be made absolute and have
-    quoted path parts.
-    """
-    path = os.path.normpath(os.path.abspath(path))
-    url = urllib.parse.urljoin("file:", urllib.request.pathname2url(path))
-    return url
-
-
-def url_to_path(url: str) -> str:
-    """
-    Convert a file: URL to a path.
-    """
-    assert url.startswith(
-        "file:"
-    ), f"You can only turn file: urls into filenames (not {url!r})"
-
-    _, netloc, path, _, _ = urllib.parse.urlsplit(url)
-
-    if not netloc or netloc == "localhost":
-        # According to RFC 8089, same as empty authority.
-        netloc = ""
-    elif WINDOWS:
-        # If we have a UNC path, prepend UNC share notation.
-        netloc = "\\\\" + netloc
-    else:
-        raise ValueError(
-            f"non-local file URIs are not supported on this platform: {url!r}"
-        )
-
-    path = urllib.request.url2pathname(netloc + path)
-
-    # On Windows, urlsplit parses the path as something like "/C:/Users/foo".
-    # This creates issues for path-related functions like io.open(), so we try
-    # to detect and strip the leading slash.
-    if (
-        WINDOWS
-        and not netloc  # Not UNC.
-        and len(path) >= 3
-        and path[0] == "/"  # Leading slash to strip.
-        and path[1] in string.ascii_letters  # Drive letter.
-        and path[2:4] in (":", ":/")  # Colon + end of string, or colon + absolute path.
-    ):
-        path = path[1:]
-
-    return path
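The deleted `path_to_url`/`url_to_path` pair is a thin wrapper over stdlib `urllib`; the round trip can be reproduced with the same calls. This standalone sketch covers only the POSIX case (the Windows branches above handle drive letters and UNC shares):

```python
import os
import urllib.parse
import urllib.request


def path_to_url(path: str) -> str:
    # Same calls as pip's helper: absolutize, then percent-quote via pathname2url.
    path = os.path.normpath(os.path.abspath(path))
    return urllib.parse.urljoin("file:", urllib.request.pathname2url(path))


def url_to_path(url: str) -> str:
    # POSIX-only sketch: empty or "localhost" authority means a local file.
    _, netloc, path, _, _ = urllib.parse.urlsplit(url)
    assert netloc in ("", "localhost"), f"non-local file URI: {url!r}"
    return urllib.request.url2pathname(path)


url = path_to_url("/tmp/some file.txt")   # space becomes %20
roundtrip = url_to_path(url)
```

Note how `pathname2url` handles the quoting: characters like spaces are percent-encoded on the way in and decoded by `url2pathname` on the way out.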
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/_securetransport/low_level.py
DELETED
@@ -1,397 +0,0 @@
-"""
-Low-level helpers for the SecureTransport bindings.
-
-These are Python functions that are not directly related to the high-level APIs
-but are necessary to get them to work. They include a whole bunch of low-level
-CoreFoundation messing about and memory management. The concerns in this module
-are almost entirely about trying to avoid memory leaks and providing
-appropriate and useful assistance to the higher-level code.
-"""
-import base64
-import ctypes
-import itertools
-import os
-import re
-import ssl
-import struct
-import tempfile
-
-from .bindings import CFConst, CoreFoundation, Security
-
-# This regular expression is used to grab PEM data out of a PEM bundle.
-_PEM_CERTS_RE = re.compile(
-    b"-----BEGIN CERTIFICATE-----\n(.*?)\n-----END CERTIFICATE-----", re.DOTALL
-)
-
-
-def _cf_data_from_bytes(bytestring):
-    """
-    Given a bytestring, create a CFData object from it. This CFData object must
-    be CFReleased by the caller.
-    """
-    return CoreFoundation.CFDataCreate(
-        CoreFoundation.kCFAllocatorDefault, bytestring, len(bytestring)
-    )
-
-
-def _cf_dictionary_from_tuples(tuples):
-    """
-    Given a list of Python tuples, create an associated CFDictionary.
-    """
-    dictionary_size = len(tuples)
-
-    # We need to get the dictionary keys and values out in the same order.
-    keys = (t[0] for t in tuples)
-    values = (t[1] for t in tuples)
-    cf_keys = (CoreFoundation.CFTypeRef * dictionary_size)(*keys)
-    cf_values = (CoreFoundation.CFTypeRef * dictionary_size)(*values)
-
-    return CoreFoundation.CFDictionaryCreate(
-        CoreFoundation.kCFAllocatorDefault,
-        cf_keys,
-        cf_values,
-        dictionary_size,
-        CoreFoundation.kCFTypeDictionaryKeyCallBacks,
-        CoreFoundation.kCFTypeDictionaryValueCallBacks,
-    )
-
-
-def _cfstr(py_bstr):
-    """
-    Given a Python binary data, create a CFString.
-    The string must be CFReleased by the caller.
-    """
-    c_str = ctypes.c_char_p(py_bstr)
-    cf_str = CoreFoundation.CFStringCreateWithCString(
-        CoreFoundation.kCFAllocatorDefault,
-        c_str,
-        CFConst.kCFStringEncodingUTF8,
-    )
-    return cf_str
-
-
-def _create_cfstring_array(lst):
-    """
-    Given a list of Python binary data, create an associated CFMutableArray.
-    The array must be CFReleased by the caller.
-
-    Raises an ssl.SSLError on failure.
-    """
-    cf_arr = None
-    try:
-        cf_arr = CoreFoundation.CFArrayCreateMutable(
-            CoreFoundation.kCFAllocatorDefault,
-            0,
-            ctypes.byref(CoreFoundation.kCFTypeArrayCallBacks),
-        )
-        if not cf_arr:
-            raise MemoryError("Unable to allocate memory!")
-        for item in lst:
-            cf_str = _cfstr(item)
-            if not cf_str:
-                raise MemoryError("Unable to allocate memory!")
-            try:
-                CoreFoundation.CFArrayAppendValue(cf_arr, cf_str)
-            finally:
-                CoreFoundation.CFRelease(cf_str)
-    except BaseException as e:
-        if cf_arr:
-            CoreFoundation.CFRelease(cf_arr)
-        raise ssl.SSLError("Unable to allocate array: %s" % (e,))
-    return cf_arr
-
-
-def _cf_string_to_unicode(value):
-    """
-    Creates a Unicode string from a CFString object. Used entirely for error
-    reporting.
-
-    Yes, it annoys me quite a lot that this function is this complex.
-    """
-    value_as_void_p = ctypes.cast(value, ctypes.POINTER(ctypes.c_void_p))
-
-    string = CoreFoundation.CFStringGetCStringPtr(
-        value_as_void_p, CFConst.kCFStringEncodingUTF8
-    )
-    if string is None:
-        buffer = ctypes.create_string_buffer(1024)
-        result = CoreFoundation.CFStringGetCString(
-            value_as_void_p, buffer, 1024, CFConst.kCFStringEncodingUTF8
-        )
-        if not result:
-            raise OSError("Error copying C string from CFStringRef")
-        string = buffer.value
-    if string is not None:
-        string = string.decode("utf-8")
-    return string
-
-
-def _assert_no_error(error, exception_class=None):
-    """
-    Checks the return code and throws an exception if there is an error to
-    report
-    """
-    if error == 0:
-        return
-
-    cf_error_string = Security.SecCopyErrorMessageString(error, None)
-    output = _cf_string_to_unicode(cf_error_string)
-    CoreFoundation.CFRelease(cf_error_string)
-
-    if output is None or output == u"":
-        output = u"OSStatus %s" % error
-
-    if exception_class is None:
-        exception_class = ssl.SSLError
-
-    raise exception_class(output)
-
-
-def _cert_array_from_pem(pem_bundle):
-    """
-    Given a bundle of certs in PEM format, turns them into a CFArray of certs
-    that can be used to validate a cert chain.
-    """
-    # Normalize the PEM bundle's line endings.
-    pem_bundle = pem_bundle.replace(b"\r\n", b"\n")
-
-    der_certs = [
-        base64.b64decode(match.group(1)) for match in _PEM_CERTS_RE.finditer(pem_bundle)
-    ]
-    if not der_certs:
-        raise ssl.SSLError("No root certificates specified")
-
-    cert_array = CoreFoundation.CFArrayCreateMutable(
-        CoreFoundation.kCFAllocatorDefault,
-        0,
-        ctypes.byref(CoreFoundation.kCFTypeArrayCallBacks),
-    )
-    if not cert_array:
-        raise ssl.SSLError("Unable to allocate memory!")
-
-    try:
-        for der_bytes in der_certs:
-            certdata = _cf_data_from_bytes(der_bytes)
-            if not certdata:
-                raise ssl.SSLError("Unable to allocate memory!")
-            cert = Security.SecCertificateCreateWithData(
-                CoreFoundation.kCFAllocatorDefault, certdata
-            )
-            CoreFoundation.CFRelease(certdata)
-            if not cert:
-                raise ssl.SSLError("Unable to build cert object!")
-
-            CoreFoundation.CFArrayAppendValue(cert_array, cert)
-            CoreFoundation.CFRelease(cert)
-    except Exception:
-        # We need to free the array before the exception bubbles further.
-        # We only want to do that if an error occurs: otherwise, the caller
-        # should free.
-        CoreFoundation.CFRelease(cert_array)
-        raise
-
-    return cert_array
-
-
-def _is_cert(item):
-    """
-    Returns True if a given CFTypeRef is a certificate.
-    """
-    expected = Security.SecCertificateGetTypeID()
-    return CoreFoundation.CFGetTypeID(item) == expected
-
-
-def _is_identity(item):
-    """
-    Returns True if a given CFTypeRef is an identity.
-    """
-    expected = Security.SecIdentityGetTypeID()
-    return CoreFoundation.CFGetTypeID(item) == expected
-
-
-def _temporary_keychain():
-    """
-    This function creates a temporary Mac keychain that we can use to work with
-    credentials. This keychain uses a one-time password and a temporary file to
-    store the data. We expect to have one keychain per socket. The returned
-    SecKeychainRef must be freed by the caller, including calling
-    SecKeychainDelete.
-
-    Returns a tuple of the SecKeychainRef and the path to the temporary
-    directory that contains it.
-    """
-    # Unfortunately, SecKeychainCreate requires a path to a keychain. This
-    # means we cannot use mkstemp to use a generic temporary file. Instead,
|
225 |
-
# we're going to create a temporary directory and a filename to use there.
|
226 |
-
# This filename will be 8 random bytes expanded into base64. We also need
|
227 |
-
# some random bytes to password-protect the keychain we're creating, so we
|
228 |
-
# ask for 40 random bytes.
|
229 |
-
random_bytes = os.urandom(40)
|
230 |
-
filename = base64.b16encode(random_bytes[:8]).decode("utf-8")
|
231 |
-
password = base64.b16encode(random_bytes[8:]) # Must be valid UTF-8
|
232 |
-
tempdirectory = tempfile.mkdtemp()
|
233 |
-
|
234 |
-
keychain_path = os.path.join(tempdirectory, filename).encode("utf-8")
|
235 |
-
|
236 |
-
# We now want to create the keychain itself.
|
237 |
-
keychain = Security.SecKeychainRef()
|
238 |
-
status = Security.SecKeychainCreate(
|
239 |
-
keychain_path, len(password), password, False, None, ctypes.byref(keychain)
|
240 |
-
)
|
241 |
-
_assert_no_error(status)
|
242 |
-
|
243 |
-
# Having created the keychain, we want to pass it off to the caller.
|
244 |
-
return keychain, tempdirectory
|
245 |
-
|
246 |
-
|
247 |
-
def _load_items_from_file(keychain, path):
|
248 |
-
"""
|
249 |
-
Given a single file, loads all the trust objects from it into arrays and
|
250 |
-
the keychain.
|
251 |
-
Returns a tuple of lists: the first list is a list of identities, the
|
252 |
-
second a list of certs.
|
253 |
-
"""
|
254 |
-
certificates = []
|
255 |
-
identities = []
|
256 |
-
result_array = None
|
257 |
-
|
258 |
-
with open(path, "rb") as f:
|
259 |
-
raw_filedata = f.read()
|
260 |
-
|
261 |
-
try:
|
262 |
-
filedata = CoreFoundation.CFDataCreate(
|
263 |
-
CoreFoundation.kCFAllocatorDefault, raw_filedata, len(raw_filedata)
|
264 |
-
)
|
265 |
-
result_array = CoreFoundation.CFArrayRef()
|
266 |
-
result = Security.SecItemImport(
|
267 |
-
filedata, # cert data
|
268 |
-
None, # Filename, leaving it out for now
|
269 |
-
None, # What the type of the file is, we don't care
|
270 |
-
None, # what's in the file, we don't care
|
271 |
-
0, # import flags
|
272 |
-
None, # key params, can include passphrase in the future
|
273 |
-
keychain, # The keychain to insert into
|
274 |
-
ctypes.byref(result_array), # Results
|
275 |
-
)
|
276 |
-
_assert_no_error(result)
|
277 |
-
|
278 |
-
# A CFArray is not very useful to us as an intermediary
|
279 |
-
# representation, so we are going to extract the objects we want
|
280 |
-
# and then free the array. We don't need to keep hold of keys: the
|
281 |
-
# keychain already has them!
|
282 |
-
result_count = CoreFoundation.CFArrayGetCount(result_array)
|
283 |
-
for index in range(result_count):
|
284 |
-
item = CoreFoundation.CFArrayGetValueAtIndex(result_array, index)
|
285 |
-
item = ctypes.cast(item, CoreFoundation.CFTypeRef)
|
286 |
-
|
287 |
-
if _is_cert(item):
|
288 |
-
CoreFoundation.CFRetain(item)
|
289 |
-
certificates.append(item)
|
290 |
-
elif _is_identity(item):
|
291 |
-
CoreFoundation.CFRetain(item)
|
292 |
-
identities.append(item)
|
293 |
-
finally:
|
294 |
-
if result_array:
|
295 |
-
CoreFoundation.CFRelease(result_array)
|
296 |
-
|
297 |
-
CoreFoundation.CFRelease(filedata)
|
298 |
-
|
299 |
-
return (identities, certificates)
|
300 |
-
|
301 |
-
|
302 |
-
def _load_client_cert_chain(keychain, *paths):
|
303 |
-
"""
|
304 |
-
Load certificates and maybe keys from a number of files. Has the end goal
|
305 |
-
of returning a CFArray containing one SecIdentityRef, and then zero or more
|
306 |
-
SecCertificateRef objects, suitable for use as a client certificate trust
|
307 |
-
chain.
|
308 |
-
"""
|
309 |
-
# Ok, the strategy.
|
310 |
-
#
|
311 |
-
# This relies on knowing that macOS will not give you a SecIdentityRef
|
312 |
-
# unless you have imported a key into a keychain. This is a somewhat
|
313 |
-
# artificial limitation of macOS (for example, it doesn't necessarily
|
314 |
-
# affect iOS), but there is nothing inside Security.framework that lets you
|
315 |
-
# get a SecIdentityRef without having a key in a keychain.
|
316 |
-
#
|
317 |
-
# So the policy here is we take all the files and iterate them in order.
|
318 |
-
# Each one will use SecItemImport to have one or more objects loaded from
|
319 |
-
# it. We will also point at a keychain that macOS can use to work with the
|
320 |
-
# private key.
|
321 |
-
#
|
322 |
-
# Once we have all the objects, we'll check what we actually have. If we
|
323 |
-
# already have a SecIdentityRef in hand, fab: we'll use that. Otherwise,
|
324 |
-
# we'll take the first certificate (which we assume to be our leaf) and
|
325 |
-
# ask the keychain to give us a SecIdentityRef with that cert's associated
|
326 |
-
# key.
|
327 |
-
#
|
328 |
-
# We'll then return a CFArray containing the trust chain: one
|
329 |
-
# SecIdentityRef and then zero-or-more SecCertificateRef objects. The
|
330 |
-
# responsibility for freeing this CFArray will be with the caller. This
|
331 |
-
# CFArray must remain alive for the entire connection, so in practice it
|
332 |
-
# will be stored with a single SSLSocket, along with the reference to the
|
333 |
-
# keychain.
|
334 |
-
certificates = []
|
335 |
-
identities = []
|
336 |
-
|
337 |
-
# Filter out bad paths.
|
338 |
-
paths = (path for path in paths if path)
|
339 |
-
|
340 |
-
try:
|
341 |
-
for file_path in paths:
|
342 |
-
new_identities, new_certs = _load_items_from_file(keychain, file_path)
|
343 |
-
identities.extend(new_identities)
|
344 |
-
certificates.extend(new_certs)
|
345 |
-
|
346 |
-
# Ok, we have everything. The question is: do we have an identity? If
|
347 |
-
# not, we want to grab one from the first cert we have.
|
348 |
-
if not identities:
|
349 |
-
new_identity = Security.SecIdentityRef()
|
350 |
-
status = Security.SecIdentityCreateWithCertificate(
|
351 |
-
keychain, certificates[0], ctypes.byref(new_identity)
|
352 |
-
)
|
353 |
-
_assert_no_error(status)
|
354 |
-
identities.append(new_identity)
|
355 |
-
|
356 |
-
# We now want to release the original certificate, as we no longer
|
357 |
-
# need it.
|
358 |
-
CoreFoundation.CFRelease(certificates.pop(0))
|
359 |
-
|
360 |
-
# We now need to build a new CFArray that holds the trust chain.
|
361 |
-
trust_chain = CoreFoundation.CFArrayCreateMutable(
|
362 |
-
CoreFoundation.kCFAllocatorDefault,
|
363 |
-
0,
|
364 |
-
ctypes.byref(CoreFoundation.kCFTypeArrayCallBacks),
|
365 |
-
)
|
366 |
-
for item in itertools.chain(identities, certificates):
|
367 |
-
# ArrayAppendValue does a CFRetain on the item. That's fine,
|
368 |
-
# because the finally block will release our other refs to them.
|
369 |
-
CoreFoundation.CFArrayAppendValue(trust_chain, item)
|
370 |
-
|
371 |
-
return trust_chain
|
372 |
-
finally:
|
373 |
-
for obj in itertools.chain(identities, certificates):
|
374 |
-
CoreFoundation.CFRelease(obj)
|
375 |
-
|
376 |
-
|
377 |
-
TLS_PROTOCOL_VERSIONS = {
|
378 |
-
"SSLv2": (0, 2),
|
379 |
-
"SSLv3": (3, 0),
|
380 |
-
"TLSv1": (3, 1),
|
381 |
-
"TLSv1.1": (3, 2),
|
382 |
-
"TLSv1.2": (3, 3),
|
383 |
-
}
|
384 |
-
|
385 |
-
|
386 |
-
def _build_tls_unknown_ca_alert(version):
|
387 |
-
"""
|
388 |
-
Builds a TLS alert record for an unknown CA.
|
389 |
-
"""
|
390 |
-
ver_maj, ver_min = TLS_PROTOCOL_VERSIONS[version]
|
391 |
-
severity_fatal = 0x02
|
392 |
-
description_unknown_ca = 0x30
|
393 |
-
msg = struct.pack(">BB", severity_fatal, description_unknown_ca)
|
394 |
-
msg_len = len(msg)
|
395 |
-
record_type_alert = 0x15
|
396 |
-
record = struct.pack(">BBBH", record_type_alert, ver_maj, ver_min, msg_len) + msg
|
397 |
-
return record
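The fragment above ends with the alert-record builder, which uses only the standard library. As a sanity check of the byte layout it produces, here is a minimal self-contained sketch for a TLSv1.2 unknown-CA alert; the constants mirror the ones in `_build_tls_unknown_ca_alert`.

```python
import struct

# Mirrors _build_tls_unknown_ca_alert for "TLSv1.2" -> version bytes (3, 3).
severity_fatal = 0x02          # alert level: fatal
description_unknown_ca = 0x30  # alert description: unknown_ca (48)
record_type_alert = 0x15       # TLS record type: alert (21)

# Two-byte alert payload: level, description.
msg = struct.pack(">BB", severity_fatal, description_unknown_ca)
# Record header: type, version major/minor, big-endian payload length.
record = struct.pack(">BBBH", record_type_alert, 3, 3, len(msg)) + msg

print(record.hex())  # 15030300020230 -- a 7-byte alert record
```

The `>BBBH` format string is what keeps the length field big-endian, as TLS requires on the wire.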
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/packaging/markers.py
DELETED
@@ -1,304 +0,0 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.

import operator
import os
import platform
import sys
from typing import Any, Callable, Dict, List, Optional, Tuple, Union

from setuptools.extern.pyparsing import (  # noqa: N817
    Forward,
    Group,
    Literal as L,
    ParseException,
    ParseResults,
    QuotedString,
    ZeroOrMore,
    stringEnd,
    stringStart,
)

from .specifiers import InvalidSpecifier, Specifier

__all__ = [
    "InvalidMarker",
    "UndefinedComparison",
    "UndefinedEnvironmentName",
    "Marker",
    "default_environment",
]

Operator = Callable[[str, str], bool]


class InvalidMarker(ValueError):
    """
    An invalid marker was found, users should refer to PEP 508.
    """


class UndefinedComparison(ValueError):
    """
    An invalid operation was attempted on a value that doesn't support it.
    """


class UndefinedEnvironmentName(ValueError):
    """
    A name was attempted to be used that does not exist inside of the
    environment.
    """


class Node:
    def __init__(self, value: Any) -> None:
        self.value = value

    def __str__(self) -> str:
        return str(self.value)

    def __repr__(self) -> str:
        return f"<{self.__class__.__name__}('{self}')>"

    def serialize(self) -> str:
        raise NotImplementedError


class Variable(Node):
    def serialize(self) -> str:
        return str(self)


class Value(Node):
    def serialize(self) -> str:
        return f'"{self}"'


class Op(Node):
    def serialize(self) -> str:
        return str(self)


VARIABLE = (
    L("implementation_version")
    | L("platform_python_implementation")
    | L("implementation_name")
    | L("python_full_version")
    | L("platform_release")
    | L("platform_version")
    | L("platform_machine")
    | L("platform_system")
    | L("python_version")
    | L("sys_platform")
    | L("os_name")
    | L("os.name")  # PEP-345
    | L("sys.platform")  # PEP-345
    | L("platform.version")  # PEP-345
    | L("platform.machine")  # PEP-345
    | L("platform.python_implementation")  # PEP-345
    | L("python_implementation")  # undocumented setuptools legacy
    | L("extra")  # PEP-508
)
ALIASES = {
    "os.name": "os_name",
    "sys.platform": "sys_platform",
    "platform.version": "platform_version",
    "platform.machine": "platform_machine",
    "platform.python_implementation": "platform_python_implementation",
    "python_implementation": "platform_python_implementation",
}
VARIABLE.setParseAction(lambda s, l, t: Variable(ALIASES.get(t[0], t[0])))

VERSION_CMP = (
    L("===") | L("==") | L(">=") | L("<=") | L("!=") | L("~=") | L(">") | L("<")
)

MARKER_OP = VERSION_CMP | L("not in") | L("in")
MARKER_OP.setParseAction(lambda s, l, t: Op(t[0]))

MARKER_VALUE = QuotedString("'") | QuotedString('"')
MARKER_VALUE.setParseAction(lambda s, l, t: Value(t[0]))

BOOLOP = L("and") | L("or")

MARKER_VAR = VARIABLE | MARKER_VALUE

MARKER_ITEM = Group(MARKER_VAR + MARKER_OP + MARKER_VAR)
MARKER_ITEM.setParseAction(lambda s, l, t: tuple(t[0]))

LPAREN = L("(").suppress()
RPAREN = L(")").suppress()

MARKER_EXPR = Forward()
MARKER_ATOM = MARKER_ITEM | Group(LPAREN + MARKER_EXPR + RPAREN)
MARKER_EXPR << MARKER_ATOM + ZeroOrMore(BOOLOP + MARKER_EXPR)

MARKER = stringStart + MARKER_EXPR + stringEnd


def _coerce_parse_result(results: Union[ParseResults, List[Any]]) -> List[Any]:
    if isinstance(results, ParseResults):
        return [_coerce_parse_result(i) for i in results]
    else:
        return results


def _format_marker(
    marker: Union[List[str], Tuple[Node, ...], str], first: Optional[bool] = True
) -> str:

    assert isinstance(marker, (list, tuple, str))

    # Sometimes we have a structure like [[...]] which is a single item list
    # where the single item is itself it's own list. In that case we want skip
    # the rest of this function so that we don't get extraneous () on the
    # outside.
    if (
        isinstance(marker, list)
        and len(marker) == 1
        and isinstance(marker[0], (list, tuple))
    ):
        return _format_marker(marker[0])

    if isinstance(marker, list):
        inner = (_format_marker(m, first=False) for m in marker)
        if first:
            return " ".join(inner)
        else:
            return "(" + " ".join(inner) + ")"
    elif isinstance(marker, tuple):
        return " ".join([m.serialize() for m in marker])
    else:
        return marker


_operators: Dict[str, Operator] = {
    "in": lambda lhs, rhs: lhs in rhs,
    "not in": lambda lhs, rhs: lhs not in rhs,
    "<": operator.lt,
    "<=": operator.le,
    "==": operator.eq,
    "!=": operator.ne,
    ">=": operator.ge,
    ">": operator.gt,
}


def _eval_op(lhs: str, op: Op, rhs: str) -> bool:
    try:
        spec = Specifier("".join([op.serialize(), rhs]))
    except InvalidSpecifier:
        pass
    else:
        return spec.contains(lhs)

    oper: Optional[Operator] = _operators.get(op.serialize())
    if oper is None:
        raise UndefinedComparison(f"Undefined {op!r} on {lhs!r} and {rhs!r}.")

    return oper(lhs, rhs)


class Undefined:
    pass


_undefined = Undefined()


def _get_env(environment: Dict[str, str], name: str) -> str:
    value: Union[str, Undefined] = environment.get(name, _undefined)

    if isinstance(value, Undefined):
        raise UndefinedEnvironmentName(
            f"{name!r} does not exist in evaluation environment."
        )

    return value


def _evaluate_markers(markers: List[Any], environment: Dict[str, str]) -> bool:
    groups: List[List[bool]] = [[]]

    for marker in markers:
        assert isinstance(marker, (list, tuple, str))

        if isinstance(marker, list):
            groups[-1].append(_evaluate_markers(marker, environment))
        elif isinstance(marker, tuple):
            lhs, op, rhs = marker

            if isinstance(lhs, Variable):
                lhs_value = _get_env(environment, lhs.value)
                rhs_value = rhs.value
            else:
                lhs_value = lhs.value
                rhs_value = _get_env(environment, rhs.value)

            groups[-1].append(_eval_op(lhs_value, op, rhs_value))
        else:
            assert marker in ["and", "or"]
            if marker == "or":
                groups.append([])

    return any(all(item) for item in groups)


def format_full_version(info: "sys._version_info") -> str:
    version = "{0.major}.{0.minor}.{0.micro}".format(info)
    kind = info.releaselevel
    if kind != "final":
        version += kind[0] + str(info.serial)
    return version


def default_environment() -> Dict[str, str]:
    iver = format_full_version(sys.implementation.version)
    implementation_name = sys.implementation.name
    return {
        "implementation_name": implementation_name,
        "implementation_version": iver,
        "os_name": os.name,
        "platform_machine": platform.machine(),
        "platform_release": platform.release(),
        "platform_system": platform.system(),
        "platform_version": platform.version(),
        "python_full_version": platform.python_version(),
        "platform_python_implementation": platform.python_implementation(),
        "python_version": ".".join(platform.python_version_tuple()[:2]),
        "sys_platform": sys.platform,
    }


class Marker:
    def __init__(self, marker: str) -> None:
        try:
            self._markers = _coerce_parse_result(MARKER.parseString(marker))
        except ParseException as e:
            raise InvalidMarker(
                f"Invalid marker: {marker!r}, parse error at "
                f"{marker[e.loc : e.loc + 8]!r}"
            )

    def __str__(self) -> str:
        return _format_marker(self._markers)

    def __repr__(self) -> str:
        return f"<Marker('{self}')>"

    def evaluate(self, environment: Optional[Dict[str, str]] = None) -> bool:
        """Evaluate a marker.

        Return the boolean from evaluating the given marker against the
        environment. environment is an optional argument to override all or
        part of the determined environment.

        The environment is determined from the current Python process.
        """
        current_environment = default_environment()
        if environment is not None:
            current_environment.update(environment)

        return _evaluate_markers(self._markers, current_environment)
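The `_evaluate_markers` helper above gives `and` higher precedence than `or` by collecting `and`-connected comparison results into groups and starting a new group at each `or`, then computing `any(all(group))`. A minimal dependency-free sketch of just that grouping rule, on pre-evaluated booleans (the function name and inputs are hypothetical, for illustration only):

```python
def evaluate_bool_sequence(tokens):
    """Evaluate a flat [bool, "and"/"or", bool, ...] sequence the way
    _evaluate_markers does: "and" binds tighter than "or"."""
    groups = [[]]
    for tok in tokens:
        if tok == "or":
            groups.append([])       # an "or" starts a new and-group
        elif tok == "and":
            continue                # "and" keeps results in the current group
        else:
            groups[-1].append(tok)  # a pre-evaluated comparison result
    # Each group is and-ed internally; groups are or-ed together.
    return any(all(group) for group in groups)

# True and False or True  ->  (True and False) or True  ->  True
print(evaluate_bool_sequence([True, "and", False, "or", True]))  # True
```

This is why PEP 508 markers like `python_version >= "3.6" and extra == "dev" or sys_platform == "win32"` evaluate the way they do without any explicit parenthesization.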
spaces/AzumaSeren100/XuanShen-Bert-VITS2/text/english_bert_mock.py
DELETED
@@ -1,5 +0,0 @@
import torch


def get_bert_feature(norm_text, word2ph):
    return torch.zeros(1024, sum(word2ph))
spaces/Bart92/RVC_HF/infer/lib/infer_pack/modules/F0Predictor/__init__.py
DELETED
File without changes
spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/model_param_init.py
DELETED
@@ -1,69 +0,0 @@
import json
import os
import pathlib

default_param = {}
default_param["bins"] = 768
default_param["unstable_bins"] = 9  # training only
default_param["reduction_bins"] = 762  # training only
default_param["sr"] = 44100
default_param["pre_filter_start"] = 757
default_param["pre_filter_stop"] = 768
default_param["band"] = {}


default_param["band"][1] = {
    "sr": 11025,
    "hl": 128,
    "n_fft": 960,
    "crop_start": 0,
    "crop_stop": 245,
    "lpf_start": 61,  # inference only
    "res_type": "polyphase",
}

default_param["band"][2] = {
    "sr": 44100,
    "hl": 512,
    "n_fft": 1536,
    "crop_start": 24,
    "crop_stop": 547,
    "hpf_start": 81,  # inference only
    "res_type": "sinc_best",
}


def int_keys(d):
    r = {}
    for k, v in d:
        if k.isdigit():
            k = int(k)
        r[k] = v
    return r


class ModelParameters(object):
    def __init__(self, config_path=""):
        if ".pth" == pathlib.Path(config_path).suffix:
            import zipfile

            with zipfile.ZipFile(config_path, "r") as zip:
                self.param = json.loads(
                    zip.read("param.json"), object_pairs_hook=int_keys
                )
        elif ".json" == pathlib.Path(config_path).suffix:
            with open(config_path, "r") as f:
                self.param = json.loads(f.read(), object_pairs_hook=int_keys)
        else:
            self.param = default_param

        for k in [
            "mid_side",
            "mid_side_b",
            "mid_side_b2",
            "stereo_w",
            "stereo_n",
            "reverse",
        ]:
            if k not in self.param:
                self.param[k] = False
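The `int_keys` hook above exists because JSON object keys are always strings, but the band table indexes its entries by integer. Passed as `object_pairs_hook`, it receives each object's key/value pairs and converts digit-string keys back to ints. A self-contained sketch of that behavior (the sample JSON values echo the defaults above):

```python
import json


def int_keys(pairs):
    # Same idea as the hook above: convert digit-string keys to ints.
    r = {}
    for k, v in pairs:
        if k.isdigit():
            k = int(k)
        r[k] = v
    return r


# The hook runs on every object, including nested ones, so the band
# indices come back as 1 and 2 rather than "1" and "2".
param = json.loads(
    '{"bins": 768, "band": {"1": {"sr": 11025}, "2": {"sr": 44100}}}',
    object_pairs_hook=int_keys,
)
print(sorted(param["band"]))  # [1, 2]
```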
spaces/Bart92/RVC_HF/lib/uvr5_pack/lib_v5/nets_537238KB.py
DELETED
@@ -1,123 +0,0 @@
import torch
import numpy as np
from torch import nn
import torch.nn.functional as F

from . import layers_537238KB as layers


class BaseASPPNet(nn.Module):
    def __init__(self, nin, ch, dilations=(4, 8, 16)):
        super(BaseASPPNet, self).__init__()
        self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
        self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
        self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
        self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)

        self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)

        self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
        self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
        self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
        self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)

    def __call__(self, x):
        h, e1 = self.enc1(x)
        h, e2 = self.enc2(h)
        h, e3 = self.enc3(h)
        h, e4 = self.enc4(h)

        h = self.aspp(h)

        h = self.dec4(h, e4)
        h = self.dec3(h, e3)
        h = self.dec2(h, e2)
        h = self.dec1(h, e1)

        return h


class CascadedASPPNet(nn.Module):
    def __init__(self, n_fft):
        super(CascadedASPPNet, self).__init__()
        self.stg1_low_band_net = BaseASPPNet(2, 64)
        self.stg1_high_band_net = BaseASPPNet(2, 64)

        self.stg2_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0)
        self.stg2_full_band_net = BaseASPPNet(32, 64)

        self.stg3_bridge = layers.Conv2DBNActiv(130, 64, 1, 1, 0)
        self.stg3_full_band_net = BaseASPPNet(64, 128)

        self.out = nn.Conv2d(128, 2, 1, bias=False)
        self.aux1_out = nn.Conv2d(64, 2, 1, bias=False)
        self.aux2_out = nn.Conv2d(64, 2, 1, bias=False)

        self.max_bin = n_fft // 2
        self.output_bin = n_fft // 2 + 1

        self.offset = 128

    def forward(self, x, aggressiveness=None):
        mix = x.detach()
        x = x.clone()

        x = x[:, :, : self.max_bin]

        bandw = x.size()[2] // 2
        aux1 = torch.cat(
            [
                self.stg1_low_band_net(x[:, :, :bandw]),
                self.stg1_high_band_net(x[:, :, bandw:]),
            ],
            dim=2,
        )

        h = torch.cat([x, aux1], dim=1)
        aux2 = self.stg2_full_band_net(self.stg2_bridge(h))

        h = torch.cat([x, aux1, aux2], dim=1)
        h = self.stg3_full_band_net(self.stg3_bridge(h))

        mask = torch.sigmoid(self.out(h))
        mask = F.pad(
            input=mask,
            pad=(0, 0, 0, self.output_bin - mask.size()[2]),
            mode="replicate",
        )

        if self.training:
            aux1 = torch.sigmoid(self.aux1_out(aux1))
            aux1 = F.pad(
                input=aux1,
                pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
                mode="replicate",
            )
            aux2 = torch.sigmoid(self.aux2_out(aux2))
            aux2 = F.pad(
                input=aux2,
                pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
                mode="replicate",
            )
            return mask * mix, aux1 * mix, aux2 * mix
        else:
            if aggressiveness:
                mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
                    mask[:, :, : aggressiveness["split_bin"]],
                    1 + aggressiveness["value"] / 3,
                )
                mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
                    mask[:, :, aggressiveness["split_bin"] :],
                    1 + aggressiveness["value"],
                )

            return mask * mix

    def predict(self, x_mag, aggressiveness=None):
        h = self.forward(x_mag, aggressiveness)

        if self.offset > 0:
            h = h[:, :, :, self.offset : -self.offset]
            assert h.size()[3] > 0

        return h
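At inference time the network above sharpens the sigmoid mask with a frequency-dependent power law: bins below `split_bin` get exponent `1 + value / 3`, bins at or above it get `1 + value`, pushing small mask values toward zero. A dependency-free numeric sketch of that post-processing step (plain Python lists instead of torch tensors; the values are hypothetical):

```python
def apply_aggressiveness(mask, split_bin, value):
    """Mimic the torch.pow post-processing: mask values are in [0, 1],
    so raising them to an exponent > 1 shrinks them, with a stronger
    effect above split_bin."""
    out = []
    for i, m in enumerate(mask):
        exponent = 1 + value / 3 if i < split_bin else 1 + value
        out.append(m ** exponent)
    return out


mask = [0.9, 0.5, 0.1, 0.9, 0.5, 0.1]
sharpened = apply_aggressiveness(mask, split_bin=3, value=0.3)
# Bins 0-2 use exponent 1.1; bins 3-5 use exponent 1.3, so every
# sharpened value is <= the original and the high bins shrink more.
```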
spaces/Benson/text-generation/Examples/Descarga De Vdeo 5 Seg.md
DELETED
@@ -1,64 +0,0 @@
<br />
<h1>How to Download Videos in 5 Seconds or Less</h1>
<p>Do you love watching videos online but hate waiting for them to load or buffer? Would you like to save your favorite videos to your device and watch them anytime, anywhere? Do you want to edit, share, or convert your downloaded videos without any hassle?</p>
<p>If you answered yes to any of these questions, then you need a video downloader. A video downloader is a tool that lets you save online videos to your device so you can enjoy them offline. However, not all video downloaders are created equal. Some are slow, some are low quality, and some are incompatible with your device or format.</p>
<h2>video download 5 sec</h2><br /><p><b><b>Download File</b> ✺✺✺ <a href="https://bltlly.com/2v6KuC">https://bltlly.com/2v6KuC</a></b></p><br /><br />
<p>That is why we created this article to show you how to download videos in 5 seconds or less with a free online tool that solves all of these problems. Read on to learn more.</p>
<h2>What is video downloading and why is it useful?</h2>
<p>Video downloading is the process of saving online videos to your device, such as your computer, smartphone, tablet, or USB drive. By downloading videos, you can enjoy them without worrying about your internet connection, speed, or data usage. You can also edit, share, or convert your downloaded videos to suit your needs and preferences.</p>
<p>Video downloading is useful for many reasons. For example, you can download videos to:</p>
<ul>
<li>Watch them later when you have more time or when you are somewhere without internet access</li>
<li>Save your favorite videos and build your own collection or playlist</li>
<li>Share them with your friends, family, or social media followers</li>
<li>Edit them and add your own touch or creativity</li>
<li>Convert them to different formats and play them on different devices</li>
</ul>
<p>There are many online videos you might want to download, such as:</p>
<ul>
<li>Facebook videos: Facebook is the world's largest social network, with over 2.8 billion monthly active users. You can watch and download videos from your friends, pages, groups, or stories on Facebook.</li>
<li>Instagram videos: Instagram is a photo- and video-sharing app with over 1 billion monthly active users. You can watch and download videos from your feed, stories, reels, or IGTV on Instagram.</li>
</ul>
<h2>What are some challenges or problems with video downloading?</h2>
<p>While downloading videos is a great way to enjoy online videos offline, it is not always easy or smooth. There are some common challenges or problems you may run into when downloading videos, such as:</p>
<ul>
<li>Slow speed: Depending on your internet connection and the size of the video file, downloading videos can take a long time. This can be frustrating and time-consuming, especially if you want to download several videos at once.</li>
<li>Low quality: Sometimes the quality of the downloaded video is not as good as the original. This can affect your viewing experience and satisfaction. You may see blurry images, distorted sound, or pixelated colors.</li>
<li>Format compatibility: Not all video formats work on all devices or players. For example, some devices or players may not support MKV, AVI, MOV, or FLV. This means you may not be able to play your downloaded videos on your device or player unless you convert them to a compatible format.</li>
</ul>
<p>These problems can ruin your video downloading experience and make you regret downloading videos in the first place. However, there are some solutions or tips that can help you overcome these problems and download videos quickly and smoothly. Here are some of them:</p>
<ul>
<li>Adjust the settings: Before downloading a video, you can adjust the settings to suit your needs and preferences. For example, you can choose the download speed, quality, format, resolution, and destination folder for your video.</li>
<li>Convert the format: If the downloaded video is not compatible with your device or player, you can convert it to a compatible format with Free MP4 Downloader. Free MP4 Downloader can convert any video format to MP4, which is the most widely supported and versatile video format.</li>
</ul>
<h2>How to download videos in 5 seconds or less with a free online tool?</h2>
<p>Now that you know what video downloading is, why it is useful, and what some of the challenges or problems with it are, you may be wondering how to download videos in 5 seconds or less with a free online tool. Well, the answer is simple: use Free MP4 Downloader.</p>
<p></p>
<p>Free MP4 Downloader is a free online tool that can help you download videos from more than 1,000 content streaming websites, such as YouTube, Facebook, Instagram, Vimeo, Dailymotion, and more. It can download videos in MP4 format, which is the most compatible and versatile video format. It can also download videos in HD quality, up to 1080p. And best of all, it can download videos in 5 seconds or less, depending on your internet speed and the size of the video file.</p>
<p>Here is how to use Free MP4 Downloader to download videos in 5 seconds or less:</p>
<ol>
<li>Go to the Free MP4 Downloader website: <a href="">https://freemp4downloader.com</a></li>
<li>Copy the URL of the video you want to download from any website and paste it into the Free MP4 Downloader search box</li>
<li>Click the "Download" button and wait a few seconds for Free MP4 Downloader to analyze the video and generate the download links</li>
<li>Choose the quality, format, resolution, and size of the video you want to download and click the "Download" button again</li>
</ol>
<h2>Conclusion</h2>
<p>In conclusion, video downloading is a great way to enjoy online videos without having to worry about your internet connection, speed, or data usage. You can also edit, share, or convert your downloaded videos to suit your needs and preferences. However, video downloading can also be challenging or problematic if you run into issues such as slow speed, low quality, or format compatibility.</p>
<p>That is why we recommend using Free MP4 Downloader to download videos in 5 seconds or less with a free online tool. Free MP4 Downloader can download videos from any website, in any format, at any speed, and in any quality. It can also convert any video format to MP4, which is the most compatible and versatile video format. It is safe and easy to use.</p>
<p>So what are you waiting for? Try Free MP4 Downloader today and see for yourself how fast and easy it is to download videos in 5 seconds or less. And don't forget to share your feedback with us. We would love to hear from you.</p>
<h2>FAQs</h2>
<h3>Q: Is Free MP4 Downloader free?</h3>
<p>A: Yes, Free MP4 Downloader is completely free. You do not need to sign up, register, or pay anything to use it.</p>
<h3>Q: Is Free MP4 Downloader safe?</h3>
<p>A: Yes, Free MP4 Downloader is safe. It does not contain any viruses, malware, spyware, or ads. It also does not collect or store any of your personal data.</p>
<h3>Q: Is Free MP4 Downloader legal?</h3>
<p>A: Yes, Free MP4 Downloader is legal as long as you use it for personal, non-commercial purposes. However, you must respect the intellectual property rights of the original video owners and creators. You should not download or distribute any video that is protected by copyright law or that violates the terms of service or privacy policies of the websites that host it.</p>
<h3>Q: How many videos can I download with Free MP4 Downloader?</h3>
<h3>Q: Can I download videos from websites other than YouTube, Facebook, and Instagram with Free MP4 Downloader?</h3>
<p>A: Yes, you can download videos from more than 1,000 content streaming websites with Free MP4 Downloader. Some of these websites include Vimeo, Dailymotion, TikTok, Reddit, and more. You can check the full list of supported websites on the Free MP4 Downloader website.</p> 64aa2da5cf<br />
<br />
<br />
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/requests/status_codes.py
DELETED
@@ -1,128 +0,0 @@
r"""
The ``codes`` object defines a mapping from common names for HTTP statuses
to their numerical codes, accessible either as attributes or as dictionary
items.

Example::

    >>> import requests
    >>> requests.codes['temporary_redirect']
    307
    >>> requests.codes.teapot
    418
    >>> requests.codes['\o/']
    200

Some codes have multiple names, and both upper- and lower-case versions of
the names are allowed. For example, ``codes.ok``, ``codes.OK``, and
``codes.okay`` all correspond to the HTTP status code 200.
"""

from .structures import LookupDict

_codes = {
    # Informational.
    100: ("continue",),
    101: ("switching_protocols",),
    102: ("processing",),
    103: ("checkpoint",),
    122: ("uri_too_long", "request_uri_too_long"),
    200: ("ok", "okay", "all_ok", "all_okay", "all_good", "\\o/", "✓"),
    201: ("created",),
    202: ("accepted",),
    203: ("non_authoritative_info", "non_authoritative_information"),
    204: ("no_content",),
    205: ("reset_content", "reset"),
    206: ("partial_content", "partial"),
    207: ("multi_status", "multiple_status", "multi_stati", "multiple_stati"),
    208: ("already_reported",),
    226: ("im_used",),
    # Redirection.
    300: ("multiple_choices",),
    301: ("moved_permanently", "moved", "\\o-"),
    302: ("found",),
    303: ("see_other", "other"),
    304: ("not_modified",),
    305: ("use_proxy",),
    306: ("switch_proxy",),
    307: ("temporary_redirect", "temporary_moved", "temporary"),
    308: (
        "permanent_redirect",
        "resume_incomplete",
        "resume",
    ),  # "resume" and "resume_incomplete" to be removed in 3.0
    # Client Error.
    400: ("bad_request", "bad"),
    401: ("unauthorized",),
    402: ("payment_required", "payment"),
    403: ("forbidden",),
    404: ("not_found", "-o-"),
    405: ("method_not_allowed", "not_allowed"),
    406: ("not_acceptable",),
    407: ("proxy_authentication_required", "proxy_auth", "proxy_authentication"),
    408: ("request_timeout", "timeout"),
    409: ("conflict",),
    410: ("gone",),
    411: ("length_required",),
    412: ("precondition_failed", "precondition"),
    413: ("request_entity_too_large",),
    414: ("request_uri_too_large",),
    415: ("unsupported_media_type", "unsupported_media", "media_type"),
    416: (
        "requested_range_not_satisfiable",
        "requested_range",
        "range_not_satisfiable",
    ),
    417: ("expectation_failed",),
    418: ("im_a_teapot", "teapot", "i_am_a_teapot"),
    421: ("misdirected_request",),
    422: ("unprocessable_entity", "unprocessable"),
    423: ("locked",),
    424: ("failed_dependency", "dependency"),
    425: ("unordered_collection", "unordered"),
    426: ("upgrade_required", "upgrade"),
    428: ("precondition_required", "precondition"),
    429: ("too_many_requests", "too_many"),
    431: ("header_fields_too_large", "fields_too_large"),
    444: ("no_response", "none"),
    449: ("retry_with", "retry"),
    450: ("blocked_by_windows_parental_controls", "parental_controls"),
    451: ("unavailable_for_legal_reasons", "legal_reasons"),
    499: ("client_closed_request",),
    # Server Error.
    500: ("internal_server_error", "server_error", "/o\\", "✗"),
    501: ("not_implemented",),
    502: ("bad_gateway",),
    503: ("service_unavailable", "unavailable"),
    504: ("gateway_timeout",),
    505: ("http_version_not_supported", "http_version"),
    506: ("variant_also_negotiates",),
    507: ("insufficient_storage",),
    509: ("bandwidth_limit_exceeded", "bandwidth"),
    510: ("not_extended",),
    511: ("network_authentication_required", "network_auth", "network_authentication"),
}

codes = LookupDict(name="status_codes")


def _init():
    for code, titles in _codes.items():
        for title in titles:
            setattr(codes, title, code)
            if not title.startswith(("\\", "/")):
                setattr(codes, title.upper(), code)

    def doc(code):
        names = ", ".join(f"``{n}``" for n in _codes[code])
        return "* %d: %s" % (code, names)

    global __doc__
    __doc__ = (
        __doc__ + "\n" + "\n".join(doc(code) for code in sorted(_codes))
        if __doc__ is not None
        else None
    )


_init()
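The deleted module above works by looping over `_codes` and `setattr`-ing every alias onto a `LookupDict`, so the same code is reachable as `codes.ok`, `codes.OK`, or `codes["ok"]`. A minimal self-contained sketch of that pattern, with a hypothetical stand-in for `requests.structures.LookupDict` and a trimmed-down `_codes` table:

```python
# Hypothetical stand-in for requests.structures.LookupDict: a dict whose
# item access falls back to instance attributes set via setattr().
class LookupDict(dict):
    def __init__(self, name=None):
        self.name = name
        super().__init__()

    def __getitem__(self, key):
        # Missing keys resolve to None instead of raising KeyError.
        return self.__dict__.get(key)

    def get(self, key, default=None):
        return self.__dict__.get(key, default)


# Trimmed-down version of the _codes table from the deleted module.
_codes = {200: ("ok", "okay"), 404: ("not_found",), 418: ("teapot",)}
codes = LookupDict(name="status_codes")

for code, titles in _codes.items():
    for title in titles:
        setattr(codes, title, code)           # codes.ok -> 200
        setattr(codes, title.upper(), code)   # codes.OK -> 200

print(codes.ok, codes["not_found"], codes.TEAPOT)  # 200 404 418
```

The real module additionally skips the upper-case alias for symbolic names like `"\\o/"`, and appends a generated code listing to the module docstring.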
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_distutils/config.py
DELETED
@@ -1,139 +0,0 @@
"""distutils.pypirc

Provides the PyPIRCCommand class, the base class for the command classes
that uses .pypirc in the distutils.command package.
"""
import os
from configparser import RawConfigParser

from distutils.cmd import Command

DEFAULT_PYPIRC = """\
[distutils]
index-servers =
    pypi

[pypi]
username:%s
password:%s
"""


class PyPIRCCommand(Command):
    """Base command that knows how to handle the .pypirc file"""

    DEFAULT_REPOSITORY = 'https://upload.pypi.org/legacy/'
    DEFAULT_REALM = 'pypi'
    repository = None
    realm = None

    user_options = [
        ('repository=', 'r', "url of repository [default: %s]" % DEFAULT_REPOSITORY),
        ('show-response', None, 'display full response text from server'),
    ]

    boolean_options = ['show-response']

    def _get_rc_file(self):
        """Returns rc file path."""
        return os.path.join(os.path.expanduser('~'), '.pypirc')

    def _store_pypirc(self, username, password):
        """Creates a default .pypirc file."""
        rc = self._get_rc_file()
        with os.fdopen(os.open(rc, os.O_CREAT | os.O_WRONLY, 0o600), 'w') as f:
            f.write(DEFAULT_PYPIRC % (username, password))

    def _read_pypirc(self):  # noqa: C901
        """Reads the .pypirc file."""
        rc = self._get_rc_file()
        if os.path.exists(rc):
            self.announce('Using PyPI login from %s' % rc)
            repository = self.repository or self.DEFAULT_REPOSITORY

            config = RawConfigParser()
            config.read(rc)
            sections = config.sections()
            if 'distutils' in sections:
                # let's get the list of servers
                index_servers = config.get('distutils', 'index-servers')
                _servers = [
                    server.strip()
                    for server in index_servers.split('\n')
                    if server.strip() != ''
                ]
                if _servers == []:
                    # nothing set, let's try to get the default pypi
                    if 'pypi' in sections:
                        _servers = ['pypi']
                    else:
                        # the file is not properly defined, returning
                        # an empty dict
                        return {}
                for server in _servers:
                    current = {'server': server}
                    current['username'] = config.get(server, 'username')

                    # optional params
                    for key, default in (
                        ('repository', self.DEFAULT_REPOSITORY),
                        ('realm', self.DEFAULT_REALM),
                        ('password', None),
                    ):
                        if config.has_option(server, key):
                            current[key] = config.get(server, key)
                        else:
                            current[key] = default

                    # work around people having "repository" for the "pypi"
                    # section of their config set to the HTTP (rather than
                    # HTTPS) URL
                    if server == 'pypi' and repository in (
                        self.DEFAULT_REPOSITORY,
                        'pypi',
                    ):
                        current['repository'] = self.DEFAULT_REPOSITORY
                        return current

                    if (
                        current['server'] == repository
                        or current['repository'] == repository
                    ):
                        return current
            elif 'server-login' in sections:
                # old format
                server = 'server-login'
                if config.has_option(server, 'repository'):
                    repository = config.get(server, 'repository')
                else:
                    repository = self.DEFAULT_REPOSITORY
                return {
                    'username': config.get(server, 'username'),
                    'password': config.get(server, 'password'),
                    'repository': repository,
                    'server': server,
                    'realm': self.DEFAULT_REALM,
                }

        return {}

    def _read_pypi_response(self, response):
        """Read and decode a PyPI HTTP response."""
        import cgi

        content_type = response.getheader('content-type', 'text/plain')
        encoding = cgi.parse_header(content_type)[1].get('charset', 'ascii')
        return response.read().decode(encoding)

    def initialize_options(self):
        """Initialize options."""
        self.repository = None
        self.realm = None
        self.show_response = 0

    def finalize_options(self):
        """Finalizes options."""
        if self.repository is None:
            self.repository = self.DEFAULT_REPOSITORY
        if self.realm is None:
            self.realm = self.DEFAULT_REALM
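The core of `_read_pypirc` above is plain `configparser` traversal: read the `[distutils]` section's `index-servers` list, then pull credentials from each named server section. A minimal sketch of that parsing, using a hypothetical `.pypirc` written to a temp directory (note that `configparser` accepts `:` as a key/value delimiter, which is why the `username:%s` template works):

```python
import os
import tempfile
from configparser import RawConfigParser

# Hypothetical .pypirc content, matching the DEFAULT_PYPIRC template.
PYPIRC = """\
[distutils]
index-servers =
    pypi

[pypi]
username:alice
password:s3cret
"""

with tempfile.TemporaryDirectory() as tmp:
    rc = os.path.join(tmp, ".pypirc")
    with open(rc, "w") as f:
        f.write(PYPIRC)

    config = RawConfigParser()
    config.read(rc)

    # Same traversal as _read_pypirc: list the servers, then read a section.
    index_servers = config.get("distutils", "index-servers")
    servers = [s.strip() for s in index_servers.split("\n") if s.strip()]
    current = {"server": servers[0], "username": config.get(servers[0], "username")}
    print(current)  # {'server': 'pypi', 'username': 'alice'}
```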
spaces/CVPR/Dual-Key_Backdoor_Attacks/bottom-up-attention-vqa/extract.py
DELETED
@@ -1,129 +0,0 @@
"""
=========================================================================================
Trojan VQA
Written by Matthew Walmer

This script is based on main.py. It has been modified to load a trained model, do an
evaluation round, and then export the results in the standard submission .json format.

In addition, the script can run a full extract_suite, which will export results for all
trojan configurations (clean, troj, troji, trojq)
=========================================================================================
"""
from __future__ import print_function

import os
import argparse
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
import numpy as np
import pickle
import json
import tqdm

from dataset import Dictionary, VQAFeatureDataset
import base_model
from train import train, compute_score_with_logits
import utils
from torch.autograd import Variable


def extract(model, dataloader, dataroot, results_path):
    # prepare to convert answers to words
    dict_file = os.path.join(dataroot, 'clean', "cache/trainval_label2ans.pkl")
    with open(dict_file, "rb") as f:
        label2ans = pickle.load(f)

    results = []
    for v, b, q, a, q_id in tqdm.tqdm(iter(dataloader)):
        q_id_np = q_id.numpy()
        v = Variable(v).cuda()
        b = Variable(b).cuda()
        q = Variable(q).cuda()
        pred = model(v, b, q, None)
        _, pred_max = torch.max(pred, dim=1)
        batch_size = list(v.size())[0]
        for i in range(batch_size):
            idx = int(pred_max[i])
            result = {}
            result["question_id"] = int(q_id_np[i])
            result["answer"] = label2ans[idx]
            results.append(result)

    with open(results_path, 'w') as outfile:
        json.dump(results, outfile)
    return


def extract_suite(model, dataroot, batch_size, ver, model_id, resdir, detector, nb):
    os.makedirs(resdir, exist_ok=True)
    dictionary = Dictionary.load_from_file(os.path.join(dataroot, 'dictionary.pkl'))
    if ver != 'clean':
        trojan_configs = ['clean', 'troj', 'troji', 'trojq']
    else:
        trojan_configs = ['clean']
    for tc in trojan_configs:
        if tc == 'clean':
            eval_dset = VQAFeatureDataset('val', dictionary, dataroot=dataroot, ver='clean', detector=detector,
                                          nb=nb, extra_iter=True, verbose=False)
        elif tc == 'troj':
            eval_dset = VQAFeatureDataset('val', dictionary, dataroot=dataroot, ver=ver, detector=detector,
                                          nb=nb, extra_iter=True, verbose=False)
        elif tc == 'troji':
            eval_dset = VQAFeatureDataset('val', dictionary, dataroot=dataroot, ver=ver, detector=detector,
                                          nb=nb, extra_iter=True, verbose=False, troj_i=True, troj_q=False)
        elif tc == 'trojq':
            eval_dset = VQAFeatureDataset('val', dictionary, dataroot=dataroot, ver=ver, detector=detector,
                                          nb=nb, extra_iter=True, verbose=False, troj_i=False, troj_q=True)
        eval_loader = DataLoader(eval_dset, batch_size, shuffle=True, num_workers=1)
        results_path = os.path.join(resdir, 'results_%s_%s.json' % (model_id, tc))
        print('%s: %s' % (tc, results_path))
        extract(model, eval_loader, dataroot, results_path)


def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('--num_hid', type=int, default=1024)
    parser.add_argument('--model', type=str, default='baseline0_newatt')
    parser.add_argument('--saveroot', type=str, default='saved_models')
    parser.add_argument('--epoch', type=int, default=20)
    parser.add_argument('--batch_size', type=int, default=512)
    parser.add_argument('--seed', type=int, default=1111, help='random seed')
    parser.add_argument('--dataroot', type=str, default='../data/')
    parser.add_argument('--ver', type=str, default='clean')
    parser.add_argument('--model_id', type=str, default='m0')
    parser.add_argument('--resdir', type=str, default='results/')
    parser.add_argument('--detector', type=str, default='R-50')
    parser.add_argument('--nb', type=int, default=36)
    args = parser.parse_args()
    return args


if __name__ == '__main__':
    args = parse_args()

    torch.manual_seed(args.seed)
    torch.cuda.manual_seed(args.seed)
    torch.backends.cudnn.benchmark = True

    # model set up
    dictionary = Dictionary.load_from_file(os.path.join(args.dataroot, 'dictionary.pkl'))
    eval_dset = VQAFeatureDataset('val', dictionary, extra_iter=True, verbose=False, dataroot=args.dataroot,
                                  ver=args.ver, detector=args.detector, nb=args.nb)
    constructor = 'build_%s' % args.model
    model = getattr(base_model, constructor)(eval_dset, args.num_hid).cuda()
    model.w_emb.init_embedding(os.path.join(args.dataroot, 'glove6b_init_300d.npy'))
    # model = nn.DataParallel(model).cuda()
    model = model.cuda()

    model_path = os.path.join(args.saveroot, args.model_id, 'model_%i.pth' % (args.epoch - 1))
    print('Loading saved model from: ' + model_path)
    model.load_state_dict(torch.load(model_path))
    model.train(False)

    extract_suite(model, args.dataroot, args.batch_size, args.ver, args.model_id, args.resdir, args.detector, args.nb)
spaces/CVPR/LIVE/thrust/thrust/system/detail/adl/scan.h
DELETED
@@ -1,44 +0,0 @@
/*
 *  Copyright 2008-2013 NVIDIA Corporation
 *
 *  Licensed under the Apache License, Version 2.0 (the "License");
 *  you may not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS,
 *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 */

#pragma once

#include <thrust/detail/config.h>

// the purpose of this header is to #include the scan.h header
// of the host and device systems. It should be #included in any
// code which uses adl to dispatch scan

#include <thrust/system/detail/sequential/scan.h>

// SCons can't see through the #defines below to figure out what this header
// includes, so we fake it out by specifying all possible files we might end up
// including inside an #if 0.
#if 0
#include <thrust/system/cpp/detail/scan.h>
#include <thrust/system/cuda/detail/scan.h>
#include <thrust/system/omp/detail/scan.h>
#include <thrust/system/tbb/detail/scan.h>
#endif

#define __THRUST_HOST_SYSTEM_SCAN_HEADER <__THRUST_HOST_SYSTEM_ROOT/detail/scan.h>
#include __THRUST_HOST_SYSTEM_SCAN_HEADER
#undef __THRUST_HOST_SYSTEM_SCAN_HEADER

#define __THRUST_DEVICE_SYSTEM_SCAN_HEADER <__THRUST_DEVICE_SYSTEM_ROOT/detail/scan.h>
#include __THRUST_DEVICE_SYSTEM_SCAN_HEADER
#undef __THRUST_DEVICE_SYSTEM_SCAN_HEADER
|
spaces/CVPR/lama-example/bin/gen_debug_mask_dataset.py
DELETED
@@ -1,61 +0,0 @@
|
|
1 |
-
#!/usr/bin/env python3
|
2 |
-
|
3 |
-
import glob
|
4 |
-
import os
|
5 |
-
|
6 |
-
import PIL.Image as Image
|
7 |
-
import cv2
|
8 |
-
import numpy as np
|
9 |
-
import tqdm
|
10 |
-
import shutil
|
11 |
-
|
12 |
-
|
13 |
-
from saicinpainting.evaluation.utils import load_yaml
|
14 |
-
|
15 |
-
|
16 |
-
def generate_masks_for_img(infile, outmask_pattern, mask_size=200, step=0.5):
|
17 |
-
inimg = Image.open(infile)
|
18 |
-
width, height = inimg.size
|
19 |
-
step_abs = int(mask_size * step)
|
20 |
-
|
21 |
-
mask = np.zeros((height, width), dtype='uint8')
|
22 |
-
mask_i = 0
|
23 |
-
|
24 |
-
for start_vertical in range(0, height - step_abs, step_abs):
|
25 |
-
for start_horizontal in range(0, width - step_abs, step_abs):
|
26 |
-
mask[start_vertical:start_vertical + mask_size, start_horizontal:start_horizontal + mask_size] = 255
|
27 |
-
|
28 |
-
cv2.imwrite(outmask_pattern.format(mask_i), mask)
|
29 |
-
|
30 |
-
mask[start_vertical:start_vertical + mask_size, start_horizontal:start_horizontal + mask_size] = 0
|
31 |
-
mask_i += 1
|
32 |
-
|
33 |
-
|
34 |
-
def main(args):
|
35 |
-
if not args.indir.endswith('/'):
|
36 |
-
args.indir += '/'
|
37 |
-
if not args.outdir.endswith('/'):
|
38 |
-
args.outdir += '/'
|
39 |
-
|
40 |
-
config = load_yaml(args.config)
|
41 |
-
|
42 |
-
in_files = list(glob.glob(os.path.join(args.indir, '**', f'*{config.img_ext}'), recursive=True))
|
43 |
-
for infile in tqdm.tqdm(in_files):
|
44 |
-
outimg = args.outdir + infile[len(args.indir):]
|
45 |
-
outmask_pattern = outimg[:-len(config.img_ext)] + '_mask{:04d}.png'
|
46 |
-
|
47 |
-
os.makedirs(os.path.dirname(outimg), exist_ok=True)
|
48 |
-
shutil.copy2(infile, outimg)
|
49 |
-
|
50 |
-
generate_masks_for_img(infile, outmask_pattern, **config.gen_kwargs)
|
51 |
-
|
52 |
-
|
53 |
-
if __name__ == '__main__':
|
54 |
-
import argparse
|
55 |
-
|
56 |
-
aparser = argparse.ArgumentParser()
|
57 |
-
aparser.add_argument('config', type=str, help='Path to config for dataset generation')
|
58 |
-
aparser.add_argument('indir', type=str, help='Path to folder with images')
|
59 |
-
aparser.add_argument('outdir', type=str, help='Path to folder to store aligned images and masks to')
|
60 |
-
|
61 |
-
main(aparser.parse_args())
|
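`generate_masks_for_img` above slides one `mask_size` square across the image at a stride of `mask_size * step`, writing a single-square mask per position. A minimal sketch of just the window-placement arithmetic, with hypothetical dimensions much smaller than the script's defaults:

```python
# Sketch of the sliding-window placement in generate_masks_for_img:
# a mask_size square stepped across the image at stride mask_size * step.
def window_origins(width, height, mask_size=4, step=0.5):
    step_abs = int(mask_size * step)
    origins = []
    for top in range(0, height - step_abs, step_abs):
        for left in range(0, width - step_abs, step_abs):
            origins.append((top, left))
    return origins


# A 10x8 image with a 4-pixel mask and 50% overlap gives a 2-pixel stride,
# so there are 3 vertical x 4 horizontal = 12 mask positions.
origins = window_origins(width=10, height=8, mask_size=4, step=0.5)
print(len(origins), origins[:3])  # 12 [(0, 0), (0, 2), (0, 4)]
```

Note that, as in the original, windows near the right and bottom edges are clipped implicitly by numpy slicing rather than excluded.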
spaces/Clebersla/RVC_V2_Huggingface_Version/lib/infer_pack/modules/F0Predictor/F0Predictor.py
DELETED
@@ -1,16 +0,0 @@
class F0Predictor(object):
    def compute_f0(self, wav, p_len):
        """
        input: wav:[signal_length]
               p_len:int
        output: f0:[signal_length//hop_length]
        """
        pass

    def compute_f0_uv(self, wav, p_len):
        """
        input: wav:[signal_length]
               p_len:int
        output: f0:[signal_length//hop_length],uv:[signal_length//hop_length]
        """
        pass
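The base class above only fixes the interface: `compute_f0` returns one pitch value per frame, and `compute_f0_uv` also returns a per-frame voiced/unvoiced flag. A hypothetical toy subclass (not one of the project's real predictors, which wrap pitch trackers like pyworld or crepe) that shows the expected output shapes:

```python
# Interface from the deleted module, reproduced so the sketch is self-contained.
class F0Predictor(object):
    def compute_f0(self, wav, p_len):
        pass

    def compute_f0_uv(self, wav, p_len):
        pass


class ConstantF0Predictor(F0Predictor):
    """Hypothetical stub: returns a flat contour, marking every frame voiced."""

    def __init__(self, f0_value=100.0):
        self.f0_value = f0_value

    def compute_f0(self, wav, p_len):
        # f0: one value per frame, [signal_length // hop_length]
        return [self.f0_value] * p_len

    def compute_f0_uv(self, wav, p_len):
        f0 = self.compute_f0(wav, p_len)
        uv = [1] * p_len  # voiced flag per frame
        return f0, uv


pred = ConstantF0Predictor(f0_value=220.0)
f0, uv = pred.compute_f0_uv(wav=[0.0] * 1600, p_len=10)
print(len(f0), f0[0], uv[0])  # 10 220.0 1
```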
spaces/DJQmUKV/rvc-inference/vc_infer_pipeline.py
DELETED
@@ -1,363 +0,0 @@
import numpy as np, parselmouth, torch, pdb
from time import time as ttime
import torch.nn.functional as F
import scipy.signal as signal
import pyworld, os, traceback, faiss, librosa
from scipy import signal
from functools import lru_cache

bh, ah = signal.butter(N=5, Wn=48, btype="high", fs=16000)

input_audio_path2wav = {}


@lru_cache
def cache_harvest_f0(input_audio_path, fs, f0max, f0min, frame_period):
    audio = input_audio_path2wav[input_audio_path]
    f0, t = pyworld.harvest(
        audio,
        fs=fs,
        f0_ceil=f0max,
        f0_floor=f0min,
        frame_period=frame_period,
    )
    f0 = pyworld.stonemask(audio, f0, t, fs)
    return f0


def change_rms(data1, sr1, data2, sr2, rate):  # 1 is the input audio, 2 the output audio, rate the weight of 2
    # print(data1.max(), data2.max())
    rms1 = librosa.feature.rms(y=data1, frame_length=sr1 // 2 * 2, hop_length=sr1 // 2)  # one RMS point every half second
    rms2 = librosa.feature.rms(y=data2, frame_length=sr2 // 2 * 2, hop_length=sr2 // 2)
    rms1 = torch.from_numpy(rms1)
    rms1 = F.interpolate(rms1.unsqueeze(0), size=data2.shape[0], mode="linear").squeeze()
    rms2 = torch.from_numpy(rms2)
    rms2 = F.interpolate(rms2.unsqueeze(0), size=data2.shape[0], mode="linear").squeeze()
    rms2 = torch.max(rms2, torch.zeros_like(rms2) + 1e-6)
    data2 *= (
        torch.pow(rms1, torch.tensor(1 - rate))
        * torch.pow(rms2, torch.tensor(rate - 1))
    ).numpy()
    return data2


class VC(object):
    def __init__(self, tgt_sr, config):
        self.x_pad, self.x_query, self.x_center, self.x_max, self.is_half = (
            config.x_pad,
            config.x_query,
            config.x_center,
            config.x_max,
            config.is_half,
        )
        self.sr = 16000  # hubert input sample rate
        self.window = 160  # samples per frame
        self.t_pad = self.sr * self.x_pad  # padding added before and after each chunk
        self.t_pad_tgt = tgt_sr * self.x_pad
        self.t_pad2 = self.t_pad * 2
        self.t_query = self.sr * self.x_query  # search window around each cut point
        self.t_center = self.sr * self.x_center  # spacing of candidate cut points
        self.t_max = self.sr * self.x_max  # duration threshold below which no cut-point search is done
        self.device = config.device

    def get_f0(self, input_audio_path, x, p_len, f0_up_key, f0_method, filter_radius, inp_f0=None):
        global input_audio_path2wav
        time_step = self.window / self.sr * 1000
        f0_min = 50
        f0_max = 1100
        f0_mel_min = 1127 * np.log(1 + f0_min / 700)
        f0_mel_max = 1127 * np.log(1 + f0_max / 700)
        if f0_method == "pm":
            f0 = (
                parselmouth.Sound(x, self.sr)
                .to_pitch_ac(
                    time_step=time_step / 1000,
                    voicing_threshold=0.6,
                    pitch_floor=f0_min,
                    pitch_ceiling=f0_max,
                )
                .selected_array["frequency"]
            )
            pad_size = (p_len - len(f0) + 1) // 2
            if pad_size > 0 or p_len - len(f0) - pad_size > 0:
                f0 = np.pad(
                    f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant"
                )
        elif f0_method == "harvest":
            input_audio_path2wav[input_audio_path] = x.astype(np.double)
            f0 = cache_harvest_f0(input_audio_path, self.sr, f0_max, f0_min, 10)
            if filter_radius > 2:
                f0 = signal.medfilt(f0, 3)
        f0 *= pow(2, f0_up_key / 12)
        # with open("test.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
        tf0 = self.sr // self.window  # f0 points per second
        if inp_f0 is not None:
            delta_t = np.round(
                (inp_f0[:, 0].max() - inp_f0[:, 0].min()) * tf0 + 1
            ).astype("int16")
            replace_f0 = np.interp(
                list(range(delta_t)), inp_f0[:, 0] * 100, inp_f0[:, 1]
            )
            shape = f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)].shape[0]
            f0[self.x_pad * tf0 : self.x_pad * tf0 + len(replace_f0)] = replace_f0[
                :shape
            ]
        # with open("test_opt.txt","w")as f:f.write("\n".join([str(i)for i in f0.tolist()]))
        f0bak = f0.copy()
        f0_mel = 1127 * np.log(1 + f0 / 700)
        f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
            f0_mel_max - f0_mel_min
        ) + 1
        f0_mel[f0_mel <= 1] = 1
        f0_mel[f0_mel > 255] = 255
        f0_coarse = np.rint(f0_mel).astype(int)
        return f0_coarse, f0bak  # 1-0

    def vc(
        self,
        model,
        net_g,
        sid,
        audio0,
        pitch,
        pitchf,
        times,
        index,
        big_npy,
        index_rate,
        version,
    ):  # ,file_index,file_big_npy
        feats = torch.from_numpy(audio0)
        if self.is_half:
            feats = feats.half()
        else:
            feats = feats.float()
        if feats.dim() == 2:  # double channels
            feats = feats.mean(-1)
        assert feats.dim() == 1, feats.dim()
        feats = feats.view(1, -1)
        padding_mask = torch.BoolTensor(feats.shape).to(self.device).fill_(False)

        inputs = {
            "source": feats.to(self.device),
            "padding_mask": padding_mask,
            "output_layer": 9 if version == "v1" else 12,
        }
        t0 = ttime()
        with torch.no_grad():
            logits = model.extract_features(**inputs)
            feats = model.final_proj(logits[0]) if version == "v1" else logits[0]

        if (
            isinstance(index, type(None)) == False
            and isinstance(big_npy, type(None)) == False
            and index_rate != 0
        ):
            npy = feats[0].cpu().numpy()
            if self.is_half:
                npy = npy.astype("float32")

            # _, I = index.search(npy, 1)
            # npy = big_npy[I.squeeze()]

            score, ix = index.search(npy, k=8)
            weight = np.square(1 / score)
            weight /= weight.sum(axis=1, keepdims=True)
            npy = np.sum(big_npy[ix] * np.expand_dims(weight, axis=2), axis=1)

            if self.is_half:
                npy = npy.astype("float16")
            feats = (
                torch.from_numpy(npy).unsqueeze(0).to(self.device) * index_rate
                + (1 - index_rate) * feats
            )

        feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1)
        t1 = ttime()
        p_len = audio0.shape[0] // self.window
        if feats.shape[1] < p_len:
            p_len = feats.shape[1]
            if pitch != None and pitchf != None:
                pitch = pitch[:, :p_len]
                pitchf = pitchf[:, :p_len]
        p_len = torch.tensor([p_len], device=self.device).long()
        with torch.no_grad():
            if pitch != None and pitchf != None:
                audio1 = (
                    (net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0])
                    .data.cpu()
                    .float()
                    .numpy()
                )
            else:
                audio1 = (
                    (net_g.infer(feats, p_len, sid)[0][0, 0])
                    .data.cpu()
                    .float()
                    .numpy()
                )
        del feats, p_len, padding_mask
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
        t2 = ttime()
        times[0] += t1 - t0
        times[2] += t2 - t1
        return audio1

    def pipeline(
        self,
        model,
        net_g,
        sid,
        audio,
        input_audio_path,
        times,
        f0_up_key,
        f0_method,
        file_index,
        # file_big_npy,
        index_rate,
        if_f0,
        filter_radius,
        tgt_sr,
        resample_sr,
        rms_mix_rate,
        version,
        f0_file=None,
    ):
        if (
            file_index != ""
            # and file_big_npy != ""
            # and os.path.exists(file_big_npy) == True
            and os.path.exists(file_index) == True
            and index_rate != 0
        ):
            try:
                index = faiss.read_index(file_index)
                # big_npy = np.load(file_big_npy)
                big_npy = index.reconstruct_n(0, index.ntotal)
            except:
                traceback.print_exc()
                index = big_npy = None
        else:
            index = big_npy = None
        audio = signal.filtfilt(bh, ah, audio)
        audio_pad = np.pad(audio, (self.window // 2, self.window // 2), mode="reflect")
        opt_ts = []
        if audio_pad.shape[0] > self.t_max:
            audio_sum = np.zeros_like(audio)
            for i in range(self.window):
                audio_sum += audio_pad[i : i - self.window]
            for t in range(self.t_center, audio.shape[0], self.t_center):
                opt_ts.append(
                    t
                    - self.t_query
                    + np.where(
                        np.abs(audio_sum[t - self.t_query : t + self.t_query])
                        == np.abs(audio_sum[t - self.t_query : t + self.t_query]).min()
                    )[0][0]
                )
        s = 0
        audio_opt = []
        t = None
        t1 = ttime()
        audio_pad = np.pad(audio, (self.t_pad, self.t_pad), mode="reflect")
        p_len = audio_pad.shape[0] // self.window
        inp_f0 = None
        if hasattr(f0_file, "name") == True:
            try:
                with open(f0_file.name, "r") as f:
                    lines = f.read().strip("\n").split("\n")
                inp_f0 = []
                for line in lines:
                    inp_f0.append([float(i) for i in line.split(",")])
                inp_f0 = np.array(inp_f0, dtype="float32")
            except:
                traceback.print_exc()
        sid = torch.tensor(sid, device=self.device).unsqueeze(0).long()
        pitch, pitchf = None, None
        if if_f0 == 1:
            pitch, pitchf = self.get_f0(input_audio_path, audio_pad, p_len, f0_up_key, f0_method, filter_radius, inp_f0)
            pitch = pitch[:p_len]
            pitchf = pitchf[:p_len]
            if self.device == "mps":
                pitchf = pitchf.astype(np.float32)
            pitch = torch.tensor(pitch, device=self.device).unsqueeze(0).long()
            pitchf = torch.tensor(pitchf, device=self.device).unsqueeze(0).float()
        t2 = ttime()
        times[1] += t2 - t1
        for t in opt_ts:
            t = t // self.window * self.window
            if if_f0 == 1:
                audio_opt.append(
                    self.vc(
                        model,
                        net_g,
                        sid,
                        audio_pad[s : t + self.t_pad2 + self.window],
                        pitch[:, s // self.window : (t + self.t_pad2) // self.window],
                        pitchf[:, s // self.window : (t + self.t_pad2) // self.window],
                        times,
                        index,
                        big_npy,
                        index_rate,
                        version,
                    )[self.t_pad_tgt : -self.t_pad_tgt]
                )
            else:
                audio_opt.append(
                    self.vc(
                        model,
                        net_g,
                        sid,
                        audio_pad[s : t + self.t_pad2 + self.window],
                        None,
                        None,
                        times,
                        index,
                        big_npy,
                        index_rate,
                        version,
                    )[self.t_pad_tgt : -self.t_pad_tgt]
                )
            s = t
        if if_f0 == 1:
            audio_opt.append(
                self.vc(
                    model,
                    net_g,
                    sid,
                    audio_pad[t:],
                    pitch[:, t // self.window :] if t is not None else pitch,
                    pitchf[:, t // self.window :] if t is not None else pitchf,
                    times,
                    index,
                    big_npy,
                    index_rate,
                    version,
                )[self.t_pad_tgt : -self.t_pad_tgt]
            )
        else:
            audio_opt.append(
                self.vc(
                    model,
                    net_g,
                    sid,
                    audio_pad[t:],
                    None,
                    None,
                    times,
                    index,
                    big_npy,
                    index_rate,
                    version,
                )[self.t_pad_tgt : -self.t_pad_tgt]
            )
        audio_opt = np.concatenate(audio_opt)
        if rms_mix_rate != 1:
            audio_opt = change_rms(audio, 16000, audio_opt, tgt_sr, rms_mix_rate)
        if resample_sr >= 16000 and tgt_sr != resample_sr:
            audio_opt = librosa.resample(
                audio_opt, orig_sr=tgt_sr, target_sr=resample_sr
            )
        audio_max = np.abs(audio_opt).max() / 0.99
        max_int16 = 32768
        if audio_max > 1:
            max_int16 /= audio_max
        audio_opt = (audio_opt * max_int16).astype(np.int16)
        del pitch, pitchf, sid
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
        return audio_opt
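The `get_f0` method in the deleted file compresses the raw f0 contour into integer bins 1..255 on a mel-like scale before handing it to the model. That mapping can be pulled out in isolation with the same constants (f0_min=50, f0_max=1100); the function name `f0_to_coarse` is illustrative, not from the deleted file:

```python
import numpy as np


def f0_to_coarse(f0, f0_min=50.0, f0_max=1100.0):
    """Map f0 in Hz to integer bins 1..255; unvoiced frames (f0 == 0) land in bin 1."""
    f0_mel_min = 1127 * np.log(1 + f0_min / 700)
    f0_mel_max = 1127 * np.log(1 + f0_max / 700)
    f0_mel = 1127 * np.log(1 + f0 / 700)
    # Rescale voiced frames so that f0_min -> 1 and f0_max -> 255.
    f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (
        f0_mel_max - f0_mel_min
    ) + 1
    f0_mel[f0_mel <= 1] = 1
    f0_mel[f0_mel > 255] = 255
    return np.rint(f0_mel).astype(int)


coarse = f0_to_coarse(np.array([0.0, 50.0, 440.0, 1100.0]))
print(coarse)  # unvoiced and f0_min map to bin 1, f0_max to bin 255
```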
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ufoLib/__init__.py
DELETED
@@ -1,2464 +0,0 @@
|
|
1 |
-
import os
|
2 |
-
from copy import deepcopy
|
3 |
-
from os import fsdecode
|
4 |
-
import logging
|
5 |
-
import zipfile
|
6 |
-
import enum
|
7 |
-
from collections import OrderedDict
|
8 |
-
import fs
|
9 |
-
import fs.base
|
10 |
-
import fs.subfs
|
11 |
-
import fs.errors
|
12 |
-
import fs.copy
|
13 |
-
import fs.osfs
|
14 |
-
import fs.zipfs
|
15 |
-
import fs.tempfs
|
16 |
-
import fs.tools
|
17 |
-
from fontTools.misc import plistlib
|
18 |
-
from fontTools.ufoLib.validators import *
|
19 |
-
from fontTools.ufoLib.filenames import userNameToFileName
|
20 |
-
from fontTools.ufoLib.converters import convertUFO1OrUFO2KerningToUFO3Kerning
|
21 |
-
from fontTools.ufoLib.errors import UFOLibError
|
22 |
-
from fontTools.ufoLib.utils import numberTypes, _VersionTupleEnumMixin
|
23 |
-
|
24 |
-
"""
|
25 |
-
A library for importing .ufo files and their descendants.
|
26 |
-
Refer to http://unifiedfontobject.com for the UFO specification.
|
27 |
-
|
28 |
-
The UFOReader and UFOWriter classes support versions 1, 2 and 3
|
29 |
-
of the specification.
|
30 |
-
|
31 |
-
Sets that list the font info attribute names for the fontinfo.plist
|
32 |
-
formats are available for external use. These are:
|
33 |
-
fontInfoAttributesVersion1
|
34 |
-
fontInfoAttributesVersion2
|
35 |
-
fontInfoAttributesVersion3
|
36 |
-
|
37 |
-
A set listing the fontinfo.plist attributes that were deprecated
|
38 |
-
in version 2 is available for external use:
|
39 |
-
deprecatedFontInfoAttributesVersion2
|
40 |
-
|
41 |
-
Functions that do basic validation on values for fontinfo.plist
|
42 |
-
are available for external use. These are
|
43 |
-
validateFontInfoVersion2ValueForAttribute
|
44 |
-
validateFontInfoVersion3ValueForAttribute
|
45 |
-
|
46 |
-
Value conversion functions are available for converting
|
47 |
-
fontinfo.plist values between the possible format versions.
|
48 |
-
convertFontInfoValueForAttributeFromVersion1ToVersion2
|
49 |
-
convertFontInfoValueForAttributeFromVersion2ToVersion1
|
50 |
-
convertFontInfoValueForAttributeFromVersion2ToVersion3
|
51 |
-
convertFontInfoValueForAttributeFromVersion3ToVersion2
|
52 |
-
"""
|
53 |
-
|
54 |
-
__all__ = [
|
55 |
-
"makeUFOPath",
|
56 |
-
"UFOLibError",
|
57 |
-
"UFOReader",
|
58 |
-
"UFOWriter",
|
59 |
-
"UFOReaderWriter",
|
60 |
-
"UFOFileStructure",
|
61 |
-
"fontInfoAttributesVersion1",
|
62 |
-
"fontInfoAttributesVersion2",
|
63 |
-
"fontInfoAttributesVersion3",
|
64 |
-
"deprecatedFontInfoAttributesVersion2",
|
65 |
-
"validateFontInfoVersion2ValueForAttribute",
|
66 |
-
"validateFontInfoVersion3ValueForAttribute",
|
67 |
-
"convertFontInfoValueForAttributeFromVersion1ToVersion2",
|
68 |
-
"convertFontInfoValueForAttributeFromVersion2ToVersion1",
|
69 |
-
]
|
70 |
-
|
71 |
-
__version__ = "3.0.0"
|
72 |
-
|
73 |
-
|
74 |
-
logger = logging.getLogger(__name__)
|
75 |
-
|
76 |
-
|
77 |
-
# ---------
|
78 |
-
# Constants
|
79 |
-
# ---------
|
80 |
-
|
81 |
-
DEFAULT_GLYPHS_DIRNAME = "glyphs"
|
82 |
-
DATA_DIRNAME = "data"
|
83 |
-
IMAGES_DIRNAME = "images"
|
84 |
-
METAINFO_FILENAME = "metainfo.plist"
|
85 |
-
FONTINFO_FILENAME = "fontinfo.plist"
|
86 |
-
LIB_FILENAME = "lib.plist"
|
87 |
-
GROUPS_FILENAME = "groups.plist"
|
88 |
-
KERNING_FILENAME = "kerning.plist"
|
89 |
-
FEATURES_FILENAME = "features.fea"
|
90 |
-
LAYERCONTENTS_FILENAME = "layercontents.plist"
|
91 |
-
LAYERINFO_FILENAME = "layerinfo.plist"
|
92 |
-
|
93 |
-
DEFAULT_LAYER_NAME = "public.default"
|
94 |
-
|
95 |
-
|
96 |
-
class UFOFormatVersion(tuple, _VersionTupleEnumMixin, enum.Enum):
|
97 |
-
FORMAT_1_0 = (1, 0)
|
98 |
-
FORMAT_2_0 = (2, 0)
|
99 |
-
FORMAT_3_0 = (3, 0)
|
100 |
-
|
101 |
-
|
102 |
-
# python 3.11 doesn't like when a mixin overrides a dunder method like __str__
|
103 |
-
# for some reasons it keep using Enum.__str__, see
|
104 |
-
# https://github.com/fonttools/fonttools/pull/2655
|
105 |
-
UFOFormatVersion.__str__ = _VersionTupleEnumMixin.__str__
|
106 |
-
|
107 |
-
|
108 |
-
class UFOFileStructure(enum.Enum):
|
109 |
-
ZIP = "zip"
|
110 |
-
PACKAGE = "package"
|
111 |
-
|
112 |
-
|
113 |
-
# --------------
|
114 |
-
# Shared Methods
|
115 |
-
# --------------
|
116 |
-
|
117 |
-
|
118 |
-
class _UFOBaseIO:
|
119 |
-
def getFileModificationTime(self, path):
|
120 |
-
"""
|
121 |
-
Returns the modification time for the file at the given path, as a
|
122 |
-
floating point number giving the number of seconds since the epoch.
|
123 |
-
The path must be relative to the UFO path.
|
124 |
-
Returns None if the file does not exist.
|
125 |
-
"""
|
126 |
-
try:
|
127 |
-
dt = self.fs.getinfo(fsdecode(path), namespaces=["details"]).modified
|
128 |
-
except (fs.errors.MissingInfoNamespace, fs.errors.ResourceNotFound):
|
129 |
-
return None
|
130 |
-
else:
|
131 |
-
return dt.timestamp()
|
132 |
-
|
133 |
-
def _getPlist(self, fileName, default=None):
|
134 |
-
"""
|
135 |
-
Read a property list relative to the UFO filesystem's root.
|
136 |
-
Raises UFOLibError if the file is missing and default is None,
|
137 |
-
otherwise default is returned.
|
138 |
-
|
139 |
-
The errors that could be raised during the reading of a plist are
|
140 |
-
unpredictable and/or too large to list, so, a blind try: except:
|
141 |
-
is done. If an exception occurs, a UFOLibError will be raised.
|
142 |
-
"""
|
143 |
-
try:
|
144 |
-
with self.fs.open(fileName, "rb") as f:
|
145 |
-
return plistlib.load(f)
|
146 |
-
except fs.errors.ResourceNotFound:
|
147 |
-
if default is None:
|
148 |
-
raise UFOLibError(
|
149 |
-
"'%s' is missing on %s. This file is required" % (fileName, self.fs)
|
150 |
-
)
|
151 |
-
else:
|
152 |
-
return default
|
153 |
-
except Exception as e:
|
154 |
-
# TODO(anthrotype): try to narrow this down a little
|
155 |
-
raise UFOLibError(f"'{fileName}' could not be read on {self.fs}: {e}")
|
156 |
-
|
157 |
-
def _writePlist(self, fileName, obj):
|
158 |
-
"""
|
159 |
-
Write a property list to a file relative to the UFO filesystem's root.
|
160 |
-
|
161 |
-
Do this sort of atomically, making it harder to corrupt existing files,
|
162 |
-
for example when plistlib encounters an error halfway during write.
|
163 |
-
This also checks to see if text matches the text that is already in the
|
164 |
-
file at path. If so, the file is not rewritten so that the modification
|
165 |
-
date is preserved.
|
166 |
-
|
167 |
-
The errors that could be raised during the writing of a plist are
|
168 |
-
unpredictable and/or too large to list, so, a blind try: except: is done.
|
169 |
-
If an exception occurs, a UFOLibError will be raised.
|
170 |
-
"""
|
171 |
-
if self._havePreviousFile:
|
172 |
-
try:
|
173 |
-
data = plistlib.dumps(obj)
|
174 |
-
except Exception as e:
|
175 |
-
raise UFOLibError(
|
176 |
-
"'%s' could not be written on %s because "
|
177 |
-
"the data is not properly formatted: %s" % (fileName, self.fs, e)
|
178 |
-
)
|
179 |
-
if self.fs.exists(fileName) and data == self.fs.readbytes(fileName):
|
180 |
-
return
|
181 |
-
self.fs.writebytes(fileName, data)
|
182 |
-
else:
|
183 |
-
with self.fs.openbin(fileName, mode="w") as fp:
|
184 |
-
try:
|
185 |
-
plistlib.dump(obj, fp)
|
186 |
-
except Exception as e:
|
187 |
-
raise UFOLibError(
|
188 |
-
"'%s' could not be written on %s because "
|
189 |
-
"the data is not properly formatted: %s"
|
190 |
-
% (fileName, self.fs, e)
|
191 |
-
)
|
192 |
-
|
193 |
-
|
194 |
-
# ----------
|
195 |
-
# UFO Reader
|
196 |
-
# ----------
|
197 |
-
|
198 |
-
|
199 |
-
class UFOReader(_UFOBaseIO):
|
200 |
-
|
201 |
-
"""
|
202 |
-
Read the various components of the .ufo.
|
203 |
-
|
204 |
-
By default read data is validated. Set ``validate`` to
|
205 |
-
``False`` to not validate the data.
|
206 |
-
"""
|
207 |
-
|
208 |
-
def __init__(self, path, validate=True):
|
209 |
-
if hasattr(path, "__fspath__"): # support os.PathLike objects
|
210 |
-
path = path.__fspath__()
|
211 |
-
|
212 |
-
if isinstance(path, str):
|
213 |
-
structure = _sniffFileStructure(path)
|
214 |
-
try:
|
215 |
-
if structure is UFOFileStructure.ZIP:
|
216 |
-
parentFS = fs.zipfs.ZipFS(path, write=False, encoding="utf-8")
|
217 |
-
else:
|
218 |
-
parentFS = fs.osfs.OSFS(path)
|
219 |
-
except fs.errors.CreateFailed as e:
|
220 |
-
raise UFOLibError(f"unable to open '{path}': {e}")
|
221 |
-
|
222 |
-
if structure is UFOFileStructure.ZIP:
|
223 |
-
# .ufoz zip files must contain a single root directory, with arbitrary
|
224 |
-
# name, containing all the UFO files
|
225 |
-
rootDirs = [
|
226 |
-
p.name
|
227 |
-
for p in parentFS.scandir("/")
|
228 |
-
# exclude macOS metadata contained in zip file
|
229 |
-
if p.is_dir and p.name != "__MACOSX"
|
230 |
-
]
|
231 |
-
if len(rootDirs) == 1:
|
232 |
-
# 'ClosingSubFS' ensures that the parent zip file is closed when
|
233 |
-
# its root subdirectory is closed
|
234 |
-
self.fs = parentFS.opendir(
|
235 |
-
rootDirs[0], factory=fs.subfs.ClosingSubFS
|
236 |
-
)
|
237 |
-
else:
|
238 |
-
raise UFOLibError(
|
239 |
-
"Expected exactly 1 root directory, found %d" % len(rootDirs)
|
240 |
-
)
|
241 |
-
else:
|
242 |
-
# normal UFO 'packages' are just a single folder
|
243 |
-
self.fs = parentFS
|
244 |
-
# when passed a path string, we make sure we close the newly opened fs
|
245 |
-
# upon calling UFOReader.close method or context manager's __exit__
|
246 |
-
self._shouldClose = True
|
247 |
-
self._fileStructure = structure
|
248 |
-
elif isinstance(path, fs.base.FS):
|
249 |
-
filesystem = path
|
250 |
-
try:
|
251 |
-
filesystem.check()
|
252 |
-
except fs.errors.FilesystemClosed:
|
253 |
-
raise UFOLibError("the filesystem '%s' is closed" % path)
|
254 |
-
else:
|
255 |
-
self.fs = filesystem
|
256 |
-
try:
|
257 |
-
path = filesystem.getsyspath("/")
|
258 |
-
except fs.errors.NoSysPath:
|
259 |
-
# network or in-memory FS may not map to the local one
|
260 |
-
path = str(filesystem)
|
261 |
-
# when user passed an already initialized fs instance, it is her
|
262 |
-
# responsibility to close it, thus UFOReader.close/__exit__ are no-op
|
263 |
-
self._shouldClose = False
|
264 |
-
# default to a 'package' structure
|
265 |
-
self._fileStructure = UFOFileStructure.PACKAGE
|
266 |
-
else:
|
267 |
-
raise TypeError(
|
268 |
-
"Expected a path string or fs.base.FS object, found '%s'"
|
269 |
-
% type(path).__name__
|
270 |
-
)
|
271 |
-
self._path = fsdecode(path)
|
272 |
-
self._validate = validate
|
273 |
-
self._upConvertedKerningData = None
|
274 |
-
|
275 |
-
try:
|
276 |
-
self.readMetaInfo(validate=validate)
|
277 |
-
except UFOLibError:
|
278 |
-
self.close()
|
279 |
-
raise
|
280 |
-
|
281 |
-
# properties
|
282 |
-
|
283 |
-
def _get_path(self):
|
284 |
-
import warnings
|
285 |
-
|
286 |
-
warnings.warn(
|
287 |
-
"The 'path' attribute is deprecated; use the 'fs' attribute instead",
|
288 |
-
DeprecationWarning,
|
289 |
-
stacklevel=2,
|
290 |
-
)
|
291 |
-
return self._path
|
292 |
-
|
293 |
-
path = property(_get_path, doc="The path of the UFO (DEPRECATED).")
|
294 |
-
|
295 |
-
def _get_formatVersion(self):
|
296 |
-
import warnings
|
297 |
-
|
298 |
-
warnings.warn(
|
299 |
-
"The 'formatVersion' attribute is deprecated; use the 'formatVersionTuple'",
|
300 |
-
DeprecationWarning,
|
301 |
-
stacklevel=2,
|
302 |
-
)
|
303 |
-
return self._formatVersion.major
|
304 |
-
|
305 |
-
formatVersion = property(
|
306 |
-
_get_formatVersion,
|
307 |
-
doc="The (major) format version of the UFO. DEPRECATED: Use formatVersionTuple",
|
308 |
-
)
|
309 |
-
|
310 |
-
@property
|
311 |
-
def formatVersionTuple(self):
|
312 |
-
"""The (major, minor) format version of the UFO.
|
313 |
-
This is determined by reading metainfo.plist during __init__.
|
314 |
-
"""
|
315 |
-
return self._formatVersion
|
316 |
-
|
317 |
-
def _get_fileStructure(self):
|
318 |
-
return self._fileStructure
|
319 |
-
|
320 |
-
fileStructure = property(
|
321 |
-
_get_fileStructure,
|
322 |
-
doc=(
|
323 |
-
"The file structure of the UFO: "
|
324 |
-
"either UFOFileStructure.ZIP or UFOFileStructure.PACKAGE"
|
325 |
-
),
|
326 |
-
)
|
327 |
-
|
328 |
-
# up conversion
|
329 |
-
|
330 |
-
def _upConvertKerning(self, validate):
|
331 |
-
"""
|
332 |
-
Up convert kerning and groups in UFO 1 and 2.
|
333 |
-
The data will be held internally until each bit of data
|
334 |
-
has been retrieved. The conversion of both must be done
|
335 |
-
at once, so the raw data is cached and an error is raised
|
336 |
-
if one bit of data becomes obsolete before it is called.
|
337 |
-
|
338 |
-
``validate`` will validate the data.
|
339 |
-
"""
|
340 |
-
if self._upConvertedKerningData:
|
341 |
-
testKerning = self._readKerning()
|
342 |
-
if testKerning != self._upConvertedKerningData["originalKerning"]:
|
343 |
-
raise UFOLibError(
|
344 |
-
"The data in kerning.plist has been modified since it was converted to UFO 3 format."
|
345 |
-
)
|
346 |
-
testGroups = self._readGroups()
|
347 |
-
if testGroups != self._upConvertedKerningData["originalGroups"]:
|
348 |
-
raise UFOLibError(
|
349 |
-
"The data in groups.plist has been modified since it was converted to UFO 3 format."
|
350 |
-
)
|
351 |
-
else:
|
352 |
-
groups = self._readGroups()
|
353 |
-
if validate:
|
354 |
-
invalidFormatMessage = "groups.plist is not properly formatted."
|
355 |
-
if not isinstance(groups, dict):
|
356 |
-
raise UFOLibError(invalidFormatMessage)
|
357 |
-
for groupName, glyphList in groups.items():
|
358 |
-
if not isinstance(groupName, str):
|
359 |
-
raise UFOLibError(invalidFormatMessage)
|
360 |
-
elif not isinstance(glyphList, list):
|
361 |
-
raise UFOLibError(invalidFormatMessage)
|
362 |
-
for glyphName in glyphList:
|
363 |
-
if not isinstance(glyphName, str):
|
364 |
-
raise UFOLibError(invalidFormatMessage)
|
365 |
-
self._upConvertedKerningData = dict(
|
366 |
-
kerning={},
|
367 |
-
originalKerning=self._readKerning(),
|
368 |
-
groups={},
|
369 |
-
originalGroups=groups,
|
370 |
-
)
|
371 |
-
# convert kerning and groups
|
372 |
-
kerning, groups, conversionMaps = convertUFO1OrUFO2KerningToUFO3Kerning(
|
373 |
-
self._upConvertedKerningData["originalKerning"],
|
374 |
-
deepcopy(self._upConvertedKerningData["originalGroups"]),
|
375 |
-
self.getGlyphSet(),
|
376 |
-
)
|
377 |
-
# store
|
378 |
-
self._upConvertedKerningData["kerning"] = kerning
|
379 |
-
self._upConvertedKerningData["groups"] = groups
|
380 |
-
self._upConvertedKerningData["groupRenameMaps"] = conversionMaps
|
381 |
-
|
382 |
-
# support methods
|
383 |
-
|
384 |
-
def readBytesFromPath(self, path):
|
385 |
-
"""
|
386 |
-
Returns the bytes in the file at the given path.
|
387 |
-
The path must be relative to the UFO's filesystem root.
|
388 |
-
Returns None if the file does not exist.
|
389 |
-
"""
|
390 |
-
try:
|
391 |
-
return self.fs.readbytes(fsdecode(path))
|
392 |
-
except fs.errors.ResourceNotFound:
|
393 |
-
return None
|
394 |
-
|
395 |
-
def getReadFileForPath(self, path, encoding=None):
|
396 |
-
"""
|
397 |
-
Returns a file (or file-like) object for the file at the given path.
|
398 |
-
The path must be relative to the UFO path.
|
399 |
-
Returns None if the file does not exist.
|
400 |
-
By default the file is opened in binary mode (reads bytes).
|
401 |
-
If encoding is passed, the file is opened in text mode (reads str).
|
402 |
-
|
403 |
-
Note: The caller is responsible for closing the open file.
|
404 |
-
"""
|
405 |
-
path = fsdecode(path)
|
406 |
-
try:
|
407 |
-
if encoding is None:
|
408 |
-
return self.fs.openbin(path)
|
409 |
-
else:
|
410 |
-
return self.fs.open(path, mode="r", encoding=encoding)
|
411 |
-
except fs.errors.ResourceNotFound:
|
412 |
-
return None
|
413 |
-
|
414 |
-
# metainfo.plist
|
415 |
-
|
416 |
-
def _readMetaInfo(self, validate=None):
|
417 |
-
"""
|
418 |
-
Read metainfo.plist and return raw data. Only used for internal operations.
|
419 |
-
|
420 |
-
``validate`` will validate the read data, by default it is set
|
421 |
-
to the class's validate value, can be overridden.
|
422 |
-
"""
|
423 |
-
if validate is None:
|
424 |
-
validate = self._validate
|
425 |
-
data = self._getPlist(METAINFO_FILENAME)
|
426 |
-
if validate and not isinstance(data, dict):
|
427 |
-
raise UFOLibError("metainfo.plist is not properly formatted.")
|
428 |
-
try:
|
429 |
-
formatVersionMajor = data["formatVersion"]
|
430 |
-
except KeyError:
|
431 |
-
raise UFOLibError(
|
432 |
-
f"Missing required formatVersion in '{METAINFO_FILENAME}' on {self.fs}"
|
433 |
-
)
|
434 |
-
formatVersionMinor = data.setdefault("formatVersionMinor", 0)
|
435 |
-
|
436 |
-
try:
|
437 |
-
formatVersion = UFOFormatVersion((formatVersionMajor, formatVersionMinor))
|
438 |
-
except ValueError as e:
|
439 |
-
unsupportedMsg = (
|
440 |
-
f"Unsupported UFO format ({formatVersionMajor}.{formatVersionMinor}) "
|
441 |
-
f"in '{METAINFO_FILENAME}' on {self.fs}"
|
442 |
-
)
|
443 |
-
if validate:
|
444 |
-
from fontTools.ufoLib.errors import UnsupportedUFOFormat
|
445 |
-
|
446 |
-
raise UnsupportedUFOFormat(unsupportedMsg) from e
|
447 |
-
|
448 |
-
formatVersion = UFOFormatVersion.default()
|
449 |
-
logger.warning(
|
450 |
-
"%s. Assuming the latest supported version (%s). "
|
451 |
-
"Some data may be skipped or parsed incorrectly",
|
452 |
-
unsupportedMsg,
|
453 |
-
formatVersion,
|
454 |
-
)
|
455 |
-
data["formatVersionTuple"] = formatVersion
|
456 |
-
return data
|
457 |
-
|
458 |
-
def readMetaInfo(self, validate=None):
|
459 |
-
"""
|
460 |
-
Read metainfo.plist and set formatVersion. Only used for internal operations.
|
461 |
-
|
462 |
-
        ``validate`` will validate the read data; by default it is set
        to the class's validate value, and can be overridden.
        """
        data = self._readMetaInfo(validate=validate)
        self._formatVersion = data["formatVersionTuple"]

    # groups.plist

    def _readGroups(self):
        groups = self._getPlist(GROUPS_FILENAME, {})
        # remove any duplicate glyphs in a kerning group
        for groupName, glyphList in groups.items():
            if groupName.startswith(("public.kern1.", "public.kern2.")):
                groups[groupName] = list(OrderedDict.fromkeys(glyphList))
        return groups

    def readGroups(self, validate=None):
        """
        Read groups.plist. Returns a dict.

        ``validate`` will validate the read data; by default it is set to the
        class's validate value, and can be overridden.
        """
        if validate is None:
            validate = self._validate
        # handle up conversion
        if self._formatVersion < UFOFormatVersion.FORMAT_3_0:
            self._upConvertKerning(validate)
            groups = self._upConvertedKerningData["groups"]
        # normal
        else:
            groups = self._readGroups()
        if validate:
            valid, message = groupsValidator(groups)
            if not valid:
                raise UFOLibError(message)
        return groups

    def getKerningGroupConversionRenameMaps(self, validate=None):
        """
        Get maps defining the renaming that was done during any
        needed kerning group conversion. This method returns a
        dictionary of this form::

            {
                "side1" : {"old group name" : "new group name"},
                "side2" : {"old group name" : "new group name"}
            }

        When no conversion has been performed, the side1 and side2
        dictionaries will be empty.

        ``validate`` will validate the groups; by default it is set to the
        class's validate value, and can be overridden.
        """
        if validate is None:
            validate = self._validate
        if self._formatVersion >= UFOFormatVersion.FORMAT_3_0:
            return dict(side1={}, side2={})
        # use the public group reader to force the load and
        # conversion of the data if it hasn't happened yet.
        self.readGroups(validate=validate)
        return self._upConvertedKerningData["groupRenameMaps"]

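    # A minimal usage sketch of the group readers above (editor's
    # illustration, not part of the original source; the font path and
    # group names are hypothetical):
    #
    #     reader = UFOReader("MyFont.ufo")
    #     groups = reader.readGroups()
    #     # e.g. groups == {"public.kern1.O": ["O", "D", "Q"], ...}
    #     renames = reader.getKerningGroupConversionRenameMaps()
    #     # {"side1": {}, "side2": {}} when the source is already UFO 3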
    # fontinfo.plist

    def _readInfo(self, validate):
        data = self._getPlist(FONTINFO_FILENAME, {})
        if validate and not isinstance(data, dict):
            raise UFOLibError("fontinfo.plist is not properly formatted.")
        return data

    def readInfo(self, info, validate=None):
        """
        Read fontinfo.plist. It requires an object that allows
        setting attributes with names that follow the fontinfo.plist
        version 3 specification. This will write the attributes
        defined in the file into the object.

        ``validate`` will validate the read data; by default it is set to the
        class's validate value, and can be overridden.
        """
        if validate is None:
            validate = self._validate
        infoDict = self._readInfo(validate)
        infoDataToSet = {}
        # version 1
        if self._formatVersion == UFOFormatVersion.FORMAT_1_0:
            for attr in fontInfoAttributesVersion1:
                value = infoDict.get(attr)
                if value is not None:
                    infoDataToSet[attr] = value
            infoDataToSet = _convertFontInfoDataVersion1ToVersion2(infoDataToSet)
            infoDataToSet = _convertFontInfoDataVersion2ToVersion3(infoDataToSet)
        # version 2
        elif self._formatVersion == UFOFormatVersion.FORMAT_2_0:
            for attr, dataValidationDict in list(
                fontInfoAttributesVersion2ValueData.items()
            ):
                value = infoDict.get(attr)
                if value is None:
                    continue
                infoDataToSet[attr] = value
            infoDataToSet = _convertFontInfoDataVersion2ToVersion3(infoDataToSet)
        # version 3.x
        elif self._formatVersion.major == UFOFormatVersion.FORMAT_3_0.major:
            for attr, dataValidationDict in list(
                fontInfoAttributesVersion3ValueData.items()
            ):
                value = infoDict.get(attr)
                if value is None:
                    continue
                infoDataToSet[attr] = value
        # unsupported version
        else:
            raise NotImplementedError(self._formatVersion)
        # validate data
        if validate:
            infoDataToSet = validateInfoVersion3Data(infoDataToSet)
        # populate the object
        for attr, value in list(infoDataToSet.items()):
            try:
                setattr(info, attr, value)
            except AttributeError:
                raise UFOLibError(
                    "The supplied info object does not support setting a necessary attribute (%s)."
                    % attr
                )

    # kerning.plist

    def _readKerning(self):
        data = self._getPlist(KERNING_FILENAME, {})
        return data

    def readKerning(self, validate=None):
        """
        Read kerning.plist. Returns a dict.

        ``validate`` will validate the kerning data; by default it is set to the
        class's validate value, and can be overridden.
        """
        if validate is None:
            validate = self._validate
        # handle up conversion
        if self._formatVersion < UFOFormatVersion.FORMAT_3_0:
            self._upConvertKerning(validate)
            kerningNested = self._upConvertedKerningData["kerning"]
        # normal
        else:
            kerningNested = self._readKerning()
        if validate:
            valid, message = kerningValidator(kerningNested)
            if not valid:
                raise UFOLibError(message)
        # flatten
        kerning = {}
        for left in kerningNested:
            for right in kerningNested[left]:
                value = kerningNested[left][right]
                kerning[left, right] = value
        return kerning

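    # Note that readKerning flattens the nested on-disk structure
    # {left: {right: value}} into {(left, right): value}. Illustrative
    # sketch (editor addition; the pair and value are hypothetical):
    #
    #     kerning = reader.readKerning()
    #     kerning[("public.kern1.A", "V")]  # e.g. -40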
    # lib.plist

    def readLib(self, validate=None):
        """
        Read lib.plist. Returns a dict.

        ``validate`` will validate the data; by default it is set to the
        class's validate value, and can be overridden.
        """
        if validate is None:
            validate = self._validate
        data = self._getPlist(LIB_FILENAME, {})
        if validate:
            valid, message = fontLibValidator(data)
            if not valid:
                raise UFOLibError(message)
        return data

    # features.fea

    def readFeatures(self):
        """
        Read features.fea. Return a string.
        The returned string is empty if the file is missing.
        """
        try:
            with self.fs.open(FEATURES_FILENAME, "r", encoding="utf-8") as f:
                return f.read()
        except fs.errors.ResourceNotFound:
            return ""

    # glyph sets & layers

    def _readLayerContents(self, validate):
        """
        Rebuild the layer contents list by checking what glyphsets
        are available on disk.

        ``validate`` will validate the layer contents.
        """
        if self._formatVersion < UFOFormatVersion.FORMAT_3_0:
            return [(DEFAULT_LAYER_NAME, DEFAULT_GLYPHS_DIRNAME)]
        contents = self._getPlist(LAYERCONTENTS_FILENAME)
        if validate:
            valid, error = layerContentsValidator(contents, self.fs)
            if not valid:
                raise UFOLibError(error)
        return contents

    def getLayerNames(self, validate=None):
        """
        Get the ordered layer names from layercontents.plist.

        ``validate`` will validate the data; by default it is set to the
        class's validate value, and can be overridden.
        """
        if validate is None:
            validate = self._validate
        layerContents = self._readLayerContents(validate)
        layerNames = [layerName for layerName, directoryName in layerContents]
        return layerNames

    def getDefaultLayerName(self, validate=None):
        """
        Get the default layer name from layercontents.plist.

        ``validate`` will validate the data; by default it is set to the
        class's validate value, and can be overridden.
        """
        if validate is None:
            validate = self._validate
        layerContents = self._readLayerContents(validate)
        for layerName, layerDirectory in layerContents:
            if layerDirectory == DEFAULT_GLYPHS_DIRNAME:
                return layerName
        # this will already have been raised during __init__
        raise UFOLibError("The default layer is not defined in layercontents.plist.")

    def getGlyphSet(self, layerName=None, validateRead=None, validateWrite=None):
        """
        Return the GlyphSet associated with the glyphs directory
        mapped to layerName in the UFO. If layerName is not provided,
        the name retrieved with getDefaultLayerName will be used.

        ``validateRead`` will validate the read data; by default it is set to the
        class's validate value, and can be overridden.
        ``validateWrite`` will validate the written data; by default it is set to the
        class's validate value, and can be overridden.
        """
        from fontTools.ufoLib.glifLib import GlyphSet

        if validateRead is None:
            validateRead = self._validate
        if validateWrite is None:
            validateWrite = self._validate
        if layerName is None:
            layerName = self.getDefaultLayerName(validate=validateRead)
        directory = None
        layerContents = self._readLayerContents(validateRead)
        for storedLayerName, storedLayerDirectory in layerContents:
            if layerName == storedLayerName:
                directory = storedLayerDirectory
                break
        if directory is None:
            raise UFOLibError('No glyphs directory is mapped to "%s".' % layerName)
        try:
            glyphSubFS = self.fs.opendir(directory)
        except fs.errors.ResourceNotFound:
            raise UFOLibError(f"No '{directory}' directory for layer '{layerName}'")
        return GlyphSet(
            glyphSubFS,
            ufoFormatVersion=self._formatVersion,
            validateRead=validateRead,
            validateWrite=validateWrite,
            expectContentsFile=True,
        )

    def getCharacterMapping(self, layerName=None, validate=None):
        """
        Return a dictionary that maps unicode values (ints) to
        lists of glyph names.
        """
        if validate is None:
            validate = self._validate
        glyphSet = self.getGlyphSet(
            layerName, validateRead=validate, validateWrite=True
        )
        allUnicodes = glyphSet.getUnicodes()
        cmap = {}
        for glyphName, unicodes in allUnicodes.items():
            for code in unicodes:
                if code in cmap:
                    cmap[code].append(glyphName)
                else:
                    cmap[code] = [glyphName]
        return cmap

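    # Illustrative sketch of getCharacterMapping (editor addition; the
    # code point and glyph names are hypothetical):
    #
    #     cmap = reader.getCharacterMapping()
    #     cmap.get(0x0041)  # e.g. ["A"], or several names if the code
    #                       # point is assigned to more than one glyph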
    # /data

    def getDataDirectoryListing(self):
        """
        Returns a list of all files in the data directory.
        The returned paths will be relative to the UFO.
        This will not list directory names, only file names.
        Thus, empty directories will be skipped.
        """
        try:
            self._dataFS = self.fs.opendir(DATA_DIRNAME)
        except fs.errors.ResourceNotFound:
            return []
        except fs.errors.DirectoryExpected:
            raise UFOLibError('The UFO contains a "data" file instead of a directory.')
        try:
            # fs Walker.files method returns "absolute" paths (in terms of the
            # root of the 'data' SubFS), so we strip the leading '/' to make
            # them relative
            return [p.lstrip("/") for p in self._dataFS.walk.files()]
        except fs.errors.ResourceError:
            return []

    def getImageDirectoryListing(self, validate=None):
        """
        Returns a list of all image file names in
        the images directory. Each of the images will
        have been verified to have the PNG signature.

        ``validate`` will validate the data; by default it is set to the
        class's validate value, and can be overridden.
        """
        if self._formatVersion < UFOFormatVersion.FORMAT_3_0:
            return []
        if validate is None:
            validate = self._validate
        try:
            self._imagesFS = imagesFS = self.fs.opendir(IMAGES_DIRNAME)
        except fs.errors.ResourceNotFound:
            return []
        except fs.errors.DirectoryExpected:
            raise UFOLibError(
                'The UFO contains an "images" file instead of a directory.'
            )
        result = []
        for path in imagesFS.scandir("/"):
            if path.is_dir:
                # silently skip this as version control
                # systems often have hidden directories
                continue
            if validate:
                with imagesFS.openbin(path.name) as fp:
                    valid, error = pngValidator(fileObj=fp)
                if valid:
                    result.append(path.name)
            else:
                result.append(path.name)
        return result

    def readData(self, fileName):
        """
        Return bytes for the file named 'fileName' inside the 'data/' directory.
        """
        fileName = fsdecode(fileName)
        try:
            try:
                dataFS = self._dataFS
            except AttributeError:
                # in case readData is called before getDataDirectoryListing
                dataFS = self.fs.opendir(DATA_DIRNAME)
            data = dataFS.readbytes(fileName)
        except fs.errors.ResourceNotFound:
            raise UFOLibError(f"No data file named '{fileName}' on {self.fs}")
        return data

    def readImage(self, fileName, validate=None):
        """
        Return image data for the file named fileName.

        ``validate`` will validate the data; by default it is set to the
        class's validate value, and can be overridden.
        """
        if validate is None:
            validate = self._validate
        if self._formatVersion < UFOFormatVersion.FORMAT_3_0:
            raise UFOLibError(
                f"Reading images is not allowed in UFO {self._formatVersion.major}."
            )
        fileName = fsdecode(fileName)
        try:
            try:
                imagesFS = self._imagesFS
            except AttributeError:
                # in case readImage is called before getImageDirectoryListing
                imagesFS = self.fs.opendir(IMAGES_DIRNAME)
            data = imagesFS.readbytes(fileName)
        except fs.errors.ResourceNotFound:
            raise UFOLibError(f"No image file named '{fileName}' on {self.fs}")
        if validate:
            valid, error = pngValidator(data=data)
            if not valid:
                raise UFOLibError(error)
        return data

    def close(self):
        if self._shouldClose:
            self.fs.close()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, exc_tb):
        self.close()


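# A hedged end-to-end sketch of the reader API above (editor's illustration,
# not part of the original module; the path is hypothetical and the info
# container can be any object that accepts attribute assignment):
#
#     with UFOReader("MyFont.ufo") as reader:
#         info = types.SimpleNamespace()
#         reader.readInfo(info)
#         features = reader.readFeatures()
#         glyphSet = reader.getGlyphSet()  # default layer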
# ----------
# UFO Writer
# ----------


class UFOWriter(UFOReader):

    """
    Write the various components of the .ufo.

    By default, the written data will be validated before writing. Set ``validate`` to
    ``False`` if you do not want to validate the data. Validation can also be overridden
    on a per-method level if desired.

    The ``formatVersion`` argument allows specifying the UFO format version as a tuple
    of integers (major, minor), or as a single integer for the major digit only (minor
    is implied as 0). By default the latest formatVersion will be used; currently it's
    3.0, which is equivalent to formatVersion=(3, 0).

    An UnsupportedUFOFormat exception is raised if the requested UFO formatVersion is
    not supported.
    """

    def __init__(
        self,
        path,
        formatVersion=None,
        fileCreator="com.github.fonttools.ufoLib",
        structure=None,
        validate=True,
    ):
        try:
            formatVersion = UFOFormatVersion(formatVersion)
        except ValueError as e:
            from fontTools.ufoLib.errors import UnsupportedUFOFormat

            raise UnsupportedUFOFormat(
                f"Unsupported UFO format: {formatVersion!r}"
            ) from e

        if hasattr(path, "__fspath__"):  # support os.PathLike objects
            path = path.__fspath__()

        if isinstance(path, str):
            # normalize path by removing trailing or double slashes
            path = os.path.normpath(path)
            havePreviousFile = os.path.exists(path)
            if havePreviousFile:
                # ensure we use the same structure as the destination
                existingStructure = _sniffFileStructure(path)
                if structure is not None:
                    try:
                        structure = UFOFileStructure(structure)
                    except ValueError:
                        raise UFOLibError(
                            "Invalid or unsupported structure: '%s'" % structure
                        )
                    if structure is not existingStructure:
                        raise UFOLibError(
                            "A UFO with a different structure (%s) already exists "
                            "at the given path: '%s'" % (existingStructure, path)
                        )
                else:
                    structure = existingStructure
            else:
                # if not exists, default to 'package' structure
                if structure is None:
                    structure = UFOFileStructure.PACKAGE
                dirName = os.path.dirname(path)
                if dirName and not os.path.isdir(dirName):
                    raise UFOLibError(
                        "Cannot write to '%s': directory does not exist" % path
                    )
            if structure is UFOFileStructure.ZIP:
                if havePreviousFile:
                    # we can't write a zip in-place, so we have to copy its
                    # contents to a temporary location and work from there, then
                    # upon closing UFOWriter we create the final zip file
                    parentFS = fs.tempfs.TempFS()
                    with fs.zipfs.ZipFS(path, encoding="utf-8") as origFS:
                        fs.copy.copy_fs(origFS, parentFS)
                    # if output path is an existing zip, we require that it contains
                    # one, and only one, root directory (with arbitrary name), in turn
                    # containing all the existing UFO contents
                    rootDirs = [
                        p.name
                        for p in parentFS.scandir("/")
                        # exclude macOS metadata contained in zip file
                        if p.is_dir and p.name != "__MACOSX"
                    ]
                    if len(rootDirs) != 1:
                        raise UFOLibError(
                            "Expected exactly 1 root directory, found %d"
                            % len(rootDirs)
                        )
                    else:
                        # 'ClosingSubFS' ensures that the parent filesystem is closed
                        # when its root subdirectory is closed
                        self.fs = parentFS.opendir(
                            rootDirs[0], factory=fs.subfs.ClosingSubFS
                        )
                else:
                    # if the output zip file didn't exist, we create the root folder;
                    # we name it the same as input 'path', but with '.ufo' extension
                    rootDir = os.path.splitext(os.path.basename(path))[0] + ".ufo"
                    parentFS = fs.zipfs.ZipFS(path, write=True, encoding="utf-8")
                    parentFS.makedir(rootDir)
                    self.fs = parentFS.opendir(rootDir, factory=fs.subfs.ClosingSubFS)
            else:
                self.fs = fs.osfs.OSFS(path, create=True)
            self._fileStructure = structure
            self._havePreviousFile = havePreviousFile
            self._shouldClose = True
        elif isinstance(path, fs.base.FS):
            filesystem = path
            try:
                filesystem.check()
            except fs.errors.FilesystemClosed:
                raise UFOLibError("the filesystem '%s' is closed" % path)
            else:
                self.fs = filesystem
            try:
                path = filesystem.getsyspath("/")
            except fs.errors.NoSysPath:
                # network or in-memory FS may not map to the local one
                path = str(filesystem)
            # if passed an FS object, always use 'package' structure
            if structure and structure is not UFOFileStructure.PACKAGE:
                import warnings

                warnings.warn(
                    "The 'structure' argument is not used when input is an FS object",
                    UserWarning,
                    stacklevel=2,
                )
            self._fileStructure = UFOFileStructure.PACKAGE
            # if FS contains a "metainfo.plist", we consider it non-empty
            self._havePreviousFile = filesystem.exists(METAINFO_FILENAME)
            # the user is responsible for closing the FS object
            self._shouldClose = False
        else:
            raise TypeError(
                "Expected a path string or fs object, found %s" % type(path).__name__
            )

        # establish some basic stuff
        self._path = fsdecode(path)
        self._formatVersion = formatVersion
        self._fileCreator = fileCreator
        self._downConversionKerningData = None
        self._validate = validate
        # if the file already exists, get the format version.
        # this will be needed for up and down conversion.
        previousFormatVersion = None
        if self._havePreviousFile:
            metaInfo = self._readMetaInfo(validate=validate)
            previousFormatVersion = metaInfo["formatVersionTuple"]
            # catch down conversion
            if previousFormatVersion > formatVersion:
                from fontTools.ufoLib.errors import UnsupportedUFOFormat

                raise UnsupportedUFOFormat(
                    "The UFO located at this path is a higher version "
                    f"({previousFormatVersion}) than the version ({formatVersion}) "
                    "that is trying to be written. This is not supported."
                )
        # handle the layer contents
        self.layerContents = {}
        if previousFormatVersion is not None and previousFormatVersion.major >= 3:
            # already exists
            self.layerContents = OrderedDict(self._readLayerContents(validate))
        else:
            # previous < 3
            # imply the layer contents
            if self.fs.exists(DEFAULT_GLYPHS_DIRNAME):
                self.layerContents = {DEFAULT_LAYER_NAME: DEFAULT_GLYPHS_DIRNAME}
        # write the new metainfo
        self._writeMetaInfo()

    # properties

    def _get_fileCreator(self):
        return self._fileCreator

    fileCreator = property(
        _get_fileCreator,
        doc="The file creator of the UFO. This is set into metainfo.plist during __init__.",
    )

    # support methods for file system interaction

    def copyFromReader(self, reader, sourcePath, destPath):
        """
        Copy the sourcePath in the provided UFOReader to destPath
        in this writer. The paths must be relative. This works with
        both individual files and directories.
        """
        if not isinstance(reader, UFOReader):
            raise UFOLibError("The reader must be an instance of UFOReader.")
        sourcePath = fsdecode(sourcePath)
        destPath = fsdecode(destPath)
        if not reader.fs.exists(sourcePath):
            raise UFOLibError(
                'The reader does not have data located at "%s".' % sourcePath
            )
        if self.fs.exists(destPath):
            raise UFOLibError('A file named "%s" already exists.' % destPath)
        # create the destination directory if it doesn't exist
        self.fs.makedirs(fs.path.dirname(destPath), recreate=True)
        if reader.fs.isdir(sourcePath):
            fs.copy.copy_dir(reader.fs, sourcePath, self.fs, destPath)
        else:
            fs.copy.copy_file(reader.fs, sourcePath, self.fs, destPath)

    def writeBytesToPath(self, path, data):
        """
        Write bytes to a path relative to the UFO filesystem's root.
        If writing to an existing UFO, check to see if data matches the data
        that is already in the file at path; if so, the file is not rewritten
        so that the modification date is preserved.
        If needed, the directory tree for the given path will be built.
        """
        path = fsdecode(path)
        if self._havePreviousFile:
            if self.fs.isfile(path) and data == self.fs.readbytes(path):
                return
        try:
            self.fs.writebytes(path, data)
        except fs.errors.FileExpected:
            raise UFOLibError("A directory exists at '%s'" % path)
        except fs.errors.ResourceNotFound:
            self.fs.makedirs(fs.path.dirname(path), recreate=True)
            self.fs.writebytes(path, data)

    def getFileObjectForPath(self, path, mode="w", encoding=None):
        """
        Returns a file (or file-like) object for the
        file at the given path. The path must be relative
        to the UFO path. Returns None if the file does
        not exist and the mode is "r" or "rb".
        An encoding may be passed if the file is opened in text mode.

        Note: The caller is responsible for closing the open file.
        """
        path = fsdecode(path)
        try:
            return self.fs.open(path, mode=mode, encoding=encoding)
        except fs.errors.ResourceNotFound as e:
            m = mode[0]
            if m == "r":
                # XXX I think we should just let it raise. The docstring,
                # however, says that this returns None if mode is 'r'
                return None
            elif m == "w" or m == "a" or m == "x":
                self.fs.makedirs(fs.path.dirname(path), recreate=True)
                return self.fs.open(path, mode=mode, encoding=encoding)
        except fs.errors.ResourceError as e:
            # raise, rather than return, the error (the original code
            # mistakenly returned the exception instance)
            raise UFOLibError(f"unable to open '{path}' on {self.fs}: {e}") from e

    def removePath(self, path, force=False, removeEmptyParents=True):
        """
        Remove the file (or directory) at path. The path
        must be relative to the UFO.
        Raises UFOLibError if the path doesn't exist.
        If force=True, ignore non-existent paths.
        If the directory where 'path' is located becomes empty, it will
        be automatically removed, unless 'removeEmptyParents' is False.
        """
        path = fsdecode(path)
        try:
            self.fs.remove(path)
        except fs.errors.FileExpected:
            self.fs.removetree(path)
        except fs.errors.ResourceNotFound:
            if not force:
                raise UFOLibError(f"'{path}' does not exist on {self.fs}")
        if removeEmptyParents:
            parent = fs.path.dirname(path)
            if parent:
                fs.tools.remove_empty(self.fs, parent)

    # alias kept for backward compatibility with old API
    removeFileForPath = removePath

    # UFO mod time

    def setModificationTime(self):
        """
        Set the UFO modification time to the current time.
        This is never called automatically. It is up to the
        caller to call this when finished working on the UFO.
        """
        path = self._path
        if path is not None and os.path.exists(path):
            try:
                # this may fail on some filesystems (e.g. SMB servers)
                os.utime(path, None)
            except OSError as e:
                logger.warning("Failed to set modified time: %s", e)

    # metainfo.plist

    def _writeMetaInfo(self):
        metaInfo = dict(
            creator=self._fileCreator,
            formatVersion=self._formatVersion.major,
        )
        if self._formatVersion.minor != 0:
            metaInfo["formatVersionMinor"] = self._formatVersion.minor
        self._writePlist(METAINFO_FILENAME, metaInfo)

    # groups.plist

    def setKerningGroupConversionRenameMaps(self, maps):
        """
        Set maps defining the renaming that should be done
        when writing groups and kerning in UFO 1 and UFO 2.
        This will effectively undo the conversion done when
        UFOReader reads this data. The dictionary should have
        this form::

            {
                "side1" : {"group name to use when writing" : "group name in data"},
                "side2" : {"group name to use when writing" : "group name in data"}
            }

        This is the same form returned by UFOReader's
        getKerningGroupConversionRenameMaps method.
        """
        if self._formatVersion >= UFOFormatVersion.FORMAT_3_0:
            return  # XXX raise an error here
        # flip the dictionaries
        remap = {}
        for side in ("side1", "side2"):
            for writeName, dataName in list(maps[side].items()):
                remap[dataName] = writeName
        self._downConversionKerningData = dict(groupRenameMap=remap)

    def writeGroups(self, groups, validate=None):
        """
        Write groups.plist. This method requires a
        dict of glyph groups as an argument.

        ``validate`` will validate the data; by default it is set to the
        class's validate value, and can be overridden.
        """
        if validate is None:
            validate = self._validate
        # validate the data structure
        if validate:
            valid, message = groupsValidator(groups)
            if not valid:
                raise UFOLibError(message)
        # down convert
        if (
            self._formatVersion < UFOFormatVersion.FORMAT_3_0
            and self._downConversionKerningData is not None
        ):
            remap = self._downConversionKerningData["groupRenameMap"]
            remappedGroups = {}
            # there are some edge cases here that are ignored:
            # 1. if a group is being renamed to a name that
            #    already exists, the existing group is always
            #    overwritten. (this is why there are two loops
            #    below.) there doesn't seem to be a logical
            #    solution to groups mismatching, and overwriting
            #    with the specified group seems like a better
            #    solution than throwing an error.
            # 2. if side 1 and side 2 groups are being renamed
            #    to the same group name there is no check to
            #    ensure that the contents are identical. that
            #    is left up to the caller.
            for name, contents in list(groups.items()):
                if name in remap:
                    continue
                remappedGroups[name] = contents
            for name, contents in list(groups.items()):
                if name not in remap:
                    continue
                name = remap[name]
                remappedGroups[name] = contents
            groups = remappedGroups
        # pack and write
        groupsNew = {}
        for key, value in groups.items():
            groupsNew[key] = list(value)
        if groupsNew:
            self._writePlist(GROUPS_FILENAME, groupsNew)
        elif self._havePreviousFile:
            self.removePath(GROUPS_FILENAME, force=True, removeEmptyParents=False)

    # fontinfo.plist

def writeInfo(self, info, validate=None):
|
1272 |
-
"""
|
1273 |
-
Write info.plist. This method requires an object
|
1274 |
-
that supports getting attributes that follow the
|
1275 |
-
fontinfo.plist version 2 specification. Attributes
|
1276 |
-
will be taken from the given object and written
|
1277 |
-
into the file.
|
1278 |
-
|
1279 |
-
``validate`` will validate the data, by default it is set to the
|
1280 |
-
class's validate value, can be overridden.
|
1281 |
-
"""
|
1282 |
-
if validate is None:
|
1283 |
-
validate = self._validate
|
1284 |
-
# gather version 3 data
|
1285 |
-
infoData = {}
|
1286 |
-
for attr in list(fontInfoAttributesVersion3ValueData.keys()):
|
1287 |
-
if hasattr(info, attr):
|
1288 |
-
try:
|
1289 |
-
value = getattr(info, attr)
|
1290 |
-
except AttributeError:
|
1291 |
-
raise UFOLibError(
|
1292 |
-
"The supplied info object does not support getting a necessary attribute (%s)."
|
1293 |
-
% attr
|
1294 |
-
)
|
1295 |
-
if value is None:
|
1296 |
-
continue
|
1297 |
-
infoData[attr] = value
|
1298 |
-
# down convert data if necessary and validate
|
1299 |
-
if self._formatVersion == UFOFormatVersion.FORMAT_3_0:
|
1300 |
-
if validate:
|
1301 |
-
infoData = validateInfoVersion3Data(infoData)
|
1302 |
-
elif self._formatVersion == UFOFormatVersion.FORMAT_2_0:
|
1303 |
-
infoData = _convertFontInfoDataVersion3ToVersion2(infoData)
|
1304 |
-
if validate:
|
1305 |
-
infoData = validateInfoVersion2Data(infoData)
|
1306 |
-
elif self._formatVersion == UFOFormatVersion.FORMAT_1_0:
|
1307 |
-
infoData = _convertFontInfoDataVersion3ToVersion2(infoData)
|
1308 |
-
if validate:
|
1309 |
-
infoData = validateInfoVersion2Data(infoData)
|
1310 |
-
infoData = _convertFontInfoDataVersion2ToVersion1(infoData)
|
1311 |
-
# write file if there is anything to write
|
1312 |
-
if infoData:
|
1313 |
-
self._writePlist(FONTINFO_FILENAME, infoData)
|
1314 |
-
|
1315 |
-
    # kerning.plist

    def writeKerning(self, kerning, validate=None):
        """
        Write kerning.plist. This method requires a
        dict of kerning pairs as an argument.

        This performs basic structural validation of the kerning,
        but it does not check for compliance with the spec in
        regards to conflicting pairs. The assumption is that the
        kerning data being passed is standards compliant.

        ``validate`` will validate the data, by default it is set to the
        class's validate value, can be overridden.
        """
        if validate is None:
            validate = self._validate
        # validate the data structure
        if validate:
            invalidFormatMessage = "The kerning is not properly formatted."
            if not isDictEnough(kerning):
                raise UFOLibError(invalidFormatMessage)
            for pair, value in list(kerning.items()):
                if not isinstance(pair, (list, tuple)):
                    raise UFOLibError(invalidFormatMessage)
                if not len(pair) == 2:
                    raise UFOLibError(invalidFormatMessage)
                if not isinstance(pair[0], str):
                    raise UFOLibError(invalidFormatMessage)
                if not isinstance(pair[1], str):
                    raise UFOLibError(invalidFormatMessage)
                if not isinstance(value, numberTypes):
                    raise UFOLibError(invalidFormatMessage)
        # down convert
        if (
            self._formatVersion < UFOFormatVersion.FORMAT_3_0
            and self._downConversionKerningData is not None
        ):
            remap = self._downConversionKerningData["groupRenameMap"]
            remappedKerning = {}
            for (side1, side2), value in list(kerning.items()):
                side1 = remap.get(side1, side1)
                side2 = remap.get(side2, side2)
                remappedKerning[side1, side2] = value
            kerning = remappedKerning
        # pack and write
        kerningDict = {}
        for left, right in kerning.keys():
            value = kerning[left, right]
            if left not in kerningDict:
                kerningDict[left] = {}
            kerningDict[left][right] = value
        if kerningDict:
            self._writePlist(KERNING_FILENAME, kerningDict)
        elif self._havePreviousFile:
            self.removePath(KERNING_FILENAME, force=True, removeEmptyParents=False)

    # lib.plist

    def writeLib(self, libDict, validate=None):
        """
        Write lib.plist. This method requires a
        lib dict as an argument.

        ``validate`` will validate the data, by default it is set to the
        class's validate value, can be overridden.
        """
        if validate is None:
            validate = self._validate
        if validate:
            valid, message = fontLibValidator(libDict)
            if not valid:
                raise UFOLibError(message)
        if libDict:
            self._writePlist(LIB_FILENAME, libDict)
        elif self._havePreviousFile:
            self.removePath(LIB_FILENAME, force=True, removeEmptyParents=False)

    # features.fea

    def writeFeatures(self, features, validate=None):
        """
        Write features.fea. This method requires a
        features string as an argument.
        """
        if validate is None:
            validate = self._validate
        if self._formatVersion == UFOFormatVersion.FORMAT_1_0:
            raise UFOLibError("features.fea is not allowed in UFO Format Version 1.")
        if validate:
            if not isinstance(features, str):
                raise UFOLibError("The features are not text.")
        if features:
            self.writeBytesToPath(FEATURES_FILENAME, features.encode("utf8"))
        elif self._havePreviousFile:
            self.removePath(FEATURES_FILENAME, force=True, removeEmptyParents=False)

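The packing loop in ``writeKerning`` flattens ``(left, right)`` pair keys into a nested dict before the plist is written. A standalone sketch of that transformation (the function name ``pack_kerning`` is illustrative and not part of ufoLib):

```python
def pack_kerning(kerning):
    """Pack {(left, right): value} into {left: {right: value}},
    mirroring the pack-and-write loop in UFOWriter.writeKerning."""
    kerning_dict = {}
    for (left, right), value in kerning.items():
        # setdefault creates the inner dict for a new left side
        kerning_dict.setdefault(left, {})[right] = value
    return kerning_dict

print(pack_kerning({("A", "V"): -40, ("A", "W"): -30, ("T", "o"): -70}))
```

The nested form is what kerning.plist stores: one sub-dictionary per first member of the pair.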
    # glyph sets & layers

    def writeLayerContents(self, layerOrder=None, validate=None):
        """
        Write the layercontents.plist file. This method *must* be called
        after all glyph sets have been written.
        """
        if validate is None:
            validate = self._validate
        if self._formatVersion < UFOFormatVersion.FORMAT_3_0:
            return
        if layerOrder is not None:
            newOrder = []
            for layerName in layerOrder:
                if layerName is None:
                    layerName = DEFAULT_LAYER_NAME
                newOrder.append(layerName)
            layerOrder = newOrder
        else:
            layerOrder = list(self.layerContents.keys())
        if validate and set(layerOrder) != set(self.layerContents.keys()):
            raise UFOLibError(
                "The layer order content does not match the glyph sets that have been created."
            )
        layerContents = [
            (layerName, self.layerContents[layerName]) for layerName in layerOrder
        ]
        self._writePlist(LAYERCONTENTS_FILENAME, layerContents)

    def _findDirectoryForLayerName(self, layerName):
        foundDirectory = None
        for existingLayerName, directoryName in list(self.layerContents.items()):
            if layerName is None and directoryName == DEFAULT_GLYPHS_DIRNAME:
                foundDirectory = directoryName
                break
            elif existingLayerName == layerName:
                foundDirectory = directoryName
                break
        if not foundDirectory:
            raise UFOLibError(
                "Could not locate a glyph set directory for the layer named %s."
                % layerName
            )
        return foundDirectory

    def getGlyphSet(
        self,
        layerName=None,
        defaultLayer=True,
        glyphNameToFileNameFunc=None,
        validateRead=None,
        validateWrite=None,
        expectContentsFile=False,
    ):
        """
        Return the GlyphSet object associated with the
        appropriate glyph directory in the .ufo.
        If layerName is None, the default glyph set
        will be used. The defaultLayer flag indicates
        that the layer should be saved into the default
        glyphs directory.

        ``validateRead`` will validate the read data, by default it is set to the
        class's validate value, can be overridden.
        ``validateWrite`` will validate the written data, by default it is set to the
        class's validate value, can be overridden.
        ``expectContentsFile`` will raise a GlifLibError if a contents.plist file is
        not found on the glyph set file system. This should be set to ``True`` if you
        are reading an existing UFO and ``False`` if you use ``getGlyphSet`` to create
        a fresh glyph set.
        """
        if validateRead is None:
            validateRead = self._validate
        if validateWrite is None:
            validateWrite = self._validate
        # only default can be written in < 3
        if self._formatVersion < UFOFormatVersion.FORMAT_3_0 and (
            not defaultLayer or layerName is not None
        ):
            raise UFOLibError(
                f"Only the default layer can be written in UFO {self._formatVersion.major}."
            )
        # locate a layer name when None has been given
        if layerName is None and defaultLayer:
            for existingLayerName, directory in self.layerContents.items():
                if directory == DEFAULT_GLYPHS_DIRNAME:
                    layerName = existingLayerName
            if layerName is None:
                layerName = DEFAULT_LAYER_NAME
        elif layerName is None and not defaultLayer:
            raise UFOLibError("A layer name must be provided for non-default layers.")
        # move along to format specific writing
        if self._formatVersion < UFOFormatVersion.FORMAT_3_0:
            return self._getDefaultGlyphSet(
                validateRead,
                validateWrite,
                glyphNameToFileNameFunc=glyphNameToFileNameFunc,
                expectContentsFile=expectContentsFile,
            )
        elif self._formatVersion.major == UFOFormatVersion.FORMAT_3_0.major:
            return self._getGlyphSetFormatVersion3(
                validateRead,
                validateWrite,
                layerName=layerName,
                defaultLayer=defaultLayer,
                glyphNameToFileNameFunc=glyphNameToFileNameFunc,
                expectContentsFile=expectContentsFile,
            )
        else:
            raise NotImplementedError(self._formatVersion)

    def _getDefaultGlyphSet(
        self,
        validateRead,
        validateWrite,
        glyphNameToFileNameFunc=None,
        expectContentsFile=False,
    ):
        from fontTools.ufoLib.glifLib import GlyphSet

        glyphSubFS = self.fs.makedir(DEFAULT_GLYPHS_DIRNAME, recreate=True)
        return GlyphSet(
            glyphSubFS,
            glyphNameToFileNameFunc=glyphNameToFileNameFunc,
            ufoFormatVersion=self._formatVersion,
            validateRead=validateRead,
            validateWrite=validateWrite,
            expectContentsFile=expectContentsFile,
        )

    def _getGlyphSetFormatVersion3(
        self,
        validateRead,
        validateWrite,
        layerName=None,
        defaultLayer=True,
        glyphNameToFileNameFunc=None,
        expectContentsFile=False,
    ):
        from fontTools.ufoLib.glifLib import GlyphSet

        # if the default flag is on, make sure that the default in the file
        # matches the default being written. also make sure that this layer
        # name is not already linked to a non-default layer.
        if defaultLayer:
            for existingLayerName, directory in self.layerContents.items():
                if directory == DEFAULT_GLYPHS_DIRNAME:
                    if existingLayerName != layerName:
                        raise UFOLibError(
                            "Another layer ('%s') is already mapped to the default directory."
                            % existingLayerName
                        )
                elif existingLayerName == layerName:
                    raise UFOLibError(
                        "The layer name is already mapped to a non-default layer."
                    )
        # get an existing directory name
        if layerName in self.layerContents:
            directory = self.layerContents[layerName]
        # get a new directory name
        else:
            if defaultLayer:
                directory = DEFAULT_GLYPHS_DIRNAME
            else:
                # not caching this could be slightly expensive,
                # but caching it will be cumbersome
                existing = {d.lower() for d in self.layerContents.values()}
                directory = userNameToFileName(
                    layerName, existing=existing, prefix="glyphs."
                )
        # make the directory
        glyphSubFS = self.fs.makedir(directory, recreate=True)
        # store the mapping
        self.layerContents[layerName] = directory
        # load the glyph set
        return GlyphSet(
            glyphSubFS,
            glyphNameToFileNameFunc=glyphNameToFileNameFunc,
            ufoFormatVersion=self._formatVersion,
            validateRead=validateRead,
            validateWrite=validateWrite,
            expectContentsFile=expectContentsFile,
        )

    def renameGlyphSet(self, layerName, newLayerName, defaultLayer=False):
        """
        Rename a glyph set.

        Note: if a GlyphSet object has already been retrieved for
        layerName, it is up to the caller to inform that object that
        the directory it represents has changed.
        """
        if self._formatVersion < UFOFormatVersion.FORMAT_3_0:
            # ignore renaming glyph sets for UFO1 UFO2
            # just write the data from the default layer
            return
        # the new and old names can be the same
        # as long as the default is being switched
        if layerName == newLayerName:
            # if the default is off and the layer is already not the default, skip
            if (
                self.layerContents[layerName] != DEFAULT_GLYPHS_DIRNAME
                and not defaultLayer
            ):
                return
            # if the default is on and the layer is already the default, skip
            if self.layerContents[layerName] == DEFAULT_GLYPHS_DIRNAME and defaultLayer:
                return
        else:
            # make sure the new layer name doesn't already exist
            if newLayerName is None:
                newLayerName = DEFAULT_LAYER_NAME
            if newLayerName in self.layerContents:
                raise UFOLibError("A layer named %s already exists." % newLayerName)
            # make sure the default layer doesn't already exist
            if defaultLayer and DEFAULT_GLYPHS_DIRNAME in self.layerContents.values():
                raise UFOLibError("A default layer already exists.")
        # get the paths
        oldDirectory = self._findDirectoryForLayerName(layerName)
        if defaultLayer:
            newDirectory = DEFAULT_GLYPHS_DIRNAME
        else:
            existing = {name.lower() for name in self.layerContents.values()}
            newDirectory = userNameToFileName(
                newLayerName, existing=existing, prefix="glyphs."
            )
        # update the internal mapping
        del self.layerContents[layerName]
        self.layerContents[newLayerName] = newDirectory
        # do the file system copy
        self.fs.movedir(oldDirectory, newDirectory, create=True)

    def deleteGlyphSet(self, layerName):
        """
        Remove the glyph set matching layerName.
        """
        if self._formatVersion < UFOFormatVersion.FORMAT_3_0:
            # ignore deleting glyph sets for UFO1 UFO2 as there are no layers
            # just write the data from the default layer
            return
        foundDirectory = self._findDirectoryForLayerName(layerName)
        self.removePath(foundDirectory, removeEmptyParents=False)
        del self.layerContents[layerName]

    def writeData(self, fileName, data):
        """
        Write data to fileName in the 'data' directory.
        The data must be a bytes string.
        """
        self.writeBytesToPath(f"{DATA_DIRNAME}/{fsdecode(fileName)}", data)

    def removeData(self, fileName):
        """
        Remove the file named fileName from the data directory.
        """
        self.removePath(f"{DATA_DIRNAME}/{fsdecode(fileName)}")

    # /images

    def writeImage(self, fileName, data, validate=None):
        """
        Write data to fileName in the images directory.
        The data must be a valid PNG.
        """
        if validate is None:
            validate = self._validate
        if self._formatVersion < UFOFormatVersion.FORMAT_3_0:
            raise UFOLibError(
                f"Images are not allowed in UFO {self._formatVersion.major}."
            )
        fileName = fsdecode(fileName)
        if validate:
            valid, error = pngValidator(data=data)
            if not valid:
                raise UFOLibError(error)
        self.writeBytesToPath(f"{IMAGES_DIRNAME}/{fileName}", data)

    def removeImage(self, fileName, validate=None):  # XXX remove unused 'validate'?
        """
        Remove the file named fileName from the
        images directory.
        """
        if self._formatVersion < UFOFormatVersion.FORMAT_3_0:
            raise UFOLibError(
                f"Images are not allowed in UFO {self._formatVersion.major}."
            )
        self.removePath(f"{IMAGES_DIRNAME}/{fsdecode(fileName)}")

    def copyImageFromReader(self, reader, sourceFileName, destFileName, validate=None):
        """
        Copy the sourceFileName in the provided UFOReader to destFileName
        in this writer. This uses the most memory-efficient method
        available for copying the data.
        """
        if validate is None:
            validate = self._validate
        if self._formatVersion < UFOFormatVersion.FORMAT_3_0:
            raise UFOLibError(
                f"Images are not allowed in UFO {self._formatVersion.major}."
            )
        sourcePath = f"{IMAGES_DIRNAME}/{fsdecode(sourceFileName)}"
        destPath = f"{IMAGES_DIRNAME}/{fsdecode(destFileName)}"
        self.copyFromReader(reader, sourcePath, destPath)

    def close(self):
        if self._havePreviousFile and self._fileStructure is UFOFileStructure.ZIP:
            # if we are updating an existing zip file, we can now compress the
            # contents of the temporary filesystem in the destination path
            rootDir = os.path.splitext(os.path.basename(self._path))[0] + ".ufo"
            with fs.zipfs.ZipFS(self._path, write=True, encoding="utf-8") as destFS:
                fs.copy.copy_fs(self.fs, destFS.makedir(rootDir))
        super().close()


# just an alias, makes it more explicit
UFOReaderWriter = UFOWriter


# ----------------
# Helper Functions
# ----------------


def _sniffFileStructure(ufo_path):
    """Return UFOFileStructure.ZIP if the UFO at path 'ufo_path' (str)
    is a zip file, else return UFOFileStructure.PACKAGE if 'ufo_path' is a
    directory.
    Raise UFOLibError if it is a file with unknown structure, or if the path
    does not exist.
    """
    if zipfile.is_zipfile(ufo_path):
        return UFOFileStructure.ZIP
    elif os.path.isdir(ufo_path):
        return UFOFileStructure.PACKAGE
    elif os.path.isfile(ufo_path):
        raise UFOLibError(
            "The specified UFO does not have a known structure: '%s'" % ufo_path
        )
    else:
        raise UFOLibError("No such file or directory: '%s'" % ufo_path)


def makeUFOPath(path):
    """
    Return a .ufo pathname.

    >>> makeUFOPath("directory/something.ext") == (
    ...     os.path.join('directory', 'something.ufo'))
    True
    >>> makeUFOPath("directory/something.another.thing.ext") == (
    ...     os.path.join('directory', 'something.another.thing.ufo'))
    True
    """
    dir, name = os.path.split(path)
    name = ".".join([".".join(name.split(".")[:-1]), "ufo"])
    return os.path.join(dir, name)

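For paths that already carry an extension, the same replacement can be written with ``os.path.splitext``. A minimal, stdlib-only sketch (the name ``make_ufo_path`` is illustrative; note that, unlike ``makeUFOPath`` above, it leaves extension-less names intact rather than reducing them to ``".ufo"``):

```python
import os


def make_ufo_path(path):
    """Replace the final extension of *path* with '.ufo',
    matching makeUFOPath for any path that has an extension."""
    root, _ext = os.path.splitext(path)
    return root + ".ufo"


print(make_ufo_path("directory/something.another.thing.ext"))
```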
# ----------------------
# fontinfo.plist Support
# ----------------------

# Version Validators

# There is no version 1 validator and there shouldn't be.
# The version 1 spec was very loose and there were numerous
# cases of invalid values.


def validateFontInfoVersion2ValueForAttribute(attr, value):
    """
    This performs very basic validation of the value for attribute
    following the UFO 2 fontinfo.plist specification. The results
    of this should not be interpreted as *correct* for the font
    that they are part of. This merely indicates that the value
    is of the proper type and, where the specification defines
    a set range of possible values for an attribute, that the
    value is in the accepted range.
    """
    dataValidationDict = fontInfoAttributesVersion2ValueData[attr]
    valueType = dataValidationDict.get("type")
    validator = dataValidationDict.get("valueValidator")
    valueOptions = dataValidationDict.get("valueOptions")
    # have specific options for the validator
    if valueOptions is not None:
        isValidValue = validator(value, valueOptions)
    # no specific options
    else:
        if validator == genericTypeValidator:
            isValidValue = validator(value, valueType)
        else:
            isValidValue = validator(value)
    return isValidValue


def validateInfoVersion2Data(infoData):
    """
    This performs very basic validation of the values in infoData
    following the UFO 2 fontinfo.plist specification. The results
    of this should not be interpreted as *correct* for the font
    that they are part of. This merely indicates that the values
    are of the proper type and, where the specification defines
    a set range of possible values for an attribute, that the
    value is in the accepted range.
    """
    validInfoData = {}
    for attr, value in list(infoData.items()):
        isValidValue = validateFontInfoVersion2ValueForAttribute(attr, value)
        if not isValidValue:
            raise UFOLibError(f"Invalid value for attribute {attr} ({value!r}).")
        else:
            validInfoData[attr] = value
    return validInfoData


def validateFontInfoVersion3ValueForAttribute(attr, value):
    """
    This performs very basic validation of the value for attribute
    following the UFO 3 fontinfo.plist specification. The results
    of this should not be interpreted as *correct* for the font
    that they are part of. This merely indicates that the value
    is of the proper type and, where the specification defines
    a set range of possible values for an attribute, that the
    value is in the accepted range.
    """
    dataValidationDict = fontInfoAttributesVersion3ValueData[attr]
    valueType = dataValidationDict.get("type")
    validator = dataValidationDict.get("valueValidator")
    valueOptions = dataValidationDict.get("valueOptions")
    # have specific options for the validator
    if valueOptions is not None:
        isValidValue = validator(value, valueOptions)
    # no specific options
    else:
        if validator == genericTypeValidator:
            isValidValue = validator(value, valueType)
        else:
            isValidValue = validator(value)
    return isValidValue


def validateInfoVersion3Data(infoData):
    """
    This performs very basic validation of the values in infoData
    following the UFO 3 fontinfo.plist specification. The results
    of this should not be interpreted as *correct* for the font
    that they are part of. This merely indicates that the values
    are of the proper type and, where the specification defines
    a set range of possible values for an attribute, that the
    value is in the accepted range.
    """
    validInfoData = {}
    for attr, value in list(infoData.items()):
        isValidValue = validateFontInfoVersion3ValueForAttribute(attr, value)
        if not isValidValue:
            raise UFOLibError(f"Invalid value for attribute {attr} ({value!r}).")
        else:
            validInfoData[attr] = value
    return validInfoData

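The dispatch used by the two per-attribute validators above (options present → ``validator(value, options)``; generic type check → ``validator(value, type)``; otherwise ``validator(value)``) can be sketched standalone. The names ``generic_type_validator`` and ``validate_attr`` are illustrative, not ufoLib's:

```python
def generic_type_validator(value, type_):
    """Type check that, like a generic type validator,
    refuses bools where plain ints/floats are expected."""
    if isinstance(value, bool) and type_ is not bool:
        return False
    return isinstance(value, type_)


def validate_attr(spec, value):
    """Dispatch mimicking the branches in the validators above."""
    validator = spec.get("valueValidator", generic_type_validator)
    value_options = spec.get("valueOptions")
    if value_options is not None:
        # validator receives the allowed-options list
        return validator(value, value_options)
    if validator is generic_type_validator:
        # plain type check against the declared type(s)
        return validator(value, spec.get("type"))
    return validator(value)
```

For example, ``validate_attr({"type": int}, 3)`` passes the generic type branch, while an entry with ``valueOptions`` routes through the options branch.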
# Value Options

fontInfoOpenTypeHeadFlagsOptions = list(range(0, 15))
fontInfoOpenTypeOS2SelectionOptions = [1, 2, 3, 4, 7, 8, 9]
fontInfoOpenTypeOS2UnicodeRangesOptions = list(range(0, 128))
fontInfoOpenTypeOS2CodePageRangesOptions = list(range(0, 64))
fontInfoOpenTypeOS2TypeOptions = [0, 1, 2, 3, 8, 9]

# Version Attribute Definitions
# This defines the attributes, types and, in some
# cases the possible values, that can exist in
# fontinfo.plist.

fontInfoAttributesVersion1 = {
    "familyName",
    "styleName",
    "fullName",
    "fontName",
    "menuName",
    "fontStyle",
    "note",
    "versionMajor",
    "versionMinor",
    "year",
    "copyright",
    "notice",
    "trademark",
    "license",
    "licenseURL",
    "createdBy",
    "designer",
    "designerURL",
    "vendorURL",
    "unitsPerEm",
    "ascender",
    "descender",
    "capHeight",
    "xHeight",
    "defaultWidth",
    "slantAngle",
    "italicAngle",
    "widthName",
    "weightName",
    "weightValue",
    "fondName",
    "otFamilyName",
    "otStyleName",
    "otMacName",
    "msCharSet",
    "fondID",
    "uniqueID",
    "ttVendor",
    "ttUniqueID",
    "ttVersion",
}

fontInfoAttributesVersion2ValueData = {
    "familyName": dict(type=str),
    "styleName": dict(type=str),
    "styleMapFamilyName": dict(type=str),
    "styleMapStyleName": dict(
        type=str, valueValidator=fontInfoStyleMapStyleNameValidator
    ),
    "versionMajor": dict(type=int),
    "versionMinor": dict(type=int),
    "year": dict(type=int),
    "copyright": dict(type=str),
    "trademark": dict(type=str),
    "unitsPerEm": dict(type=(int, float)),
    "descender": dict(type=(int, float)),
    "xHeight": dict(type=(int, float)),
    "capHeight": dict(type=(int, float)),
    "ascender": dict(type=(int, float)),
    "italicAngle": dict(type=(float, int)),
    "note": dict(type=str),
    "openTypeHeadCreated": dict(
        type=str, valueValidator=fontInfoOpenTypeHeadCreatedValidator
    ),
    "openTypeHeadLowestRecPPEM": dict(type=(int, float)),
    "openTypeHeadFlags": dict(
        type="integerList",
        valueValidator=genericIntListValidator,
        valueOptions=fontInfoOpenTypeHeadFlagsOptions,
    ),
    "openTypeHheaAscender": dict(type=(int, float)),
    "openTypeHheaDescender": dict(type=(int, float)),
    "openTypeHheaLineGap": dict(type=(int, float)),
    "openTypeHheaCaretSlopeRise": dict(type=int),
    "openTypeHheaCaretSlopeRun": dict(type=int),
    "openTypeHheaCaretOffset": dict(type=(int, float)),
    "openTypeNameDesigner": dict(type=str),
    "openTypeNameDesignerURL": dict(type=str),
    "openTypeNameManufacturer": dict(type=str),
    "openTypeNameManufacturerURL": dict(type=str),
    "openTypeNameLicense": dict(type=str),
    "openTypeNameLicenseURL": dict(type=str),
    "openTypeNameVersion": dict(type=str),
    "openTypeNameUniqueID": dict(type=str),
    "openTypeNameDescription": dict(type=str),
    "openTypeNamePreferredFamilyName": dict(type=str),
    "openTypeNamePreferredSubfamilyName": dict(type=str),
    "openTypeNameCompatibleFullName": dict(type=str),
    "openTypeNameSampleText": dict(type=str),
    "openTypeNameWWSFamilyName": dict(type=str),
    "openTypeNameWWSSubfamilyName": dict(type=str),
    "openTypeOS2WidthClass": dict(
        type=int, valueValidator=fontInfoOpenTypeOS2WidthClassValidator
    ),
    "openTypeOS2WeightClass": dict(
        type=int, valueValidator=fontInfoOpenTypeOS2WeightClassValidator
    ),
    "openTypeOS2Selection": dict(
        type="integerList",
        valueValidator=genericIntListValidator,
        valueOptions=fontInfoOpenTypeOS2SelectionOptions,
    ),
    "openTypeOS2VendorID": dict(type=str),
    "openTypeOS2Panose": dict(
        type="integerList", valueValidator=fontInfoVersion2OpenTypeOS2PanoseValidator
    ),
    "openTypeOS2FamilyClass": dict(
        type="integerList", valueValidator=fontInfoOpenTypeOS2FamilyClassValidator
    ),
    "openTypeOS2UnicodeRanges": dict(
        type="integerList",
        valueValidator=genericIntListValidator,
        valueOptions=fontInfoOpenTypeOS2UnicodeRangesOptions,
    ),
    "openTypeOS2CodePageRanges": dict(
        type="integerList",
        valueValidator=genericIntListValidator,
        valueOptions=fontInfoOpenTypeOS2CodePageRangesOptions,
    ),
    "openTypeOS2TypoAscender": dict(type=(int, float)),
    "openTypeOS2TypoDescender": dict(type=(int, float)),
    "openTypeOS2TypoLineGap": dict(type=(int, float)),
    "openTypeOS2WinAscent": dict(type=(int, float)),
    "openTypeOS2WinDescent": dict(type=(int, float)),
    "openTypeOS2Type": dict(
        type="integerList",
        valueValidator=genericIntListValidator,
        valueOptions=fontInfoOpenTypeOS2TypeOptions,
    ),
    "openTypeOS2SubscriptXSize": dict(type=(int, float)),
    "openTypeOS2SubscriptYSize": dict(type=(int, float)),
    "openTypeOS2SubscriptXOffset": dict(type=(int, float)),
    "openTypeOS2SubscriptYOffset": dict(type=(int, float)),
    "openTypeOS2SuperscriptXSize": dict(type=(int, float)),
    "openTypeOS2SuperscriptYSize": dict(type=(int, float)),
    "openTypeOS2SuperscriptXOffset": dict(type=(int, float)),
    "openTypeOS2SuperscriptYOffset": dict(type=(int, float)),
    "openTypeOS2StrikeoutSize": dict(type=(int, float)),
    "openTypeOS2StrikeoutPosition": dict(type=(int, float)),
    "openTypeVheaVertTypoAscender": dict(type=(int, float)),
    "openTypeVheaVertTypoDescender": dict(type=(int, float)),
    "openTypeVheaVertTypoLineGap": dict(type=(int, float)),
    "openTypeVheaCaretSlopeRise": dict(type=int),
    "openTypeVheaCaretSlopeRun": dict(type=int),
    "openTypeVheaCaretOffset": dict(type=(int, float)),
    "postscriptFontName": dict(type=str),
    "postscriptFullName": dict(type=str),
    "postscriptSlantAngle": dict(type=(float, int)),
    "postscriptUniqueID": dict(type=int),
    "postscriptUnderlineThickness": dict(type=(int, float)),
    "postscriptUnderlinePosition": dict(type=(int, float)),
    "postscriptIsFixedPitch": dict(type=bool),
    "postscriptBlueValues": dict(
        type="integerList", valueValidator=fontInfoPostscriptBluesValidator
    ),
    "postscriptOtherBlues": dict(
        type="integerList", valueValidator=fontInfoPostscriptOtherBluesValidator
    ),
    "postscriptFamilyBlues": dict(
        type="integerList", valueValidator=fontInfoPostscriptBluesValidator
    ),
    "postscriptFamilyOtherBlues": dict(
        type="integerList", valueValidator=fontInfoPostscriptOtherBluesValidator
    ),
    "postscriptStemSnapH": dict(
        type="integerList", valueValidator=fontInfoPostscriptStemsValidator
    ),
    "postscriptStemSnapV": dict(
        type="integerList", valueValidator=fontInfoPostscriptStemsValidator
    ),
    "postscriptBlueFuzz": dict(type=(int, float)),
    "postscriptBlueShift": dict(type=(int, float)),
    "postscriptBlueScale": dict(type=(float, int)),
    "postscriptForceBold": dict(type=bool),
    "postscriptDefaultWidthX": dict(type=(int, float)),
    "postscriptNominalWidthX": dict(type=(int, float)),
    "postscriptWeightName": dict(type=str),
    "postscriptDefaultCharacter": dict(type=str),
    "postscriptWindowsCharacterSet": dict(
        type=int, valueValidator=fontInfoPostscriptWindowsCharacterSetValidator
    ),
    "macintoshFONDFamilyID": dict(type=int),
    "macintoshFONDName": dict(type=str),
}
fontInfoAttributesVersion2 = set(fontInfoAttributesVersion2ValueData.keys())

fontInfoAttributesVersion3ValueData = deepcopy(fontInfoAttributesVersion2ValueData)
fontInfoAttributesVersion3ValueData.update(
    {
        "versionMinor": dict(type=int, valueValidator=genericNonNegativeIntValidator),
        "unitsPerEm": dict(
            type=(int, float), valueValidator=genericNonNegativeNumberValidator
        ),
        "openTypeHeadLowestRecPPEM": dict(
            type=int, valueValidator=genericNonNegativeNumberValidator
        ),
        "openTypeHheaAscender": dict(type=int),
        "openTypeHheaDescender": dict(type=int),
        "openTypeHheaLineGap": dict(type=int),
        "openTypeHheaCaretOffset": dict(type=int),
        "openTypeOS2Panose": dict(
            type="integerList",
            valueValidator=fontInfoVersion3OpenTypeOS2PanoseValidator,
        ),
        "openTypeOS2TypoAscender": dict(type=int),
        "openTypeOS2TypoDescender": dict(type=int),
        "openTypeOS2TypoLineGap": dict(type=int),
        "openTypeOS2WinAscent": dict(
            type=int, valueValidator=genericNonNegativeNumberValidator
        ),
        "openTypeOS2WinDescent": dict(
            type=int, valueValidator=genericNonNegativeNumberValidator
|
2099 |
-
),
|
2100 |
-
"openTypeOS2SubscriptXSize": dict(type=int),
|
2101 |
-
"openTypeOS2SubscriptYSize": dict(type=int),
|
2102 |
-
"openTypeOS2SubscriptXOffset": dict(type=int),
|
2103 |
-
"openTypeOS2SubscriptYOffset": dict(type=int),
|
2104 |
-
"openTypeOS2SuperscriptXSize": dict(type=int),
|
2105 |
-
"openTypeOS2SuperscriptYSize": dict(type=int),
|
2106 |
-
"openTypeOS2SuperscriptXOffset": dict(type=int),
|
2107 |
-
"openTypeOS2SuperscriptYOffset": dict(type=int),
|
2108 |
-
"openTypeOS2StrikeoutSize": dict(type=int),
|
2109 |
-
"openTypeOS2StrikeoutPosition": dict(type=int),
|
2110 |
-
"openTypeGaspRangeRecords": dict(
|
2111 |
-
type="dictList", valueValidator=fontInfoOpenTypeGaspRangeRecordsValidator
|
2112 |
-
),
|
2113 |
-
"openTypeNameRecords": dict(
|
2114 |
-
type="dictList", valueValidator=fontInfoOpenTypeNameRecordsValidator
|
2115 |
-
),
|
2116 |
-
"openTypeVheaVertTypoAscender": dict(type=int),
|
2117 |
-
"openTypeVheaVertTypoDescender": dict(type=int),
|
2118 |
-
"openTypeVheaVertTypoLineGap": dict(type=int),
|
2119 |
-
"openTypeVheaCaretOffset": dict(type=int),
|
2120 |
-
"woffMajorVersion": dict(
|
2121 |
-
type=int, valueValidator=genericNonNegativeIntValidator
|
2122 |
-
),
|
2123 |
-
"woffMinorVersion": dict(
|
2124 |
-
type=int, valueValidator=genericNonNegativeIntValidator
|
2125 |
-
),
|
2126 |
-
"woffMetadataUniqueID": dict(
|
2127 |
-
type=dict, valueValidator=fontInfoWOFFMetadataUniqueIDValidator
|
2128 |
-
),
|
2129 |
-
"woffMetadataVendor": dict(
|
2130 |
-
type=dict, valueValidator=fontInfoWOFFMetadataVendorValidator
|
2131 |
-
),
|
2132 |
-
"woffMetadataCredits": dict(
|
2133 |
-
type=dict, valueValidator=fontInfoWOFFMetadataCreditsValidator
|
2134 |
-
),
|
2135 |
-
"woffMetadataDescription": dict(
|
2136 |
-
type=dict, valueValidator=fontInfoWOFFMetadataDescriptionValidator
|
2137 |
-
),
|
2138 |
-
"woffMetadataLicense": dict(
|
2139 |
-
type=dict, valueValidator=fontInfoWOFFMetadataLicenseValidator
|
2140 |
-
),
|
2141 |
-
"woffMetadataCopyright": dict(
|
2142 |
-
type=dict, valueValidator=fontInfoWOFFMetadataCopyrightValidator
|
2143 |
-
),
|
2144 |
-
"woffMetadataTrademark": dict(
|
2145 |
-
type=dict, valueValidator=fontInfoWOFFMetadataTrademarkValidator
|
2146 |
-
),
|
2147 |
-
"woffMetadataLicensee": dict(
|
2148 |
-
type=dict, valueValidator=fontInfoWOFFMetadataLicenseeValidator
|
2149 |
-
),
|
2150 |
-
"woffMetadataExtensions": dict(
|
2151 |
-
type=list, valueValidator=fontInfoWOFFMetadataExtensionsValidator
|
2152 |
-
),
|
2153 |
-
"guidelines": dict(type=list, valueValidator=guidelinesValidator),
|
2154 |
-
}
|
2155 |
-
)
|
2156 |
-
fontInfoAttributesVersion3 = set(fontInfoAttributesVersion3ValueData.keys())
|
2157 |
-
|
2158 |
-
# insert the type validator for all attrs that
|
2159 |
-
# have no defined validator.
|
2160 |
-
for attr, dataDict in list(fontInfoAttributesVersion2ValueData.items()):
|
2161 |
-
if "valueValidator" not in dataDict:
|
2162 |
-
dataDict["valueValidator"] = genericTypeValidator
|
2163 |
-
|
2164 |
-
for attr, dataDict in list(fontInfoAttributesVersion3ValueData.items()):
|
2165 |
-
if "valueValidator" not in dataDict:
|
2166 |
-
dataDict["valueValidator"] = genericTypeValidator
|
2167 |
-
|
2168 |
-
# Version Conversion Support
# These are used for converting from version 1
# to version 2 or vice-versa.


def _flipDict(d):
    flipped = {}
    for key, value in list(d.items()):
        flipped[value] = key
    return flipped


fontInfoAttributesVersion1To2 = {
    "menuName": "styleMapFamilyName",
    "designer": "openTypeNameDesigner",
    "designerURL": "openTypeNameDesignerURL",
    "createdBy": "openTypeNameManufacturer",
    "vendorURL": "openTypeNameManufacturerURL",
    "license": "openTypeNameLicense",
    "licenseURL": "openTypeNameLicenseURL",
    "ttVersion": "openTypeNameVersion",
    "ttUniqueID": "openTypeNameUniqueID",
    "notice": "openTypeNameDescription",
    "otFamilyName": "openTypeNamePreferredFamilyName",
    "otStyleName": "openTypeNamePreferredSubfamilyName",
    "otMacName": "openTypeNameCompatibleFullName",
    "weightName": "postscriptWeightName",
    "weightValue": "openTypeOS2WeightClass",
    "ttVendor": "openTypeOS2VendorID",
    "uniqueID": "postscriptUniqueID",
    "fontName": "postscriptFontName",
    "fondID": "macintoshFONDFamilyID",
    "fondName": "macintoshFONDName",
    "defaultWidth": "postscriptDefaultWidthX",
    "slantAngle": "postscriptSlantAngle",
    "fullName": "postscriptFullName",
    # require special value conversion
    "fontStyle": "styleMapStyleName",
    "widthName": "openTypeOS2WidthClass",
    "msCharSet": "postscriptWindowsCharacterSet",
}
fontInfoAttributesVersion2To1 = _flipDict(fontInfoAttributesVersion1To2)
deprecatedFontInfoAttributesVersion2 = set(fontInfoAttributesVersion1To2.keys())

_fontStyle1To2 = {64: "regular", 1: "italic", 32: "bold", 33: "bold italic"}
_fontStyle2To1 = _flipDict(_fontStyle1To2)
# Some UFO 1 files have 0
_fontStyle1To2[0] = "regular"

_widthName1To2 = {
    "Ultra-condensed": 1,
    "Extra-condensed": 2,
    "Condensed": 3,
    "Semi-condensed": 4,
    "Medium (normal)": 5,
    "Semi-expanded": 6,
    "Expanded": 7,
    "Extra-expanded": 8,
    "Ultra-expanded": 9,
}
_widthName2To1 = _flipDict(_widthName1To2)
# FontLab's default width value is "Normal".
# Many format version 1 UFOs will have this.
_widthName1To2["Normal"] = 5
# FontLab has an "All" width value. In UFO 1
# move this up to "Normal".
_widthName1To2["All"] = 5
# "medium" appears in a lot of UFO 1 files.
_widthName1To2["medium"] = 5
# "Medium" appears in a lot of UFO 1 files.
_widthName1To2["Medium"] = 5

_msCharSet1To2 = {
    0: 1,
    1: 2,
    2: 3,
    77: 4,
    128: 5,
    129: 6,
    130: 7,
    134: 8,
    136: 9,
    161: 10,
    162: 11,
    163: 12,
    177: 13,
    178: 14,
    186: 15,
    200: 16,
    204: 17,
    222: 18,
    238: 19,
    255: 20,
}
_msCharSet2To1 = _flipDict(_msCharSet1To2)
# 1 <-> 2


def convertFontInfoValueForAttributeFromVersion1ToVersion2(attr, value):
    """
    Convert value from version 1 to version 2 format.
    Returns the new attribute name and the converted value.
    If the value is None, None will be returned for the new value.
    """
    # convert floats to ints if possible
    if isinstance(value, float):
        if int(value) == value:
            value = int(value)
    if value is not None:
        if attr == "fontStyle":
            v = _fontStyle1To2.get(value)
            if v is None:
                raise UFOLibError(
                    f"Cannot convert value ({value!r}) for attribute {attr}."
                )
            value = v
        elif attr == "widthName":
            v = _widthName1To2.get(value)
            if v is None:
                raise UFOLibError(
                    f"Cannot convert value ({value!r}) for attribute {attr}."
                )
            value = v
        elif attr == "msCharSet":
            v = _msCharSet1To2.get(value)
            if v is None:
                raise UFOLibError(
                    f"Cannot convert value ({value!r}) for attribute {attr}."
                )
            value = v
    attr = fontInfoAttributesVersion1To2.get(attr, attr)
    return attr, value


def convertFontInfoValueForAttributeFromVersion2ToVersion1(attr, value):
    """
    Convert value from version 2 to version 1 format.
    Returns the new attribute name and the converted value.
    If the value is None, None will be returned for the new value.
    """
    if value is not None:
        if attr == "styleMapStyleName":
            value = _fontStyle2To1.get(value)
        elif attr == "openTypeOS2WidthClass":
            value = _widthName2To1.get(value)
        elif attr == "postscriptWindowsCharacterSet":
            value = _msCharSet2To1.get(value)
    attr = fontInfoAttributesVersion2To1.get(attr, attr)
    return attr, value


def _convertFontInfoDataVersion1ToVersion2(data):
    converted = {}
    for attr, value in list(data.items()):
        # FontLab gives -1 for the weightValue
        # for fonts with no defined value. Many
        # format version 1 UFOs will have this.
        if attr == "weightValue" and value == -1:
            continue
        newAttr, newValue = convertFontInfoValueForAttributeFromVersion1ToVersion2(
            attr, value
        )
        # skip if the attribute is not part of version 2
        if newAttr not in fontInfoAttributesVersion2:
            continue
        # catch values that can't be converted
        if value is None:
            raise UFOLibError(
                f"Cannot convert value ({value!r}) for attribute {newAttr}."
            )
        # store
        converted[newAttr] = newValue
    return converted


def _convertFontInfoDataVersion2ToVersion1(data):
    converted = {}
    for attr, value in list(data.items()):
        newAttr, newValue = convertFontInfoValueForAttributeFromVersion2ToVersion1(
            attr, value
        )
        # only take attributes that are registered for version 1
        if newAttr not in fontInfoAttributesVersion1:
            continue
        # catch values that can't be converted
        if value is None:
            raise UFOLibError(
                f"Cannot convert value ({value!r}) for attribute {newAttr}."
            )
        # store
        converted[newAttr] = newValue
    return converted


# 2 <-> 3

_ufo2To3NonNegativeInt = {
    "versionMinor",
    "openTypeHeadLowestRecPPEM",
    "openTypeOS2WinAscent",
    "openTypeOS2WinDescent",
}
_ufo2To3NonNegativeIntOrFloat = {
    "unitsPerEm",
}
_ufo2To3FloatToInt = {
    "openTypeHeadLowestRecPPEM",
    "openTypeHheaAscender",
    "openTypeHheaDescender",
    "openTypeHheaLineGap",
    "openTypeHheaCaretOffset",
    "openTypeOS2TypoAscender",
    "openTypeOS2TypoDescender",
    "openTypeOS2TypoLineGap",
    "openTypeOS2WinAscent",
    "openTypeOS2WinDescent",
    "openTypeOS2SubscriptXSize",
    "openTypeOS2SubscriptYSize",
    "openTypeOS2SubscriptXOffset",
    "openTypeOS2SubscriptYOffset",
    "openTypeOS2SuperscriptXSize",
    "openTypeOS2SuperscriptYSize",
    "openTypeOS2SuperscriptXOffset",
    "openTypeOS2SuperscriptYOffset",
    "openTypeOS2StrikeoutSize",
    "openTypeOS2StrikeoutPosition",
    "openTypeVheaVertTypoAscender",
    "openTypeVheaVertTypoDescender",
    "openTypeVheaVertTypoLineGap",
    "openTypeVheaCaretOffset",
}


def convertFontInfoValueForAttributeFromVersion2ToVersion3(attr, value):
    """
    Convert value from version 2 to version 3 format.
    Returns the new attribute name and the converted value.
    If the value is None, None will be returned for the new value.
    """
    if attr in _ufo2To3FloatToInt:
        try:
            value = round(value)
        except (ValueError, TypeError):
            raise UFOLibError("Could not convert value for %s." % attr)
    if attr in _ufo2To3NonNegativeInt:
        try:
            value = int(abs(value))
        except (ValueError, TypeError):
            raise UFOLibError("Could not convert value for %s." % attr)
    elif attr in _ufo2To3NonNegativeIntOrFloat:
        try:
            v = float(abs(value))
        except (ValueError, TypeError):
            raise UFOLibError("Could not convert value for %s." % attr)
        if v == int(v):
            v = int(v)
        if v != value:
            value = v
    return attr, value


def convertFontInfoValueForAttributeFromVersion3ToVersion2(attr, value):
    """
    Convert value from version 3 to version 2 format.
    Returns the new attribute name and the converted value.
    If the value is None, None will be returned for the new value.
    """
    return attr, value


def _convertFontInfoDataVersion3ToVersion2(data):
    converted = {}
    for attr, value in list(data.items()):
        newAttr, newValue = convertFontInfoValueForAttributeFromVersion3ToVersion2(
            attr, value
        )
        if newAttr not in fontInfoAttributesVersion2:
            continue
        converted[newAttr] = newValue
    return converted


def _convertFontInfoDataVersion2ToVersion3(data):
    converted = {}
    for attr, value in list(data.items()):
        attr, value = convertFontInfoValueForAttributeFromVersion2ToVersion3(
            attr, value
        )
        converted[attr] = value
    return converted


if __name__ == "__main__":
    import doctest

    doctest.testmod()
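The version 1 → 2 path above does three things in sequence: whole-number floats collapse to ints, table-driven values are translated, and the attribute itself is renamed. A self-contained sketch of that flow for the `fontStyle` attribute (the tables are reduced copies, and plain `ValueError` stands in for the module's `UFOLibError`):

```python
# reduced copies of the tables defined in the module
_fontStyle1To2 = {64: "regular", 1: "italic", 32: "bold", 33: "bold italic", 0: "regular"}
_rename1To2 = {"fontStyle": "styleMapStyleName"}

def convert_v1_to_v2(attr, value):
    # whole-number floats collapse to ints first
    if isinstance(value, float) and int(value) == value:
        value = int(value)
    # table-driven value translation; unknown values are an error
    if value is not None and attr == "fontStyle":
        v = _fontStyle1To2.get(value)
        if v is None:
            raise ValueError(f"Cannot convert value ({value!r}) for attribute {attr}.")
        value = v
    # finally, rename the attribute itself
    return _rename1To2.get(attr, attr), value

print(convert_v1_to_v2("fontStyle", 64.0))  # ('styleMapStyleName', 'regular')
```

Note that the float collapse runs before the table lookup, which is why a FontLab-written `64.0` still resolves through the integer-keyed style table.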
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-928645ac.css
DELETED
@@ -1 +0,0 @@
-.container.svelte-19on2m6.svelte-19on2m6{display:flex;flex-direction:column;gap:var(--spacing-sm);padding:var(--block-padding)}.hl.svelte-19on2m6+.hl.svelte-19on2m6{margin-left:var(--size-1)}.textspan.svelte-19on2m6:last-child>.label.svelte-19on2m6{margin-right:0}.category-legend.svelte-19on2m6.svelte-19on2m6{display:flex;flex-wrap:wrap;gap:var(--spacing-sm);color:#000}.category-label.svelte-19on2m6.svelte-19on2m6{cursor:pointer;border-radius:var(--radius-xs);padding-right:var(--size-2);padding-left:var(--size-2);font-weight:var(--weight-semibold)}.color-legend.svelte-19on2m6.svelte-19on2m6{display:flex;justify-content:space-between;border-radius:var(--radius-xs);background:linear-gradient(to right,var(--color-purple),rgba(255,255,255,0),var(--color-red));padding:var(--size-1) var(--size-2);font-weight:var(--weight-semibold)}.textfield.svelte-19on2m6.svelte-19on2m6{box-sizing:border-box;border-radius:var(--radius-xs);background:var(--background-fill-primary);background-color:transparent;max-width:var(--size-full);line-height:var(--scale-4);word-break:break-all}.textspan.svelte-19on2m6.svelte-19on2m6{transition:.15s;border-radius:var(--radius-xs);padding-top:2.5px;padding-right:var(--size-1);padding-bottom:3.5px;padding-left:var(--size-1);color:#000}.label.svelte-19on2m6.svelte-19on2m6{transition:.15s;margin-top:1px;margin-right:calc(var(--size-1) * -1);border-radius:var(--radius-xs);padding:1px 5px;color:var(--body-text-color);color:#fff;font-weight:var(--weight-bold);font-size:var(--text-sm);text-transform:uppercase}.text.svelte-19on2m6.svelte-19on2m6{color:#000}.score-text.svelte-19on2m6 .text.svelte-19on2m6{color:var(--body-text-color)}.score-text.svelte-19on2m6.svelte-19on2m6{margin-right:var(--size-1);padding:var(--size-1)}.no-cat.svelte-19on2m6.svelte-19on2m6,.no-label.svelte-19on2m6.svelte-19on2m6{color:var(--body-text-color)}.selectable.svelte-19on2m6.svelte-19on2m6{cursor:pointer}