Commit 89b3735
Parent(s): 1896713
Update parquet files (step 83 of 397)

This view is limited to 50 files because it contains too many changes.
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Descargar Discografia Completa Richard Clayderman Torrent Reljate con las Melodas Suaves y Emotivas del Piano.md +0 -72
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Interlok Driver Auto-tune Software and Learn from the Antares Tech Learning Center.md +0 -136
- spaces/1gistliPinn/ChatGPT4/Examples/Canalis Hydra Urbano.md +0 -9
- spaces/1gistliPinn/ChatGPT4/Examples/Efi Colorproof Xf 4.5 Download Crack Software.md +0 -110
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Bus Simulator Ultimate Mod APK and Enjoy All Bus Unlocked.md +0 -113
- spaces/1phancelerku/anime-remove-background/Dislyte APK OBB Tips and Tricks to Master the Deep Strategic Gameplay.md +0 -164
- spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_sde_vp.py +0 -89
- spaces/4Taps/SadTalker/src/face3d/data/base_dataset.py +0 -125
- spaces/A00001/bingothoo/src/components/theme-toggle.tsx +0 -31
- spaces/AB-TW/team-ai/chains.py +0 -42
- spaces/AIOSML/README/README.md +0 -12
- spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/custom_dataset/yolov7_l_syncbn_fast_8x16b-300e_coco.py +0 -472
- spaces/AchyuthGamer/OpenGPT/g4f/Provider/AiAsk.py +0 -44
- spaces/AdWeeb/SuMmeet/app.py +0 -109
- spaces/Adapter/CoAdapter/ldm/data/dataset_coco.py +0 -36
- spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/executor/base.py +0 -100
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/folder/Folder.js +0 -122
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/statesroundrectangle/StatesRoundRectangle.d.ts +0 -49
- spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/ops/grid_sample_gradfix.py +0 -93
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/pix2pix.md +0 -38
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_safe.md +0 -61
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/training/text2image.md +0 -224
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion.py +0 -421
- spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/clip/clip.py +0 -225
- spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/scripts/super_res_sample.py +0 -119
- spaces/AnthonyTruchetPoC/persistent-docker/tests/conftest.py +0 -11
- spaces/Ashrafb/translate/tokenization_small100.py +0 -364
- spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/.github/ISSUE_TEMPLATE/documentation.md +0 -14
- spaces/Banbri/zcvzcv/src/lib/cropImage.ts +0 -53
- spaces/Benson/text-generation/Examples/Descargar Ataque En Titan Mvil Apk.md +0 -80
- spaces/Benson/text-generation/Examples/Descargar Fondos De Escritorio Jdm Coches.md +0 -146
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_fileno.py +0 -24
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/pretty.py +0 -994
- spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/py36compat.py +0 -134
- spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/roi_head.py +0 -213
- spaces/CVPR/LIVE/pybind11/tests/test_exceptions.cpp +0 -224
- spaces/CVPR/LIVE/thrust/thrust/random/linear_congruential_engine.h +0 -295
- spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/andrew_alpha/apps.py +0 -6
- spaces/Clebersla/RVC_V2_Huggingface_Version/lib/infer_pack/models_onnx.py +0 -819
- spaces/Cong723/gpt-academic-public/crazy_functions/test_project/cpp/cppipc/shm.cpp +0 -103
- spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/vision.cpp +0 -21
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/base_protocol.py +0 -90
- spaces/Dinoking/Guccio-AI-Designer/netdissect/runningstats.py +0 -773
- spaces/DrBenjamin/AI_Demo/AI_Demo.py +0 -291
- spaces/DragGan/DragGan/stylegan_human/openpose/src/__init__.py +0 -0
- spaces/ECCV2022/PSG/OpenPSG/configs/_base_/models/psgtr_r50.py +0 -82
- spaces/ECCV2022/bytetrack/tutorials/motr/evaluation.py +0 -207
- spaces/EDGAhab/Paimon-Talking/modules.py +0 -390
- spaces/Enterprisium/Easy_GUI/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py +0 -97
- spaces/EronSamez/RVC_HFmeu/lib/infer_pack/models_dml.py +0 -1124
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Descargar Discografia Completa Richard Clayderman Torrent Reljate con las Melodas Suaves y Emotivas del Piano.md
DELETED
@@ -1,72 +0,0 @@
-
-<h1>Descargar Discografia Completa Richard Clayderman Torrent</h1>
-<p>If you are a fan of piano music, you have probably heard of Richard Clayderman, one of the most popular and successful pianists in the world. He has released more than 200 albums, sold over 150 million records, and performed in more than 80 countries. His music is soothing, elegant, and captivating, and it appeals to people of all ages and backgrounds.</p>
-<p>But did you know that you can download his complete discography for free using torrent sites? Yes, you read that right. You can enjoy all of his albums, from his debut in 1977 to his latest releases in 2020, without spending a dime. All you need is a good internet connection, a torrent client, and some patience.</p>
-<h2>Descargar Discografia Completa Richard Clayderman Torrent</h2><br /><p><b><b>DOWNLOAD</b> »»» <a href="https://byltly.com/2uKxqn">https://byltly.com/2uKxqn</a></b></p><br /><br />
-<p>In this article, we will tell you everything you need to know about Richard Clayderman, why you should download his complete discography, and how to do it safely and easily. Let's get started!</p>
-<h2>Who is Richard Clayderman?</h2>
-<p>Richard Clayderman is the stage name of Philippe Pagès, a French pianist who was born on December 28, 1953 in Paris. He started playing the piano at a young age, following his father's footsteps, who was an accordion teacher. He entered the Conservatoire de Paris at the age of 12, where he won many awards and accolades.</p>
-<p>However, his classical career was cut short by financial difficulties caused by his father's illness. He had to work as a bank clerk and an accompanist for pop singers to make ends meet. His big break came in 1976, when he was chosen by music producer Olivier Toussaint to record a piano ballad called "Ballade pour Adeline", composed by Paul de Senneville.</p>
-<p>Descargar todos los álbumes de Richard Clayderman en Torrent<br />
-Cómo bajar la discografía completa de Richard Clayderman gratis<br />
-Richard Clayderman: el pianista más famoso del mundo - Descarga su música por Torrent<br />
-Descargar Discografia Completa Richard Clayderman Mega<br />
-Richard Clayderman: sus mejores canciones en Torrent - Descarga rápida y segura<br />
-Descargar Discografia Completa Richard Clayderman 320 kbps<br />
-Richard Clayderman: el rey del piano romántico - Descarga su discografía por Torrent<br />
-Descargar Discografia Completa Richard Clayderman FLAC<br />
-Richard Clayderman: una leyenda de la música instrumental - Descarga sus álbumes en Torrent<br />
-Descargar Discografia Completa Richard Clayderman MP3<br />
-Richard Clayderman: el mago del teclado - Descarga su colección completa por Torrent<br />
-Descargar Discografia Completa Richard Clayderman ZIP<br />
-Richard Clayderman: el maestro de la melodía - Descarga su obra completa en Torrent<br />
-Descargar Discografia Completa Richard Clayderman RAR<br />
-Richard Clayderman: el genio del piano francés - Descarga su discografía en Torrent<br />
-Descargar Discografia Completa Richard Clayderman Online<br />
-Richard Clayderman: el artista más vendido de la música clásica - Descarga su música por Torrent<br />
-Descargar Discografia Completa Richard Clayderman Gratis<br />
-Richard Clayderman: el compositor más prolífico del siglo XX - Descarga sus álbumes por Torrent<br />
-Descargar Discografia Completa Richard Clayderman Full HD<br />
-Richard Clayderman: el ídolo de millones de fans - Descarga su discografía por Torrent<br />
-Descargar Discografia Completa Richard Clayderman Sin Registrarse<br />
-Richard Clayderman: el creador de un estilo único - Descarga su música por Torrent<br />
-Descargar Discografia Completa Richard Clayderman Sin Virus<br />
-Richard Clayderman: el músico más versátil del mundo - Descarga sus álbumes por Torrent<br />
-Descargar Discografia Completa Richard Clayderman Sin Publicidad<br />
-Richard Clayderman: el embajador de la cultura francesa - Descarga su discografía por Torrent<br />
-Descargar Discografia Completa Richard Clayderman Sin Límites<br />
-Richard Clayderman: el virtuoso del piano moderno - Descarga su música por Torrent<br />
-Descargar Discografia Completa Richard Clayderman Con Subtítulos<br />
-Richard Clayderman: el innovador de la música instrumental - Descarga sus álbumes por Torrent<br />
-Descargar Discografia Completa Richard Clayderman Con Carátulas<br />
-Richard Clayderman: el fenómeno musical del siglo XXI - Descarga su discografía por Torrent<br />
-Descargar Discografia Completa Richard Clayderman Con Extras<br />
-Richard Clayderman: el maestro de la armonía - Descarga su música por Torrent<br />
-Descargar Discografia Completa Richard Clayderman Con Bonus Tracks<br />
-Richard Clayderman: el pionero de la música new age - Descarga sus álbumes por Torrent<br />
-Descargar Discografia Completa Richard Clayderman Con Calidad Garantizada<br />
-Richard Clayderman: el inspirador de generaciones de pianistas - Descarga su discografía por Torrent<br />
-Descargar Discografia Completa Richard Clayderman Con Comentarios Del Artista<br />
-Richard Clayderman: el referente de la música romántica - Descarga su música por Torrent<br />
-Descargar Discografia Completa Richard Clayderman Con Letras De Las Canciones<br />
-Richard Clayderman: el autor de más de 1000 temas originales - Descarga sus álbumes por Torrent<br />
-Descargar Discografia Completa Richard Clayderman Con Notas Musicales<br />
-Richard Clayderman: el intérprete de los grandes clásicos - Descarga su discografía por Torrent<br />
-Descargar Discografia Completa Richard Clayderman Con Partituras Para Piano<br />
-Richard Clayderman: el colaborador de grandes artistas - Descarga su música por Torrent<br />
-Descargar Discografia Completa Richard Clayderman Con Videos Musicales</p>
-<p>The song was an instant hit, selling over 22 million copies worldwide. It launched Clayderman's international career, and he adopted his stage name after his great-grandmother's last name. Since then, he has recorded hundreds of albums with original compositions by Toussaint and de Senneville, as well as instrumental versions of popular songs, movie soundtracks, ethnic music, and classical pieces.</p>
-<h3>A French pianist with a prolific discography</h3>
-<p>Richard Clayderman has one of the most extensive discographies in the music industry. He has released more than 200 albums in different languages and formats, including CDs, LPs, cassettes, DVDs, and digital downloads. He has also collaborated with other artists and orchestras, such as James Last, Francis Goya, The Royal Philharmonic Orchestra, and The London Symphony Orchestra.</p>
-<p>Some of his most famous albums are:</p>
-<ul>
-<li>A comme amour (1978)</li>
-<li>Rêveries (1979)</li>
-<li>Amour (1980)</li>
-<li>Romantique (2012)</li>
-<li>My Classic Collection (2016)</li>
-<li>The Anniversary Collection (2018)</li>
-</ul>
-<p>His albums cover a wide range of genres and themes, such as love songs, Christmas songs, movie themes, Broadway musicals, Chinese music, Latin music, rock music, jazz music, and more. He has also recorded tribute albums to artists like ABBA, The Beatles, The Carpenters </p> 0a6ba089eb<br />
-<br />
-<br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Interlok Driver Auto-tune Software and Learn from the Antares Tech Learning Center.md
DELETED
@@ -1,136 +0,0 @@
-<br />
-<h1>Download Interlok Driver Auto-tune Software: What You Need to Know</h1>
-<p>If you are looking for a tool that can run your code faster and more efficiently, you might want to check out Interlok Driver Auto-tune Software. This software is designed to optimize the performance of your hardware and software by automatically tuning the driver settings. In this article, we will explain what Interlok Driver Auto-tune Software is, how it works, and how you can download, install, and use it for your projects. We will also answer some frequently asked questions about this software and provide some tips and tricks for troubleshooting common problems.</p>
-<h2>What is Interlok Driver Auto-tune Software?</h2>
-<p>Interlok Driver Auto-tune Software is a tool that runs third-party applications faster and more efficiently by loading and running them with optimized driver settings. The software can analyze your hardware, software, and other potential issues that could slow down your workflows and provide solutions to improve them. The software can also identify, create, and diagnose issues within the factory settings of your hardware that runs a particular software. For example, you can use Interlok Driver Auto-tune Software to check whether your hardware running a driver can function properly or not.</p>
-<h2>Download Interlok Driver Auto-tune Software</h2><br /><p><b><b>Download Zip</b> · <a href="https://byltly.com/2uKz7h">https://byltly.com/2uKz7h</a></b></p><br /><br />
-<h3>A brief introduction to the software and its features</h3>
-<p>Interlok Driver Auto-tune Software was developed by Antares Tech, a company that specializes in vocal processing and pitch correction software. The software is based on their flagship product, Auto-Tune, which is widely used by musicians and producers to correct vocal pitch and create vocal effects. Interlok Driver Auto-tune Software uses the same technology as Auto-Tune to tune your driver settings according to your needs.</p>
-<p>Some of the features of Interlok Driver Auto-tune Software are:</p>
-<ul>
-<li>It supports various types of hardware and software, such as controllers, audio interfaces, video cards, graphics cards, sound cards, etc.</li>
-<li>It automatically detects and adjusts the best driver settings for your hardware and software configuration.</li>
-<li>It improves the performance, stability, compatibility, and security of your hardware and software.</li>
-<li>It reduces latency, glitches, crashes, errors, and other problems that may affect your workflows.</li>
-<li>It provides a user-friendly interface that allows you to easily monitor and control the driver settings.</li>
-<li>It allows you to save and load different driver profiles for different applications.</li>
-<li>It updates itself automatically with the latest driver versions and patches.</li>
-</ul>
-<h3>How does it work and what are the benefits?</h3>
-the screen.</li>
-<li>Click on the "Download and Install" button and follow the instructions on the screen.</li>
-<li>You may need to restart your computer to complete the update.</li>
-</ol>
-<p>To uninstall Interlok Driver Auto-tune Software, you need to follow these steps:</p>
-<ol>
-<li>Go to the "Control Panel" on your computer and select "Programs and Features".</li>
-<li>Find and select "Interlok Driver Auto-tune Software" from the list of programs.</li>
-<li>Click on the "Uninstall" button and follow the instructions on the screen.</li>
-<li>You may need to restart your computer to complete the uninstallation.</li>
-</ol>
-<h2>How to use Interlok Driver Auto-tune Software for your projects?</h2>
-<p>Interlok Driver Auto-tune Software can help you run your code faster and more efficiently for various applications, such as audio and video editing, gaming, design, etc. Here are some examples of how you can use Interlok Driver Auto-tune Software for your projects:</p>
-<h3>Examples of applications that can benefit from the software</h3>
-<ul>
-<li>If you are working with Maya, Adobe Premiere Pro, After Effects, or other 3D animation or video editing software, you can use Interlok Driver Auto-tune Software to improve the rendering speed and quality of your projects. The software can also help you avoid crashes, errors, and glitches that may occur due to incompatible or outdated drivers.</li>
-<li>If you are playing games on your computer, you can use Interlok Driver Auto-tune Software to enhance the graphics and performance of your games. The software can also help you fix common issues such as lagging, freezing, stuttering, or low FPS that may affect your gaming experience.</li>
-<li>If you are using Photoshop, Illustrator, or other graphic design software, you can use Interlok Driver Auto-tune Software to optimize the display and processing of your images and graphics. The software can also help you resolve any problems that may arise due to driver conflicts or malfunctions.</li>
-</ul>
-<h3>How to configure and customize the software settings</h3>
-<p>Interlok Driver Auto-tune Software allows you to configure and customize the software settings according to your preferences and needs. Here are some steps to configure and customize the software settings:</p>
-<p>How to install Interlok driver for Auto-tune<br />
-Interlok driver setup guide for Auto-tune software<br />
-Where to find Interlok driver download link for Auto-tune<br />
-Interlok driver compatibility issues with Auto-tune software<br />
-Troubleshooting Interlok driver errors in Auto-tune<br />
-Best practices for using Interlok driver with Auto-tune software<br />
-Benefits of Interlok driver for Auto-tune performance and quality<br />
-Interlok driver update and maintenance for Auto-tune software<br />
-Interlok driver alternatives for Auto-tune software<br />
-Interlok driver reviews and ratings for Auto-tune software<br />
-Interlok driver features and specifications for Auto-tune software<br />
-Interlok driver license and activation for Auto-tune software<br />
-Interlok driver support and customer service for Auto-tune software<br />
-Interlok driver FAQs and tips for Auto-tune software<br />
-Interlok driver tutorials and videos for Auto-tune software<br />
-How to uninstall Interlok driver from Auto-tune software<br />
-How to fix Interlok driver not working with Auto-tune software<br />
-How to optimize Interlok driver settings for Auto-tune software<br />
-How to use Interlok driver with other audio software besides Auto-tune<br />
-How to transfer Interlok driver to a new computer with Auto-tune software<br />
-How to download Interlok driver for free with Auto-tune software<br />
-How to get Interlok driver discount or coupon code for Auto-tune software<br />
-How to buy Interlok driver online for Auto-tune software<br />
-How to contact Interlok driver developer or manufacturer for Auto-tune software<br />
-How to solve Interlok driver security or privacy issues with Auto-tune software<br />
-How to integrate Interlok driver with other plugins or tools for Auto-tune software<br />
-How to customize Interlok driver preferences or options for Auto-tune software<br />
-How to troubleshoot Interlok driver installation or download problems with Auto-tune software<br />
-How to compare Interlok driver with other drivers or software for Auto-tune<br />
-How to backup or restore Interlok driver data or files for Auto-tune software<br />
-How to upgrade or downgrade Interlok driver version for Auto-tune software<br />
-How to test or evaluate Interlok driver functionality or quality for Auto-tune software<br />
-How to access or manage Interlok driver account or profile for Auto-tune software<br />
-How to share or export Interlok driver results or outputs for Auto-tune software<br />
-How to import or load Interlok driver inputs or sources for Auto-tune software<br />
-How to enable or disable Interlok driver features or functions for Auto-tune software<br />
-How to monitor or track Interlok driver performance or usage for Auto-tune software<br />
-How to learn or master Interlok driver skills or techniques for Auto-tune software<br />
-How to improve or enhance Interlok driver effects or outcomes for Auto-tune software<br />
-How to avoid or prevent Interlok driver issues or errors for Auto-tune software<br />
-How to recover or repair Interlok driver damage or corruption for Auto-tune software<br />
-How to verify or validate Interlok driver authenticity or legitimacy for Auto-tune software<br />
-How to change or modify Interlok driver parameters or values for Auto-tune software<br />
-How to configure or adjust Interlok driver modes or levels for Auto-tune software<br />
-How to add or remove Interlok driver components or elements for Auto-tune software<br />
-How to create or generate Interlok driver reports or logs for Auto-tune software<br />
-How to edit or modify Interlok driver content or format for Auto-tune software<br />
-How to apply or use Interlok driver presets or templates for Auto-tune software<br />
-How to convert or transform Interlok driver formats or types for Auto-tune software<br />
-How to sync or connect Interlok driver devices or systems for Auto-tune software</p>
-<ol>
-<li>Launch Interlok Driver Auto-tune Software from your desktop or start menu.</li>
-<li>Click on the "Settings" menu and select "Preferences".</li>
-<li>You will see a window with various tabs and options that you can adjust.</li>
-<li>You can change the language, theme, update frequency, notification settings, etc. of the software.</li>
-<li>You can also create and manage different driver profiles for different applications. You can name, save, load, edit, or delete your driver profiles as you wish.</li>
-<li>You can also enable or disable certain driver features or settings that may affect the performance or functionality of your applications. For example, you can enable or disable hardware acceleration, anti-aliasing, vertical sync, etc.</li>
-<li>Once you are done with configuring and customizing the software settings, click on the "OK" button to save your changes.</li>
-</ol>
-<h3>How to run and analyze your code with the software</h3>
-<p>To run and analyze your code with Interlok Driver Auto-tune Software, you need to follow these steps:</p>
-<ol>
-<li>Launch Interlok Driver Auto-tune Software from your desktop or start menu.</li>
-<li>Click on the "File" menu and select "Open".</li>
-, you can select the Maya.exe file from your computer.</li>
-<li>Click on the "Open" button and wait for the software to load and run the application.</li>
-<li>You will see a window with the application running and a toolbar with various options and information.</li>
-<li>You can use the toolbar to monitor and control the driver settings and the performance of the application. For example, you can see the CPU usage, GPU usage, memory usage, FPS, etc. of the application.</li>
-<li>You can also use the toolbar to switch between different driver profiles, enable or disable certain driver features or settings, or run a stress test or a benchmark test on the application.</li>
-<li>You can also use the toolbar to take screenshots, record videos, or save logs of the application.</li>
-<li>Once you are done with running and analyzing your code with Interlok Driver Auto-tune Software, you can close the window and exit the software.</li>
-</ol>
-<h2>Conclusion</h2>
-<p>Interlok Driver Auto-tune Software is a powerful and useful tool that can help you run your code faster and more efficiently for various applications. It can also help you improve the performance, stability, compatibility, and security of your hardware and software. It is easy to download, install, and use Interlok Driver Auto-tune Software for your projects. You can also configure and customize the software settings according to your preferences and needs. You can also run and analyze your code with Interlok Driver Auto-tune Software and get valuable insights and feedback on your workflows. If you are looking for a tool that can optimize your driver settings and enhance your productivity and creativity, you should definitely try Interlok Driver Auto-tune Software.</p>
-<h2>FAQs</h2>
-<h4>What is the difference between Interlok Driver Auto-tune Software and other similar tools?</h4>
-<p>Interlok Driver Auto-tune Software is different from other similar tools in several ways. First, Interlok Driver Auto-tune Software is based on the technology of Auto-Tune, which is a renowned vocal processing and pitch correction software. This means that Interlok Driver Auto-tune Software can tune your driver settings with high accuracy and quality. Second, Interlok Driver Auto-tune Software supports various types of hardware and software, unlike some tools that only support specific devices or applications. Third, Interlok Driver Auto-tune Software provides a user-friendly interface that allows you to easily monitor and control the driver settings and the performance of your applications.</p>
-<h4>Is Interlok Driver Auto-tune Software safe and legal to use?</h4>
-<p>Yes, Interlok Driver Auto-tune Software is safe and legal to use. Interlok Driver Auto-tune Software is developed by Antares Tech, a reputable company that has been in the industry for over 20 years. Interlok Driver Auto-tune Software does not contain any viruses, malware, spyware, or other harmful components that may damage your computer or compromise your privacy. Interlok Driver Auto-tune Software is also licensed by PACE Anti-Piracy, which is a leading provider of anti-piracy solutions for software developers. This means that Interlok Driver Auto-tune Software is authorized and protected by PACE Anti-Piracy and does not violate any intellectual property rights or laws.</p>
-<h4>How much does Interlok Driver Auto-tune Software cost and where can I buy it?</h4>
-<p>Interlok Driver Auto-tune Software costs $99.00 USD for a single-user license. You can buy it from the official website of Antares Tech or from authorized resellers or distributors. You can also get a free trial version of Interlok Driver Auto-tune Software from the official website of Antares Tech. The free trial version allows you to use Interlok Driver Auto-tune Software for 14 days with full functionality.</p>
-<h4>How can I contact the support team if I have any questions or issues?</h4>
-<p>If you have any questions or issues regarding Interlok Driver Auto-tune Software, you can contact the support team of Antares Tech by filling out a support ticket on their website or by sending an email to [email protected]. You can also visit their learning center or community forum to find helpful resources and tips on how to use Interlok Driver Auto-tune Software.</p>
-<h4>What are some alternative or complementary software that I can use with Interlok Driver Auto-tune Software?</h4>
-<p>Some alternative or complementary software that you can use with Interlok Driver Auto-tune Software are:</p>
-<ul>
-<li>Auto-Tune: This is the original vocal processing and pitch correction software developed by Antares Tech. You can use it to correct vocal pitch and create vocal effects for your music projects.</li>
-<li>Harmony Engine: This is a vocal modeling harmony generator software developed by Antares Tech. You can use it to create realistic vocal harmonies for your music projects.</li>
-<li>Mic Mod: This is a microphone modeling software developed by Antares Tech. You can use it to emulate the sound of various classic microphones for your audio projects.</li>
-<li>CPU-Z: This is a system information tool that shows you detailed information about your CPU, motherboard, memory, graphics card, etc.</li>
-, temperature, etc.</li>
-<li>CCleaner: This is a system optimization and cleaning tool that removes unused files, registry entries, browser history, cookies, etc. from your computer and makes it faster and more secure.</li>
-</ul>
-</p> 0a6ba089eb<br />
-<br />
-<br />
spaces/1gistliPinn/ChatGPT4/Examples/Canalis Hydra Urbano.md
DELETED
@@ -1,9 +0,0 @@
-<br />
-<p>a section of the drawing can also be saved (the only type of section that can be saved in urbano) in the drawing by pressing the menu item “save section”. the saved section is automatically linked with a pipe in the drawing. it is not possible to save the section by itself.</p>
-<p>the section of the drawing can also be saved (the only type of section that can be saved in urbano) in the drawing by pressing the menu item “save section”. the saved section is automatically linked with a pipe in the drawing. it is not possible to save the section by itself.</p>
-<h2>Canalis hydra urbano</h2><br /><p><b><b>DOWNLOAD</b> ✶✶✶ <a href="https://imgfil.com/2uy08U">https://imgfil.com/2uy08U</a></b></p><br /><br />
-<p>the drawing 00 tutorial initial.dwg should be open in civil 3d with urbano 7 profile (after installation you should have appropriate icon on desktop). the system should be set to show multiple pipes in the drawing (show_multiple_pipes option in urbano configurator). define the first pipe, go to the drawing and save the section of the drawing by pressing the menu item “save section”. it will be saved to the last pipe in the drawing. go to the drawing and create the next pipe, show it in the drawing and save the section of the drawing by pressing the menu item “save section”. it will be saved to the next pipe in the drawing. you can repeat this process until you have defined all pipes.</p>
-<p>in urbano, it is possible to view, define and edit the properties of pipes and sections. in the drawing, you can select pipes and sections by clicking on them. it is possible to change the pipe and section parameters by clicking on them. to edit, it is enough to select them and press the menu item “edit”.</p>
-<p>urbano hydra is a powerful tool for epanet hydraulic calculation. this tool generates epanet hydraulic results in seconds without having epanet installed on your computer. epanet results can be reviewed in urbano hydra, and urbano hydra is fully integrated in the autocad environment. urbano hydra is free for non-commercial users.</p> 899543212b<br />
-<br />
-<br />
spaces/1gistliPinn/ChatGPT4/Examples/Efi Colorproof Xf 4.5 Download Crack Software.md
DELETED
@@ -1,110 +0,0 @@
-
-<h1>Efi Colorproof Xf 4.5 Download Crack Software: A Complete Review</h1>
-
-<p>If you are looking for a professional-level RIP software that can handle color management and proofing for your wide-format or superwide-format printer, you might want to consider Efi Colorproof Xf 4.5. This software is designed to help you produce accurate and consistent color across different devices and media types, as well as comply with industry standards such as ISO 12647-7 / 8 and G7. But what are the features and benefits of Efi Colorproof Xf 4.5, and how can you download it for free? In this article, we will give you a comprehensive review of Efi Colorproof Xf 4.5 download crack software, and show you how to get it without paying a dime.</p>
-<h2>efi colorproof xf 4.5 download crack software</h2><br /><p><b><b>Download File</b> ••• <a href="https://imgfil.com/2uy1If">https://imgfil.com/2uy1If</a></b></p><br /><br />
-
-<h2>What is Efi Colorproof Xf 4.5?</h2>
-
-<p>Efi Colorproof Xf 4.5 is a software solution developed by EFI, a leading company in the field of digital printing and imaging. It is part of the EFI Fiery XF family of products, which also includes Fiery XF Server and Fiery XF Client. Efi Colorproof Xf 4.5 is a RIP software that stands for raster image processor, which means it converts digital files into printable formats that can be understood by printers. Efi Colorproof Xf 4.5 is specifically designed for wide-format and superwide-format printers, such as inkjet, laser, and LED printers.</p>
-
-<p>Efi Colorproof Xf 4.5 enables you to manage the color supply chain from design to production to output, by providing you with all the tools to produce the best color possible. It allows you to create profiles for different devices and media types, calibrate your printer and monitor, optimize your output quality and consistency, and perform validation printing and contract proofing. It also supports a wide range of file formats, such as PDF, TIFF, JPEG, PSD, EPS, DCS2, PSB, and more.</p>
-
-<h2>What are the benefits of Efi Colorproof Xf 4.5?</h2>
-
-<p>Efi Colorproof Xf 4.5 offers many benefits for users who want to achieve high-quality color printing and proofing. Some of the main benefits are:</p>
-
-<ul>
-<li>It helps you save time and money by reducing errors and reprints.</li>
-<li>It improves your customer satisfaction and loyalty by delivering accurate and consistent color.</li>
-<li>It enhances your productivity and efficiency by streamlining your workflow and automating tasks.</li>
-<li>It increases your competitive edge by enabling you to meet industry standards and customer expectations.</li>
-<li>It expands your business opportunities by allowing you to print on a variety of media types and sizes.</li>
-</ul>
-
-<h2>How to download Efi Colorproof Xf 4.5 crack software for free?</h2>
-
-<p>If you are interested in trying out Efi Colorproof Xf 4.5, you might be wondering how to download it for free. The official website of EFI offers a free trial version of Efi Colorproof Xf 4.5 that you can download and use for 30 days. However, if you want to use it beyond the trial period, you will need to purchase a license key that can cost thousands of dollars.</p>
-
-<p>Fortunately, there is a way to download Efi Colorproof Xf 4.5 crack software for free, without paying anything or risking any viruses or malware. All you need to do is follow these simple steps:</p>
-<p></p>
-
-<ol>
-<li>Go to <a href="https://zedload.com/efi-colorproof-xf-4.5-crack-serial-download.html">this link</a> and click on the "Download" button.</li>
-<li>Create a free account on Zedload.com by entering your email address and choosing a password.</li>
-<li>Log in to your account and click on the "Download" button again.</li>
-<li>Select one of the available download servers and wait for the file to be downloaded.</li>
-<li>Extract the file using WinRAR or any other extraction tool.</li>
-<li>Run the setup file and follow the instructions to install Efi Colorproof Xf 4.5 on your computer.</li>
-<li>Copy the crack file from the crack folder and paste it into the installation directory of Efi Colorproof Xf 4.5.</li>
-<li>Launch Efi Colorproof Xf 4.5 and enjoy using it for free!</li>
-</ol>
-
-<h2>Conclusion</h2>
-
-<p>Efi Colorproof Xf 4.5 is a powerful and versatile software solution for professional color management and proofing for wide-format and superwide-format printers. It offers many features and benefits that can help you improve your printing quality and consistency, as well as comply with industry standards and customer expectations. If you want to try out Efi Colorproof Xf 4.5 without spending any money, you can download it for free from Zedload.com using the crack software method described above. However, we recommend that you use this method only for testing purposes, and not for commercial use. If you like Efi Colorproof Xf 4.5 and want to support its development, you should buy a license key from EFI or its authorized resellers.</p>
-<p>Some of the drawbacks and risks of using Efi Colorproof Xf 4.5 crack software are:</p>
-
-<ul>
-<li>It is illegal and unethical to use cracked software, as it violates the intellectual property rights of the software developer and the license agreement that you agreed to when you downloaded the software.</li>
-<li>It is unsafe and unreliable to use cracked software, as it may contain viruses, malware, spyware, or other harmful components that can damage your computer or compromise your data and privacy.</li>
-<li>It is unsupported and outdated to use cracked software, as it may not work properly with your printer or operating system, or have bugs or errors that can affect your output quality and performance. You also cannot access any updates, patches, or technical support from the software developer or its authorized resellers.</li>
-<li>It is unprofessional and irresponsible to use cracked software, as it may result in poor color accuracy and consistency, or fail to meet industry standards and customer expectations. You also risk losing your reputation and credibility as a print professional, or facing legal consequences if you are caught using cracked software.</li>
-</ul>
-
-<h2>What are the alternatives to Efi Colorproof Xf 4.5 crack software?</h2>
-
-<p>If you want to avoid the drawbacks and risks of using Efi Colorproof Xf 4.5 crack software, you have some alternatives that you can consider. Some of the alternatives are:</p>
-
-<ul>
-<li>Use the free trial version of Efi Colorproof Xf 4.5 from EFI's official website. This will allow you to test the software for 30 days and see if it meets your needs and expectations. However, you will need to purchase a license key after the trial period expires if you want to continue using the software.</li>
-<li>Use a different RIP software that is compatible with your printer and media type. There are many other RIP software solutions available in the market that can offer similar or better features and benefits than Efi Colorproof Xf 4.5. You can compare different options and choose the one that suits your budget and requirements.</li>
-<li>Use a cloud-based RIP service that does not require any installation or maintenance on your computer. There are some online platforms that can provide you with RIP services for a fee or subscription. You can upload your files to their servers and they will process them and send them back to you for printing or proofing.</li>
-</ul>
-
-<h2>Conclusion</h2>
-
-<p>Efi Colorproof Xf 4.5 is a powerful and versatile RIP software solution for professional color management and proofing for wide-format and superwide-format printers. It offers many features and benefits that can help you improve your printing quality and consistency, as well as comply with industry standards and customer expectations. However, if you want to download Efi Colorproof Xf 4.5 crack software for free, you should be aware of the drawbacks and risks that come with it, such as legal, ethical, security, reliability, support, and quality issues. You should also consider some alternatives that can provide you with similar or better results without compromising your integrity or professionalism.</p>
-<h2>What are the reviews of Efi Colorproof Xf 4.5?</h2>
-
-<p>Efi Colorproof Xf 4.5 has received many positive reviews from users who have used it for their color printing and proofing projects. Some of the common praises that users have given to Efi Colorproof Xf 4.5 are:</p>
-
-<ul>
-<li>It delivers excellent color accuracy and consistency across different devices and media types.</li>
-<li>It supports a wide range of file formats, printers, and cutters.</li>
-<li>It has a user-friendly interface and a powerful workflow management system.</li>
-<li>It has advanced color tools and options that can help users optimize their output quality and performance.</li>
-<li>It has a reliable technical support and customer service team that can help users with any issues or questions.</li>
-</ul>
-
-<p>However, Efi Colorproof Xf 4.5 also has some negative reviews from users who have encountered some problems or limitations with the software. Some of the common complaints that users have given to Efi Colorproof Xf 4.5 are:</p>
-
-<ul>
-<li>It is expensive and requires a license key to use it beyond the trial period.</li>
-<li>It is complex and requires a steep learning curve to master it.</li>
-<li>It is resource-intensive and requires a high-performance computer to run it smoothly.</li>
-<li>It is not compatible with some printers or operating systems.</li>
-<li>It has some bugs or errors that can affect its functionality or stability.</li>
-</ul>
-
-<p>Overall, Efi Colorproof Xf 4.5 is a highly rated software solution that can meet the needs and expectations of most print professionals who want to achieve high-quality color printing and proofing. However, it also has some drawbacks and challenges that users should be aware of before using it.</p>
-<h2>What are some tips and tricks for using Efi Colorproof Xf 4.5?</h2>
-
-<p>Efi Colorproof Xf 4.5 is a powerful and versatile software solution that can help you achieve high-quality color printing and proofing. However, to get the most out of it, you need to know some tips and tricks that can help you optimize your workflow and output. Here are some of them:</p>
-
-<ul>
-<li>Use the Fiery Command WorkStation to manage your jobs and devices. Fiery Command WorkStation is a centralized interface that allows you to access all the features and functions of Efi Colorproof Xf 4.5 from one place. You can also customize your workspace, create presets, monitor your job status and progress, and troubleshoot any issues.</li>
-<li>Use the Fiery JobFlow to automate your tasks and streamline your workflow. Fiery JobFlow is a web-based application that allows you to create workflows that can perform multiple tasks on your jobs, such as preflighting, editing, color management, printing, and proofing. You can also save and reuse your workflows, or share them with other users.</li>
-<li>Use the Fiery Impress to enhance your output quality and appearance. Fiery Impress is a feature that allows you to apply various effects and enhancements to your output, such as spot varnish, embossing, metallics, white ink, and more. You can also preview and adjust your effects before printing or proofing.</li>
-<li>Use the Fiery Color Verifier to measure and verify your color accuracy and consistency. Fiery Color Verifier is a feature that allows you to use a spectrophotometer to measure the color of your output and compare it with a reference standard or profile. You can also generate reports and certificates that show your color compliance.</li>
-<li>Use the Fiery Color Manager to create and manage your custom color profiles. Fiery Color Manager is a feature that allows you to use a spectrophotometer to create color profiles for your devices and media types. You can also edit and fine-tune your profiles, or import and export them from other sources.</li>
-</ul>
-
-<h2>Conclusion</h2>
-
-<p>Efi Colorproof Xf 4.5 is a software solution that can help you manage color supply chain from design to production to output by providing you with all the tools to produce the best color possible. It allows you to produce ISO 12647-7 / 8 compliant validation printing and contract proofing and G7 compliant proofs on inkjet, laser and LED printers. However, if you want to download Efi Colorproof Xf 4.5 crack software for free, you should be aware of the drawbacks and risks that come with it, such as legal, ethical, security, reliability, support, and quality issues. You should also consider some alternatives that can provide you with similar or better results without compromising your integrity or professionalism. Finally, you should also know some tips and tricks that can help you optimize your workflow and output using Efi Colorproof Xf 4.5.</p>
-<h2>Conclusion</h2>
-
-<p>Efi Colorproof Xf 4.5 is a software solution that can help you manage color supply chain from design to production to output by providing you with all the tools to produce the best color possible. It allows you to produce ISO 12647-7 / 8 compliant validation printing and contract proofing and G7 compliant proofs on inkjet, laser and LED printers. However, if you want to download Efi Colorproof Xf 4.5 crack software for free, you should be aware of the drawbacks and risks that come with it, such as legal, ethical, security, reliability, support, and quality issues. You should also consider some alternatives that can provide you with similar or better results without compromising your integrity or professionalism. Finally, you should also know some tips and tricks that can help you optimize your workflow and output using Efi Colorproof Xf 4.5.</p> 3cee63e6c2<br />
-<br />
-<br />
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Bus Simulator Ultimate Mod APK and Enjoy All Bus Unlocked.md
DELETED
@@ -1,113 +0,0 @@
|
|
1 |
-
|
2 |
-
<h1>Bus Simulator Ultimate Mod APK Unlocked All Bus: A Review</h1>
|
3 |
-
<p>If you are a fan of simulation games, especially bus driving games, you might have heard of Bus Simulator Ultimate. This is one of the most popular and realistic bus simulator games on the market, with over 100 million downloads on Google Play Store. In this game, you can experience what it is like to be a bus driver, from driving on different roads and cities, to managing your own bus company and hiring other drivers. You can also customize your buses, create your own routes, and compete with other players online. But what if you want to enjoy the game without any limitations or restrictions? That's where Bus Simulator Ultimate Mod APK comes in handy. This is a modified version of the original game that gives you access to unlimited money, removed ads, and unlocked all buses and skins. In this article, we will review Bus Simulator Ultimate Mod APK and show you how to download and install it on your device.</p>
|
4 |
-
<h2>bus simulator ultimate mod apk unlocked all bus</h2><br /><p><b><b>Download Zip</b> ––– <a href="https://urlin.us/2uSU5n">https://urlin.us/2uSU5n</a></b></p><br /><br />
|
5 |
-
<h2>What is Bus Simulator Ultimate?</h2>
|
6 |
-
<p>Bus Simulator Ultimate is a 3D bus simulation game developed by Zuuks Games, a Turkish game studio that specializes in simulation and racing games. The game was released in 2019 and has since received many updates and improvements. The game is available for both Android and iOS devices, as well as Windows PC.</p>
|
7 |
-
<p>In Bus Simulator Ultimate, you can choose from over 30 different buses, each with their own features and specifications. You can drive on realistic roads and environments, such as Germany, Turkey, Italy, France, Spain, USA, Brazil, Azerbaijan, Russia, China, Japan, Canada, Netherlands, and more. You can also create your own routes and share them with other players. You can manage your own bus company and hire other drivers to work for you. You can earn money by completing missions and challenges, as well as by transporting passengers safely and comfortably. You can use the money to buy new buses, upgrade your existing ones, or expand your business.</p>
|
8 |
-
<p>The game also features online multiplayer mode, where you can join or create a bus convoy with other players from around the world. You can chat with them, cooperate with them, or compete with them on the leaderboards. You can also join events and tournaments to win prizes and rewards.</p>
|
9 |
-
<p>bus simulator ultimate mod apk unlimited money and gold<br />
|
10 |
-
bus simulator ultimate mod apk free download latest version<br />
|
11 |
-
bus simulator ultimate mod apk all buses unlocked and upgraded<br />
|
12 |
-
bus simulator ultimate mod apk offline play without internet<br />
|
13 |
-
bus simulator ultimate mod apk hack cheats no root<br />
|
14 |
-
bus simulator ultimate mod apk realistic graphics and sounds<br />
|
15 |
-
bus simulator ultimate mod apk multiplayer mode with friends<br />
|
16 |
-
bus simulator ultimate mod apk new maps and routes added<br />
|
17 |
-
bus simulator ultimate mod apk premium features unlocked<br />
|
18 |
-
bus simulator ultimate mod apk no ads and in-app purchases<br />
|
19 |
-
bus simulator ultimate mod apk android 11 support<br />
|
20 |
-
bus simulator ultimate mod apk obb data file download<br />
|
21 |
-
bus simulator ultimate mod apk high fps and smooth gameplay<br />
|
22 |
-
bus simulator ultimate mod apk custom skins and liveries<br />
|
23 |
-
bus simulator ultimate mod apk easy controls and settings<br />
|
24 |
-
bus simulator ultimate mod apk different weather and traffic conditions<br />
|
25 |
-
bus simulator ultimate mod apk fun missions and challenges<br />
|
26 |
-
bus simulator ultimate mod apk best simulation game of 2023<br />
|
27 |
-
bus simulator ultimate mod apk create your own company and logo<br />
|
28 |
-
bus simulator ultimate mod apk earn rewards and bonuses<br />
|
29 |
-
bus simulator ultimate mod apk drive various types of buses<br />
|
30 |
-
bus simulator ultimate mod apk support for gamepad and steering wheel<br />
|
31 |
-
bus simulator ultimate mod apk realistic physics and damage system<br />
|
32 |
-
bus simulator ultimate mod apk online leaderboards and achievements<br />
|
33 |
-
bus simulator ultimate mod apk original soundtrack and radio stations<br />
|
34 |
-
bus simulator ultimate mod apk low mb size and fast installation<br />
|
35 |
-
bus simulator ultimate mod apk compatible with all devices and android versions<br />
|
36 |
-
bus simulator ultimate mod apk unlimited fuel and maintenance<br />
|
37 |
-
bus simulator ultimate mod apk travel across different countries and cities<br />
|
38 |
-
bus simulator ultimate mod apk dynamic day and night cycle<br />
|
39 |
-
bus simulator ultimate mod apk passenger mode and camera views<br />
|
40 |
-
bus simulator ultimate mod apk improve your driving skills and ratings<br />
|
41 |
-
bus simulator ultimate mod apk regular updates and bug fixes<br />
|
42 |
-
bus simulator ultimate mod apk user-friendly interface and design<br />
|
43 |
-
bus simulator ultimate mod apk full unlocked everything no verification<br />
|
44 |
-
bus simulator ultimate mod apk unlimited gems and coins generator<br />
|
45 |
-
bus simulator ultimate mod apk download from google play store or apkpure.com[^1^]<br />
|
46 |
-
bus simulator ultimate mod apk enjoy the most realistic and immersive simulation experience ever</p>
|
47 |
-
<h3>Features of Bus Simulator Ultimate</h3>
|
48 |
-
<p>Bus Simulator Ultimate has many features that make it one of the best bus simulator games on the market. Here are some of them:</p>
|
49 |
-
<h4>Realistic bus driving experience</h4>
|
50 |
-
<p>The game offers a realistic bus driving experience with high-quality graphics, sound effects, and physics. You can feel the weight and size of the bus as you drive on different roads and terrains. You can also interact with various elements inside and outside the bus, such as the steering wheel, pedals, mirrors, doors, windows, lights, indicators, wipers, horn, radio, GPS, etc. You can also adjust the camera angle to suit your preference.</p>
|
51 |
-
<h4>Multiple game modes and challenges</h4>
|
52 |
-
<p>The game has multiple game modes and challenges for you to enjoy. You can play in free mode, where you can drive anywhere you want without any time limit or objective. You can also play in career mode, where you have to complete missions and tasks to earn money and reputation. You can also play in multiplayer mode, where you can join or create a bus convoy with other players online. You can also participate in events and tournaments to win prizes and rewards.</p>
|
53 |
-
<h4>Customizable buses and routes</h4>
|
54 |
-
<p>The game allows you to customize your buses and routes according to your liking. You can choose from over 30 different buses, each with their own features and specifications. You can also change the color, skin, license plate, logo, interior, and accessories of your buses. You can also create your own routes and share them with other players. You can select the starting and ending points, the stops, the traffic, the weather, the time, and the difficulty of your routes. You can also edit the existing routes and make them more challenging or fun.</p>
|
55 |
-
<h4>Online multiplayer and leaderboards</h4>
|
56 |
-
<p>The game also features online multiplayer mode, where you can join or create a bus convoy with other players from around the world. You can chat with them, cooperate with them, or compete with them on the leaderboards. You can also join events and tournaments to win prizes and rewards. You can also see the statistics and rankings of other players and compare your performance with them.</p>
|
57 |
-
<h3>How to download and install Bus Simulator Ultimate Mod APK?</h3>
<p>If you want to download and install Bus Simulator Ultimate Mod APK on your device, you need to follow some simple steps. Here they are:</p>
<h4>Requirements and compatibility</h4>
<p>Before you download and install Bus Simulator Ultimate Mod APK, you need to make sure that your device meets the minimum requirements and is compatible with the game. The requirements are as follows:</p>
<ul>
<li>Android version: 5.0 or higher</li>
<li>RAM: 2 GB or more</li>
<li>Storage: 1 GB or more</li>
<li>Internet connection: required for online features</li>
</ul>
<p>The game is compatible with most Android devices, but some models may not work properly or may experience some glitches. If you encounter any problems, you can contact the developer or try a different device.</p>
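<p>If you have a computer handy, the Android version check above can also be done over a USB connection. The snippet below is a minimal sketch, assuming the Android platform tools (adb) are installed and USB debugging is enabled on the device; it is an illustration, not part of the game's official tooling.</p>
<pre><code># Query a connected device's Android version over adb (assumed to be on PATH).
import subprocess

result = subprocess.run(
    ["adb", "shell", "getprop", "ro.build.version.release"],
    capture_output=True, text=True, check=True)
version = result.stdout.strip()
print("Android version:", version)  # the game needs 5.0 or higher
</code></pre>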
<h4>Steps to download and install</h4>
<p>After you have checked the requirements and compatibility, you can proceed to download and install Bus Simulator Ultimate Mod APK on your device. The steps are as follows (a scripted alternative is sketched after the list):</p>
<ol>
<li>Go to a trusted website that provides Bus Simulator Ultimate Mod APK download link, such as [APKPure] or [APKDone].</li>
<li>Click on the download button and wait for the file to be downloaded on your device.</li>
<li>Once the file is downloaded, go to your device's settings and enable the installation of apps from unknown sources. This will allow you to install Bus Simulator Ultimate Mod APK without any issues.</li>
<li>Locate the downloaded file on your device and tap on it to start the installation process.</li>
<li>Follow the instructions on the screen and wait for the installation to be completed.</li>
<li>Launch the game and enjoy Bus Simulator Ultimate Mod APK features.</li>
</ol>
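<p>If you would rather drive these steps from a computer, the install can be scripted. This is a minimal, hedged sketch: it assumes adb is installed and USB debugging is enabled, and the file name is a placeholder for whatever you downloaded, not an official artifact.</p>
<pre><code># Sideload a downloaded APK onto a connected device via adb.
import subprocess

apk_path = "bus-simulator-ultimate-mod.apk"  # placeholder file name

# "adb install -r" reinstalls over an existing copy, keeping its data.
subprocess.run(["adb", "install", "-r", apk_path], check=True)
</code></pre>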
<h4>Permissions and security</h4>
<p>When you install Bus Simulator Ultimate Mod APK on your device, you may need to grant some permissions to the app. These permissions are necessary for the app to function properly and access some features of your device. The permissions are as follows:</p>
<ul>
<li>Access to photos, media, and files: This permission allows the app to read and write data on your device's storage, such as saving your game progress, downloading additional files, etc.</li>
<li>Access to microphone: This permission allows the app to record audio from your device's microphone, such as using voice chat in multiplayer mode, etc.</li>
<li>Access to location: This permission allows the app to access your device's location, such as showing you relevant ads based on your location, etc.</li>
</ul>
<p>You can revoke these permissions at any time by going to your device's settings and selecting the app. However, this may affect some features of the game and cause some errors.</p>
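<p>Runtime permissions can also be revoked from a computer with the Android package manager. A small sketch, assuming adb access; the package name here is a guess used purely for illustration, so substitute the one shown in your device's app info screen.</p>
<pre><code># Revoke a single runtime permission (Android 6.0+) without uninstalling the app.
import subprocess

package = "com.zuuks.bus.simulator.ultimate"  # assumed package name, verify on your device

subprocess.run(
    ["adb", "shell", "pm", "revoke", package, "android.permission.RECORD_AUDIO"],
    check=True)
</code></pre>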
<p>As for security, you don't have to worry about Bus Simulator Ultimate Mod APK being harmful or malicious. The app is safe and virus-free, as long as you download it from a trusted website. However, you should always be careful when downloading any modded apps from unknown sources, as they may contain malware or spyware that can harm your device or steal your personal information. You should also avoid using Bus Simulator Ultimate Mod APK on public or unsecured networks, as they may expose your data to hackers or cybercriminals.</p>
<h3>What are the benefits of Bus Simulator Ultimate Mod APK?</h3>
<p>Bus Simulator Ultimate Mod APK has many benefits that make it better than the original game. Here are some of them:</p>
<h4>Unlimited money and resources</h4>
<p>The most obvious benefit of Bus Simulator Ultimate Mod APK is that it gives you unlimited money and resources in the game. You don't have to worry about running out of money or resources when buying new buses, upgrading your existing ones, expanding your business, etc. You can also use the money and resources to unlock premium features that are otherwise unavailable in the original game.</p>
<h4>Removed ads and pop-ups</h4>
<p>Another benefit of Bus Simulator Ultimate Mod APK is that it removes all the annoying ads and pop-ups that interrupt your gameplay and ruin your immersion. You don't have to watch any ads to get extra rewards or bonuses, or to access some features of the game. You can also avoid any unwanted redirects or downloads that may harm your device or waste your data. You can enjoy the game without any distractions or interruptions.</p>
<h4>Unlocked all buses and skins</h4>
<p>Last but not least, Bus Simulator Ultimate Mod APK unlocks all the buses and skins in the game. You don't have to spend any money or resources to buy new buses or upgrade your existing ones. You can also change the appearance of your buses with different skins, colors, logos, etc. You can choose from over 30 different buses, each with their own features and specifications. You can also access some exclusive buses and skins that are only available in the modded version of the game.</p>
<h3>Conclusion</h3>
<p>Bus Simulator Ultimate is one of the best bus simulator games on the market, with realistic graphics, sound effects, physics, and gameplay. You can drive on different roads and environments, customize your buses and routes, manage your own bus company, and compete with other players online. However, if you want to enjoy the game without any limitations or restrictions, you should try Bus Simulator Ultimate Mod APK. This is a modified version of the original game that gives you unlimited money, removed ads, and unlocked all buses and skins. You can download and install Bus Simulator Ultimate Mod APK on your device by following some simple steps. However, you should always be careful when downloading any modded apps from unknown sources, as they may contain malware or spyware that can harm your device or steal your personal information. You should also avoid using Bus Simulator Ultimate Mod APK on public or unsecured networks, as they may expose your data to hackers or cybercriminals.</p>
<p>We hope this article has helped you learn more about Bus Simulator Ultimate Mod APK and how to download and install it on your device. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!</p>
<h3>FAQs</h3>
<p>Here are some frequently asked questions about Bus Simulator Ultimate Mod APK:</p>
<ul>
<li><b>Is Bus Simulator Ultimate Mod APK safe?</b></li>
<p>Yes, Bus Simulator Ultimate Mod APK is safe and virus-free, as long as you download it from a trusted website. However, you should always be careful when downloading any modded apps from unknown sources, as they may contain malware or spyware that can harm your device or steal your personal information. You should also avoid using Bus Simulator Ultimate Mod APK on public or unsecured networks, as they may expose your data to hackers or cybercriminals.</p>
<li><b>Is Bus Simulator Ultimate Mod APK legal?</b></li>
<p>No, Bus Simulator Ultimate Mod APK is not legal, as it violates the terms and conditions of the original game. By using Bus Simulator Ultimate Mod APK, you are bypassing the security measures and monetization methods of the original game. This may result in legal actions or penalties from the developer or the authorities. Therefore, we do not recommend using Bus Simulator Ultimate Mod APK for any purposes.</p>
<li><b>Will Bus Simulator Ultimate Mod APK work on my device?</b></li>
<p>Bus Simulator Ultimate Mod APK will work on most Android devices that meet the minimum requirements and are compatible with the game. However, some models may not work properly or may experience some glitches. If you encounter any problems, you can contact the developer or try a different device.</p>
<li><b>How can I update Bus Simulator Ultimate Mod APK?</b></li>
<p>To update Bus Simulator Ultimate Mod APK, you need to download and install the latest version of the modded app from a trusted website. You should also uninstall the previous version of the modded app before installing the new one. However, you should be aware that updating Bus Simulator Ultimate Mod APK may cause some errors or bugs in the game.</p>
<li><b>Can I play Bus Simulator Ultimate Mod APK offline?</b></li>
<p>Yes, you can play Bus Simulator Ultimate Mod APK offline without any internet connection. However, you will not be able to access some features of the game that require an online connection, such as multiplayer mode, events, tournaments, leaderboards, etc.</p>
</ul>
spaces/1phancelerku/anime-remove-background/Dislyte APK OBB Tips and Tricks to Master the Deep Strategic Gameplay.md
DELETED
@@ -1,164 +0,0 @@

<h1>Dislyte APK OBB: How to Download and Install the Stylish Urban Mythological Game</h1>
<p>If you are looking for a new and exciting game to play on your Android device, you might want to check out Dislyte. Dislyte is a visually stunning story of heroes set in the near future with a stylish, urban mythological theme. In this article, we will tell you what Dislyte is, why you need Dislyte APK OBB, and how to download and install it on your device.</p>
<h2>dislyte apk obb</h2><br /><p><b><b>Download File</b> ⚹ <a href="https://jinyurl.com/2uNUFD">https://jinyurl.com/2uNUFD</a></b></p><br /><br />
<h2>What is Dislyte?</h2>
<p>Dislyte is a game developed by Lilith Games, the creators of popular titles like AFK Arena and Rise of Kingdoms. Dislyte is a role-playing game (RPG) that combines action, strategy, and adventure elements. Here are some of the features that make Dislyte stand out:</p>
<h3>A brief introduction to the game and its features</h3>
<ul>
<li>Dislyte has an immersive storyline that takes you to a world where humans, gods, and demons coexist. You will encounter various characters, factions, and events that will shape your destiny.</li>
<li>Dislyte has stunning graphics and animations that bring the urban mythological theme to life. You will be amazed by the detailed environments, realistic lighting effects, and dynamic weather changes.</li>
<li>Dislyte has a smooth gameplay experience that offers intuitive controls, fast-paced combat, and diverse skills. You can customize your squad, equip them with powerful weapons and gear, and unleash their unique abilities.</li>
</ul>
<h3>The main characters and their abilities</h3>
<p>Dislyte has a rich cast of characters that you can recruit and upgrade. Each character has their own personality, backstory, and role in the game. Here are some of the main characters and their abilities:</p>
<table>
<tr>
<th>Name</th>
<th>Role</th>
<th>Ability</th>
</tr>
<tr>
<td>Alex</td>
<td>Leader</td>
<td>A charismatic leader who can inspire his allies and boost their morale.</td>
</tr>
<tr>
<td>Luna</td>
<td>Sniper</td>
<td>A sharpshooter who can deal massive damage from a distance and pierce through enemies' defenses.</td>
</tr>
<tr>
<td>Kira</td>
<td>Hacker</td>
<td>A genius hacker who can hack into enemies' systems and disrupt their operations.</td>
</tr>
<tr>
<td>Ruby</td>
<td>Brawler</td>
<td>A fierce brawler who can charge into enemies and knock them out with her fists.</td>
</tr>
<tr>
<td>Zack</td>
<td>Medic</td>
<td>A skilled medic who can heal his allies and revive them when they fall.</td>
</tr>
</table>
<h3>The gameplay modes and challenges</h3>
<p>Dislyte has various gameplay modes and challenges that will test your skills and strategy. You can choose from:</p>
<ul>
<li>Campaign mode: Follow the main storyline and complete different missions.</li>
<li>PvP mode: Battle against other players in real-time and climb the rankings.</li>
<li>Raid mode: Team up with other players and take on powerful bosses.</li>
<li>Event mode: Participate in limited-time events and earn exclusive rewards.</li>
<li>Daily mode: Complete daily tasks and collect resources and rewards.</li>
</ul>
<h2>Why do you need Dislyte APK OBB?</h2>
<p>Dislyte is a game that requires a lot of storage space and data to run smoothly. The game has two main files: the APK file and the OBB file. The APK file is the application package that contains the game's code and resources. The OBB file is the expansion file that contains the game's data and assets. Here are some of the reasons why you need Dislyte APK OBB:</p>
<h3>The benefits of downloading the APK OBB files</h3>
<ul>
<li>You can download the game faster and easier. You don't have to wait for the game to download and install from the Google Play Store, which can take a long time and consume a lot of data.</li>
<li>You can play the game offline. You don't have to worry about internet connection issues or data charges. You can enjoy the game anytime and anywhere.</li>
<li>You can access the latest version of the game. You don't have to wait for the game to update from the Google Play Store, which can be delayed or unavailable in some regions. You can get the latest features, bug fixes, and content updates.</li>
</ul>
<h3>The requirements and compatibility of the APK OBB files</h3>
<p>Before you download and install Dislyte APK OBB, you need to make sure that your device meets the following requirements and compatibility:</p>
<ul>
<li>Your device must have Android 5.0 or higher operating system.</li>
<li>Your device must have at least 4 GB of RAM and 8 GB of free storage space.</li>
<li>Your device must support OpenGL ES 3.0 or higher graphics API.</li>
<li>Your device must allow installation from unknown sources. You can enable this option in your device's settings under security or privacy.</li>
</ul>
<h3>The risks and precautions of downloading the APK OBB files</h3>
<p>While downloading and installing Dislyte APK OBB has many benefits, it also has some risks and precautions that you need to be aware of:</p>
<ul>
<li>You may encounter malware or viruses that can harm your device or steal your personal information. You need to download the APK OBB files from a reliable and trusted source, such as [Dislyte Official Website] or [APKPure].</li>
<li>You may violate the terms and conditions of the game or the Google Play Store. You need to respect the intellectual property rights of the game developers and publishers, and not use any illegal or unethical methods to modify or hack the game.</li>
<li>You may lose your progress or account if you uninstall or reinstall the game. You need to back up your data and sync your account with a social media platform, such as Facebook or Google, before you download and install Dislyte APK OBB.</li>
</ul>
<h2>How to download and install Dislyte APK OBB?</h2>
<p>Now that you know what Dislyte is and why you need Dislyte APK OBB, you might be wondering how to download and install it on your device. Don't worry, we have got you covered. Here are the steps you need to follow:</p>
<h3>The steps to download the APK OBB files from a reliable source</h3>
<ol>
<li>Go to a reliable and trusted source, such as [Dislyte Official Website] or [APKPure].</li>
<li>Find the Dislyte APK OBB files and click on the download button.</li>
<li>Wait for the download to complete. You will need about 2 GB of data to download the files.</li>
<li>Locate the downloaded files in your device's file manager or download folder.</li>
</ol>
<h3>The steps to install the APK OBB files on your device</h3>
<ol>
<li>Tap on the Dislyte APK file and follow the instructions to install it.</li>
<li>Do not open the game yet. You need to copy the OBB file to the right folder first.</li>
<li>Tap and hold on the Dislyte OBB file and select copy or move.</li>
<li>Navigate to the Android/OBB folder in your device's internal storage.</li>
<li>Create a new folder named com.lilithgame.dislyte and paste the OBB file inside it.</li>
</ol>
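<p>Steps 3 to 5 above can also be scripted from a computer once the files are downloaded. A minimal sketch, assuming adb access; the OBB file name is a placeholder, and note that on most devices the expansion folder is spelled lowercase (<code>Android/obb</code>):</p>
<pre><code># Push the expansion file into the game's OBB folder over adb.
import subprocess

package = "com.lilithgame.dislyte"  # folder name from the steps above
obb_file = "dislyte.obb"            # placeholder file name

# Create the folder and copy the OBB into it before first launch.
subprocess.run(["adb", "shell", "mkdir", "-p", f"/sdcard/Android/obb/{package}"], check=True)
subprocess.run(["adb", "push", obb_file, f"/sdcard/Android/obb/{package}/"], check=True)
</code></pre>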
<h3>The steps to verify and launch the game</h3>
<ol>
<li>Go back to your device's home screen or app drawer and find the Dislyte icon.</li>
<li>Tap on the icon and wait for the game to load.</li>
<li>Verify your age and accept the terms of service.</li>
<li>Choose your preferred language and server.</li>
<li>Create or log in to your account using a social media platform, such as Facebook or Google.</li>
<li>Enjoy playing Dislyte!</li>
</ol>
<h2>Conclusion</h2>
<p>Dislyte is a stylish urban mythological game that offers an immersive storyline, stunning graphics, smooth gameplay, rich characters, and various modes and challenges. You can download and install Dislyte APK OBB on your Android device by following the steps we have provided in this article. However, you need to be careful about the source, compatibility, and security of the APK OBB files. We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Happy gaming!</p>
<h2>FAQs</h2>
<p>Here are some of the frequently asked questions about Dislyte APK OBB:</p>
<p>dislyte mod apk unlimited money<br />
dislyte apk obb download for android<br />
dislyte game apk obb offline<br />
dislyte apk obb latest version<br />
dislyte mod menu apk obb<br />
dislyte apk obb free download<br />
dislyte hack apk obb<br />
dislyte apk obb full unlocked<br />
dislyte apk obb highly compressed<br />
dislyte apk obb no verification<br />
dislyte apk obb update 2023<br />
dislyte apk obb gameplay<br />
dislyte apk obb installation guide<br />
dislyte apk obb file size<br />
dislyte apk obb requirements<br />
dislyte apk obb modded by apkbounce[^1^]<br />
dislyte apk obb best settings<br />
dislyte apk obb cheats codes<br />
dislyte apk obb english version<br />
dislyte apk obb review<br />
dislyte apk obb features<br />
dislyte apk obb tips and tricks<br />
dislyte apk obb new characters<br />
dislyte apk obb how to play<br />
dislyte apk obb compatible devices<br />
dislyte apk obb error fix<br />
dislyte apk obb graphics quality<br />
dislyte apk obb sound effects<br />
dislyte apk obb storyline<br />
dislyte apk obb screenshots<br />
dislyte apk obb trailer<br />
dislyte apk obb official website<br />
dislyte apk obb fan art<br />
dislyte apk obb wallpapers<br />
dislyte apk obb memes<br />
dislyte apk obb reddit community<br />
dislyte apk obb discord server<br />
dislyte apk obb youtube videos<br />
dislyte apk obb facebook page<br />
dislyte apk obb twitter account<br />
dislyte apk obb instagram profile<br />
dislyte apk obb tiktok clips<br />
dislyte apk obb merchandise store<br />
dislyte apk obb developer contact<br />
dislyte apk obb customer support<br />
dislyte apk obb ratings and feedbacks<br />
dislyte apk obb alternatives and similar games<br />
dislyte apk obb frequently asked questions (FAQs)</p>
<ul>
<li><b>What is the size of Dislyte APK OBB?</b><br>The size of Dislyte APK OBB is about 2 GB. You need at least 8 GB of free storage space on your device to download and install it.</li>
<li><b>Is Dislyte APK OBB safe?</b><br>Dislyte APK OBB is safe if you download it from a reliable and trusted source, such as [Dislyte Official Website] or [APKPure]. However, you should always scan the files with an antivirus software before installing them.</li>
<li><b>Is Dislyte APK OBB free?</b><br>Yes, Dislyte APK OBB is free to download and play. However, the game may contain in-app purchases that require real money.</li>
<li><b>How can I update Dislyte APK OBB?</b><br>You can update Dislyte APK OBB by downloading and installing the latest version of the files from the same source you used before. You don't need to uninstall or reinstall the game, just overwrite the old files with the new ones.</li>
<li><b>How can I contact Dislyte support?</b><br>You can contact Dislyte support by sending an email to [email protected] or by visiting their official website [here].</li>
</ul>
spaces/1toTree/lora_test/ppdiffusers/schedulers/scheduling_sde_vp.py
DELETED
@@ -1,89 +0,0 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
# Copyright 2022 Google Brain and The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# DISCLAIMER: This file is strongly influenced by https://github.com/yang-song/score_sde_pytorch

import math

import paddle

from ..configuration_utils import ConfigMixin, register_to_config
from .scheduling_utils import SchedulerMixin


class ScoreSdeVpScheduler(SchedulerMixin, ConfigMixin):
    """
    The variance preserving stochastic differential equation (SDE) scheduler.

    [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
    function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
    [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
    [`~SchedulerMixin.from_pretrained`] functions.

    For more information, see the original paper: https://arxiv.org/abs/2011.13456

    UNDER CONSTRUCTION

    """

    order = 1

    @register_to_config
    def __init__(self, num_train_timesteps=2000, beta_min=0.1, beta_max=20, sampling_eps=1e-3):
        self.sigmas = None
        self.discrete_sigmas = None
        self.timesteps = None

    def set_timesteps(self, num_inference_steps):
        self.timesteps = paddle.linspace(1, self.config.sampling_eps, num_inference_steps)

    def step_pred(self, score, x, t, generator=None):
        if self.timesteps is None:
            raise ValueError(
                "`self.timesteps` is not set, you need to run 'set_timesteps' after creating the scheduler"
            )

        # TODO(Patrick) better comments + non-Paddle
        # postprocess model score
        log_mean_coeff = (
            -0.25 * t**2 * (self.config.beta_max - self.config.beta_min) - 0.5 * t * self.config.beta_min
        )
        std = paddle.sqrt(1.0 - paddle.exp(2.0 * log_mean_coeff))
        std = std.flatten()
        while len(std.shape) < len(score.shape):
            std = std.unsqueeze(-1)
        score = -score / std

        # compute
        dt = -1.0 / len(self.timesteps)

        beta_t = self.config.beta_min + t * (self.config.beta_max - self.config.beta_min)
        beta_t = beta_t.flatten()
        while len(beta_t.shape) < len(x.shape):
            beta_t = beta_t.unsqueeze(-1)
        drift = -0.5 * beta_t * x

        diffusion = paddle.sqrt(beta_t)
        drift = drift - diffusion**2 * score
        x_mean = x + drift * dt

        # add noise
        noise = paddle.randn(x.shape, generator=generator)
        x = x_mean + diffusion * math.sqrt(-dt) * noise

        return x, x_mean

    def __len__(self):
        return self.config.num_train_timesteps
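
For orientation, a minimal sketch of how this scheduler would be driven once finished (the class is marked UNDER CONSTRUCTION upstream); `score_model` is a placeholder for a Paddle score network, not something the file defines:

import paddle

scheduler = ScoreSdeVpScheduler()
scheduler.set_timesteps(num_inference_steps=50)  # must run before step_pred

x = paddle.randn([1, 3, 32, 32])  # start from pure noise
for t in scheduler.timesteps:
    score = score_model(x, t)                    # hypothetical score network
    x, x_mean = scheduler.step_pred(score, x, t)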
spaces/4Taps/SadTalker/src/face3d/data/base_dataset.py
DELETED
@@ -1,125 +0,0 @@
"""This module implements an abstract base class (ABC) 'BaseDataset' for datasets.

It also includes common transformation functions (e.g., get_transform, __scale_width), which can be later used in subclasses.
"""
import random
import numpy as np
import torch.utils.data as data
from PIL import Image
import torchvision.transforms as transforms
from abc import ABC, abstractmethod


class BaseDataset(data.Dataset, ABC):
    """This class is an abstract base class (ABC) for datasets.

    To create a subclass, you need to implement the following four functions:
    -- <__init__>: initialize the class, first call BaseDataset.__init__(self, opt).
    -- <__len__>: return the size of dataset.
    -- <__getitem__>: get a data point.
    -- <modify_commandline_options>: (optionally) add dataset-specific options and set default options.
    """

    def __init__(self, opt):
        """Initialize the class; save the options in the class

        Parameters:
            opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions
        """
        self.opt = opt
        # self.root = opt.dataroot
        self.current_epoch = 0

    @staticmethod
    def modify_commandline_options(parser, is_train):
        """Add new dataset-specific options, and rewrite default values for existing options.

        Parameters:
            parser -- original option parser
            is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options.

        Returns:
            the modified parser.
        """
        return parser

    @abstractmethod
    def __len__(self):
        """Return the total number of images in the dataset."""
        return 0

    @abstractmethod
    def __getitem__(self, index):
        """Return a data point and its metadata information.

        Parameters:
            index -- a random integer for data indexing

        Returns:
            a dictionary of data with their names. It usually contains the data itself and its metadata information.
        """
        pass


def get_transform(grayscale=False):
    transform_list = []
    if grayscale:
        transform_list.append(transforms.Grayscale(1))
    transform_list += [transforms.ToTensor()]
    return transforms.Compose(transform_list)

def get_affine_mat(opt, size):
    # rot_rad defaults to 0 so that rot_mat stays the identity rotation when
    # 'rot' is not in opt.preprocess (the original left it undefined in that case)
    shift_x, shift_y, scale, rot_angle, rot_rad, flip = 0., 0., 1., 0., 0., False
    w, h = size

    if 'shift' in opt.preprocess:
        shift_pixs = int(opt.shift_pixs)
        shift_x = random.randint(-shift_pixs, shift_pixs)
        shift_y = random.randint(-shift_pixs, shift_pixs)
    if 'scale' in opt.preprocess:
        scale = 1 + opt.scale_delta * (2 * random.random() - 1)
    if 'rot' in opt.preprocess:
        rot_angle = opt.rot_angle * (2 * random.random() - 1)
        rot_rad = -rot_angle * np.pi/180
    if 'flip' in opt.preprocess:
        flip = random.random() > 0.5

    shift_to_origin = np.array([1, 0, -w//2, 0, 1, -h//2, 0, 0, 1]).reshape([3, 3])
    flip_mat = np.array([-1 if flip else 1, 0, 0, 0, 1, 0, 0, 0, 1]).reshape([3, 3])
    shift_mat = np.array([1, 0, shift_x, 0, 1, shift_y, 0, 0, 1]).reshape([3, 3])
    rot_mat = np.array([np.cos(rot_rad), np.sin(rot_rad), 0, -np.sin(rot_rad), np.cos(rot_rad), 0, 0, 0, 1]).reshape([3, 3])
    scale_mat = np.array([scale, 0, 0, 0, scale, 0, 0, 0, 1]).reshape([3, 3])
    shift_to_center = np.array([1, 0, w//2, 0, 1, h//2, 0, 0, 1]).reshape([3, 3])

    affine = shift_to_center @ scale_mat @ rot_mat @ shift_mat @ flip_mat @ shift_to_origin
    affine_inv = np.linalg.inv(affine)
    return affine, affine_inv, flip

def apply_img_affine(img, affine_inv, method=Image.BICUBIC):
    # pass the caller-supplied resampling method through (the original ignored it)
    return img.transform(img.size, Image.AFFINE, data=affine_inv.flatten()[:6], resample=method)

def apply_lm_affine(landmark, affine, flip, size):
    _, h = size
    lm = landmark.copy()
    lm[:, 1] = h - 1 - lm[:, 1]
    lm = np.concatenate((lm, np.ones([lm.shape[0], 1])), -1)
    lm = lm @ np.transpose(affine)
    lm[:, :2] = lm[:, :2] / lm[:, 2:]
    lm = lm[:, :2]
    lm[:, 1] = h - 1 - lm[:, 1]
    if flip:
        lm_ = lm.copy()
        lm_[:17] = lm[16::-1]
        lm_[17:22] = lm[26:21:-1]
        lm_[22:27] = lm[21:16:-1]
        lm_[31:36] = lm[35:30:-1]
        lm_[36:40] = lm[45:41:-1]
        lm_[40:42] = lm[47:45:-1]
        lm_[42:46] = lm[39:35:-1]
        lm_[46:48] = lm[41:39:-1]
        lm_[48:55] = lm[54:47:-1]
        lm_[55:60] = lm[59:54:-1]
        lm_[60:65] = lm[64:59:-1]
        lm_[65:68] = lm[67:64:-1]
        lm = lm_
    return lm
spaces/A00001/bingothoo/src/components/theme-toggle.tsx
DELETED
@@ -1,31 +0,0 @@
'use client'

import * as React from 'react'
import { useTheme } from 'next-themes'

import { Button } from '@/components/ui/button'
import { IconMoon, IconSun } from '@/components/ui/icons'

export function ThemeToggle() {
  const { setTheme, theme } = useTheme()
  const [_, startTransition] = React.useTransition()

  return (
    <Button
      variant="ghost"
      size="icon"
      onClick={() => {
        startTransition(() => {
          setTheme(theme === 'light' ? 'dark' : 'light')
        })
      }}
    >
      {!theme ? null : theme === 'dark' ? (
        <IconMoon className="transition-all" />
      ) : (
        <IconSun className="transition-all" />
      )}
      <span className="sr-only">Toggle theme</span>
    </Button>
  )
}
spaces/AB-TW/team-ai/chains.py
DELETED
@@ -1,42 +0,0 @@
from typing import Any, Optional
from langchain.chains import LLMChain
from langchain.base_language import BaseLanguageModel
from langchain.prompts import PromptTemplate
from langchain.memory.chat_memory import BaseMemory
from models import llm

from promopts import CONTENT_RE_WRIGHT_PROMPT, FEEDBACK_PROMPT


class HumanFeedBackChain(LLMChain):
    """Chain to run queries against LLMs."""

    memory: Optional[BaseMemory] = None

    def __init__(self, verbose=True, llm: BaseLanguageModel = llm(temperature=0.7), memory: Optional[BaseMemory] = None, prompt: PromptTemplate = FEEDBACK_PROMPT):
        super().__init__(llm=llm, prompt=prompt, memory=memory, verbose=verbose)

    def run(self, *args: Any, **kwargs: Any) -> str:
        """Run the chain as text in, text out or multiple variables, text out."""
        if len(self.output_keys) != 1:
            raise ValueError(
                f"`run` not supported when there is not exactly "
                f"one output key. Got {self.output_keys}."
            )

        if args and not kwargs:
            if len(args) != 1:
                raise ValueError(
                    "`run` supports only one positional argument.")
            return self("Answer:" + args[0])[self.output_keys[0]]

        if kwargs and not args:
            return self(kwargs)[self.output_keys[0]]

        raise ValueError(
            f"`run` supported with either positional arguments or keyword arguments"
            f" but not both. Got args: {args} and kwargs: {kwargs}."
        )


contextRewriteChain = LLMChain(llm=llm(temperature=0.7), prompt=CONTENT_RE_WRIGHT_PROMPT)
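
A hedged usage sketch, assuming the Space's own `models` and `promopts` modules resolve at import time:

chain = HumanFeedBackChain()
# run() prefixes positional input with "Answer:" before invoking the LLM
print(chain.run("Looks good, but please shorten the second paragraph."))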
spaces/AIOSML/README/README.md
DELETED
@@ -1,12 +0,0 @@
---
title: README
emoji: 👀
colorFrom: red
colorTo: gray
sdk: gradio
pinned: false
license: bsd
---

Edit this `README.md` markdown file to author your organization card 🔥
AIOSML is a noble attempt to bridge local machine learning with Linux system administration and access control lists.
spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_0_ClothesDetection/mmyolo/configs/custom_dataset/yolov7_l_syncbn_fast_8x16b-300e_coco.py
DELETED
@@ -1,472 +0,0 @@
_base_ = ['../_base_/default_runtime.py', '../_base_/det_p5_tta.py']

data_root = './data-df2/'
train_ann_file = 'annotations/instances_train2017.json'
train_data_prefix = 'train2017/'
val_ann_file = 'annotations/instances_val2017.json'
val_data_prefix = 'val2017/'
num_classes = 13
train_batch_size_per_gpu = 16
train_num_workers = 8
persistent_workers = True

vis_backends = [
    dict(type='LocalVisBackend'),
]
visualizer = dict(
    type='mmdet.DetLocalVisualizer',
    vis_backends=[
        dict(type='LocalVisBackend'),
        dict(type='WandbVisBackend')
    ],
    name='visualizer')
log_processor = dict(type='LogProcessor', window_size=50, by_epoch=True)
log_level = 'INFO'
load_from = None
resume = False

anchors = [
    [(12, 16), (19, 36), (40, 28)],  # P3/8
    [(36, 75), (76, 55), (72, 146)],  # P4/16
    [(142, 110), (192, 243), (459, 401)]  # P5/32
]

base_lr = 0.01
max_epochs = 100

num_epoch_stage2 = 10  # The last 10 epochs switch evaluation interval
val_interval_stage2 = 1

model_test_cfg = dict(
    multi_label=True,
    nms_pre=30000,
    score_thr=0.001,
    nms=dict(type='nms', iou_threshold=0.65),
    max_per_img=300)

img_scale = (640, 640)
dataset_type = 'YOLOv5CocoDataset'
val_batch_size_per_gpu = 1
val_num_workers = 2
batch_shapes_cfg = dict(
    type='BatchShapePolicy',
    batch_size=val_batch_size_per_gpu,
    img_size=img_scale[0],
    size_divisor=32,
    extra_pad_ratio=0.5)
strides = [8, 16, 32]  # Strides of multi-scale prior box
num_det_layers = 3
norm_cfg = dict(type='BN', momentum=0.03, eps=0.001)

# Data augmentation
max_translate_ratio = 0.2  # YOLOv5RandomAffine
scaling_ratio_range = (0.1, 2.0)  # YOLOv5RandomAffine
mixup_prob = 0.15  # YOLOv5MixUp
randchoice_mosaic_prob = [0.8, 0.2]
mixup_alpha = 8.0  # YOLOv5MixUp
mixup_beta = 8.0  # YOLOv5MixUp

# -----train val related-----
loss_cls_weight = 0.3
loss_bbox_weight = 0.05
loss_obj_weight = 0.7
# BatchYOLOv7Assigner params
simota_candidate_topk = 10
simota_iou_weight = 3.0
simota_cls_weight = 1.0
prior_match_thr = 4.  # Priori box matching threshold
obj_level_weights = [4., 1.,
                     0.4]  # The obj loss weights of the three output layers

lr_factor = 0.1  # Learning rate scaling factor
weight_decay = 0.0005
save_epoch_intervals = 2
max_keep_ckpts = 5

env_cfg = dict(
    cudnn_benchmark=True,
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    dist_cfg=dict(backend='nccl'))

# ===============================Unmodified in most cases====================
model = dict(
    type='YOLODetector',
    data_preprocessor=dict(
        type='YOLOv5DetDataPreprocessor',
        mean=[0., 0., 0.],
        std=[255., 255., 255.],
        bgr_to_rgb=True),
    backbone=dict(
        type='YOLOv7Backbone',
        arch='L',
        norm_cfg=norm_cfg,
        act_cfg=dict(type='SiLU', inplace=True)),
    neck=dict(
        type='YOLOv7PAFPN',
        block_cfg=dict(
            type='ELANBlock',
            middle_ratio=0.5,
            block_ratio=0.25,
            num_blocks=4,
            num_convs_in_block=1),
        upsample_feats_cat_first=False,
        in_channels=[512, 1024, 1024],
        # The real output channel will be multiplied by 2
        out_channels=[128, 256, 512],
        norm_cfg=norm_cfg,
        act_cfg=dict(type='SiLU', inplace=True)),
    bbox_head=dict(
        type='YOLOv7Head',
        head_module=dict(
            type='YOLOv7HeadModule',
            num_classes=num_classes,
            in_channels=[256, 512, 1024],
            featmap_strides=strides,
            num_base_priors=3),
        prior_generator=dict(
            type='mmdet.YOLOAnchorGenerator',
            base_sizes=anchors,
            strides=strides),
        # scaled based on number of detection layers
        loss_cls=dict(
            type='mmdet.CrossEntropyLoss',
            use_sigmoid=True,
            reduction='mean',
            loss_weight=loss_cls_weight *
            (num_classes / 80 * 3 / num_det_layers)),
        loss_bbox=dict(
            type='IoULoss',
            iou_mode='ciou',
            bbox_format='xywh',
            reduction='mean',
            loss_weight=loss_bbox_weight * (3 / num_det_layers),
            return_iou=True),
        loss_obj=dict(
            type='mmdet.CrossEntropyLoss',
            use_sigmoid=True,
            reduction='mean',
            loss_weight=loss_obj_weight *
            ((img_scale[0] / 640)**2 * 3 / num_det_layers)),
        prior_match_thr=prior_match_thr,
        obj_level_weights=obj_level_weights,
        # BatchYOLOv7Assigner params
        simota_candidate_topk=simota_candidate_topk,
        simota_iou_weight=simota_iou_weight,
        simota_cls_weight=simota_cls_weight),
    test_cfg=model_test_cfg)

pre_transform = [
    dict(type='LoadImageFromFile', file_client_args=_base_.file_client_args),
    dict(type='LoadAnnotations', with_bbox=True)
]

mosiac4_pipeline = [
    dict(
        type='Mosaic',
        img_scale=img_scale,
        pad_val=114.0,
        pre_transform=pre_transform),
    dict(
        type='YOLOv5RandomAffine',
        max_rotate_degree=0.0,
        max_shear_degree=0.0,
        max_translate_ratio=max_translate_ratio,  # note
        scaling_ratio_range=scaling_ratio_range,  # note
        # img_scale is (width, height)
        border=(-img_scale[0] // 2, -img_scale[1] // 2),
        border_val=(114, 114, 114)),
]

mosiac9_pipeline = [
    dict(
        type='Mosaic9',
        img_scale=img_scale,
        pad_val=114.0,
        pre_transform=pre_transform),
    dict(
        type='YOLOv5RandomAffine',
        max_rotate_degree=0.0,
        max_shear_degree=0.0,
        max_translate_ratio=max_translate_ratio,  # note
        scaling_ratio_range=scaling_ratio_range,  # note
        # img_scale is (width, height)
        border=(-img_scale[0] // 2, -img_scale[1] // 2),
        border_val=(114, 114, 114)),
]

randchoice_mosaic_pipeline = dict(
    type='RandomChoice',
    transforms=[mosiac4_pipeline, mosiac9_pipeline],
    prob=randchoice_mosaic_prob)

train_pipeline = [
    *pre_transform,
    randchoice_mosaic_pipeline,
    dict(
        type='YOLOv5MixUp',
        alpha=mixup_alpha,  # note
        beta=mixup_beta,  # note
        prob=mixup_prob,
        pre_transform=[*pre_transform, randchoice_mosaic_pipeline]),
    dict(type='YOLOv5HSVRandomAug'),
    dict(type='mmdet.RandomFlip', prob=0.5),
    dict(
        type='mmdet.PackDetInputs',
        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', 'flip',
                   'flip_direction'))
]

train_dataloader = dict(
    batch_size=train_batch_size_per_gpu,
    num_workers=train_num_workers,
    persistent_workers=persistent_workers,
    pin_memory=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    collate_fn=dict(type='yolov5_collate'),  # FASTER
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file=train_ann_file,
        data_prefix=dict(img=train_data_prefix),
        filter_cfg=dict(filter_empty_gt=False, min_size=32),
        pipeline=train_pipeline))

test_pipeline = [
    dict(type='LoadImageFromFile', file_client_args=_base_.file_client_args),
    dict(type='YOLOv5KeepRatioResize', scale=img_scale),
    dict(
        type='LetterResize',
        scale=img_scale,
        allow_scale_up=False,
        pad_val=dict(img=114)),
    dict(type='LoadAnnotations', with_bbox=True, _scope_='mmdet'),
    dict(
        type='mmdet.PackDetInputs',
        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
                   'scale_factor', 'pad_param'))
]

val_dataloader = dict(
    batch_size=val_batch_size_per_gpu,
    num_workers=val_num_workers,
    persistent_workers=persistent_workers,
    pin_memory=True,
    drop_last=False,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        test_mode=True,
        data_prefix=dict(img=val_data_prefix),
        ann_file=val_ann_file,
        pipeline=test_pipeline,
        batch_shapes_cfg=batch_shapes_cfg))

test_dataloader = val_dataloader

param_scheduler = None
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(
        type='SGD',
        lr=base_lr,
        momentum=0.937,
        weight_decay=weight_decay,
        nesterov=True,
        batch_size_per_gpu=train_batch_size_per_gpu),
    constructor='YOLOv7OptimWrapperConstructor')

default_scope = 'mmyolo'
default_hooks = dict(
    timer=dict(type='IterTimerHook'),
    logger=dict(type='LoggerHook', interval=2),
    param_scheduler=dict(
        type='YOLOv5ParamSchedulerHook',
        scheduler_type='cosine',
        lr_factor=lr_factor,  # note
        max_epochs=max_epochs),
    checkpoint=dict(
        type='CheckpointHook',
        save_param_scheduler=False,
        interval=save_epoch_intervals,
        save_best='auto',
        max_keep_ckpts=max_keep_ckpts),
    sampler_seed=dict(type='DistSamplerSeedHook'),
    visualization=dict(type='mmdet.DetVisualizationHook'))

custom_hooks = [
    dict(
        type='EMAHook',
        ema_type='ExpMomentumEMA',
        momentum=0.0001,
        update_buffers=True,
        strict_load=False,
        priority=49)
]

val_evaluator = dict(
    type='mmdet.CocoMetric',
    proposal_nums=(100, 1, 10),  # Can be accelerated
    ann_file=data_root + val_ann_file,
    metric='bbox')
test_evaluator = val_evaluator

train_cfg = dict(
    type='EpochBasedTrainLoop',
    max_epochs=max_epochs,
    val_interval=save_epoch_intervals,
    dynamic_intervals=[(max_epochs - num_epoch_stage2, val_interval_stage2)])
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')

# ============================

file_client_args = dict(backend='disk')
_file_client_args = dict(backend='disk')
tta_model = dict(
    type='mmdet.DetTTAModel',
    tta_cfg=dict(nms=dict(type='nms', iou_threshold=0.65), max_per_img=300))
img_scales = [
    (640, 640),
    (320, 320),
    (960, 960),
]
_multiscale_resize_transforms = [
    dict(
        type='Compose',
        transforms=[
            dict(type='YOLOv5KeepRatioResize', scale=(640, 640)),
            dict(
                type='LetterResize',
                scale=(640, 640),
                allow_scale_up=False,
                pad_val=dict(img=114)),
        ]),
    dict(
        type='Compose',
        transforms=[
            dict(type='YOLOv5KeepRatioResize', scale=(320, 320)),
            dict(
                type='LetterResize',
                scale=(320, 320),
                allow_scale_up=False,
                pad_val=dict(img=114)),
        ]),
    dict(
        type='Compose',
        transforms=[
            dict(type='YOLOv5KeepRatioResize', scale=(960, 960)),
            dict(
                type='LetterResize',
                scale=(960, 960),
                allow_scale_up=False,
                pad_val=dict(img=114)),
        ]),
]
tta_pipeline = [
    dict(type='LoadImageFromFile', file_client_args=dict(backend='disk')),
    dict(
        type='TestTimeAug',
        transforms=[
            # reuse the multiscale resize branch defined above (the dumped
            # config repeated the same three Compose blocks inline)
            _multiscale_resize_transforms,
            [
                dict(type='mmdet.RandomFlip', prob=1.0),
                dict(type='mmdet.RandomFlip', prob=0.0),
            ],
            [
                dict(type='mmdet.LoadAnnotations', with_bbox=True),
            ],
            [
                dict(
                    type='mmdet.PackDetInputs',
                    meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
                               'scale_factor', 'pad_param', 'flip',
                               'flip_direction')),
            ],
        ]),
]

launcher = 'none'
spaces/AchyuthGamer/OpenGPT/g4f/Provider/AiAsk.py
DELETED
@@ -1,44 +0,0 @@
from __future__ import annotations

from aiohttp import ClientSession
from ..typing import AsyncGenerator
from .base_provider import AsyncGeneratorProvider

class AiAsk(AsyncGeneratorProvider):
    url = "https://e.aiask.me"
    supports_gpt_35_turbo = True
    working = True

    @classmethod
    async def create_async_generator(
        cls,
        model: str,
        messages: list[dict[str, str]],
        **kwargs
    ) -> AsyncGenerator:
        headers = {
            "accept": "application/json, text/plain, */*",
            "origin": cls.url,
            "referer": f"{cls.url}/chat",
        }
        async with ClientSession(headers=headers) as session:
            data = {
                "continuous": True,
                "id": "fRMSQtuHl91A4De9cCvKD",
                "list": messages,
                "models": "0",
                "prompt": "",
                "temperature": kwargs.get("temperature", 0.5),
                "title": "",
            }
            buffer = ""
            # Server-side notice, roughly: "Your free quota for this model is
            # used up; please click the top-right corner to log in to continue!"
            rate_limit = "您的免费额度不够使用这个模型啦,请点击右上角登录继续使用!"
            async with session.post(f"{cls.url}/v1/chat/gpt/", json=data) as response:
                response.raise_for_status()
                async for chunk in response.content.iter_any():
                    buffer += chunk.decode()
                    # hold back output while it could still be the rate-limit notice
                    if not rate_limit.startswith(buffer):
                        yield buffer
                        buffer = ""
                    elif buffer == rate_limit:
                        raise RuntimeError("Rate limit reached")
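
A minimal driver for this provider, assuming the upstream endpoint is still reachable; the model and messages shapes follow the conventions visible in the class above:

import asyncio

async def main():
    async for chunk in AiAsk.create_async_generator(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hello!"}],
    ):
        print(chunk, end="")

asyncio.run(main())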
spaces/AdWeeb/SuMmeet/app.py
DELETED
@@ -1,109 +0,0 @@
# -*- coding: utf-8 -*-
"""
Created on Mon Mar 28 01:04:50 2022

@author: adeep
"""
from fnmatch import translate
import cv2 as cv
import tempfile
import numpy as np
import pandas as pd
import streamlit as st
import joblib
import os
from moviepy.editor import VideoFileClip
import speech_recognition as sr
from pydub import AudioSegment
from pydub.silence import split_on_silence
import transformers
from transformers import pipeline
import nltk
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
from nltk.tokenize import sent_tokenize
import re
from utils import get_translation, welcome, get_large_audio_transcription

from PIL import Image

#import stanfordnlp

def main():

    st.title("Summarize Text")
    video = st.file_uploader("Choose a file", type=['mp4'])
    button = st.button("Summarize")

    max_c = st.sidebar.slider('Select max words', 50, 500, step=10, value=150)
    min_c = st.sidebar.slider('Select min words', 10, 450, step=10, value=50)
    gen_summ = False

    with st.spinner("Running.."):

        if button and video:
            tfile = tempfile.NamedTemporaryFile(delete=False)
            tfile.write(video.read())
            #st.write(tfile.name)
            v = VideoFileClip(tfile.name)
            v.audio.write_audiofile("movie.wav")
            #st.video(video, format="video/mp4", start_time=0)
            #st.audio("movie.wav")
            whole_text = get_large_audio_transcription("movie.wav")
            #st.write(whole_text)
            #summarizer = pipeline("summarization")
            #summarizer = pipeline("summarization", model="t5-base", tokenizer="t5-base", framework="pt")
            summarizer = pipeline("summarization", model="t5-large", tokenizer="t5-large", framework="pt")
            summarized = summarizer(whole_text, min_length=min_c, max_length=max_c)
            summ = summarized[0]['summary_text']
            #st.write(summ)
            gen_summ = True
            #stf_nlp = stanfordnlp.Pipeline(processors='tokenize,mwt,pos')
            #doc = stf_nlp(summ)
            #l = [w.text.capitalize() if w.upos in ["PROPN","NNS"] else w.text for sent in doc.sentences for w in sent.words]
            #text = " ".join(l)
            #summ = truecasing_by_sentence_segmentation(summ)
            sentences = sent_tokenize(summ, language='english')
            # capitalize the sentences
            sentences_capitalized = [s.capitalize() for s in sentences]
            # join the capitalized sentences
            summ = re.sub(" (?=[\.,'!?:;])", "", ' '.join(sentences_capitalized))

            if 'summary' not in st.session_state:
                st.session_state.summary = True
                st.session_state.summarization = summ
                st.session_state.gen_summ = True

    # note: this local name shadows fnmatch.translate imported above
    translate = st.sidebar.radio('Do you want to translate the text to any different language?', ('No', 'Yes'))
    if 'summary' in st.session_state:
        summarized_text = st.session_state.summarization
        st.write(summarized_text)
        gen_summ = st.session_state.gen_summ

    if translate == 'Yes' and gen_summ == True:
        lang_list = ['Hindi', 'Marathi', 'Malayalam', 'Kannada', 'Telugu', 'Tamil', 'Oriya', 'Bengali', 'Gujarati', 'Urdu']

        s_type = st.sidebar.selectbox('Select the Language in which you want to Translate:', lang_list)
        st.sidebar.write('You selected:', s_type)

        translation = get_translation(source='English', dest=s_type, text=summarized_text)

        st.sidebar.write(translation)
    elif translate == 'Yes' and gen_summ == False:
        st.error("The summary has not been generated yet. Please generate the summary first and then translate")

    else:
        st.write('')

if __name__ == '__main__':

    main()
|
spaces/Adapter/CoAdapter/ldm/data/dataset_coco.py
DELETED
@@ -1,36 +0,0 @@
-import json
-import cv2
-import os
-from basicsr.utils import img2tensor
-
-
-class dataset_coco_mask_color():
-    def __init__(self, path_json, root_path_im, root_path_mask, image_size):
-        super(dataset_coco_mask_color, self).__init__()
-        with open(path_json, 'r', encoding='utf-8') as fp:
-            data = json.load(fp)
-        data = data['annotations']
-        self.files = []
-        self.root_path_im = root_path_im
-        self.root_path_mask = root_path_mask
-        for file in data:
-            name = "%012d.png" % file['image_id']
-            self.files.append({'name': name, 'sentence': file['caption']})
-
-    def __getitem__(self, idx):
-        file = self.files[idx]
-        name = file['name']
-        # print(os.path.join(self.root_path_im, name))
-        im = cv2.imread(os.path.join(self.root_path_im, name.replace('.png', '.jpg')))
-        im = cv2.resize(im, (512, 512))
-        im = img2tensor(im, bgr2rgb=True, float32=True) / 255.
-
-        mask = cv2.imread(os.path.join(self.root_path_mask, name))  # [:,:,0]
-        mask = cv2.resize(mask, (512, 512))
-        mask = img2tensor(mask, bgr2rgb=True, float32=True) / 255.  # [0].unsqueeze(0)#/255.
-
-        sentence = file['sentence']
-        return {'im': im, 'mask': mask, 'sentence': sentence}
-
-    def __len__(self):
-        return len(self.files)
spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/executor/base.py
DELETED
@@ -1,100 +0,0 @@
-from __future__ import annotations
-
-from abc import abstractmethod
-from typing import TYPE_CHECKING, List, Tuple, Any
-
-from pydantic import BaseModel
-
-from agentverse.agents import ExecutorAgent
-from agentverse.message import SolverMessage, ExecutorMessage
-
-from . import executor_registry
-
-
-class BaseExecutor(BaseModel):
-    """
-    The base class of execution.
-    """
-
-    def step(
-        self,
-        agent: ExecutorAgent,
-        task_description: str,
-        solution: List[SolverMessage],
-        *args,
-        **kwargs,
-    ) -> List[ExecutorMessage]:
-        pass
-
-    async def astep(
-        self,
-        agent: ExecutorAgent,
-        task_description: str,
-        solution: List[str],
-        *args,
-        **kwargs,
-    ) -> List[ExecutorMessage]:
-        pass
-
-    def reset(self):
-        pass
-
-
-@executor_registry.register("none")
-class NoneExecutor(BaseExecutor):
-    """
-    The base class of execution.
-    """
-
-    def step(
-        self,
-        agent: ExecutorAgent,
-        task_description: str,
-        solution: List[SolverMessage],
-        *args,
-        **kwargs,
-    ) -> Any:
-        return [ExecutorMessage(content="")]
-
-    async def astep(
-        self,
-        agent: ExecutorAgent,
-        task_description: str,
-        solution: List[SolverMessage],
-        *args,
-        **kwargs,
-    ) -> Any:
-        return [ExecutorMessage(content="")]
-
-    def reset(self):
-        pass
-
-
-@executor_registry.register("dummy")
-class DummyExecutor(BaseExecutor):
-    """
-    The base class of execution.
-    """
-
-    def step(
-        self,
-        agent: ExecutorAgent,
-        task_description: str,
-        solution: List[SolverMessage],
-        *args,
-        **kwargs,
-    ) -> Any:
-        return [ExecutorMessage(content=s.content) for s in solution]
-
-    async def astep(
-        self,
-        agent: ExecutorAgent,
-        task_description: str,
-        solution: List[SolverMessage],
-        *args,
-        **kwargs,
-    ) -> Any:
-        return [ExecutorMessage(content=s.content) for s in solution]
-
-    def reset(self):
-        pass
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/folder/Folder.js
DELETED
@@ -1,122 +0,0 @@
-import Sizer from '../sizer/Sizer';
-import ChildTransition from './methods/ChildTransition.js';
-import ExpandMethods from './methods/ExpandMethods.js';
-import ClickMethods from '../basesizer/ClickMethods';
-import ConfigurationMethods from './methods/ConfigurationMethods.js';
-
-const GetValue = Phaser.Utils.Objects.GetValue;
-
-class Folder extends Sizer {
-    constructor(scene, config) {
-        if (config === undefined) {
-            config = {};
-        }
-
-        if (!config.hasOwnProperty('orientation')) {
-            config.orientation = 1;
-        }
-
-        super(scene, config);
-        this.type = 'rexFolder';
-
-        this.expanded = undefined;
-        this.expandDirection = (this.orientation === 1) ? 'y' : 'x';
-
-        var background = config.background;
-        var title = config.title;
-        var child = config.child;
-
-        // background
-        if (background) {
-            this.addBackground(background);
-        }
-
-        // title
-        var defaultAlign = (this.orientation === 1) ? 'left' : 'top';
-        var align = GetValue(config, 'align.title', defaultAlign);
-        var expand = GetValue(config, 'expand.title', true);
-        this.add(
-            title,
-            {
-                proportion: 0, align: align, expand: expand,
-            }
-        );
-
-        var toggleByTarget = GetValue(config, 'toggleByTarget', undefined);
-        var toggleClickConfig = GetValue(config, 'toggleClickConfig');
-
-        if (toggleByTarget === undefined) {
-            toggleByTarget = title;
-        }
-        if (toggleByTarget) {
-            ClickMethods.onClick.call(
-                toggleByTarget,
-                function () {
-                    this.toggle();
-                },
-                this,
-                toggleClickConfig
-            );
-        }
-
-        // child
-        this.childTransition = new ChildTransition(child);
-
-        var customOrigin = GetValue(config, 'customChildOrigin', false);
-        if (!customOrigin) {
-            var origin = (!this.rtl) ? 0 : 1;
-            child.setOrigin(origin);
-        }
-
-        var align = GetValue(config, 'align.child', 'left');
-        var expand = GetValue(config, 'expand.child', true);
-        var proportion = (expand) ? 1 : 0;
-        this.add(
-            child,
-            {
-                proportion: proportion, align: align, expand: expand,
-
-            }
-        );
-
-        this.addChildrenMap('title', title);
-        this.addChildrenMap('child', child);
-        this.addChildrenMap('background', background);
-
-        var transitionConfig = config.transition;
-        this.setTransitionDuration(GetValue(transitionConfig, 'duration', 200));
-        this.setExpandCallback(GetValue(transitionConfig, 'expandCallback', undefined));
-        this.setCollapseCallback(GetValue(transitionConfig, 'collapseCallback', undefined));
-
-        this.reLayoutTarget = GetValue(config, 'reLayoutTarget', undefined);
-
-        var onExpandStart = config.onExpandStart;
-        if (onExpandStart) {
-            this.on('expand.start', onExpandStart);
-        }
-
-        var onExpandComplete = config.onExpandComplete;
-        if (onExpandComplete) {
-            this.on('expand.complete', onExpandComplete);
-        }
-
-        var onCollapseStart = config.onCollapseStart;
-        if (onCollapseStart) {
-            this.on('collapse.start', onCollapseStart);
-        }
-
-        var onCollapseComplete = config.onCollapseComplete;
-        if (onCollapseComplete) {
-            this.on('collapse.complete', onCollapseComplete);
-        }
-
-    }
-}
-
-Object.assign(
-    Folder.prototype,
-    ExpandMethods,
-    ConfigurationMethods,
-)
-
-export default Folder;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/statesroundrectangle/StatesRoundRectangle.d.ts
DELETED
@@ -1,49 +0,0 @@
-import RoundRectangle from '../roundrectangle/RoundRectangle';
-
-export default StatesRoundRectangle;
-
-declare namespace StatesRoundRectangle {
-    interface IConfig extends RoundRectangle.IConfig {
-        'active.color'?: number,
-        'active.alpha'?: number,
-        'active.strokeColor'?: number,
-        'active.strokeAlpha'?: number,
-        'active.strokeWidth'?: number,
-        'active.radius'?: number | RoundRectangle.IRadiusConfig | ({
-            radius?: (number | RoundRectangle.IRadiusConfig),
-            iteration?: number
-        }),
-
-        'hover.color'?: number,
-        'hover.alpha'?: number,
-        'hover.strokeColor'?: number,
-        'hover.strokeAlpha'?: number,
-        'hover.strokeWidth'?: number,
-        'hover.radius'?: number | RoundRectangle.IRadiusConfig | ({
-            radius?: (number | RoundRectangle.IRadiusConfig),
-            iteration?: number
-        }),
-
-        'disable.color'?: number,
-        'disable.alpha'?: number,
-        'disable.strokeColor'?: number,
-        'disable.strokeAlpha'?: number,
-        'disable.strokeWidth'?: number,
-        'disable.radius'?: number | RoundRectangle.IRadiusConfig | ({
-            radius?: (number | RoundRectangle.IRadiusConfig),
-            iteration?: number
-        }),
-
-    }
-}
-
-declare class StatesRoundRectangle extends RoundRectangle {
-    constructor(
-        scene: Phaser.Scene,
-        config?: StatesRoundRectangle.IConfig
-    )
-
-    setActiveState(enable?: boolean): this;
-    setHoverState(enable?: boolean): this;
-    setDisableState(enable?: boolean): this;
-}
spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/ops/grid_sample_gradfix.py
DELETED
@@ -1,93 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# Copyright (c) 2021, NVIDIA CORPORATION.  All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto.  Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Custom replacement for `torch.nn.functional.grid_sample` that
-supports arbitrarily high order gradients between the input and output.
-Only works on 2D images and assumes
-`mode='bilinear'`, `padding_mode='zeros'`, `align_corners=False`."""
-
-import warnings
-import torch
-
-# pylint: disable=redefined-builtin
-# pylint: disable=arguments-differ
-# pylint: disable=protected-access
-
-# ----------------------------------------------------------------------------
-
-enabled = False  # Enable the custom op by setting this to true.
-
-# ----------------------------------------------------------------------------
-
-
-def grid_sample(input, grid):
-    if _should_use_custom_op():
-        return _GridSample2dForward.apply(input, grid)
-    return torch.nn.functional.grid_sample(input=input, grid=grid, mode='bilinear', padding_mode='zeros', align_corners=False)
-
-# ----------------------------------------------------------------------------
-
-
-def _should_use_custom_op():
-    if not enabled:
-        return False
-    if any(torch.__version__.startswith(x) for x in ['1.7.', '1.8.', '1.9']):
-        return True
-    warnings.warn(
-        f'grid_sample_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.grid_sample().')
-    return False
-
-# ----------------------------------------------------------------------------
-
-
-class _GridSample2dForward(torch.autograd.Function):
-    @staticmethod
-    def forward(ctx, input, grid):
-        assert input.ndim == 4
-        assert grid.ndim == 4
-        output = torch.nn.functional.grid_sample(
-            input=input, grid=grid, mode='bilinear', padding_mode='zeros', align_corners=False)
-        ctx.save_for_backward(input, grid)
-        return output
-
-    @staticmethod
-    def backward(ctx, grad_output):
-        input, grid = ctx.saved_tensors
-        grad_input, grad_grid = _GridSample2dBackward.apply(
-            grad_output, input, grid)
-        return grad_input, grad_grid
-
-# ----------------------------------------------------------------------------
-
-
-class _GridSample2dBackward(torch.autograd.Function):
-    @staticmethod
-    def forward(ctx, grad_output, input, grid):
-        op = torch._C._jit_get_operation('aten::grid_sampler_2d_backward')
-        grad_input, grad_grid = op(grad_output, input, grid, 0, 0, False)
-        ctx.save_for_backward(grid)
-        return grad_input, grad_grid
-
-    @staticmethod
-    def backward(ctx, grad2_grad_input, grad2_grad_grid):
-        _ = grad2_grad_grid  # unused
-        grid, = ctx.saved_tensors
-        grad2_grad_output = None
-        grad2_input = None
-        grad2_grid = None
-
-        if ctx.needs_input_grad[0]:
-            grad2_grad_output = _GridSample2dForward.apply(
-                grad2_grad_input, grid)
-
-        assert not ctx.needs_input_grad[2]
-        return grad2_grad_output, grad2_input, grad2_grid
-
-# ----------------------------------------------------------------------------
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/pix2pix.md
DELETED
@@ -1,38 +0,0 @@
-<!--Copyright 2023 The HuggingFace Team. All rights reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
-the License. You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
-an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
-specific language governing permissions and limitations under the License.
--->
-
-# InstructPix2Pix
-
-[InstructPix2Pix: Learning to Follow Image Editing Instructions](https://huggingface.co/papers/2211.09800) is by Tim Brooks, Aleksander Holynski and Alexei A. Efros.
-
-The abstract from the paper is:
-
-*We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models -- a language model (GPT-3) and a text-to-image model (Stable Diffusion) -- to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions.*
-
-You can find additional information about InstructPix2Pix on the [project page](https://www.timothybrooks.com/instruct-pix2pix), [original codebase](https://github.com/timothybrooks/instruct-pix2pix), and try it out in a [demo](https://huggingface.co/spaces/timbrooks/instruct-pix2pix).
-
-<Tip>
-
-Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines.
-
-</Tip>
-
-## StableDiffusionInstructPix2PixPipeline
-[[autodoc]] StableDiffusionInstructPix2PixPipeline
-    - __call__
-    - all
-    - load_textual_inversion
-    - load_lora_weights
-    - save_lora_weights
-
-## StableDiffusionPipelineOutput
-[[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_safe.md
DELETED
@@ -1,61 +0,0 @@
-<!--Copyright 2023 The HuggingFace Team. All rights reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
-the License. You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
-an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
-specific language governing permissions and limitations under the License.
--->
-
-# Safe Stable Diffusion
-
-Safe Stable Diffusion was proposed in [Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models](https://huggingface.co/papers/2211.05105) and mitigates inappropriate degeneration from Stable Diffusion models because they're trained on unfiltered web-crawled datasets. For instance Stable Diffusion may unexpectedly generate nudity, violence, images depicting self-harm, and otherwise offensive content. Safe Stable Diffusion is an extension of Stable Diffusion that drastically reduces this type of content.
-
-The abstract from the paper is:
-
-*Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer, as we demonstrate, from degenerated and biased human behavior. In turn, they may even reinforce such biases. To help combat these undesired side effects, we present safe latent diffusion (SLD). Specifically, to measure the inappropriate degeneration due to unfiltered and imbalanced training sets, we establish a novel image generation test bed-inappropriate image prompts (I2P)-containing dedicated, real-world image-to-text prompts covering concepts such as nudity and violence. As our exhaustive empirical evaluation demonstrates, the introduced SLD removes and suppresses inappropriate image parts during the diffusion process, with no additional training required and no adverse effect on overall image quality or text alignment.*
-
-## Tips
-
-Use the `safety_concept` property of [`StableDiffusionPipelineSafe`] to check and edit the current safety concept:
-
-```python
->>> from diffusers import StableDiffusionPipelineSafe
-
->>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe")
->>> pipeline.safety_concept
-'an image showing hate, harassment, violence, suffering, humiliation, harm, suicide, sexual, nudity, bodily fluids, blood, obscene gestures, illegal activity, drug use, theft, vandalism, weapons, child abuse, brutality, cruelty'
-```
-For each image generation the active concept is also contained in [`StableDiffusionSafePipelineOutput`].
-
-There are 4 configurations (`SafetyConfig.WEAK`, `SafetyConfig.MEDIUM`, `SafetyConfig.STRONG`, and `SafetyConfig.MAX`) that can be applied:
-
-```python
->>> from diffusers import StableDiffusionPipelineSafe
->>> from diffusers.pipelines.stable_diffusion_safe import SafetyConfig
-
->>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe")
->>> prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker"
->>> out = pipeline(prompt=prompt, **SafetyConfig.MAX)
-```
-
-<Tip>
-
-Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
-
-</Tip>
-
-## StableDiffusionPipelineSafe
-
-[[autodoc]] StableDiffusionPipelineSafe
-    - all
-    - __call__
-
-## StableDiffusionSafePipelineOutput
-
-[[autodoc]] pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput
-    - all
-    - __call__
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/training/text2image.md
DELETED
@@ -1,224 +0,0 @@
-<!--Copyright 2023 The HuggingFace Team. All rights reserved.
-
-Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
-the License. You may obtain a copy of the License at
-
-http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
-an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
-specific language governing permissions and limitations under the License.
--->
-
-
-# Text-to-image
-
-<Tip warning={true}>
-
-The text-to-image fine-tuning script is experimental. It tends to overfit and to run into issues such as catastrophic forgetting. Exploring different hyperparameters is recommended to get the best results on your own dataset.
-
-</Tip>
-
-Text-to-image models such as Stable Diffusion generate an image from a text prompt. This guide shows how to fine-tune the [`CompVis/stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) model on your own dataset with PyTorch and Flax. All of the text-to-image fine-tuning training scripts used in this guide can be found in this [repository](https://github.com/huggingface/diffusers/tree/main/examples/text_to_image) if you'd like to take a closer look.
-
-Before running the script, install the library's training dependencies:
-
-```bash
-pip install git+https://github.com/huggingface/diffusers.git
-pip install -U -r requirements.txt
-```
-
-Then initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment:
-
-```bash
-accelerate config
-```
-
-If you have already cloned the repository, you don't need to perform this step. Instead, you can point the training script to your local checkout and it will be loaded from there.
-
-### Hardware requirements
-
-With `gradient_checkpointing` and `mixed_precision`, you can fine-tune the model on a single 24GB GPU. For a higher `batch_size` and faster training, it is better to use a GPU with more than 30GB of memory. You can also use JAX or Flax for fine-tuning on TPUs or GPUs; see [below](#flax-jax-finetuning) for details.
-
-You can reduce memory usage even further by enabling memory efficient attention with xFormers. Make sure [xFormers is installed](./optimization/xformers) and pass `--enable_xformers_memory_efficient_attention` to the training script.
-
-xFormers is not available for Flax.
-
-## Upload the model to the Hub
-
-Save the model to the Hub by adding the following argument to the training script:
-
-```bash
-  --push_to_hub
-```
-
-
-## Save and load checkpoints
-
-It is a good idea to save checkpoints regularly in case anything happens during training. To save a checkpoint, pass the following argument to the training script:
-
-```bash
-  --checkpointing_steps=500
-```
-
-Every 500 steps, the full training state is saved in a subfolder of `output_dir`. A checkpoint is named `checkpoint-` followed by the number of steps trained so far. For example, `checkpoint-1500` is the checkpoint saved after 1500 training steps.
-
-To load a checkpoint and resume training, pass the `--resume_from_checkpoint` argument to the training script and specify the checkpoint to resume from. For example, the following argument resumes training from the checkpoint saved after 1500 training steps:
-
-```bash
-  --resume_from_checkpoint="checkpoint-1500"
-```
-
-## Fine-tuning
-
-<frameworkcontent>
-<pt>
-Launch the [PyTorch training script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) for a fine-tuning run on the [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) dataset like this:
-
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export dataset_name="lambdalabs/pokemon-blip-captions"
-
-accelerate launch train_text_to_image.py \
-  --pretrained_model_name_or_path=$MODEL_NAME \
-  --dataset_name=$dataset_name \
-  --use_ema \
-  --resolution=512 --center_crop --random_flip \
-  --train_batch_size=1 \
-  --gradient_accumulation_steps=4 \
-  --gradient_checkpointing \
-  --mixed_precision="fp16" \
-  --max_train_steps=15000 \
-  --learning_rate=1e-05 \
-  --max_grad_norm=1 \
-  --lr_scheduler="constant" --lr_warmup_steps=0 \
-  --output_dir="sd-pokemon-model"
-```
-
-To fine-tune on your own dataset, prepare it in the format required by 🤗 [Datasets](https://huggingface.co/docs/datasets/index). You can [upload your dataset to the Hub](https://huggingface.co/docs/datasets/image_dataset#upload-dataset-to-the-hub) or [prepare a local folder with your files](https://huggingface.co/docs/datasets/image_dataset#imagefolder).
-
-Modify the script if you want to use custom loading logic; we left pointers at the appropriate places in the code to help you. 🤗 The example script below shows how to fine-tune on a local dataset in `TRAIN_DIR` and where to save the model with `OUTPUT_DIR`:
-
-
-```bash
-export MODEL_NAME="CompVis/stable-diffusion-v1-4"
-export TRAIN_DIR="path_to_your_dataset"
-export OUTPUT_DIR="path_to_save_model"
-
-accelerate launch train_text_to_image.py \
-  --pretrained_model_name_or_path=$MODEL_NAME \
-  --train_data_dir=$TRAIN_DIR \
-  --use_ema \
-  --resolution=512 --center_crop --random_flip \
-  --train_batch_size=1 \
-  --gradient_accumulation_steps=4 \
-  --gradient_checkpointing \
-  --mixed_precision="fp16" \
-  --max_train_steps=15000 \
-  --learning_rate=1e-05 \
-  --max_grad_norm=1 \
-  --lr_scheduler="constant" --lr_warmup_steps=0 \
-  --output_dir=${OUTPUT_DIR}
-```
-
-</pt>
-<jax>
-Thanks to a contribution from [@duongna211](https://github.com/duongna21), you can train Stable Diffusion models faster on TPUs and GPUs with Flax. This is very efficient on TPU hardware but also works great on GPUs. The Flax training script does not yet support features such as gradient checkpointing or gradient accumulation, so you need a GPU with at least 30GB of memory or a TPU v3.
-
-Before running the script, make sure the requirements are installed:
-
-```bash
-pip install -U -r requirements_flax.txt
-```
-
-Then you can launch the [Flax training script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_flax.py) like this:
-
-```bash
-export MODEL_NAME="runwayml/stable-diffusion-v1-5"
-export dataset_name="lambdalabs/pokemon-blip-captions"
-
-python train_text_to_image_flax.py \
-  --pretrained_model_name_or_path=$MODEL_NAME \
-  --dataset_name=$dataset_name \
-  --resolution=512 --center_crop --random_flip \
-  --train_batch_size=1 \
-  --max_train_steps=15000 \
-  --learning_rate=1e-05 \
-  --max_grad_norm=1 \
-  --output_dir="sd-pokemon-model"
-```
-
-To fine-tune on your own dataset, prepare it in the format required by 🤗 [Datasets](https://huggingface.co/docs/datasets/index). You can [upload your dataset to the Hub](https://huggingface.co/docs/datasets/image_dataset#upload-dataset-to-the-hub) or [prepare a local folder with your files](https://huggingface.co/docs/datasets/image_dataset#imagefolder).
-
-Modify the script if you want to use custom loading logic; we left pointers at the appropriate places in the code to help you. 🤗 The example script below shows how to fine-tune on a local dataset in `TRAIN_DIR`:
-
-```bash
-export MODEL_NAME="duongna/stable-diffusion-v1-4-flax"
-export TRAIN_DIR="path_to_your_dataset"
-
-python train_text_to_image_flax.py \
-  --pretrained_model_name_or_path=$MODEL_NAME \
-  --train_data_dir=$TRAIN_DIR \
-  --resolution=512 --center_crop --random_flip \
-  --train_batch_size=1 \
-  --mixed_precision="fp16" \
-  --max_train_steps=15000 \
-  --learning_rate=1e-05 \
-  --max_grad_norm=1 \
-  --output_dir="sd-pokemon-model"
-```
-</jax>
-</frameworkcontent>
-
-## LoRA
-
-For fine-tuning text-to-image models, you can also use LoRA (Low-Rank Adaptation of Large Language Models), a fine-tuning technique for accelerating the training of large models. See the [LoRA training](lora#text-to-image) guide for details.
-
-## Inference
-
-Load the fine-tuned model for inference by passing the model path or model name on the Hub to [`StableDiffusionPipeline`]:
-
-<frameworkcontent>
-<pt>
-```python
-import torch
-from diffusers import StableDiffusionPipeline
-
-model_path = "path_to_saved_model"
-pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16)
-pipe.to("cuda")
-
-image = pipe(prompt="yoda").images[0]
-image.save("yoda-pokemon.png")
-```
-</pt>
-<jax>
-```python
-import jax
-import numpy as np
-from flax.jax_utils import replicate
-from flax.training.common_utils import shard
-from diffusers import FlaxStableDiffusionPipeline
-
-model_path = "path_to_saved_model"
-pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(model_path, dtype=jax.numpy.bfloat16)
-
-prompt = "yoda pokemon"
-prng_seed = jax.random.PRNGKey(0)
-num_inference_steps = 50
-
-num_samples = jax.device_count()
-prompt = num_samples * [prompt]
-prompt_ids = pipeline.prepare_inputs(prompt)
-
-# shard inputs and rng
-params = replicate(params)
-prng_seed = jax.random.split(prng_seed, jax.device_count())
-prompt_ids = shard(prompt_ids)
-
-images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
-images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
-images[0].save("yoda-pokemon.png")
-```
-</jax>
-</frameworkcontent>
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion.py
DELETED
@@ -1,421 +0,0 @@
|
|
1 |
-
import inspect
|
2 |
-
from typing import Callable, List, Optional, Union
|
3 |
-
|
4 |
-
import PIL.Image
|
5 |
-
import torch
|
6 |
-
from transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer, CLIPVisionModel
|
7 |
-
|
8 |
-
from ...models import AutoencoderKL, UNet2DConditionModel
|
9 |
-
from ...schedulers import KarrasDiffusionSchedulers
|
10 |
-
from ...utils import logging
|
11 |
-
from ..pipeline_utils import DiffusionPipeline
|
12 |
-
from .pipeline_versatile_diffusion_dual_guided import VersatileDiffusionDualGuidedPipeline
|
13 |
-
from .pipeline_versatile_diffusion_image_variation import VersatileDiffusionImageVariationPipeline
|
14 |
-
from .pipeline_versatile_diffusion_text_to_image import VersatileDiffusionTextToImagePipeline
|
15 |
-
|
16 |
-
|
17 |
-
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
|
18 |
-
|
19 |
-
|
20 |
-
class VersatileDiffusionPipeline(DiffusionPipeline):
|
21 |
-
r"""
|
22 |
-
Pipeline for text-to-image generation using Stable Diffusion.
|
23 |
-
|
24 |
-
This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
|
25 |
-
implemented for all pipelines (downloading, saving, running on a particular device, etc.).
|
26 |
-
|
27 |
-
Args:
|
28 |
-
vae ([`AutoencoderKL`]):
|
29 |
-
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
|
30 |
-
text_encoder ([`~transformers.CLIPTextModel`]):
|
31 |
-
Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
|
32 |
-
tokenizer ([`~transformers.CLIPTokenizer`]):
|
33 |
-
A `CLIPTokenizer` to tokenize text.
|
34 |
-
unet ([`UNet2DConditionModel`]):
|
35 |
-
A `UNet2DConditionModel` to denoise the encoded image latents.
|
36 |
-
scheduler ([`SchedulerMixin`]):
|
37 |
-
A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
|
38 |
-
[`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
|
39 |
-
safety_checker ([`StableDiffusionSafetyChecker`]):
|
40 |
-
Classification module that estimates whether generated images could be considered offensive or harmful.
|
41 |
-
Please refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for more details
|
42 |
-
about a model's potential harms.
|
43 |
-
feature_extractor ([`~transformers.CLIPImageProcessor`]):
|
44 |
-
A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.
|
45 |
-
"""
|
46 |
-
|
47 |
-
tokenizer: CLIPTokenizer
|
48 |
-
image_feature_extractor: CLIPImageProcessor
|
49 |
-
text_encoder: CLIPTextModel
|
50 |
-
image_encoder: CLIPVisionModel
|
51 |
-
image_unet: UNet2DConditionModel
|
52 |
-
text_unet: UNet2DConditionModel
|
53 |
-
vae: AutoencoderKL
|
54 |
-
scheduler: KarrasDiffusionSchedulers
|
55 |
-
|
56 |
-
def __init__(
|
57 |
-
self,
|
58 |
-
tokenizer: CLIPTokenizer,
|
59 |
-
image_feature_extractor: CLIPImageProcessor,
|
60 |
-
text_encoder: CLIPTextModel,
|
61 |
-
image_encoder: CLIPVisionModel,
|
62 |
-
image_unet: UNet2DConditionModel,
|
63 |
-
text_unet: UNet2DConditionModel,
|
64 |
-
vae: AutoencoderKL,
|
65 |
-
scheduler: KarrasDiffusionSchedulers,
|
66 |
-
):
|
67 |
-
super().__init__()
|
68 |
-
|
69 |
-
self.register_modules(
|
70 |
-
tokenizer=tokenizer,
|
71 |
-
image_feature_extractor=image_feature_extractor,
|
72 |
-
text_encoder=text_encoder,
|
73 |
-
image_encoder=image_encoder,
|
74 |
-
image_unet=image_unet,
|
75 |
-
text_unet=text_unet,
|
76 |
-
vae=vae,
|
77 |
-
scheduler=scheduler,
|
78 |
-
)
|
79 |
-
self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
|
80 |
-
|
81 |
-
@torch.no_grad()
|
82 |
-
def image_variation(
|
83 |
-
self,
|
84 |
-
image: Union[torch.FloatTensor, PIL.Image.Image],
|
85 |
-
height: Optional[int] = None,
|
86 |
-
width: Optional[int] = None,
|
87 |
-
num_inference_steps: int = 50,
|
88 |
-
guidance_scale: float = 7.5,
|
89 |
-
negative_prompt: Optional[Union[str, List[str]]] = None,
|
90 |
-
num_images_per_prompt: Optional[int] = 1,
|
91 |
-
eta: float = 0.0,
|
92 |
-
generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
|
93 |
-
latents: Optional[torch.FloatTensor] = None,
|
94 |
-
output_type: Optional[str] = "pil",
|
95 |
-
return_dict: bool = True,
|
96 |
-
callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
|
97 |
-
callback_steps: int = 1,
|
98 |
-
):
|
99 |
-
r"""
|
100 |
-
The call function to the pipeline for generation.
|
101 |
-
|
102 |
-
Args:
|
103 |
-
image (`PIL.Image.Image`, `List[PIL.Image.Image]` or `torch.Tensor`):
|
104 |
-
The image prompt or prompts to guide the image generation.
|
105 |
-
height (`int`, *optional*, defaults to `self.image_unet.config.sample_size * self.vae_scale_factor`):
|
106 |
-
The height in pixels of the generated image.
|
107 |
-
width (`int`, *optional*, defaults to `self.image_unet.config.sample_size * self.vae_scale_factor`):
|
108 |
-
The width in pixels of the generated image.
|
109 |
-
num_inference_steps (`int`, *optional*, defaults to 50):
|
110 |
-
The number of denoising steps. More denoising steps usually lead to a higher quality image at the
|
111 |
-
expense of slower inference.
|
112 |
-
guidance_scale (`float`, *optional*, defaults to 7.5):
|
113 |
-
A higher guidance scale value encourages the model to generate images closely linked to the text
|
114 |
-
`prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
|
115 |
-
negative_prompt (`str` or `List[str]`, *optional*):
|
116 |
-
The prompt or prompts to guide what to not include in image generation. If not defined, you need to
|
117 |
-
pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
|
118 |
-
num_images_per_prompt (`int`, *optional*, defaults to 1):
|
119 |
-
The number of images to generate per prompt.
|
120 |
-
eta (`float`, *optional*, defaults to 0.0):
|
121 |
-
Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
|
122 |
-
to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
|
123 |
-
generator (`torch.Generator`, *optional*):
|
124 |
-
A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
|
125 |
-
generation deterministic.
|
126 |
-
latents (`torch.FloatTensor`, *optional*):
|
127 |
-
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
|
128 |
-
generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
|
129 |
-
tensor is generated by sampling using the supplied random `generator`.
|
130 |
-
output_type (`str`, *optional*, defaults to `"pil"`):
|
131 |
-
The output format of the generated image. Choose between `PIL.Image` or `np.array`.
|
132 |
-
return_dict (`bool`, *optional*, defaults to `True`):
|
133 |
-
Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
|
134 |
-
plain tuple.
|
135 |
-
callback (`Callable`, *optional*):
|
136 |
-
A function that calls every `callback_steps` steps during inference. The function is called with the
|
137 |
-
following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
|
138 |
-
callback_steps (`int`, *optional*, defaults to 1):
|
139 |
-
The frequency at which the `callback` function is called. If not specified, the callback is called at
|
140 |
-
every step.
|
141 |
-
|
142 |
-
Examples:
|
143 |
-
|
144 |
-
```py
|
145 |
-
>>> from diffusers import VersatileDiffusionPipeline
|
146 |
-
>>> import torch
|
147 |
-
>>> import requests
|
148 |
-
>>> from io import BytesIO
|
149 |
-
>>> from PIL import Image
|
150 |
-
|
151 |
-
>>> # let's download an initial image
|
152 |
-
>>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg"
|
153 |
-
|
154 |
-
>>> response = requests.get(url)
|
155 |
-
>>> image = Image.open(BytesIO(response.content)).convert("RGB")
|
156 |
-
|
157 |
-
>>> pipe = VersatileDiffusionPipeline.from_pretrained(
|
158 |
-
... "shi-labs/versatile-diffusion", torch_dtype=torch.float16
|
159 |
-
... )
|
160 |
-
>>> pipe = pipe.to("cuda")
|
161 |
-
|
162 |
-
>>> generator = torch.Generator(device="cuda").manual_seed(0)
|
163 |
-
>>> image = pipe.image_variation(image, generator=generator).images[0]
|
164 |
-
>>> image.save("./car_variation.png")
|
165 |
-
```
|
166 |
-
|
167 |
-
Returns:
|
168 |
-
[`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
|
169 |
-
If `return_dict` is `True`, [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] is returned,
|
170 |
-
otherwise a `tuple` is returned where the first element is a list with the generated images and the
|
171 |
-
second element is a list of `bool`s indicating whether the corresponding generated image contains
|
172 |
-
"not-safe-for-work" (nsfw) content.
|
173 |
-
"""
|
174 |
-
expected_components = inspect.signature(VersatileDiffusionImageVariationPipeline.__init__).parameters.keys()
|
175 |
-
components = {name: component for name, component in self.components.items() if name in expected_components}
|
176 |
-
return VersatileDiffusionImageVariationPipeline(**components)(
|
177 |
-
image=image,
|
178 |
-
height=height,
|
179 |
-
width=width,
|
180 |
-
num_inference_steps=num_inference_steps,
|
181 |
-
guidance_scale=guidance_scale,
|
182 |
-
negative_prompt=negative_prompt,
|
183 |
-
num_images_per_prompt=num_images_per_prompt,
|
184 |
-
eta=eta,
|
185 |
-
generator=generator,
|
186 |
-
latents=latents,
|
187 |
-
output_type=output_type,
|
188 |
-
return_dict=return_dict,
|
189 |
-
callback=callback,
|
190 |
-
callback_steps=callback_steps,
|
191 |
-
)
|
192 |
-
|
193 |
-
@torch.no_grad()
|
194 |
-
def text_to_image(
|
195 |
-
self,
|
196 |
-
prompt: Union[str, List[str]],
|
197 |
-
height: Optional[int] = None,
|
198 |
-
width: Optional[int] = None,
|
199 |
-
num_inference_steps: int = 50,
|
200 |
-
guidance_scale: float = 7.5,
|
201 |
-
negative_prompt: Optional[Union[str, List[str]]] = None,
|
202 |
-
num_images_per_prompt: Optional[int] = 1,
|
203 |
-
eta: float = 0.0,
|
204 |
-
generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
|
205 |
-
latents: Optional[torch.FloatTensor] = None,
|
206 |
-
output_type: Optional[str] = "pil",
|
207 |
-
return_dict: bool = True,
|
208 |
-
callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
|
209 |
-
callback_steps: int = 1,
|
210 |
-
):
|
211 |
-
r"""
|
212 |
-
The call function to the pipeline for generation.
|
213 |
-
|
214 |
-
Args:
|
215 |
-
prompt (`str` or `List[str]`):
|
216 |
-
The prompt or prompts to guide image generation.
|
217 |
-
height (`int`, *optional*, defaults to `self.image_unet.config.sample_size * self.vae_scale_factor`):
|
218 |
-
The height in pixels of the generated image.
|
219 |
-
width (`int`, *optional*, defaults to `self.image_unet.config.sample_size * self.vae_scale_factor`):
|
220 |
-
The width in pixels of the generated image.
|
221 |
-
num_inference_steps (`int`, *optional*, defaults to 50):
|
222 |
-
The number of denoising steps. More denoising steps usually lead to a higher quality image at the
|
223 |
-
expense of slower inference.
|
224 |
-
guidance_scale (`float`, *optional*, defaults to 7.5):
|
225 |
-
A higher guidance scale value encourages the model to generate images closely linked to the text
|
226 |
-
`prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
|
227 |
-
negative_prompt (`str` or `List[str]`, *optional*):
|
228 |
-
The prompt or prompts to guide what to not include in image generation. If not defined, you need to
|
229 |
-
pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
|
230 |
-
num_images_per_prompt (`int`, *optional*, defaults to 1):
|
231 |
-
The number of images to generate per prompt.
|
232 |
-
eta (`float`, *optional*, defaults to 0.0):
|
233 |
-
Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
|
234 |
-
to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
|
235 |
-
generator (`torch.Generator`, *optional*):
|
236 |
-
A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
|
237 |
-
generation deterministic.
|
238 |
-
latents (`torch.FloatTensor`, *optional*):
|
239 |
-
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
|
240 |
-
generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
|
241 |
-
tensor is generated by sampling using the supplied random `generator`.
|
242 |
-
output_type (`str`, *optional*, defaults to `"pil"`):
|
243 |
-
The output format of the generated image. Choose between `PIL.Image` or `np.array`.
|
244 |
-
return_dict (`bool`, *optional*, defaults to `True`):
|
245 |
-
Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
|
246 |
-
plain tuple.
|
247 |
-
callback (`Callable`, *optional*):
|
248 |
-
A function that calls every `callback_steps` steps during inference. The function is called with the
|
249 |
-
following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
|
250 |
-
callback_steps (`int`, *optional*, defaults to 1):
|
251 |
-
The frequency at which the `callback` function is called. If not specified, the callback is called at
|
252 |
-
every step.
|
253 |
-
|
254 |
-
Examples:
|
255 |
-
|
256 |
-
```py
|
257 |
-
>>> from diffusers import VersatileDiffusionPipeline
|
258 |
-
>>> import torch
|
259 |
-
|
260 |
-
>>> pipe = VersatileDiffusionPipeline.from_pretrained(
|
261 |
-
... "shi-labs/versatile-diffusion", torch_dtype=torch.float16
|
262 |
-
... )
|
263 |
-
>>> pipe = pipe.to("cuda")
|
264 |
-
|
265 |
-
>>> generator = torch.Generator(device="cuda").manual_seed(0)
|
266 |
-
>>> image = pipe.text_to_image("an astronaut riding on a horse on mars", generator=generator).images[0]
|
267 |
-
>>> image.save("./astronaut.png")
|
268 |
-
```
|
269 |
-
|
270 |
-
Returns:
|
271 |
-
[`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
|
272 |
-
If `return_dict` is `True`, [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] is returned,
|
273 |
-
otherwise a `tuple` is returned where the first element is a list with the generated images and the
|
274 |
-
second element is a list of `bool`s indicating whether the corresponding generated image contains
|
275 |
-
"not-safe-for-work" (nsfw) content.
|
276 |
-
"""
|
277 |
-
expected_components = inspect.signature(VersatileDiffusionTextToImagePipeline.__init__).parameters.keys()
|
278 |
-
components = {name: component for name, component in self.components.items() if name in expected_components}
|
279 |
-
temp_pipeline = VersatileDiffusionTextToImagePipeline(**components)
|
280 |
-
output = temp_pipeline(
|
281 |
-
prompt=prompt,
|
282 |
-
height=height,
|
283 |
-
width=width,
|
284 |
-
num_inference_steps=num_inference_steps,
|
285 |
-
guidance_scale=guidance_scale,
|
286 |
-
negative_prompt=negative_prompt,
|
287 |
-
num_images_per_prompt=num_images_per_prompt,
|
288 |
-
eta=eta,
|
289 |
-
generator=generator,
|
290 |
-
latents=latents,
|
291 |
-
output_type=output_type,
|
292 |
-
return_dict=return_dict,
|
293 |
-
callback=callback,
|
294 |
-
callback_steps=callback_steps,
|
295 |
-
)
|
296 |
-
# swap the attention blocks back to the original state
|
297 |
-
temp_pipeline._swap_unet_attention_blocks()
|
298 |
-
|
299 |
-
return output
|
300 |
-
|
301 |
-
@torch.no_grad()
|
302 |
-
def dual_guided(
|
303 |
-
self,
|
304 |
-
prompt: Union[PIL.Image.Image, List[PIL.Image.Image]],
|
305 |
-
image: Union[str, List[str]],
|
306 |
-
text_to_image_strength: float = 0.5,
|
307 |
-
height: Optional[int] = None,
|
308 |
-
width: Optional[int] = None,
|
309 |
-
num_inference_steps: int = 50,
|
310 |
-
guidance_scale: float = 7.5,
|
311 |
-
num_images_per_prompt: Optional[int] = 1,
|
312 |
-
eta: float = 0.0,
|
313 |
-
generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
|
314 |
-
latents: Optional[torch.FloatTensor] = None,
|
315 |
-
output_type: Optional[str] = "pil",
|
316 |
-
return_dict: bool = True,
|
317 |
-
callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
|
318 |
-
callback_steps: int = 1,
|
319 |
-
):
|
320 |
-
r"""
|
321 |
-
The call function to the pipeline for generation.
|
322 |
-
|
323 |
-
Args:
|
324 |
-
prompt (`str` or `List[str]`):
|
325 |
-
The prompt or prompts to guide image generation.
|
326 |
-
            height (`int`, *optional*, defaults to `self.image_unet.config.sample_size * self.vae_scale_factor`):
                The height in pixels of the generated image.
            width (`int`, *optional*, defaults to `self.image_unet.config.sample_size * self.vae_scale_factor`):
                The width in pixels of the generated image.
            num_inference_steps (`int`, *optional*, defaults to 50):
                The number of denoising steps. More denoising steps usually lead to a higher quality image at the
                expense of slower inference.
            guidance_scale (`float`, *optional*, defaults to 7.5):
                A higher guidance scale value encourages the model to generate images closely linked to the text
                `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
            negative_prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts to guide what to not include in image generation. If not defined, you need to
                pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
            num_images_per_prompt (`int`, *optional*, defaults to 1):
                The number of images to generate per prompt.
            eta (`float`, *optional*, defaults to 0.0):
                Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
                to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
            generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
                A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
                generation deterministic.
            latents (`torch.FloatTensor`, *optional*):
                Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
                generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
                tensor is generated by sampling using the supplied random `generator`.
            output_type (`str`, *optional*, defaults to `"pil"`):
                The output format of the generated image. Choose between `PIL.Image` or `np.array`.
            return_dict (`bool`, *optional*, defaults to `True`):
                Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
                plain tuple.
            callback (`Callable`, *optional*):
                A function that is called every `callback_steps` steps during inference. The function is called with
                the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
            callback_steps (`int`, *optional*, defaults to 1):
                The frequency at which the `callback` function is called. If not specified, the callback is called at
                every step.

        Examples:

        ```py
        >>> from diffusers import VersatileDiffusionPipeline
        >>> import torch
        >>> import requests
        >>> from io import BytesIO
        >>> from PIL import Image

        >>> # let's download an initial image
        >>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg"

        >>> response = requests.get(url)
        >>> image = Image.open(BytesIO(response.content)).convert("RGB")
        >>> text = "a red car in the sun"

        >>> pipe = VersatileDiffusionPipeline.from_pretrained(
        ...     "shi-labs/versatile-diffusion", torch_dtype=torch.float16
        ... )
        >>> pipe = pipe.to("cuda")

        >>> generator = torch.Generator(device="cuda").manual_seed(0)
        >>> text_to_image_strength = 0.75

        >>> image = pipe.dual_guided(
        ...     prompt=text, image=image, text_to_image_strength=text_to_image_strength, generator=generator
        ... ).images[0]
        >>> image.save("./car_variation.png")
        ```

        Returns:
            [`~pipelines.ImagePipelineOutput`] or `tuple`:
                If `return_dict` is `True`, [`~pipelines.ImagePipelineOutput`] is returned, otherwise a `tuple` is
                returned where the first element is a list with the generated images.
        """

        expected_components = inspect.signature(VersatileDiffusionDualGuidedPipeline.__init__).parameters.keys()
        components = {name: component for name, component in self.components.items() if name in expected_components}
        temp_pipeline = VersatileDiffusionDualGuidedPipeline(**components)
        output = temp_pipeline(
            prompt=prompt,
            image=image,
            text_to_image_strength=text_to_image_strength,
            height=height,
            width=width,
            num_inference_steps=num_inference_steps,
            guidance_scale=guidance_scale,
            num_images_per_prompt=num_images_per_prompt,
            eta=eta,
            generator=generator,
            latents=latents,
            output_type=output_type,
            return_dict=return_dict,
            callback=callback,
            callback_steps=callback_steps,
        )
        temp_pipeline._revert_dual_attention()

        return output
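The wrapper above builds a temporary sub-pipeline by filtering the parent's components down to what the sub-pipeline's constructor accepts. A minimal sketch of that filtering pattern, using a toy `SubPipeline` class (the names and component values are illustrative, not part of diffusers):

```python
import inspect


class SubPipeline:
    """Toy stand-in for a task-specific pipeline (illustrative only)."""

    def __init__(self, unet, scheduler):
        self.unet = unet
        self.scheduler = scheduler


# The parent pipeline may hold more components than the sub-pipeline needs.
components = {"unet": "unet-obj", "scheduler": "sched-obj", "image_encoder": "enc-obj"}

# Keep only the components SubPipeline.__init__ actually accepts, mirroring the
# inspect.signature(...).parameters.keys() trick used in dual_guided above.
expected = inspect.signature(SubPipeline.__init__).parameters.keys()
filtered = {name: comp for name, comp in components.items() if name in expected}

sub = SubPipeline(**filtered)  # image_encoder is silently dropped
print(sorted(filtered))        # ['scheduler', 'unet']
```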
spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/CLIP/clip/clip.py
DELETED
@@ -1,225 +0,0 @@
import hashlib
import os
import urllib.request
import warnings
from typing import Any, Union, List

import torch
from PIL import Image
from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize
from tqdm import tqdm

from .model import build_model
from .simple_tokenizer import SimpleTokenizer as _Tokenizer

try:
    from torchvision.transforms import InterpolationMode
    BICUBIC = InterpolationMode.BICUBIC
except ImportError:
    BICUBIC = Image.BICUBIC


# Compare numeric version components; a plain string comparison such as
# `torch.__version__.split(".") < ["1", "7", "1"]` sorts lexicographically
# and misfires on versions like "1.10".
if tuple(int(p) for p in torch.__version__.split("+")[0].split(".")[:3] if p.isdigit()) < (1, 7, 1):
    warnings.warn("PyTorch version 1.7.1 or higher is recommended")


__all__ = ["available_models", "load", "tokenize"]
_tokenizer = _Tokenizer()

_MODELS = {
    "RN50": "https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt",
    "RN101": "https://openaipublic.azureedge.net/clip/models/8fa8567bab74a42d41c5915025a8e4538c3bdbe8804a470a72f30b0d94fab599/RN101.pt",
    "RN50x4": "https://openaipublic.azureedge.net/clip/models/7e526bd135e493cef0776de27d5f42653e6b4c8bf9e0f653bb11773263205fdd/RN50x4.pt",
    "RN50x16": "https://openaipublic.azureedge.net/clip/models/52378b407f34354e150460fe41077663dd5b39c54cd0bfd2b27167a4a06ec9aa/RN50x16.pt",
    "ViT-B/32": "https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt",
    "ViT-B/16": "https://openaipublic.azureedge.net/clip/models/5806e77cd80f8b59890b7e101eabd078d9fb84e6937f9e85e4ecb61988df416f/ViT-B-16.pt",
}


def _download(url: str, root: str):
    os.makedirs(root, exist_ok=True)
    filename = os.path.basename(url)

    expected_sha256 = url.split("/")[-2]
    download_target = os.path.join(root, filename)

    if os.path.exists(download_target) and not os.path.isfile(download_target):
        raise RuntimeError(f"{download_target} exists and is not a regular file")

    if os.path.isfile(download_target):
        if hashlib.sha256(open(download_target, "rb").read()).hexdigest() == expected_sha256:
            return download_target
        else:
            warnings.warn(f"{download_target} exists, but the SHA256 checksum does not match; re-downloading the file")

    with urllib.request.urlopen(url) as source, open(download_target, "wb") as output:
        with tqdm(total=int(source.info().get("Content-Length")), ncols=80, unit='iB', unit_scale=True, unit_divisor=1024) as loop:
            while True:
                buffer = source.read(8192)
                if not buffer:
                    break

                output.write(buffer)
                loop.update(len(buffer))

    if hashlib.sha256(open(download_target, "rb").read()).hexdigest() != expected_sha256:
        raise RuntimeError("Model has been downloaded but the SHA256 checksum does not match")

    return download_target


def _transform(n_px):
    return Compose([
        Resize(n_px, interpolation=BICUBIC),
        CenterCrop(n_px),
        lambda image: image.convert("RGB"),
        ToTensor(),
        Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)),
    ])


def available_models() -> List[str]:
    """Returns the names of available CLIP models"""
    return list(_MODELS.keys())


def load(name: str, device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu", jit: bool = False, download_root: str = None):
    """Load a CLIP model

    Parameters
    ----------
    name : str
        A model name listed by `clip.available_models()`, or the path to a model checkpoint containing the state_dict

    device : Union[str, torch.device]
        The device to put the loaded model

    jit : bool
        Whether to load the optimized JIT model or more hackable non-JIT model (default).

    download_root: str
        path to download the model files; by default, it uses "~/.cache/clip"

    Returns
    -------
    model : torch.nn.Module
        The CLIP model

    preprocess : Callable[[PIL.Image], torch.Tensor]
        A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input
    """
    if name in _MODELS:
        model_path = _download(_MODELS[name], download_root or os.path.expanduser("~/.cache/clip"))
    elif os.path.isfile(name):
        model_path = name
    else:
        raise RuntimeError(f"Model {name} not found; available models = {available_models()}")

    try:
        # loading JIT archive
        model = torch.jit.load(model_path, map_location=device if jit else "cpu").eval()
        state_dict = None
    except RuntimeError:
        # loading saved state dict
        if jit:
            warnings.warn(f"File {model_path} is not a JIT archive. Loading as a state dict instead")
            jit = False
        state_dict = torch.load(model_path, map_location="cpu")

    if not jit:
        model = build_model(state_dict or model.state_dict()).to(device)
        if str(device) == "cpu":
            model.float()
        return model, _transform(model.visual.input_resolution)

    # patch the device names
    device_holder = torch.jit.trace(lambda: torch.ones([]).to(torch.device(device)), example_inputs=[])
    device_node = [n for n in device_holder.graph.findAllNodes("prim::Constant") if "Device" in repr(n)][-1]

    def patch_device(module):
        try:
            graphs = [module.graph] if hasattr(module, "graph") else []
        except RuntimeError:
            graphs = []

        if hasattr(module, "forward1"):
            graphs.append(module.forward1.graph)

        for graph in graphs:
            for node in graph.findAllNodes("prim::Constant"):
                if "value" in node.attributeNames() and str(node["value"]).startswith("cuda"):
                    node.copyAttributes(device_node)

    model.apply(patch_device)
    patch_device(model.encode_image)
    patch_device(model.encode_text)

    # patch dtype to float32 on CPU
    if str(device) == "cpu":
        float_holder = torch.jit.trace(lambda: torch.ones([]).float(), example_inputs=[])
        float_input = list(float_holder.graph.findNode("aten::to").inputs())[1]
        float_node = float_input.node()

        def patch_float(module):
            try:
                graphs = [module.graph] if hasattr(module, "graph") else []
            except RuntimeError:
                graphs = []

            if hasattr(module, "forward1"):
                graphs.append(module.forward1.graph)

            for graph in graphs:
                for node in graph.findAllNodes("aten::to"):
                    inputs = list(node.inputs())
                    for i in [1, 2]:  # dtype can be the second or third argument to aten::to()
                        if inputs[i].node()["value"] == 5:
                            inputs[i].node().copyAttributes(float_node)

        model.apply(patch_float)
        patch_float(model.encode_image)
        patch_float(model.encode_text)

        model.float()

    return model, _transform(model.input_resolution.item())


def tokenize(texts: Union[str, List[str]], context_length: int = 77, truncate: bool = False) -> torch.LongTensor:
    """
    Returns the tokenized representation of given input string(s)

    Parameters
    ----------
    texts : Union[str, List[str]]
        An input string or a list of input strings to tokenize

    context_length : int
        The context length to use; all CLIP models use 77 as the context length

    truncate: bool
        Whether to truncate the text in case its encoding is longer than the context length

    Returns
    -------
    A two-dimensional tensor containing the resulting tokens, shape = [number of input strings, context_length]
    """
    if isinstance(texts, str):
        texts = [texts]

    sot_token = _tokenizer.encoder["<|startoftext|>"]
    eot_token = _tokenizer.encoder["<|endoftext|>"]
    all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts]
    result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)

    for i, tokens in enumerate(all_tokens):
        if len(tokens) > context_length:
            if truncate:
                tokens = tokens[:context_length]
                tokens[-1] = eot_token
            else:
                raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
        result[i, :len(tokens)] = torch.tensor(tokens)

    return result
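For context, a short usage sketch of the `load` and `tokenize` API above. It assumes the module is importable as `clip` and that a GPU may or may not be present; the image path is a placeholder:

```python
import torch
from PIL import Image
import clip  # the module above, packaged as `clip`

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# `preprocess` is the torchvision transform returned by _transform().
image = preprocess(Image.open("photo.jpg")).unsqueeze(0).to(device)  # placeholder path
text = clip.tokenize(["a dog", "a cat"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print(probs)  # probability of each caption matching the image
```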
spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/scripts/super_res_sample.py
DELETED
@@ -1,119 +0,0 @@
"""
Generate a large batch of samples from a super resolution model, given a batch
of samples from a regular model from image_sample.py.
"""

import argparse
import os

import blobfile as bf
import numpy as np
import torch as th
import torch.distributed as dist

from guided_diffusion import dist_util, logger
from guided_diffusion.script_util import (
    sr_model_and_diffusion_defaults,
    sr_create_model_and_diffusion,
    args_to_dict,
    add_dict_to_argparser,
)


def main():
    args = create_argparser().parse_args()

    dist_util.setup_dist()
    logger.configure()

    logger.log("creating model...")
    model, diffusion = sr_create_model_and_diffusion(
        **args_to_dict(args, sr_model_and_diffusion_defaults().keys())
    )
    model.load_state_dict(
        dist_util.load_state_dict(args.model_path, map_location="cpu")
    )
    model.to(dist_util.dev())
    if args.use_fp16:
        model.convert_to_fp16()
    model.eval()

    logger.log("loading data...")
    data = load_data_for_worker(args.base_samples, args.batch_size, args.class_cond)

    logger.log("creating samples...")
    all_images = []
    while len(all_images) * args.batch_size < args.num_samples:
        model_kwargs = next(data)
        model_kwargs = {k: v.to(dist_util.dev()) for k, v in model_kwargs.items()}
        sample = diffusion.p_sample_loop(
            model,
            (args.batch_size, 3, args.large_size, args.large_size),
            clip_denoised=args.clip_denoised,
            model_kwargs=model_kwargs,
        )
        sample = ((sample + 1) * 127.5).clamp(0, 255).to(th.uint8)
        sample = sample.permute(0, 2, 3, 1)
        sample = sample.contiguous()

        all_samples = [th.zeros_like(sample) for _ in range(dist.get_world_size())]
        dist.all_gather(all_samples, sample)  # gather not supported with NCCL
        for sample in all_samples:
            all_images.append(sample.cpu().numpy())
        logger.log(f"created {len(all_images) * args.batch_size} samples")

    arr = np.concatenate(all_images, axis=0)
    arr = arr[: args.num_samples]
    if dist.get_rank() == 0:
        shape_str = "x".join([str(x) for x in arr.shape])
        out_path = os.path.join(logger.get_dir(), f"samples_{shape_str}.npz")
        logger.log(f"saving to {out_path}")
        np.savez(out_path, arr)

    dist.barrier()
    logger.log("sampling complete")


def load_data_for_worker(base_samples, batch_size, class_cond):
    with bf.BlobFile(base_samples, "rb") as f:
        obj = np.load(f)
        image_arr = obj["arr_0"]
        if class_cond:
            label_arr = obj["arr_1"]
    rank = dist.get_rank()
    num_ranks = dist.get_world_size()
    buffer = []
    label_buffer = []
    while True:
        for i in range(rank, len(image_arr), num_ranks):
            buffer.append(image_arr[i])
            if class_cond:
                label_buffer.append(label_arr[i])
            if len(buffer) == batch_size:
                batch = th.from_numpy(np.stack(buffer)).float()
                batch = batch / 127.5 - 1.0
                batch = batch.permute(0, 3, 1, 2)
                res = dict(low_res=batch)
                if class_cond:
                    res["y"] = th.from_numpy(np.stack(label_buffer))
                yield res
                buffer, label_buffer = [], []


def create_argparser():
    defaults = dict(
        clip_denoised=True,
        num_samples=10000,
        batch_size=16,
        use_ddim=False,
        base_samples="",
        model_path="",
    )
    defaults.update(sr_model_and_diffusion_defaults())
    parser = argparse.ArgumentParser()
    add_dict_to_argparser(parser, defaults)
    return parser


if __name__ == "__main__":
    main()
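The script converts model outputs from the [-1, 1] range to displayable `uint8`, and converts low-resolution conditioning images back the other way in `load_data_for_worker`. A minimal sketch of that round trip, using a random tensor in place of real model output:

```python
import torch as th

# Model space [-1, 1] -> displayable uint8 [0, 255], as in the sampling loop above.
sample = th.rand(1, 3, 64, 64) * 2 - 1
img_uint8 = ((sample + 1) * 127.5).clamp(0, 255).to(th.uint8)

# uint8 [0, 255] -> model space [-1, 1], as in load_data_for_worker above.
batch = img_uint8.permute(0, 2, 3, 1).float()  # NCHW -> NHWC, like the stored arrays
batch = batch / 127.5 - 1.0
assert batch.min() >= -1.0 and batch.max() <= 1.0
```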
spaces/AnthonyTruchetPoC/persistent-docker/tests/conftest.py
DELETED
@@ -1,11 +0,0 @@
# conftest.py
import pytest


@pytest.fixture
def unstub():
    """Ensure calls patched by mockito are cleared after each test"""
    from mockito import unstub

    yield
    unstub()
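A sketch of how a test might use this fixture, assuming the `mockito` package is installed; the patched call is arbitrary and chosen only for illustration:

```python
# test_example.py (illustrative; assumes the mockito package)
import os

from mockito import when


def test_path_check_is_stubbed(unstub):
    # Patch a call for the duration of this test; the `unstub` fixture
    # defined in conftest.py reverts the patch after the test finishes.
    when(os.path).exists("/nonexistent").thenReturn(True)
    assert os.path.exists("/nonexistent")
```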
spaces/Ashrafb/translate/tokenization_small100.py
DELETED
@@ -1,364 +0,0 @@
# Copyright (c) 2022 Idiap Research Institute, http://www.idiap.ch/
# Written by Alireza Mohammadshahi <[email protected]>
# This is a modified version of https://github.com/huggingface/transformers/blob/main/src/transformers/models/m2m_100/tokenization_m2m_100.py
# which is owned by the Fairseq Authors and The HuggingFace Inc. team.
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tokenization classes for SMALL100."""
import json
import os
from pathlib import Path
from shutil import copyfile
from typing import Any, Dict, List, Optional, Tuple, Union

import sentencepiece

from transformers.tokenization_utils import BatchEncoding, PreTrainedTokenizer
from transformers.utils import logging


logger = logging.get_logger(__name__)

SPIECE_UNDERLINE = "▁"

VOCAB_FILES_NAMES = {
    "vocab_file": "vocab.json",
    "spm_file": "sentencepiece.bpe.model",
    "tokenizer_config_file": "tokenizer_config.json",
}

PRETRAINED_VOCAB_FILES_MAP = {
    "vocab_file": {
        "alirezamsh/small100": "https://huggingface.co/alirezamsh/small100/resolve/main/vocab.json",
    },
    "spm_file": {
        "alirezamsh/small100": "https://huggingface.co/alirezamsh/small100/resolve/main/sentencepiece.bpe.model",
    },
    "tokenizer_config_file": {
        "alirezamsh/small100": "https://huggingface.co/alirezamsh/small100/resolve/main/tokenizer_config.json",
    },
}

PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
    "alirezamsh/small100": 1024,
}

# fmt: off
FAIRSEQ_LANGUAGE_CODES = {
    "m2m100": ["af", "am", "ar", "ast", "az", "ba", "be", "bg", "bn", "br", "bs", "ca", "ceb", "cs", "cy", "da", "de", "el", "en", "es", "et", "fa", "ff", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "ht", "hu", "hy", "id", "ig", "ilo", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "lb", "lg", "ln", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "ns", "oc", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sd", "si", "sk", "sl", "so", "sq", "sr", "ss", "su", "sv", "sw", "ta", "th", "tl", "tn", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yi", "yo", "zh", "zu"]
}
# fmt: on


class SMALL100Tokenizer(PreTrainedTokenizer):
    """
    Construct a SMALL100 tokenizer. Based on [SentencePiece](https://github.com/google/sentencepiece).
    This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to
    this superclass for more information regarding those methods.
    Args:
        vocab_file (`str`):
            Path to the vocabulary file.
        spm_file (`str`):
            Path to [SentencePiece](https://github.com/google/sentencepiece) file (generally has a .spm extension) that
            contains the vocabulary.
        tgt_lang (`str`, *optional*):
            A string representing the target language.
        eos_token (`str`, *optional*, defaults to `"</s>"`):
            The end of sequence token.
        sep_token (`str`, *optional*, defaults to `"</s>"`):
            The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for
            sequence classification or for a text and a question for question answering. It is also used as the last
            token of a sequence built with special tokens.
        unk_token (`str`, *optional*, defaults to `"<unk>"`):
            The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this
            token instead.
        pad_token (`str`, *optional*, defaults to `"<pad>"`):
            The token used for padding, for example when batching sequences of different lengths.
        language_codes (`str`, *optional*):
            What language codes to use. Should be `"m2m100"`.
        sp_model_kwargs (`dict`, *optional*):
            Will be passed to the `SentencePieceProcessor.__init__()` method. The [Python wrapper for
            SentencePiece](https://github.com/google/sentencepiece/tree/master/python) can be used, among other things,
            to set:
            - `enable_sampling`: Enable subword regularization.
            - `nbest_size`: Sampling parameters for unigram. Invalid for BPE-Dropout.
              - `nbest_size = {0,1}`: No sampling is performed.
              - `nbest_size > 1`: samples from the nbest_size results.
              - `nbest_size < 0`: assuming that nbest_size is infinite and samples from the all hypothesis (lattice)
                using forward-filtering-and-backward-sampling algorithm.
            - `alpha`: Smoothing parameter for unigram sampling, and dropout probability of merge operations for
              BPE-dropout.
    Examples:
    ```python
    >>> from tokenization_small100 import SMALL100Tokenizer
    >>> tokenizer = SMALL100Tokenizer.from_pretrained("alirezamsh/small100", tgt_lang="ro")
    >>> src_text = " UN Chief Says There Is No Military Solution in Syria"
    >>> tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria"
    >>> model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt")
    >>> model(**model_inputs)  # should work
    ```"""

    vocab_files_names = VOCAB_FILES_NAMES
    max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
    pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
    model_input_names = ["input_ids", "attention_mask"]

    prefix_tokens: List[int] = []
    suffix_tokens: List[int] = []

    def __init__(
        self,
        vocab_file,
        spm_file,
        tgt_lang=None,
        bos_token="<s>",
        eos_token="</s>",
        sep_token="</s>",
        pad_token="<pad>",
        unk_token="<unk>",
        language_codes="m2m100",
        sp_model_kwargs: Optional[Dict[str, Any]] = None,
        num_madeup_words=8,
        **kwargs,
    ) -> None:
        self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs

        self.language_codes = language_codes
        fairseq_language_code = FAIRSEQ_LANGUAGE_CODES[language_codes]
        self.lang_code_to_token = {lang_code: f"__{lang_code}__" for lang_code in fairseq_language_code}

        kwargs["additional_special_tokens"] = kwargs.get("additional_special_tokens", [])
        kwargs["additional_special_tokens"] += [
            self.get_lang_token(lang_code)
            for lang_code in fairseq_language_code
            if self.get_lang_token(lang_code) not in kwargs["additional_special_tokens"]
        ]

        super().__init__(
            tgt_lang=tgt_lang,
            bos_token=bos_token,
            eos_token=eos_token,
            sep_token=sep_token,
            unk_token=unk_token,
            pad_token=pad_token,
            language_codes=language_codes,
            sp_model_kwargs=self.sp_model_kwargs,
            num_madeup_words=num_madeup_words,
            **kwargs,
        )

        self.vocab_file = vocab_file
        self.encoder = load_json(vocab_file)
        self.decoder = {v: k for k, v in self.encoder.items()}
        self.spm_file = spm_file
        self.sp_model = load_spm(spm_file, self.sp_model_kwargs)

        self.encoder_size = len(self.encoder)

        self.lang_token_to_id = {
            self.get_lang_token(lang_code): self.encoder_size + i for i, lang_code in enumerate(fairseq_language_code)
        }
        self.lang_code_to_id = {lang_code: self.encoder_size + i for i, lang_code in enumerate(fairseq_language_code)}
        self.id_to_lang_token = {v: k for k, v in self.lang_token_to_id.items()}

        self._tgt_lang = tgt_lang if tgt_lang is not None else "en"
        self.cur_lang_id = self.get_lang_id(self._tgt_lang)
        self.set_lang_special_tokens(self._tgt_lang)

        self.num_madeup_words = num_madeup_words

    @property
    def vocab_size(self) -> int:
        return len(self.encoder) + len(self.lang_token_to_id) + self.num_madeup_words

    @property
    def tgt_lang(self) -> str:
        return self._tgt_lang

    @tgt_lang.setter
    def tgt_lang(self, new_tgt_lang: str) -> None:
        self._tgt_lang = new_tgt_lang
        self.set_lang_special_tokens(self._tgt_lang)

    def _tokenize(self, text: str) -> List[str]:
        return self.sp_model.encode(text, out_type=str)

    def _convert_token_to_id(self, token):
        if token in self.lang_token_to_id:
            return self.lang_token_to_id[token]
        return self.encoder.get(token, self.encoder[self.unk_token])

    def _convert_id_to_token(self, index: int) -> str:
        """Converts an index (integer) in a token (str) using the decoder."""
        if index in self.id_to_lang_token:
            return self.id_to_lang_token[index]
        return self.decoder.get(index, self.unk_token)

    def convert_tokens_to_string(self, tokens: List[str]) -> str:
        """Converts a sequence of tokens (strings for sub-words) in a single string."""
        return self.sp_model.decode(tokens)

    def get_special_tokens_mask(
        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False
    ) -> List[int]:
        """
        Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding
        special tokens using the tokenizer `prepare_for_model` method.
        Args:
            token_ids_0 (`List[int]`):
                List of IDs.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.
            already_has_special_tokens (`bool`, *optional*, defaults to `False`):
                Whether or not the token list is already formatted with special tokens for the model.
        Returns:
            `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
        """

        if already_has_special_tokens:
            return super().get_special_tokens_mask(
                token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True
            )

        prefix_ones = [1] * len(self.prefix_tokens)
        suffix_ones = [1] * len(self.suffix_tokens)
        if token_ids_1 is None:
            return prefix_ones + ([0] * len(token_ids_0)) + suffix_ones
        return prefix_ones + ([0] * len(token_ids_0)) + ([0] * len(token_ids_1)) + suffix_ones

    def build_inputs_with_special_tokens(
        self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None
    ) -> List[int]:
        """
        Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and
        adding special tokens. An MBART sequence has the following format, where `X` represents the sequence:
        - `input_ids` (for encoder) `X [eos, src_lang_code]`
        - `decoder_input_ids`: (for decoder) `X [eos, tgt_lang_code]`
        BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a
        separator.
        Args:
            token_ids_0 (`List[int]`):
                List of IDs to which the special tokens will be added.
            token_ids_1 (`List[int]`, *optional*):
                Optional second list of IDs for sequence pairs.
        Returns:
            `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
        """
        if token_ids_1 is None:
            if self.prefix_tokens is None:
                return token_ids_0 + self.suffix_tokens
            else:
                return self.prefix_tokens + token_ids_0 + self.suffix_tokens
        # We don't expect to process pairs, but leave the pair logic for API consistency
        if self.prefix_tokens is None:
            return token_ids_0 + token_ids_1 + self.suffix_tokens
        else:
            return self.prefix_tokens + token_ids_0 + token_ids_1 + self.suffix_tokens

    def get_vocab(self) -> Dict:
        vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)}
        vocab.update(self.added_tokens_encoder)
        return vocab

    def __getstate__(self) -> Dict:
        state = self.__dict__.copy()
        state["sp_model"] = None
        return state

    def __setstate__(self, d: Dict) -> None:
        self.__dict__ = d

        # for backward compatibility
        if not hasattr(self, "sp_model_kwargs"):
            self.sp_model_kwargs = {}

        self.sp_model = load_spm(self.spm_file, self.sp_model_kwargs)

    def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]:
        save_dir = Path(save_directory)
        if not save_dir.is_dir():
            raise OSError(f"{save_directory} should be a directory")
        vocab_save_path = save_dir / (
            (filename_prefix + "-" if filename_prefix else "") + self.vocab_files_names["vocab_file"]
        )
        spm_save_path = save_dir / (
            (filename_prefix + "-" if filename_prefix else "") + self.vocab_files_names["spm_file"]
        )

        save_json(self.encoder, vocab_save_path)

        if os.path.abspath(self.spm_file) != os.path.abspath(spm_save_path) and os.path.isfile(self.spm_file):
            copyfile(self.spm_file, spm_save_path)
        elif not os.path.isfile(self.spm_file):
            with open(spm_save_path, "wb") as fi:
                content_spiece_model = self.sp_model.serialized_model_proto()
                fi.write(content_spiece_model)

        return (str(vocab_save_path), str(spm_save_path))

    def prepare_seq2seq_batch(
        self,
        src_texts: List[str],
        tgt_texts: Optional[List[str]] = None,
        tgt_lang: str = "ro",
        **kwargs,
    ) -> BatchEncoding:
        self.tgt_lang = tgt_lang
        self.set_lang_special_tokens(self.tgt_lang)
        return super().prepare_seq2seq_batch(src_texts, tgt_texts, **kwargs)

    def _build_translation_inputs(self, raw_inputs, tgt_lang: Optional[str], **extra_kwargs):
        """Used by translation pipeline, to prepare inputs for the generate function"""
        if tgt_lang is None:
            raise ValueError("Translation requires a `tgt_lang` for this model")
        self.tgt_lang = tgt_lang
        inputs = self(raw_inputs, add_special_tokens=True, **extra_kwargs)
        return inputs

    def _switch_to_input_mode(self):
        self.set_lang_special_tokens(self.tgt_lang)

    def _switch_to_target_mode(self):
        self.prefix_tokens = None
        self.suffix_tokens = [self.eos_token_id]

    def set_lang_special_tokens(self, src_lang: str) -> None:
        """Reset the special tokens to the tgt lang setting. No prefix and suffix=[eos, tgt_lang_code]."""
        lang_token = self.get_lang_token(src_lang)
        self.cur_lang_id = self.lang_token_to_id[lang_token]
        self.prefix_tokens = [self.cur_lang_id]
        self.suffix_tokens = [self.eos_token_id]

    def get_lang_token(self, lang: str) -> str:
        return self.lang_code_to_token[lang]

    def get_lang_id(self, lang: str) -> int:
        lang_token = self.get_lang_token(lang)
        return self.lang_token_to_id[lang_token]


def load_spm(path: str, sp_model_kwargs: Dict[str, Any]) -> sentencepiece.SentencePieceProcessor:
    spm = sentencepiece.SentencePieceProcessor(**sp_model_kwargs)
    spm.Load(str(path))
    return spm


def load_json(path: str) -> Union[Dict, List]:
    with open(path, "r") as f:
        return json.load(f)


def save_json(data, path: str) -> None:
    with open(path, "w") as f:
        json.dump(data, f, indent=2)
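A sketch of the prefix/suffix behavior that `set_lang_special_tokens` above produces on encoder inputs; the file paths are placeholders and the snippet assumes the small100 vocab and SentencePiece files have been downloaded locally:

```python
# Illustrative only: assumes local copies of the small100 vocab/spm files.
from tokenization_small100 import SMALL100Tokenizer

tokenizer = SMALL100Tokenizer(
    vocab_file="vocab.json",             # placeholder path
    spm_file="sentencepiece.bpe.model",  # placeholder path
    tgt_lang="fr",
)

ids = tokenizer("Hello world")["input_ids"]
# Encoder inputs are built as [__fr__] X [</s>]: the target-language token is
# the prefix and EOS is the suffix, per set_lang_special_tokens above.
print(tokenizer.convert_ids_to_tokens(ids)[0])   # expected: '__fr__'
print(tokenizer.convert_ids_to_tokens(ids)[-1])  # expected: '</s>'
```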
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/.github/ISSUE_TEMPLATE/documentation.md
DELETED
@@ -1,14 +0,0 @@
---
name: "\U0001F4DA Documentation Issue"
about: Report a problem about existing documentation, comments, website or tutorials.
labels: documentation

---

## 📚 Documentation Issue

This issue category is for problems about existing documentation, not for asking how-to questions.

* Provide a link to an existing documentation/comment/tutorial:

* How should the above documentation/comment/tutorial be improved:
spaces/Banbri/zcvzcv/src/lib/cropImage.ts
DELETED
@@ -1,53 +0,0 @@
async function cropImage(inputImage: string): Promise<{ croppedImage: string; x: number; y: number; width: number; height: number }> {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.src = inputImage;
    img.onload = () => {
      const canvas = document.createElement('canvas');
      const context = canvas.getContext('2d');
      if (!context) {
        reject("Context is null");
        return;
      }
      canvas.width = img.width;
      canvas.height = img.height;
      context.drawImage(img, 0, 0, img.width, img.height);
      const imageData = context.getImageData(0, 0, img.width, img.height);
      const data = imageData.data;
      let minX = img.width, minY = img.height, maxX = 0, maxY = 0;

      // Scan for the bounding box of all non-white pixels.
      for (let y = 0; y < img.height; y++) {
        for (let x = 0; x < img.width; x++) {
          const i = (y * 4) * img.width + x * 4;
          const avg = (data[i] + data[i + 1] + data[i + 2]) / 3;
          if (avg < 255) {
            minX = Math.min(minX, x);
            minY = Math.min(minY, y);
            maxX = Math.max(maxX, x);
            maxY = Math.max(maxY, y);
          }
        }
      }

      // +1 so the box includes the pixels at maxX/maxY (the original
      // `maxX - minX` dropped the last row and column of content).
      const width = maxX - minX + 1;
      const height = maxY - minY + 1;
      const croppedCanvas = document.createElement('canvas');
      croppedCanvas.width = width;
      croppedCanvas.height = height;
      const croppedCtx = croppedCanvas.getContext('2d');
      if (!croppedCtx) {
        reject("croppedCtx is null");
        return;
      }
      croppedCtx.drawImage(canvas, minX, minY, width, height, 0, 0, width, height);
      resolve({
        croppedImage: croppedCanvas.toDataURL(),
        x: minX,
        y: minY,
        width,
        height
      });
    };
    img.onerror = reject;
  });
}
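The scan above finds the bounding box of non-white pixels and crops to it. The same idea, sketched in Python with NumPy for clarity (illustrative only, not part of this app):

```python
import numpy as np


def crop_bbox(rgb: np.ndarray):
    """Crop an (H, W, 3) uint8 image to the bounding box of non-white pixels."""
    mask = rgb.mean(axis=2) < 255          # same average-brightness test as above
    ys, xs = np.nonzero(mask)
    if ys.size == 0:                       # all-white image: nothing to crop
        return rgb, 0, 0, rgb.shape[1], rgb.shape[0]
    min_y, max_y = ys.min(), ys.max()
    min_x, max_x = xs.min(), xs.max()
    cropped = rgb[min_y:max_y + 1, min_x:max_x + 1]
    return cropped, int(min_x), int(min_y), cropped.shape[1], cropped.shape[0]


img = np.full((8, 8, 3), 255, dtype=np.uint8)
img[2:5, 3:6] = 0                          # a 3x3 black square
cropped, x, y, w, h = crop_bbox(img)
print(x, y, w, h)                          # 3 2 3 3
```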
spaces/Benson/text-generation/Examples/Descargar Ataque En Titan Mvil Apk.md
DELETED
@@ -1,80 +0,0 @@
<br />
<h1>Download Attack on Titan Mobile APK for Android: A Guide for Fans</h1>
<p>If you are a fan of the popular anime and manga series Attack on Titan, you may be interested in playing a game based on it. However, there is no official game for Android devices yet, so you may have to turn to unofficial fan-made games. One of them is Attack on Titan Mobile APK, a free game that lets you experience the thrill of fighting titans. In this article, we will tell you everything you need to know about this game, including what it is, how to download and install it, and what the risks and alternatives are. Let's get started!</p>
<h2>download attack on titan mobile apk</h2><br /><p><b><b>Download File</b> 🆓 <a href="https://bltlly.com/2v6LJ6">https://bltlly.com/2v6LJ6</a></b></p><br /><br />
<h2>What is Attack on Titan?</h2>
<p>Before we dive into the game, let's first get a quick overview of what Attack on Titan is. Attack on Titan, also known as Shingeki no Kyojin in Japanese, is a manga series written and illustrated by Hajime Isayama. It began in 2009 and has been serialized in Kodansha's Bessatsu Shonen magazine. It has also been adapted into an anime series by Wit Studio and MAPPA, which has four seasons so far. The series has gained a huge fan base around the world, thanks to its captivating story, stunning animation, and memorable characters.</p>
<h3>A brief introduction to the anime and manga series</h3>
<p>The story of Attack on Titan is set in a world where humanity lives inside three concentric walls that protect them from giant humanoid creatures called titans. These titans have no intelligence or reason, and exist only to devour humans. The story follows Eren Yeager, a young man who dreams of joining the Survey Corps, an elite military branch that ventures outside the walls to fight the titans. Together with his friends Mikasa Ackerman and Armin Arlert, he witnesses the fall of his hometown when a colossal titan breaches the outer wall. He vows to kill all the titans and uncover the secrets behind their origin and existence.</p>
<h3>The main characters and factions</h3>

<ul>
<li>Eren Yeager: The protagonist of the series, who has the ability to transform into a titan. He is determined, impulsive, and passionate about his goal.</li>
<li>Mikasa Ackerman: Eren's childhood friend and adoptive sister, who is a skilled fighter and loyal protector. She is calm, stoic, and strong-willed.</li>
<li>Armin Arlert: Eren's best friend and a genius strategist. He is shy, kind, and intelligent.</li>
<li>Levi Ackerman: The captain of the Survey Corps' Special Operations Squad, who is widely regarded as humanity's strongest soldier. He is cold, ruthless, and disciplined.</li>
<li>Hange Zoe: The commander of the Survey Corps' Fourth Squad, who is obsessed with studying the titans. She is eccentric, enthusiastic, and curious.</li>
</ul>
<p>The series also features several factions that have different agendas and motives. Some of them are:</p>
<ul>
<li>The Survey Corps: The military branch that explores outside the walls and fights the titans. They are brave, adventurous, and idealistic.</li>
<li>The Military Police: The branch of the military that keeps order inside the walls and serves the king. They are corrupt, lazy, and selfish.</li>
<li>The Garrison: The branch of the military that guards and maintains the walls. They are pragmatic, cautious, and loyal.</li>
<li>The Marleyans: The nation that rules most of the world and oppresses the Eldians, the race that can turn into titans. They are imperialist, ruthless, and ambitious.</li>
<li>The Eldian Restorationists: A rebel group that seeks to overthrow the Marleyan regime and restore the Eldian Empire. They are patriotic, rebellious, and hopeful.</li>
</ul>
<h3>The plot and themes</h3>

<h2>What is Attack on Titan Mobile APK?</h2>
<p>Now that you have a basic understanding of what Attack on Titan is, let's talk about the game we are here for. Attack on Titan Mobile APK is an unofficial fan-made game based on the series. It is not affiliated with or endorsed by the original creators or publishers of Attack on Titan. It is a free game that you can download and play on your Android device.</p>
<h3>A fan-made game based on the series</h3>
<p>Attack on Titan Mobile APK is a game created by fans who love the series and wanted to make their own version of it. The game is inspired by the anime and manga, but it does not follow the exact story or canon. The game features original characters, settings, and missions that differ from the source material. The game also has some elements that are not present in the series, such as magic, fantasy, and romance.</p>
<h3>The features and gameplay</h3>
<p>Attack on Titan Mobile APK is a game that combines action, adventure, and role-playing. The game lets you create your own character and customize their appearance, skills, and equipment. You can choose between different classes, such as soldier, scout, mage, or engineer. You can also join different factions, such as the Survey Corps, the Military Police, or the Marleyans.</p>
<p></p>
<p>The game lets you explore various locations from the series, such as Shiganshina District, Trost District, Wall Rose, Wall Maria, and Marley. You can interact with other characters, both friendly and hostile. You can also take on various quests and missions that will test your skills and courage. You can fight different types of titans, such as normal titans, abnormal titans, shifters, or colossal titans. You can use different weapons and gear, such as swords, guns, cannons, or the omni-directional mobility equipment (ODM gear), which lets you move around using grappling hooks.</p>

<h3>The requirements and compatibility</h3>
<p>Attack on Titan Mobile APK is a game that requires a lot of resources and storage space to run smoothly. The game has high-quality graphics and sound effects that make it immersive and realistic. However, this also means that the game might not run well on low-end devices or older models. The game also requires a stable Internet connection to play online.</p>
<p>The game is compatible with most Android devices that have Android 4.4 or higher installed. However, some devices might not be able to run the game due to hardware or software limitations. The game might also not be available in some regions or countries due to legal or licensing issues.</p>
<h2>How to download and install Attack on Titan Mobile APK?</h2>
<p>If you are interested in playing Attack on Titan Mobile APK on your Android device, you will need to follow a few steps to download and install it. These are the steps you should follow:</p>
<h3>The steps to follow</h3>
<ol>
<li>First, you will need to enable the installation of apps from unknown sources on your device. To do this, go to your device's settings, then security, then toggle the option that says "allow installation of apps from unknown sources". This will let you install apps that are not from the Google Play Store.</li>
<li>Next, you will need to find a reliable and safe source to download the Attack on Titan Mobile APK file. You can search online for websites that offer the file, but beware of fake or malicious links that could harm your device or steal your data. You can also use a QR code scanner app to scan the code below, which will take you to a trusted source that we have verified.</li>
<li>Once you have found the source, click on the download button and wait for the file to download to your device. The file size is about 300 MB, so it may take some time depending on your Internet speed and connection.</li>

<li>When the installation is complete, you will see a message that says "app installed". Tap "open" to launch the game and enjoy!</li>
</ol>
<h3>The precautions and risks</h3>
<p>While downloading and installing Attack on Titan Mobile APK may seem easy and fun, there are some precautions and risks you should be aware of. Here are some of them:</p>
<ul>
<li>The game is not an official product of Attack on Titan or its creators or publishers. It is a fan-made game that may have bugs, errors, or glitches that could affect your gameplay or device. The game might not be updated regularly or at all, so you could miss out on new features or improvements.</li>
<li>The game is not available on the Google Play Store, which means it has not been verified or approved by Google or any other authority. This means the game could contain viruses, malware, spyware, or other harmful software that could damage your device or compromise your security or privacy. The game might also access your personal information, such as your contacts, photos, location, or messages, without your permission or knowledge.</li>
<li>The game might violate the intellectual property rights of Attack on Titan or its creators or publishers. This means the game could be illegal or infringing in some countries or regions, and you could face legal consequences or penalties for downloading or playing it. The game could also be taken down or removed by the authorities at any time, without notice or warning.</li>
</ul>
<h3>The alternatives and sources</h3>
<p>If you are not comfortable with downloading and installing Attack on Titan Mobile APK, or if you run into any problems or issues with it, there are some alternatives and sources you can try instead. Here are some of them:</p>
<ul>

<li>You can watch the Attack on Titan anime series online or offline using various streaming services or platforms, such as Netflix, Hulu, Crunchyroll, Funimation, or Amazon Prime Video. These services or platforms offer legal and safe access to all the episodes and seasons of Attack on Titan, as well as other anime shows and movies. You can also read the Attack on Titan manga series online or offline using various websites or apps, such as Kodansha Comics, Manga Rock, Manga Plus, or Comixology. These websites or apps offer legal and safe access to all the chapters and volumes of Attack on Titan, as well as other manga titles and genres.</li>
</ul>
<h2>Conclusion</h2>
<p>In conclusion, Attack on Titan Mobile APK is a fan-made game based on the popular anime and manga series Attack on Titan. It is a free game that you can download and play on your Android device. However, it is not an official product of Attack on Titan or its creators or publishers. It is a game that may have bugs, errors, or glitches, and it may also contain harmful software or violate intellectual property rights. Therefore, you should be careful and cautious when downloading and installing it, and you should also consider the alternatives and sources we have mentioned. We hope this article has helped you learn more about Attack on Titan Mobile APK and how to download and install it. If you have any questions or comments, feel free to leave a comment below. Thank you for reading and have fun playing!</p>
<h3>Frequently asked questions</h3>
<p>Here are some frequently asked questions you might have about Attack on Titan Mobile APK:</p>
<ol>
<li>Q: Is Attack on Titan Mobile APK safe to download and play? <br>

<li>Q: Is Attack on Titan Mobile APK legal to download and play? <br>
A: Attack on Titan Mobile APK is not a legal game to download and play, as it could violate the intellectual property rights of Attack on Titan or its creators or publishers. The game could also be illegal or infringing in some countries or regions, and you could face legal consequences or penalties for downloading or playing it. Therefore, you should only download and play it if you are sure it is allowed in your area and that you are not breaking any laws.</li>
<li>Q: Is Attack on Titan Mobile APK updated regularly? <br>
A: Attack on Titan Mobile APK is not updated regularly, as it is a fan-made game that is not affiliated with or endorsed by the original creators or publishers of Attack on Titan. The game might not receive new features or improvements, and it could also stop working or be taken down at any time, without notice or warning. Therefore, you should not expect updates or support from the game's developers.</li>
<li>Q: How can I contact the developers of Attack on Titan Mobile APK? <br>
A: You can contact the developers of Attack on Titan Mobile APK by visiting their official website or social media accounts, which are linked below. However, you should not expect any response or assistance from them, as they are not obliged to provide any customer service or technical support for the game.</li>
<li>Q: Where can I find more information about Attack on Titan Mobile APK? <br>
A: You can find more information about Attack on Titan Mobile APK by visiting its official website or social media accounts, which are linked below. However, you should not trust everything they say or show, as they could be biased or inaccurate. You should also do your own research and verification before downloading or playing the game.</li>
</ol></p> 64aa2da5cf<br />
<br />
<br />
spaces/Benson/text-generation/Examples/Descargar Fondos De Escritorio Jdm Coches.md
DELETED
@@ -1,146 +0,0 @@
<h1>Download JDM Cars Wallpaper: A Guide for Car Enthusiasts</h1>
<p>If you are a fan of cars, especially Japanese cars, you may have heard the term JDM. JDM stands for Japanese Domestic Market, and it refers to cars that are built and sold in Japan. JDM cars are known for their high performance, reliability, innovation, and style. They have a loyal following among car enthusiasts around the world, who admire their history, culture, and aesthetics. </p>
<h2>download jdm cars wallpaper</h2><br /><p><b><b>Download</b> ✺ <a href="https://bltlly.com/2v6Jcj">https://bltlly.com/2v6Jcj</a></b></p><br /><br />
<p>In this article, we will explore the world of JDM cars and show you how to download JDM cars wallpaper for your devices. Whether you want to decorate your desktop, laptop, tablet, or smartphone with stunning images of your favorite JDM cars, we have you covered. We will also share some of the benefits of having JDM cars wallpaper and answer some common questions you may have. So, let's get started! </p>
<h2>The History of JDM Cars</h2>
<p>JDM cars have a long, rich history that dates back to the early twentieth century. Japan was one of the first countries to adopt the automobile as a mode of transportation, and by the 1930s it had several domestic car manufacturers such as Toyota, Nissan, Honda, Mazda, Mitsubishi, and Subaru. After World War II, however, the Japanese auto industry suffered a major setback because of the devastation caused by the war and the occupation by Allied forces. </p>
<p>It was not until the 1950s that Japan's auto industry began to recover and grow again. Japanese carmakers focused on producing small, affordable, fuel-efficient cars that met the needs of the domestic market. They also invested heavily in research and development, quality control, and customer service. By the 1960s and 1970s, the Japanese auto industry had become a global force, competing with American and European carmakers in sales, innovation, and reputation. </p>
<p>These enthusiasts were also responsible for popularizing the term JDM, which originally referred to the parts and accessories made specifically for the Japanese market. These parts and accessories were often superior in quality, performance, or design to those made for other markets. They were also rare and exclusive, which made them more desirable and valuable. Eventually, the term JDM expanded to include not only the parts and accessories but the cars themselves. </p>
<h2>The Characteristics of JDM Cars</h2>
<p>So what makes a car a JDM car? There is no definitive answer, as different people may have different opinions or preferences. However, there are some common characteristics that most JDM cars share. These include:</p>
<ul>
<li><b>Performance:</b> JDM cars are designed to deliver high performance in terms of speed, power, handling, and efficiency. They often feature advanced engines, transmissions, suspensions, brakes, and other components that improve their performance. They also have lightweight bodies, aerodynamic shapes, and low centers of gravity that reduce drag and improve stability. </li>
<li><b>Reliability:</b> JDM cars are built to last and to withstand a variety of conditions and situations. They are held to high standards of quality and durability, which ensures their longevity and safety. They also require minimal maintenance and repairs, which makes them cost-effective and convenient. </li>
<li><b>Innovation:</b> JDM cars are constantly evolving and improving, thanks to the creativity and ingenuity of their manufacturers and enthusiasts. They often feature cutting-edge technologies, features, or designs that set them apart from other cars. They also adapt to the changing needs and preferences of the market, offering new models, variants, or options to suit different tastes and budgets. </li>
<li><b>Style:</b> JDM cars have a distinctive look and aesthetic that reflects Japanese car culture, from factory designs to the ways enthusiasts customize them. </li>
</ul>
<p>Of course, these characteristics are not exclusive to JDM cars, as other cars can have some or all of them too. JDM cars, however, have a certain charm and appeal that make them stand out from the crowd and capture the hearts and minds of car enthusiasts. </p>
<h2>The Top 20 JDM Cars of All Time</h2>
<p>With so many JDM cars to choose from, it can be hard to pick the best. However, based on popularity, influence, and reputation, here are some of the top 20 JDM cars of all time:</p>
<table>
<tr>
<th>Name</th>
<th>Description</th>
<th>Image</th>
</tr>
<tr>
<td>Nissan Skyline GT-R</td>
<td>One of the most iconic and legendary JDM cars ever made, the Nissan Skyline GT-R is a high-performance sports car that debuted in 1969. It has gone through several generations and versions, each with its own improvements and innovations. It is known for its powerful engine, all-wheel-drive system, advanced technology, and sleek design. It is also famous for its appearances in media such as movies, video games, anime, and manga. </td>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/6c/Nissan_Skyline_R34_GT-R_N%C3%BCr.jpg/320px-Nissan_Skyline_R34_GT-R_N%C3%BCr.jpg" alt="Nissan Skyline GT-R"></td>
</tr>
<tr>
<td>Honda Civic Type R</td>
<td>The Honda Civic Type R is a high-performance version of the Honda Civic, a compact car that debuted in 1972. The Type R variant was introduced in 1997 and has been produced across several generations since. It is known for its lightweight body, powerful engine, responsive handling, and sporty design. It is also popular among tuners and racers who modify it for better performance or appearance. </td>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8f/2018_Honda_Civic_Type_R_GT_2.0_Front.jpg/320px-2018_Honda_Civic_Type_R_GT_2.0_Front.jpg" alt="Honda Civic Type R"></td>
</tr>
<tr>
<td>Mazda RX-7</td>
<td></td>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/3d/Mazda_RX-7_Type_RB_Bathurst_%28FD%29_01.jpg/320px-Mazda_RX-7_Type_RB_Bathurst_%28FD%29_01.jpg" alt="Mazda RX-7"></td>
</tr>
<tr>
<td>Toyota Supra</td>
<td>The Toyota Supra is a sports car that debuted in 1978 and was produced until 2002. It is a successor to the Toyota Celica, a smaller and less powerful car. The Supra is known for its large turbocharged engine, rear-wheel-drive layout, sophisticated technology, and sleek design. It is also famous for its appearances in media such as movies, video games, anime, and manga. </td>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/0e/Toyota_Supra_2.5_Twin_Turbo_R_01.jpg/320px-Toyota_Supra_2.5_Twin_Turbo_R_01.jpg" alt="Toyota Supra"></td>
</tr>
<tr>
<td>Honda NSX</td>
<td>The Honda NSX is a sports car that debuted in 1990 and was produced until 2005. It is also known as the Acura NSX in North America and Hong Kong. It is one of the first cars to use an all-aluminum body, which makes it lighter and stronger than steel, and one of the first to feature a mid-engine layout, which improves the car's balance and handling. It is known for its refined engine, agile handling, and elegant design. It is also famous for being developed with input from Formula One legend Ayrton Senna.</td>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/9f/Honda_NSX_at_Legendy_2018_in_Prague.jpg/320px-Honda_NSX_at_Legendy_2018_in_Prague.jpg" alt="Honda NSX"></td>
</tr>
<tr>
<td>Subaru Impreza WRX STI</td>
<td></td>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/6b/Subaru_Impreza_WRX_STI_hatchback_%282009-10-25%29.jpg/320px-Subaru_Impreza_WRX_STI_hatchback_%282009-10-25%29.jpg" alt="Subaru Impreza WRX STI"></td>
</tr>
<tr>
<td>Mitsubishi Lancer Evolution</td>
<td>The Mitsubishi Lancer Evolution is a high-performance version of the Mitsubishi Lancer, a compact car that debuted in 1973. The Evolution variant was introduced in 1992 and has been produced across ten generations since. It is known for its turbocharged engine, all-wheel-drive system, rally-inspired technology, and sporty design. It is also popular among tuners and racers who modify it for better performance or appearance. </td>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/b/bb/Mitsubishi_Lancer_Evolution_GSR_%28cropped%29.jpg/320px-Mitsubishi_Lancer_Evolution_GSR_%28cropped%29.jpg" alt="Mitsubishi Lancer Evolution"></td>
</tr>
<tr>
<td>Nissan 350Z</td>
<td>The Nissan 350Z is a sports car that debuted in 2002 and was produced until 2009. It is also known as the Nissan Fairlady Z in Japan. It is a successor to the Nissan 300ZX, an earlier generation of the Z-car series. The 350Z is known for its V6 engine, rear-wheel-drive layout, modern technology, and attractive design. It is also famous for its appearances in media such as movies, video games, anime, and manga. </td>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/c1/Nissan_350Z_-_Flickr_-_Alexandre_Pr%C3%A9vot_%2811%29_%28cropped%29.jpg/320px-Nissan_350Z_-_Flickr_-_Alexandre_Pr%C3%A9vot_%2811%29_%28cropped%29.jpg" alt="Nissan 350Z"></td>
</tr>
<tr>
<td>Toyota AE86</td>
<td></td>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8b/Toyota_AE86.jpg/320px-Toyota_AE86.jpg" alt="Toyota AE86"></td>
</tr>
<tr>
<td>Honda S2000</td>
<td>The Honda S2000 is a roadster that debuted in 1999 and was produced until 2009. It is one of the few cars to use a naturally aspirated engine, meaning it does not rely on a turbocharger or supercharger to boost its power. It is known for its high-revving engine, rear-wheel-drive layout, balanced handling, and convertible design. It is also popular among tuners and racers who modify it for better performance or appearance. </td>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/5c/Honda_S2000_AP1_001.JPG/320px-Honda_S2000_AP1_001.JPG" alt="Honda S2000"></td>
</tr>
<tr>
<td>Mazda MX-5 Miata</td>
<td>The Mazda MX-5 Miata is a roadster that debuted in 1989 and is still in production today. It is also known as the Mazda Roadster or the Mazda Eunos Roadster in Japan. It is one of the best-selling sports cars of all time, with more than a million units sold worldwide. It is known for its lightweight body, fun-to-drive handling, and affordable price. It is also famous for its appearances in media such as movies, video games, anime, and manga. </td>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/6/6a/Mazda_MX-5_ND_2.0_Sport-Line_%28III%29_%E2%80%93_Frontansicht%2C_24._September_2016%2C_D%C3%BCsseldorf.jpg/320px-Mazda_MX-5_ND_2.0_Sport-Line_%28III%29_%E2%80%93_Frontansicht%2C_24._September_2016%2C_D%C3%BCsseldorf.jpg" alt="Mazda MX-5 Miata"></td>
</tr>
<tr>
<td>Lexus LFA</td>
<td></td>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/4f/Lexus_LFA_%28yellow%29.jpg/320px-Lexus_LFA_%28yellow%29.jpg" alt="Lexus LFA"></td>
</tr>
<tr>
<td>Nissan Silvia</td>
<td>The Nissan Silvia is a sports car that debuted in 1964 and was produced until 2002. It is also known as the Nissan 180SX or the Nissan 240SX in North America. It is one of the most popular cars for drifting, thanks to its rear-wheel-drive layout, powerful engine, and easy-to-modify body. It is known for its performance, reliability, and style. It is also famous for its appearances in media such as movies, video games, anime, and manga. </td>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/a7/Nissan_Silvia_S15_Spec-R_001.JPG/320px-Nissan_Silvia_S15_Spec-R_001.JPG" alt="Nissan Silvia"></td>
</tr>
<tr>
<td>Honda Integra Type R</td>
<td>The Honda Integra Type R is a high-performance version of the Honda Integra. </td>
<td></td>
</tr>
<tr>
<td>Toyota MR2</td>
<td></td>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/4f/Toyota_MR2_001.JPG/320px-Toyota_MR2_001.JPG" alt="Toyota MR2"></td>
</tr>
<tr>
<td>Mazda RX-8</td>
<td>The Mazda RX-8 is a sports car that debuted in 2003 and was produced until 2012. It is a successor to the Mazda RX-7, another rotary-engined car. The RX-8 is known for its unique design, which features four doors, four seats, and a triangular-rotor engine. It is also known for its performance, handling, and sound, and is popular among tuners and racers who modify it for better performance or appearance. </td>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/9/9a/Mazda_RX-8_Type_S_002.JPG/320px-Mazda_RX-8_Type_S_002.JPG" alt="Mazda RX-8"></td>
</tr>
<tr>
<td>Toyota Celica</td>
<td></td>
<td><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/1d/Toyota_Celica_GT-Four_%28ST205%29.jpg/320px-Toyota_Celica_GT-Four_%28ST205%29.jpg" alt="Toyota Celica"></td>
</tr>
</table>
<h2>How to Download JDM Cars Wallpaper</h2>
<p>Now that you have learned about some of the best JDM cars of all time, you may be wondering how to download JDM cars wallpaper for your devices. It is not that hard, as long as you follow these simple steps:</p>
<ol>
<li>Find a website that offers JDM cars wallpaper. Many websites offer free or paid JDM cars wallpaper to download. Some of the most popular are <a href="">WallpaperAccess</a>, <a href="">WallpaperCave</a>, <a href="">WallpapersWide</a>, and <a href="">Unsplash</a>. You can also use a search engine such as Google or Bing to find more websites that suit your preferences. </li>
<li>Select a JDM cars wallpaper you like. Once you have found a website that offers JDM cars wallpaper, you can browse its collection and choose one you like. You can also use filters or categories to narrow your search by car model, color, style, resolution, or other criteria. </li>
<li>Download the JDM cars wallpaper to your device. After you have selected a wallpaper, download it by clicking the download button or link. You may have to choose the right size or format for your device before downloading, and you may have to accept the website's terms and conditions first. A scripted alternative is sketched just after this list. </li>
<li>Set the JDM cars wallpaper as your background through your device's display or personalization settings. </li>
</ol>
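<p>If you would rather script step 3 than click through a download page, the sketch below fetches an image with Python. It is a minimal illustrative example, not part of the original guide: it assumes the <code>requests</code> package is installed, and the URL is a placeholder you must replace with a direct image link from whichever wallpaper site you chose.</p>
<pre><code># Minimal sketch: download one wallpaper image to ~/Pictures.
import pathlib

import requests

WALLPAPER_URL = "https://example.com/jdm-wallpaper.jpg"  # placeholder link
dest = pathlib.Path.home() / "Pictures" / "jdm-wallpaper.jpg"

dest.parent.mkdir(parents=True, exist_ok=True)
response = requests.get(WALLPAPER_URL, timeout=30)
response.raise_for_status()  # fail loudly on 404s or blocked downloads
dest.write_bytes(response.content)
print(f"Saved {len(response.content)} bytes to {dest}")
</code></pre>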
<h2>The Benefits of Downloading JDM Cars Wallpaper</h2>
<p>Downloading JDM cars wallpaper can have many benefits for you and your devices. Here are some of them:</p>
<ul>
<li><b>Improve your mood:</b> JDM cars wallpaper gives you something beautiful and exciting to look at whenever you use your devices. JDM cars can also evoke positive emotions such as joy, admiration, or inspiration. </li>
<li><b>Express your personality:</b> JDM cars wallpaper can express your personality by showing your interests, preferences, or values. JDM cars can also reflect your identity, culture, or lifestyle. </li>
<li><b>Inspire you:</b> JDM cars wallpaper can give you ideas, goals, or dreams to pursue. JDM cars can also motivate you to work hard, learn new skills, or overcome challenges. </li>
<li><b>Personalize your devices:</b> JDM cars wallpaper makes your devices more attractive, unique, or personal, and can match your mood, theme, or occasion. </li>
<li><b>Have fun:</b> JDM cars wallpaper gives you something to enjoy, share, or collect. JDM cars can also spark your curiosity, creativity, or imagination. </li>
</ul>
<h2>Conclusion</h2>
<p>Downloading JDM cars wallpaper is a great way to show your love and appreciation for JDM cars. JDM cars are amazing vehicles with a lot of history, culture, and style. They are also high-performing, reliable, innovative, and stylish, and they are admired by car enthusiasts around the world, who modify them for racing, drifting, or personal expression. </p>
<p>What are you waiting for? Download JDM cars wallpaper today and enjoy the beauty and excitement of JDM cars! </p>
<h3>Frequently Asked Questions</h3>
<p>Here are some of the most frequently asked questions and answers about JDM cars wallpaper:</p>
<ol>
<li><b>What is the best resolution for JDM cars wallpaper? </b><br>
The best resolution depends on the size and quality of your device's screen. Generally, the higher the resolution, the better the image quality; however, a higher resolution also means a larger file size and more storage space. You can check your device's screen resolution in its settings or specifications, or use an online tool such as <a href="">WhatIsMyScreenResolution.com</a>. </li>
<li><b>Where can I find more JDM cars wallpaper? </b><br>
You can find more JDM cars wallpaper by visiting other websites that offer it, or by using search engines such as Google or Bing to find sites that suit your preferences. You can also browse social media platforms such as Pinterest, Instagram, or Facebook for JDM cars wallpaper that other users have posted or shared. </li>
<li><b>How can I make my own JDM cars wallpaper? </b><br>
You can make your own JDM cars wallpaper with photo editing software such as Photoshop, GIMP, or Paint.NET, or with online tools such as <a href="">Canva</a>, <a href="">Fotor</a>, or <a href="">PicMonkey</a>. You can use your own photos of JDM cars or download images from the internet, and add text, filters, effects, or other elements to make your wallpaper more unique and personal. A small scripted example follows this list. </li>
<li><b>How can I share my JDM cars wallpaper with others? </b><br>
You can share your JDM cars wallpaper by posting it on social media platforms such as Pinterest, Instagram, or Facebook, or by sending the image file directly to others. </li>
<li><b>How can I change my JDM cars wallpaper? </b><br>
You can change your JDM cars wallpaper by following the same steps you used to set it as your background. You can also use apps or software that change your wallpaper automatically or periodically; for example, <a href="">Wallpaper Changer</a> for Android, <a href="">Wallpaper Wizard</a> for Windows, or <a href="">Wallpaper Engine</a> on Steam can rotate your JDM cars wallpaper according to your preferences. </li>
</ol></p>
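<p>As a concrete version of the "make your own wallpaper" answer above, the sketch below center-crops a car photo to 16:9 and resizes it to a common desktop size. It is an illustrative example, not part of the original guide: it assumes the Pillow package is installed, and the file names are placeholders.</p>
<pre><code># Minimal sketch: turn a photo into a 1920x1080 wallpaper with Pillow.
from PIL import Image

TARGET_W, TARGET_H = 1920, 1080  # a common desktop resolution

img = Image.open("my_jdm_photo.jpg")  # placeholder input file
target_ratio = TARGET_W / TARGET_H
w, h = img.size
# Center-crop to the target aspect ratio before resizing.
if w / h > target_ratio:
    new_w = int(h * target_ratio)
    left = (w - new_w) // 2
    img = img.crop((left, 0, left + new_w, h))
else:
    new_h = int(w / target_ratio)
    top = (h - new_h) // 2
    img = img.crop((0, top, w, top + new_h))

img.resize((TARGET_W, TARGET_H), Image.LANCZOS).save("jdm_wallpaper.jpg")
</code></pre>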
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_fileno.py
DELETED
@@ -1,24 +0,0 @@
from __future__ import annotations

from typing import IO, Callable


def get_fileno(file_like: IO[str]) -> int | None:
    """Get fileno() from a file, accounting for poorly implemented file-like objects.

    Args:
        file_like (IO): A file-like object.

    Returns:
        int | None: The result of fileno if available, or None if operation failed.
    """
    fileno: Callable[[], int] | None = getattr(file_like, "fileno", None)
    if fileno is not None:
        try:
            return fileno()
        except Exception:
            # `fileno` is documented as potentially raising an OSError.
            # Alas, from the issues, there are so many poorly implemented file-like objects
            # that `fileno()` can raise just about anything.
            return None
    return None
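
# --- Illustrative aside (editor's sketch, not part of the vendored file) ---
# get_fileno() degrades gracefully: io.StringIO is a real file-like object
# whose fileno() raises, while a terminal stream has a real descriptor.
import io
import sys

print(get_fileno(io.StringIO()))  # None - StringIO.fileno() raises
print(get_fileno(sys.stdout))     # typically 1 when attached to a terminal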
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/pretty.py
DELETED
@@ -1,994 +0,0 @@
import builtins
import collections
import dataclasses
import inspect
import os
import sys
from array import array
from collections import Counter, UserDict, UserList, defaultdict, deque
from dataclasses import dataclass, fields, is_dataclass
from inspect import isclass
from itertools import islice
from types import MappingProxyType
from typing import (
    TYPE_CHECKING,
    Any,
    Callable,
    DefaultDict,
    Dict,
    Iterable,
    List,
    Optional,
    Sequence,
    Set,
    Tuple,
    Union,
)

from pip._vendor.rich.repr import RichReprResult

try:
    import attr as _attr_module

    _has_attrs = hasattr(_attr_module, "ib")
except ImportError:  # pragma: no cover
    _has_attrs = False

from . import get_console
from ._loop import loop_last
from ._pick import pick_bool
from .abc import RichRenderable
from .cells import cell_len
from .highlighter import ReprHighlighter
from .jupyter import JupyterMixin, JupyterRenderable
from .measure import Measurement
from .text import Text

if TYPE_CHECKING:
    from .console import (
        Console,
        ConsoleOptions,
        HighlighterType,
        JustifyMethod,
        OverflowMethod,
        RenderResult,
    )


def _is_attr_object(obj: Any) -> bool:
    """Check if an object was created with attrs module."""
    return _has_attrs and _attr_module.has(type(obj))


def _get_attr_fields(obj: Any) -> Sequence["_attr_module.Attribute[Any]"]:
    """Get fields for an attrs object."""
    return _attr_module.fields(type(obj)) if _has_attrs else []


def _is_dataclass_repr(obj: object) -> bool:
    """Check if an instance of a dataclass contains the default repr.

    Args:
        obj (object): A dataclass instance.

    Returns:
        bool: True if the default repr is used, False if there is a custom repr.
    """
    # Digging in to a lot of internals here
    # Catching all exceptions in case something is missing on a non CPython implementation
    try:
        return obj.__repr__.__code__.co_filename == dataclasses.__file__
    except Exception:  # pragma: no coverage
        return False


_dummy_namedtuple = collections.namedtuple("_dummy_namedtuple", [])


def _has_default_namedtuple_repr(obj: object) -> bool:
    """Check if an instance of namedtuple contains the default repr

    Args:
        obj (object): A namedtuple

    Returns:
        bool: True if the default repr is used, False if there's a custom repr.
    """
    obj_file = None
    try:
        obj_file = inspect.getfile(obj.__repr__)
    except (OSError, TypeError):
        # OSError handles case where object is defined in __main__ scope, e.g. REPL - no filename available.
        # TypeError trapped defensively, in case of object without filename slips through.
        pass
    default_repr_file = inspect.getfile(_dummy_namedtuple.__repr__)
    return obj_file == default_repr_file


def _ipy_display_hook(
    value: Any,
    console: Optional["Console"] = None,
    overflow: "OverflowMethod" = "ignore",
    crop: bool = False,
    indent_guides: bool = False,
    max_length: Optional[int] = None,
    max_string: Optional[int] = None,
    max_depth: Optional[int] = None,
    expand_all: bool = False,
) -> Union[str, None]:
    # needed here to prevent circular import:
    from .console import ConsoleRenderable

    # always skip rich generated jupyter renderables or None values
    if _safe_isinstance(value, JupyterRenderable) or value is None:
        return None

    console = console or get_console()

    with console.capture() as capture:
        # certain renderables should start on a new line
        if _safe_isinstance(value, ConsoleRenderable):
            console.line()
        console.print(
            value
            if _safe_isinstance(value, RichRenderable)
            else Pretty(
                value,
                overflow=overflow,
                indent_guides=indent_guides,
                max_length=max_length,
                max_string=max_string,
                max_depth=max_depth,
                expand_all=expand_all,
                margin=12,
            ),
            crop=crop,
            new_line_start=True,
            end="",
        )
    # strip trailing newline, not usually part of a text repr
    # I'm not sure if this should be prevented at a lower level
    return capture.get().rstrip("\n")


def _safe_isinstance(
    obj: object, class_or_tuple: Union[type, Tuple[type, ...]]
) -> bool:
    """isinstance can fail in rare cases, for example types with no __class__"""
    try:
        return isinstance(obj, class_or_tuple)
    except Exception:
        return False


def install(
    console: Optional["Console"] = None,
    overflow: "OverflowMethod" = "ignore",
    crop: bool = False,
    indent_guides: bool = False,
    max_length: Optional[int] = None,
    max_string: Optional[int] = None,
    max_depth: Optional[int] = None,
    expand_all: bool = False,
) -> None:
    """Install automatic pretty printing in the Python REPL.

    Args:
        console (Console, optional): Console instance or ``None`` to use global console. Defaults to None.
        overflow (Optional[OverflowMethod], optional): Overflow method. Defaults to "ignore".
        crop (Optional[bool], optional): Enable cropping of long lines. Defaults to False.
        indent_guides (bool, optional): Enable indentation guides. Defaults to False.
        max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation.
            Defaults to None.
        max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to None.
        max_depth (int, optional): Maximum depth of nested data structures, or None for no maximum. Defaults to None.
        expand_all (bool, optional): Expand all containers. Defaults to False.
        max_frames (int): Maximum number of frames to show in a traceback, 0 for no maximum. Defaults to 100.
    """
    from pip._vendor.rich import get_console

    console = console or get_console()
    assert console is not None

    def display_hook(value: Any) -> None:
        """Replacement sys.displayhook which prettifies objects with Rich."""
        if value is not None:
            assert console is not None
            builtins._ = None  # type: ignore[attr-defined]
            console.print(
                value
                if _safe_isinstance(value, RichRenderable)
                else Pretty(
                    value,
                    overflow=overflow,
                    indent_guides=indent_guides,
                    max_length=max_length,
                    max_string=max_string,
                    max_depth=max_depth,
                    expand_all=expand_all,
                ),
                crop=crop,
            )
            builtins._ = value  # type: ignore[attr-defined]

    if "get_ipython" in globals():
        ip = get_ipython()  # type: ignore[name-defined]
        from IPython.core.formatters import BaseFormatter

        class RichFormatter(BaseFormatter):  # type: ignore[misc]
            pprint: bool = True

            def __call__(self, value: Any) -> Any:
                if self.pprint:
                    return _ipy_display_hook(
                        value,
                        console=get_console(),
                        overflow=overflow,
                        indent_guides=indent_guides,
                        max_length=max_length,
                        max_string=max_string,
                        max_depth=max_depth,
                        expand_all=expand_all,
                    )
                else:
                    return repr(value)

        # replace plain text formatter with rich formatter
        rich_formatter = RichFormatter()
        ip.display_formatter.formatters["text/plain"] = rich_formatter
    else:
        sys.displayhook = display_hook
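
# --- Illustrative aside (editor's sketch, not part of the vendored module) ---
# install() is meant for interactive sessions, and would normally be imported
# from the standalone `rich` package rather than from pip's private vendored
# copy:
#
#     >>> from rich.pretty import install
#     >>> install(max_length=10, max_string=40)
#     >>> ["skyline", "rx7" * 50, list(range(100))]
#
# After install(), evaluated expressions are pretty-printed: containers are
# abbreviated past 10 items and strings truncated past 40 characters.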


class Pretty(JupyterMixin):
    """A rich renderable that pretty prints an object.

    Args:
        _object (Any): An object to pretty print.
        highlighter (HighlighterType, optional): Highlighter object to apply to result, or None for ReprHighlighter. Defaults to None.
        indent_size (int, optional): Number of spaces in indent. Defaults to 4.
        justify (JustifyMethod, optional): Justify method, or None for default. Defaults to None.
        overflow (OverflowMethod, optional): Overflow method, or None for default. Defaults to None.
        no_wrap (Optional[bool], optional): Disable word wrapping. Defaults to False.
        indent_guides (bool, optional): Enable indentation guides. Defaults to False.
        max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation.
            Defaults to None.
        max_string (int, optional): Maximum length of string before truncating, or None to disable. Defaults to None.
        max_depth (int, optional): Maximum depth of nested data structures, or None for no maximum. Defaults to None.
        expand_all (bool, optional): Expand all containers. Defaults to False.
        margin (int, optional): Subtract a margin from width to force containers to expand earlier. Defaults to 0.
        insert_line (bool, optional): Insert a new line if the output has multiple new lines. Defaults to False.
    """

    def __init__(
        self,
        _object: Any,
        highlighter: Optional["HighlighterType"] = None,
        *,
        indent_size: int = 4,
        justify: Optional["JustifyMethod"] = None,
        overflow: Optional["OverflowMethod"] = None,
        no_wrap: Optional[bool] = False,
        indent_guides: bool = False,
        max_length: Optional[int] = None,
        max_string: Optional[int] = None,
        max_depth: Optional[int] = None,
        expand_all: bool = False,
        margin: int = 0,
        insert_line: bool = False,
    ) -> None:
        self._object = _object
        self.highlighter = highlighter or ReprHighlighter()
        self.indent_size = indent_size
        self.justify: Optional["JustifyMethod"] = justify
        self.overflow: Optional["OverflowMethod"] = overflow
        self.no_wrap = no_wrap
        self.indent_guides = indent_guides
        self.max_length = max_length
        self.max_string = max_string
        self.max_depth = max_depth
        self.expand_all = expand_all
        self.margin = margin
        self.insert_line = insert_line

    def __rich_console__(
        self, console: "Console", options: "ConsoleOptions"
    ) -> "RenderResult":
        pretty_str = pretty_repr(
            self._object,
            max_width=options.max_width - self.margin,
            indent_size=self.indent_size,
            max_length=self.max_length,
            max_string=self.max_string,
            max_depth=self.max_depth,
            expand_all=self.expand_all,
        )
        pretty_text = Text.from_ansi(
            pretty_str,
            justify=self.justify or options.justify,
            overflow=self.overflow or options.overflow,
            no_wrap=pick_bool(self.no_wrap, options.no_wrap),
            style="pretty",
        )
        pretty_text = (
            self.highlighter(pretty_text)
            if pretty_text
            else Text(
                f"{type(self._object)}.__repr__ returned empty string",
                style="dim italic",
            )
        )
        if self.indent_guides and not options.ascii_only:
            pretty_text = pretty_text.with_indent_guides(
                self.indent_size, style="repr.indent"
            )
        if self.insert_line and "\n" in pretty_text:
            yield ""
        yield pretty_text

    def __rich_measure__(
        self, console: "Console", options: "ConsoleOptions"
    ) -> "Measurement":
        pretty_str = pretty_repr(
            self._object,
            max_width=options.max_width,
            indent_size=self.indent_size,
            max_length=self.max_length,
            max_string=self.max_string,
            max_depth=self.max_depth,
            expand_all=self.expand_all,
        )
        text_width = (
            max(cell_len(line) for line in pretty_str.splitlines()) if pretty_str else 0
        )
        return Measurement(text_width, text_width)
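
# --- Illustrative aside (editor's sketch, not part of the vendored module) ---
# Pretty is a renderable, not a function: hand an instance to a Console.
# Wrapped in a function so merely importing this listing would not run it.
def _demo_pretty_renderable() -> None:
    console = get_console()
    console.print(Pretty({"cars": ["Skyline", "RX-7", "Supra"]}, indent_guides=True))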


def _get_braces_for_defaultdict(_object: DefaultDict[Any, Any]) -> Tuple[str, str, str]:
    return (
        f"defaultdict({_object.default_factory!r}, {{",
        "})",
        f"defaultdict({_object.default_factory!r}, {{}})",
    )


def _get_braces_for_array(_object: "array[Any]") -> Tuple[str, str, str]:
    return (f"array({_object.typecode!r}, [", "])", f"array({_object.typecode!r})")


_BRACES: Dict[type, Callable[[Any], Tuple[str, str, str]]] = {
    os._Environ: lambda _object: ("environ({", "})", "environ({})"),
    array: _get_braces_for_array,
    defaultdict: _get_braces_for_defaultdict,
    Counter: lambda _object: ("Counter({", "})", "Counter()"),
    deque: lambda _object: ("deque([", "])", "deque()"),
    dict: lambda _object: ("{", "}", "{}"),
    UserDict: lambda _object: ("{", "}", "{}"),
    frozenset: lambda _object: ("frozenset({", "})", "frozenset()"),
    list: lambda _object: ("[", "]", "[]"),
    UserList: lambda _object: ("[", "]", "[]"),
    set: lambda _object: ("{", "}", "set()"),
    tuple: lambda _object: ("(", ")", "()"),
    MappingProxyType: lambda _object: ("mappingproxy({", "})", "mappingproxy({})"),
}
_CONTAINERS = tuple(_BRACES.keys())
_MAPPING_CONTAINERS = (dict, os._Environ, MappingProxyType, UserDict)


def is_expandable(obj: Any) -> bool:
    """Check if an object may be expanded by pretty print."""
    return (
        _safe_isinstance(obj, _CONTAINERS)
        or (is_dataclass(obj))
        or (hasattr(obj, "__rich_repr__"))
        or _is_attr_object(obj)
    ) and not isclass(obj)


@dataclass
class Node:
    """A node in a repr tree. May be atomic or a container."""

    key_repr: str = ""
    value_repr: str = ""
    open_brace: str = ""
    close_brace: str = ""
    empty: str = ""
    last: bool = False
    is_tuple: bool = False
    is_namedtuple: bool = False
    children: Optional[List["Node"]] = None
    key_separator: str = ": "
    separator: str = ", "

    def iter_tokens(self) -> Iterable[str]:
        """Generate tokens for this node."""
        if self.key_repr:
            yield self.key_repr
            yield self.key_separator
        if self.value_repr:
            yield self.value_repr
        elif self.children is not None:
            if self.children:
                yield self.open_brace
                if self.is_tuple and not self.is_namedtuple and len(self.children) == 1:
                    yield from self.children[0].iter_tokens()
                    yield ","
                else:
                    for child in self.children:
                        yield from child.iter_tokens()
                        if not child.last:
                            yield self.separator
                yield self.close_brace
            else:
                yield self.empty

    def check_length(self, start_length: int, max_length: int) -> bool:
        """Check the length fits within a limit.

        Args:
            start_length (int): Starting length of the line (indent, prefix, suffix).
            max_length (int): Maximum length.

        Returns:
            bool: True if the node can be rendered within max length, otherwise False.
        """
        total_length = start_length
        for token in self.iter_tokens():
            total_length += cell_len(token)
            if total_length > max_length:
                return False
        return True

    def __str__(self) -> str:
        repr_text = "".join(self.iter_tokens())
        return repr_text

    def render(
        self, max_width: int = 80, indent_size: int = 4, expand_all: bool = False
    ) -> str:
        """Render the node to a pretty repr.

        Args:
            max_width (int, optional): Maximum width of the repr. Defaults to 80.
            indent_size (int, optional): Size of indents. Defaults to 4.
            expand_all (bool, optional): Expand all levels. Defaults to False.

        Returns:
            str: A repr string of the original object.
        """
        lines = [_Line(node=self, is_root=True)]
        line_no = 0
        while line_no < len(lines):
            line = lines[line_no]
            if line.expandable and not line.expanded:
                if expand_all or not line.check_length(max_width):
                    lines[line_no : line_no + 1] = line.expand(indent_size)
            line_no += 1

        repr_str = "\n".join(str(line) for line in lines)
        return repr_str


@dataclass
class _Line:
    """A line in repr output."""

    parent: Optional["_Line"] = None
    is_root: bool = False
    node: Optional[Node] = None
    text: str = ""
    suffix: str = ""
    whitespace: str = ""
    expanded: bool = False
    last: bool = False

    @property
    def expandable(self) -> bool:
        """Check if the line may be expanded."""
        return bool(self.node is not None and self.node.children)

    def check_length(self, max_length: int) -> bool:
        """Check this line fits within a given number of cells."""
        start_length = (
            len(self.whitespace) + cell_len(self.text) + cell_len(self.suffix)
        )
        assert self.node is not None
        return self.node.check_length(start_length, max_length)

    def expand(self, indent_size: int) -> Iterable["_Line"]:
        """Expand this line by adding children on their own line."""
        node = self.node
        assert node is not None
        whitespace = self.whitespace
        assert node.children
        if node.key_repr:
            new_line = yield _Line(
                text=f"{node.key_repr}{node.key_separator}{node.open_brace}",
                whitespace=whitespace,
            )
        else:
            new_line = yield _Line(text=node.open_brace, whitespace=whitespace)
        child_whitespace = self.whitespace + " " * indent_size
        tuple_of_one = node.is_tuple and len(node.children) == 1
        for last, child in loop_last(node.children):
            separator = "," if tuple_of_one else node.separator
            line = _Line(
                parent=new_line,
                node=child,
                whitespace=child_whitespace,
                suffix=separator,
                last=last and not tuple_of_one,
            )
            yield line

        yield _Line(
            text=node.close_brace,
            whitespace=whitespace,
            suffix=self.suffix,
            last=self.last,
        )

    def __str__(self) -> str:
        if self.last:
            return f"{self.whitespace}{self.text}{self.node or ''}"
        else:
            return (
                f"{self.whitespace}{self.text}{self.node or ''}{self.suffix.rstrip()}"
            )


def _is_namedtuple(obj: Any) -> bool:
    """Checks if an object is most likely a namedtuple. It is possible
    to craft an object that passes this check and isn't a namedtuple, but
    there is only a minuscule chance of this happening unintentionally.

    Args:
        obj (Any): The object to test

    Returns:
        bool: True if the object is a namedtuple. False otherwise.
    """
    try:
        fields = getattr(obj, "_fields", None)
    except Exception:
        # Being very defensive - if we cannot get the attr then its not a namedtuple
        return False
    return isinstance(obj, tuple) and isinstance(fields, tuple)
558 |
-
|
559 |
-
|
560 |
-
def traverse(
|
561 |
-
_object: Any,
|
562 |
-
max_length: Optional[int] = None,
|
563 |
-
max_string: Optional[int] = None,
|
564 |
-
max_depth: Optional[int] = None,
|
565 |
-
) -> Node:
|
566 |
-
"""Traverse object and generate a tree.
|
567 |
-
|
568 |
-
Args:
|
569 |
-
_object (Any): Object to be traversed.
|
570 |
-
max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation.
|
571 |
-
Defaults to None.
|
572 |
-
max_string (int, optional): Maximum length of string before truncating, or None to disable truncating.
|
573 |
-
Defaults to None.
|
574 |
-
max_depth (int, optional): Maximum depth of data structures, or None for no maximum.
|
575 |
-
Defaults to None.
|
576 |
-
|
577 |
-
Returns:
|
578 |
-
Node: The root of a tree structure which can be used to render a pretty repr.
|
579 |
-
"""
|
580 |
-
|
581 |
-
def to_repr(obj: Any) -> str:
|
582 |
-
"""Get repr string for an object, but catch errors."""
|
583 |
-
if (
|
584 |
-
max_string is not None
|
585 |
-
and _safe_isinstance(obj, (bytes, str))
|
586 |
-
and len(obj) > max_string
|
587 |
-
):
|
588 |
-
truncated = len(obj) - max_string
|
589 |
-
obj_repr = f"{obj[:max_string]!r}+{truncated}"
|
590 |
-
else:
|
591 |
-
try:
|
592 |
-
obj_repr = repr(obj)
|
593 |
-
except Exception as error:
|
594 |
-
obj_repr = f"<repr-error {str(error)!r}>"
|
595 |
-
return obj_repr
|
596 |
-
|
597 |
-
visited_ids: Set[int] = set()
|
598 |
-
push_visited = visited_ids.add
|
599 |
-
pop_visited = visited_ids.remove
|
600 |
-
|
601 |
-
def _traverse(obj: Any, root: bool = False, depth: int = 0) -> Node:
|
602 |
-
"""Walk the object depth first."""
|
603 |
-
|
604 |
-
obj_id = id(obj)
|
605 |
-
if obj_id in visited_ids:
|
606 |
-
# Recursion detected
|
607 |
-
return Node(value_repr="...")
|
608 |
-
|
609 |
-
obj_type = type(obj)
|
610 |
-
children: List[Node]
|
611 |
-
reached_max_depth = max_depth is not None and depth >= max_depth
|
612 |
-
|
613 |
-
def iter_rich_args(rich_args: Any) -> Iterable[Union[Any, Tuple[str, Any]]]:
|
614 |
-
for arg in rich_args:
|
615 |
-
if _safe_isinstance(arg, tuple):
|
616 |
-
if len(arg) == 3:
|
617 |
-
key, child, default = arg
|
618 |
-
if default == child:
|
619 |
-
continue
|
620 |
-
yield key, child
|
621 |
-
elif len(arg) == 2:
|
622 |
-
key, child = arg
|
623 |
-
yield key, child
|
624 |
-
elif len(arg) == 1:
|
625 |
-
yield arg[0]
|
626 |
-
else:
|
627 |
-
yield arg
|
628 |
-
|
629 |
-
try:
|
630 |
-
fake_attributes = hasattr(
|
631 |
-
obj, "awehoi234_wdfjwljet234_234wdfoijsdfmmnxpi492"
|
632 |
-
)
|
633 |
-
except Exception:
|
634 |
-
fake_attributes = False
|
635 |
-
|
636 |
-
rich_repr_result: Optional[RichReprResult] = None
|
637 |
-
if not fake_attributes:
|
638 |
-
try:
|
639 |
-
if hasattr(obj, "__rich_repr__") and not isclass(obj):
|
640 |
-
rich_repr_result = obj.__rich_repr__()
|
641 |
-
except Exception:
|
642 |
-
pass
|
643 |
-
|
644 |
-
if rich_repr_result is not None:
|
645 |
-
push_visited(obj_id)
|
646 |
-
angular = getattr(obj.__rich_repr__, "angular", False)
|
647 |
-
args = list(iter_rich_args(rich_repr_result))
|
648 |
-
class_name = obj.__class__.__name__
|
649 |
-
|
650 |
-
if args:
|
651 |
-
children = []
|
652 |
-
append = children.append
|
653 |
-
|
654 |
-
if reached_max_depth:
|
655 |
-
if angular:
|
656 |
-
node = Node(value_repr=f"<{class_name}...>")
|
657 |
-
else:
|
658 |
-
node = Node(value_repr=f"{class_name}(...)")
|
659 |
-
else:
|
660 |
-
if angular:
|
661 |
-
node = Node(
|
662 |
-
open_brace=f"<{class_name} ",
|
663 |
-
close_brace=">",
|
664 |
-
children=children,
|
665 |
-
last=root,
|
666 |
-
separator=" ",
|
667 |
-
)
|
668 |
-
else:
|
669 |
-
node = Node(
|
670 |
-
open_brace=f"{class_name}(",
|
671 |
-
close_brace=")",
|
672 |
-
children=children,
|
673 |
-
last=root,
|
674 |
-
)
|
675 |
-
for last, arg in loop_last(args):
|
676 |
-
if _safe_isinstance(arg, tuple):
|
677 |
-
key, child = arg
|
678 |
-
child_node = _traverse(child, depth=depth + 1)
|
679 |
-
child_node.last = last
|
680 |
-
child_node.key_repr = key
|
681 |
-
child_node.key_separator = "="
|
682 |
-
append(child_node)
|
683 |
-
else:
|
684 |
-
child_node = _traverse(arg, depth=depth + 1)
|
685 |
-
child_node.last = last
|
686 |
-
append(child_node)
|
687 |
-
else:
|
688 |
-
node = Node(
|
689 |
-
value_repr=f"<{class_name}>" if angular else f"{class_name}()",
|
690 |
-
children=[],
|
691 |
-
last=root,
|
692 |
-
)
|
693 |
-
pop_visited(obj_id)
|
694 |
-
elif _is_attr_object(obj) and not fake_attributes:
|
695 |
-
push_visited(obj_id)
|
696 |
-
children = []
|
697 |
-
append = children.append
|
698 |
-
|
699 |
-
attr_fields = _get_attr_fields(obj)
|
700 |
-
if attr_fields:
|
701 |
-
if reached_max_depth:
|
702 |
-
node = Node(value_repr=f"{obj.__class__.__name__}(...)")
|
703 |
-
else:
|
704 |
-
node = Node(
|
705 |
-
open_brace=f"{obj.__class__.__name__}(",
|
706 |
-
close_brace=")",
|
707 |
-
children=children,
|
708 |
-
last=root,
|
709 |
-
)
|
710 |
-
|
711 |
-
def iter_attrs() -> Iterable[
|
712 |
-
Tuple[str, Any, Optional[Callable[[Any], str]]]
|
713 |
-
]:
|
714 |
-
"""Iterate over attr fields and values."""
|
715 |
-
for attr in attr_fields:
|
716 |
-
if attr.repr:
|
717 |
-
try:
|
718 |
-
value = getattr(obj, attr.name)
|
719 |
-
except Exception as error:
|
720 |
-
# Can happen, albeit rarely
|
721 |
-
yield (attr.name, error, None)
|
722 |
-
else:
|
723 |
-
yield (
|
724 |
-
attr.name,
|
725 |
-
value,
|
726 |
-
attr.repr if callable(attr.repr) else None,
|
727 |
-
)
|
728 |
-
|
729 |
-
for last, (name, value, repr_callable) in loop_last(iter_attrs()):
|
730 |
-
if repr_callable:
|
731 |
-
child_node = Node(value_repr=str(repr_callable(value)))
|
732 |
-
else:
|
733 |
-
child_node = _traverse(value, depth=depth + 1)
|
734 |
-
child_node.last = last
|
735 |
-
child_node.key_repr = name
|
736 |
-
child_node.key_separator = "="
|
737 |
-
append(child_node)
|
738 |
-
else:
|
739 |
-
node = Node(
|
740 |
-
value_repr=f"{obj.__class__.__name__}()", children=[], last=root
|
741 |
-
)
|
742 |
-
pop_visited(obj_id)
|
743 |
-
elif (
|
744 |
-
is_dataclass(obj)
|
745 |
-
and not _safe_isinstance(obj, type)
|
746 |
-
and not fake_attributes
|
747 |
-
and _is_dataclass_repr(obj)
|
748 |
-
):
|
749 |
-
push_visited(obj_id)
|
750 |
-
children = []
|
751 |
-
append = children.append
|
752 |
-
if reached_max_depth:
|
753 |
-
node = Node(value_repr=f"{obj.__class__.__name__}(...)")
|
754 |
-
else:
|
755 |
-
node = Node(
|
756 |
-
open_brace=f"{obj.__class__.__name__}(",
|
757 |
-
close_brace=")",
|
758 |
-
children=children,
|
759 |
-
last=root,
|
760 |
-
empty=f"{obj.__class__.__name__}()",
|
761 |
-
)
|
762 |
-
|
763 |
-
for last, field in loop_last(
|
764 |
-
field for field in fields(obj) if field.repr
|
765 |
-
):
|
766 |
-
child_node = _traverse(getattr(obj, field.name), depth=depth + 1)
|
767 |
-
child_node.key_repr = field.name
|
768 |
-
child_node.last = last
|
769 |
-
child_node.key_separator = "="
|
770 |
-
append(child_node)
|
771 |
-
|
772 |
-
pop_visited(obj_id)
|
773 |
-
elif _is_namedtuple(obj) and _has_default_namedtuple_repr(obj):
|
774 |
-
push_visited(obj_id)
|
775 |
-
class_name = obj.__class__.__name__
|
776 |
-
if reached_max_depth:
|
777 |
-
# If we've reached the max depth, we still show the class name, but not its contents
|
778 |
-
node = Node(
|
779 |
-
value_repr=f"{class_name}(...)",
|
780 |
-
)
|
781 |
-
else:
|
782 |
-
children = []
|
783 |
-
append = children.append
|
784 |
-
node = Node(
|
785 |
-
open_brace=f"{class_name}(",
|
786 |
-
close_brace=")",
|
787 |
-
children=children,
|
788 |
-
empty=f"{class_name}()",
|
789 |
-
)
|
790 |
-
for last, (key, value) in loop_last(obj._asdict().items()):
|
791 |
-
child_node = _traverse(value, depth=depth + 1)
|
792 |
-
child_node.key_repr = key
|
793 |
-
child_node.last = last
|
794 |
-
child_node.key_separator = "="
|
795 |
-
append(child_node)
|
796 |
-
pop_visited(obj_id)
|
797 |
-
elif _safe_isinstance(obj, _CONTAINERS):
|
798 |
-
for container_type in _CONTAINERS:
|
799 |
-
if _safe_isinstance(obj, container_type):
|
800 |
-
obj_type = container_type
|
801 |
-
break
|
802 |
-
|
803 |
-
push_visited(obj_id)
|
804 |
-
|
805 |
-
open_brace, close_brace, empty = _BRACES[obj_type](obj)
|
806 |
-
|
807 |
-
if reached_max_depth:
|
808 |
-
node = Node(value_repr=f"{open_brace}...{close_brace}")
|
809 |
-
elif obj_type.__repr__ != type(obj).__repr__:
|
810 |
-
node = Node(value_repr=to_repr(obj), last=root)
|
811 |
-
elif obj:
|
812 |
-
children = []
|
813 |
-
node = Node(
|
814 |
-
open_brace=open_brace,
|
815 |
-
close_brace=close_brace,
|
816 |
-
children=children,
|
817 |
-
last=root,
|
818 |
-
)
|
819 |
-
append = children.append
|
820 |
-
num_items = len(obj)
|
821 |
-
last_item_index = num_items - 1
|
822 |
-
|
823 |
-
if _safe_isinstance(obj, _MAPPING_CONTAINERS):
|
824 |
-
iter_items = iter(obj.items())
|
825 |
-
if max_length is not None:
|
826 |
-
iter_items = islice(iter_items, max_length)
|
827 |
-
for index, (key, child) in enumerate(iter_items):
|
828 |
-
child_node = _traverse(child, depth=depth + 1)
|
829 |
-
child_node.key_repr = to_repr(key)
|
830 |
-
child_node.last = index == last_item_index
|
831 |
-
append(child_node)
|
832 |
-
else:
|
833 |
-
iter_values = iter(obj)
|
834 |
-
if max_length is not None:
|
835 |
-
iter_values = islice(iter_values, max_length)
|
836 |
-
for index, child in enumerate(iter_values):
|
837 |
-
child_node = _traverse(child, depth=depth + 1)
|
838 |
-
child_node.last = index == last_item_index
|
839 |
-
append(child_node)
|
840 |
-
if max_length is not None and num_items > max_length:
|
841 |
-
append(Node(value_repr=f"... +{num_items - max_length}", last=True))
|
842 |
-
else:
|
843 |
-
node = Node(empty=empty, children=[], last=root)
|
844 |
-
|
845 |
-
pop_visited(obj_id)
|
846 |
-
else:
|
847 |
-
node = Node(value_repr=to_repr(obj), last=root)
|
848 |
-
node.is_tuple = _safe_isinstance(obj, tuple)
|
849 |
-
node.is_namedtuple = _is_namedtuple(obj)
|
850 |
-
return node
|
851 |
-
|
852 |
-
node = _traverse(_object, root=True)
|
853 |
-
return node
|
854 |
-
|
855 |
-
|
856 |
-
def pretty_repr(
|
857 |
-
_object: Any,
|
858 |
-
*,
|
859 |
-
max_width: int = 80,
|
860 |
-
indent_size: int = 4,
|
861 |
-
max_length: Optional[int] = None,
|
862 |
-
max_string: Optional[int] = None,
|
863 |
-
max_depth: Optional[int] = None,
|
864 |
-
expand_all: bool = False,
|
865 |
-
) -> str:
|
866 |
-
"""Prettify repr string by expanding on to new lines to fit within a given width.
|
867 |
-
|
868 |
-
Args:
|
869 |
-
_object (Any): Object to repr.
|
870 |
-
max_width (int, optional): Desired maximum width of repr string. Defaults to 80.
|
871 |
-
indent_size (int, optional): Number of spaces to indent. Defaults to 4.
|
872 |
-
max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation.
|
873 |
-
Defaults to None.
|
874 |
-
max_string (int, optional): Maximum length of string before truncating, or None to disable truncating.
|
875 |
-
Defaults to None.
|
876 |
-
max_depth (int, optional): Maximum depth of nested data structure, or None for no depth.
|
877 |
-
Defaults to None.
|
878 |
-
expand_all (bool, optional): Expand all containers regardless of available width. Defaults to False.
|
879 |
-
|
880 |
-
Returns:
|
881 |
-
str: A possibly multi-line representation of the object.
|
882 |
-
"""
|
883 |
-
|
884 |
-
if _safe_isinstance(_object, Node):
|
885 |
-
node = _object
|
886 |
-
else:
|
887 |
-
node = traverse(
|
888 |
-
_object, max_length=max_length, max_string=max_string, max_depth=max_depth
|
889 |
-
)
|
890 |
-
repr_str: str = node.render(
|
891 |
-
max_width=max_width, indent_size=indent_size, expand_all=expand_all
|
892 |
-
)
|
893 |
-
return repr_str
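
# --- Illustrative aside (editor's sketch, not part of the vendored module) ---
# pretty_repr() reflows the same object differently depending on max_width.
def _demo_pretty_repr() -> None:
    row = {"name": "Mazda RX-7", "years": (1978, 2002), "engine": "rotary"}
    print(pretty_repr(row, max_width=80))  # fits on a single line
    print(pretty_repr(row, max_width=30))  # expands across several lines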


def pprint(
    _object: Any,
    *,
    console: Optional["Console"] = None,
    indent_guides: bool = True,
    max_length: Optional[int] = None,
    max_string: Optional[int] = None,
    max_depth: Optional[int] = None,
    expand_all: bool = False,
) -> None:
    """A convenience function for pretty printing.

    Args:
        _object (Any): Object to pretty print.
        console (Console, optional): Console instance, or None to use default. Defaults to None.
        max_length (int, optional): Maximum length of containers before abbreviating, or None for no abbreviation.
            Defaults to None.
        max_string (int, optional): Maximum length of strings before truncating, or None to disable. Defaults to None.
        max_depth (int, optional): Maximum depth for nested data structures, or None for unlimited depth. Defaults to None.
        indent_guides (bool, optional): Enable indentation guides. Defaults to True.
        expand_all (bool, optional): Expand all containers. Defaults to False.
    """
    _console = get_console() if console is None else console
    _console.print(
        Pretty(
            _object,
            max_length=max_length,
            max_string=max_string,
            max_depth=max_depth,
            indent_guides=indent_guides,
            expand_all=expand_all,
            overflow="ignore",
        ),
        soft_wrap=True,
    )
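
# --- Illustrative aside (editor's sketch, not part of the vendored module) ---
# pprint() is the one-call convenience wrapper around Pretty + Console.
def _demo_pprint() -> None:
    pprint({"evo": list(range(30))}, max_length=5)  # shows 5 items then "... +25"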
|
931 |
-
|
932 |
-
|
933 |
-
if __name__ == "__main__": # pragma: no cover
|
934 |
-
|
935 |
-
class BrokenRepr:
|
936 |
-
def __repr__(self) -> str:
|
937 |
-
1 / 0
|
938 |
-
return "this will fail"
|
939 |
-
|
940 |
-
from typing import NamedTuple
|
941 |
-
|
942 |
-
class StockKeepingUnit(NamedTuple):
|
943 |
-
name: str
|
944 |
-
description: str
|
945 |
-
price: float
|
946 |
-
category: str
|
947 |
-
reviews: List[str]
|
948 |
-
|
949 |
-
d = defaultdict(int)
|
950 |
-
d["foo"] = 5
|
951 |
-
data = {
|
952 |
-
"foo": [
|
953 |
-
1,
|
954 |
-
"Hello World!",
|
955 |
-
100.123,
|
956 |
-
323.232,
|
957 |
-
432324.0,
|
958 |
-
{5, 6, 7, (1, 2, 3, 4), 8},
|
959 |
-
],
|
960 |
-
"bar": frozenset({1, 2, 3}),
|
961 |
-
"defaultdict": defaultdict(
|
962 |
-
list, {"crumble": ["apple", "rhubarb", "butter", "sugar", "flour"]}
|
963 |
-
),
|
964 |
-
"counter": Counter(
|
965 |
-
[
|
966 |
-
"apple",
|
967 |
-
"orange",
|
968 |
-
"pear",
|
969 |
-
"kumquat",
|
970 |
-
"kumquat",
|
971 |
-
"durian" * 100,
|
972 |
-
]
|
973 |
-
),
|
974 |
-
"atomic": (False, True, None),
|
975 |
-
"namedtuple": StockKeepingUnit(
|
976 |
-
"Sparkling British Spring Water",
|
977 |
-
"Carbonated spring water",
|
978 |
-
0.9,
|
979 |
-
"water",
|
980 |
-
["its amazing!", "its terrible!"],
|
981 |
-
),
|
982 |
-
"Broken": BrokenRepr(),
|
983 |
-
}
|
984 |
-
data["foo"].append(data) # type: ignore[attr-defined]
|
985 |
-
|
986 |
-
from pip._vendor.rich import print
|
987 |
-
|
988 |
-
# print(Pretty(data, indent_guides=True, max_string=20))
|
989 |
-
|
990 |
-
class Thing:
|
991 |
-
def __repr__(self) -> str:
|
992 |
-
return "Hello\x1b[38;5;239m World!"
|
993 |
-
|
994 |
-
print(Pretty(Thing()))
|
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/py36compat.py
DELETED
@@ -1,134 +0,0 @@
-import os
-from glob import glob
-from distutils.util import convert_path
-from distutils.command import sdist
-
-
-class sdist_add_defaults:
-    """
-    Mix-in providing forward-compatibility for functionality as found in
-    distutils on Python 3.7.
-
-    Do not edit the code in this class except to update functionality
-    as implemented in distutils. Instead, override in the subclass.
-    """
-
-    def add_defaults(self):
-        """Add all the default files to self.filelist:
-        - README or README.txt
-        - setup.py
-        - test/test*.py
-        - all pure Python modules mentioned in setup script
-        - all files pointed by package_data (build_py)
-        - all files defined in data_files.
-        - all files defined as scripts.
-        - all C sources listed as part of extensions or C libraries
-          in the setup script (doesn't catch C headers!)
-        Warns if (README or README.txt) or setup.py are missing; everything
-        else is optional.
-        """
-        self._add_defaults_standards()
-        self._add_defaults_optional()
-        self._add_defaults_python()
-        self._add_defaults_data_files()
-        self._add_defaults_ext()
-        self._add_defaults_c_libs()
-        self._add_defaults_scripts()
-
-    @staticmethod
-    def _cs_path_exists(fspath):
-        """
-        Case-sensitive path existence check
-
-        >>> sdist_add_defaults._cs_path_exists(__file__)
-        True
-        >>> sdist_add_defaults._cs_path_exists(__file__.upper())
-        False
-        """
-        if not os.path.exists(fspath):
-            return False
-        # make absolute so we always have a directory
-        abspath = os.path.abspath(fspath)
-        directory, filename = os.path.split(abspath)
-        return filename in os.listdir(directory)
-
-    def _add_defaults_standards(self):
-        standards = [self.READMES, self.distribution.script_name]
-        for fn in standards:
-            if isinstance(fn, tuple):
-                alts = fn
-                got_it = False
-                for fn in alts:
-                    if self._cs_path_exists(fn):
-                        got_it = True
-                        self.filelist.append(fn)
-                        break
-
-                if not got_it:
-                    self.warn("standard file not found: should have one of " +
-                              ', '.join(alts))
-            else:
-                if self._cs_path_exists(fn):
-                    self.filelist.append(fn)
-                else:
-                    self.warn("standard file '%s' not found" % fn)
-
-    def _add_defaults_optional(self):
-        optional = ['test/test*.py', 'setup.cfg']
-        for pattern in optional:
-            files = filter(os.path.isfile, glob(pattern))
-            self.filelist.extend(files)
-
-    def _add_defaults_python(self):
-        # build_py is used to get:
-        #  - python modules
-        #  - files defined in package_data
-        build_py = self.get_finalized_command('build_py')
-
-        # getting python files
-        if self.distribution.has_pure_modules():
-            self.filelist.extend(build_py.get_source_files())
-
-        # getting package_data files
-        # (computed in build_py.data_files by build_py.finalize_options)
-        for pkg, src_dir, build_dir, filenames in build_py.data_files:
-            for filename in filenames:
-                self.filelist.append(os.path.join(src_dir, filename))
-
-    def _add_defaults_data_files(self):
-        # getting distribution.data_files
-        if self.distribution.has_data_files():
-            for item in self.distribution.data_files:
-                if isinstance(item, str):
-                    # plain file
-                    item = convert_path(item)
-                    if os.path.isfile(item):
-                        self.filelist.append(item)
-                else:
-                    # a (dirname, filenames) tuple
-                    dirname, filenames = item
-                    for f in filenames:
-                        f = convert_path(f)
-                        if os.path.isfile(f):
-                            self.filelist.append(f)
-
-    def _add_defaults_ext(self):
-        if self.distribution.has_ext_modules():
-            build_ext = self.get_finalized_command('build_ext')
-            self.filelist.extend(build_ext.get_source_files())
-
-    def _add_defaults_c_libs(self):
-        if self.distribution.has_c_libraries():
-            build_clib = self.get_finalized_command('build_clib')
-            self.filelist.extend(build_clib.get_source_files())
-
-    def _add_defaults_scripts(self):
-        if self.distribution.has_scripts():
-            build_scripts = self.get_finalized_command('build_scripts')
-            self.filelist.extend(build_scripts.get_source_files())
-
-
-if hasattr(sdist.sdist, '_add_defaults_standards'):
-    # disable the functionality already available upstream
-    class sdist_add_defaults:  # noqa
-        pass
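The guard at the bottom collapses the mix-in to an empty class whenever the running distutils already provides `_add_defaults_standards`, so subclasses fall back to the upstream behaviour. A minimal sketch of how such a mix-in composes with the real command (illustrative, not part of the deleted file):

from distutils.command import sdist as orig

class sdist(sdist_add_defaults, orig.sdist):
    # MRO puts the mix-in first, so its add_defaults() is used only
    # when the guard above left the full implementation in place.
    pass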
spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/projects/DensePose/densepose/roi_head.py
DELETED
@@ -1,213 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-
-import numpy as np
-from typing import Dict
-import fvcore.nn.weight_init as weight_init
-import torch
-import torch.nn as nn
-from torch.nn import functional as F
-
-from detectron2.layers import Conv2d, ShapeSpec, get_norm
-from detectron2.modeling import ROI_HEADS_REGISTRY, StandardROIHeads
-from detectron2.modeling.poolers import ROIPooler
-from detectron2.modeling.roi_heads import select_foreground_proposals
-
-from .densepose_head import (
-    build_densepose_data_filter,
-    build_densepose_head,
-    build_densepose_losses,
-    build_densepose_predictor,
-    densepose_inference,
-)
-
-
-class Decoder(nn.Module):
-    """
-    A semantic segmentation head described in detail in the Panoptic Feature Pyramid Networks paper
-    (https://arxiv.org/abs/1901.02446). It takes FPN features as input and merges information from
-    all levels of the FPN into single output.
-    """
-
-    def __init__(self, cfg, input_shape: Dict[str, ShapeSpec], in_features):
-        super(Decoder, self).__init__()
-
-        # fmt: off
-        self.in_features = in_features
-        feature_strides = {k: v.stride for k, v in input_shape.items()}
-        feature_channels = {k: v.channels for k, v in input_shape.items()}
-        num_classes = cfg.MODEL.ROI_DENSEPOSE_HEAD.DECODER_NUM_CLASSES
-        conv_dims = cfg.MODEL.ROI_DENSEPOSE_HEAD.DECODER_CONV_DIMS
-        self.common_stride = cfg.MODEL.ROI_DENSEPOSE_HEAD.DECODER_COMMON_STRIDE
-        norm = cfg.MODEL.ROI_DENSEPOSE_HEAD.DECODER_NORM
-        # fmt: on
-
-        self.scale_heads = []
-        for in_feature in self.in_features:
-            head_ops = []
-            head_length = max(
-                1, int(np.log2(feature_strides[in_feature]) - np.log2(self.common_stride))
-            )
-            for k in range(head_length):
-                conv = Conv2d(
-                    feature_channels[in_feature] if k == 0 else conv_dims,
-                    conv_dims,
-                    kernel_size=3,
-                    stride=1,
-                    padding=1,
-                    bias=not norm,
-                    norm=get_norm(norm, conv_dims),
-                    activation=F.relu,
-                )
-                weight_init.c2_msra_fill(conv)
-                head_ops.append(conv)
-                if feature_strides[in_feature] != self.common_stride:
-                    head_ops.append(
-                        nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
-                    )
-            self.scale_heads.append(nn.Sequential(*head_ops))
-            self.add_module(in_feature, self.scale_heads[-1])
-        self.predictor = Conv2d(conv_dims, num_classes, kernel_size=1, stride=1, padding=0)
-        weight_init.c2_msra_fill(self.predictor)
-
-    def forward(self, features):
-        for i, _ in enumerate(self.in_features):
-            if i == 0:
-                x = self.scale_heads[i](features[i])
-            else:
-                x = x + self.scale_heads[i](features[i])
-        x = self.predictor(x)
-        return x
-
-
-@ROI_HEADS_REGISTRY.register()
-class DensePoseROIHeads(StandardROIHeads):
-    """
-    A Standard ROIHeads which contains an addition of DensePose head.
-    """
-
-    def __init__(self, cfg, input_shape):
-        super().__init__(cfg, input_shape)
-        self._init_densepose_head(cfg, input_shape)
-
-    def _init_densepose_head(self, cfg, input_shape):
-        # fmt: off
-        self.densepose_on = cfg.MODEL.DENSEPOSE_ON
-        if not self.densepose_on:
-            return
-        self.densepose_data_filter = build_densepose_data_filter(cfg)
-        dp_pooler_resolution = cfg.MODEL.ROI_DENSEPOSE_HEAD.POOLER_RESOLUTION
-        dp_pooler_sampling_ratio = cfg.MODEL.ROI_DENSEPOSE_HEAD.POOLER_SAMPLING_RATIO
-        dp_pooler_type = cfg.MODEL.ROI_DENSEPOSE_HEAD.POOLER_TYPE
-        self.use_decoder = cfg.MODEL.ROI_DENSEPOSE_HEAD.DECODER_ON
-        # fmt: on
-        if self.use_decoder:
-            dp_pooler_scales = (1.0 / input_shape[self.in_features[0]].stride,)
-        else:
-            dp_pooler_scales = tuple(1.0 / input_shape[k].stride for k in self.in_features)
-        in_channels = [input_shape[f].channels for f in self.in_features][0]
-
-        if self.use_decoder:
-            self.decoder = Decoder(cfg, input_shape, self.in_features)
-
-        self.densepose_pooler = ROIPooler(
-            output_size=dp_pooler_resolution,
-            scales=dp_pooler_scales,
-            sampling_ratio=dp_pooler_sampling_ratio,
-            pooler_type=dp_pooler_type,
-        )
-        self.densepose_head = build_densepose_head(cfg, in_channels)
-        self.densepose_predictor = build_densepose_predictor(
-            cfg, self.densepose_head.n_out_channels
-        )
-        self.densepose_losses = build_densepose_losses(cfg)
-
-    def _forward_densepose(self, features, instances):
-        """
-        Forward logic of the densepose prediction branch.
-
-        Args:
-            features (list[Tensor]): #level input features for densepose prediction
-            instances (list[Instances]): the per-image instances to train/predict densepose.
-                In training, they can be the proposals.
-                In inference, they can be the predicted boxes.
-
-        Returns:
-            In training, a dict of losses.
-            In inference, update `instances` with new fields "densepose" and return it.
-        """
-        if not self.densepose_on:
-            return {} if self.training else instances
-
-        features = [features[f] for f in self.in_features]
-        if self.training:
-            proposals, _ = select_foreground_proposals(instances, self.num_classes)
-            proposals_dp = self.densepose_data_filter(proposals)
-            if len(proposals_dp) > 0:
-                # NOTE may deadlock in DDP if certain workers have empty proposals_dp
-                proposal_boxes = [x.proposal_boxes for x in proposals_dp]
-
-                if self.use_decoder:
-                    features = [self.decoder(features)]
-
-                features_dp = self.densepose_pooler(features, proposal_boxes)
-                densepose_head_outputs = self.densepose_head(features_dp)
-                densepose_outputs, _, confidences, _ = self.densepose_predictor(
-                    densepose_head_outputs
-                )
-                densepose_loss_dict = self.densepose_losses(
-                    proposals_dp, densepose_outputs, confidences
-                )
-                return densepose_loss_dict
-        else:
-            pred_boxes = [x.pred_boxes for x in instances]
-
-            if self.use_decoder:
-                features = [self.decoder(features)]
-
-            features_dp = self.densepose_pooler(features, pred_boxes)
-            if len(features_dp) > 0:
-                densepose_head_outputs = self.densepose_head(features_dp)
-                densepose_outputs, _, confidences, _ = self.densepose_predictor(
-                    densepose_head_outputs
-                )
-            else:
-                # If no detection occurred instances
-                # set densepose_outputs to empty tensors
-                empty_tensor = torch.zeros(size=(0, 0, 0, 0), device=features_dp.device)
-                densepose_outputs = tuple([empty_tensor] * 4)
-                confidences = tuple([empty_tensor] * 4)
-
-            densepose_inference(densepose_outputs, confidences, instances)
-            return instances
-
-    def forward(self, images, features, proposals, targets=None):
-        instances, losses = super().forward(images, features, proposals, targets)
-        del targets, images
-
-        if self.training:
-            losses.update(self._forward_densepose(features, instances))
-        return instances, losses
-
-    def forward_with_given_boxes(self, features, instances):
-        """
-        Use the given boxes in `instances` to produce other (non-box) per-ROI outputs.
-
-        This is useful for downstream tasks where a box is known, but need to obtain
-        other attributes (outputs of other heads).
-        Test-time augmentation also uses this.
-
-        Args:
-            features: same as in `forward()`
-            instances (list[Instances]): instances to predict other outputs. Expect the keys
-                "pred_boxes" and "pred_classes" to exist.
-
-        Returns:
-            instances (list[Instances]):
-                the same `Instances` objects, with extra
-                fields such as `pred_masks` or `pred_keypoints`.
-        """
-
-        instances = super().forward_with_given_boxes(features, instances)
-        instances = self._forward_densepose(features, instances)
-        return instances
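Because `DensePoseROIHeads` is registered with `@ROI_HEADS_REGISTRY.register()`, detectron2 builds it by name from the config rather than by direct import. A minimal sketch of that lookup (the registry API shown is fvcore's, which detectron2 builds on; treat the exact call as an assumption):

from detectron2.modeling import ROI_HEADS_REGISTRY

# cfg.MODEL.ROI_HEADS.NAME = "DensePoseROIHeads" resolves through the
# registry to the class defined above.
head_cls = ROI_HEADS_REGISTRY.get("DensePoseROIHeads")
print(head_cls.__name__)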
spaces/CVPR/LIVE/pybind11/tests/test_exceptions.cpp
DELETED
@@ -1,224 +0,0 @@
-/*
-    tests/test_custom-exceptions.cpp -- exception translation
-
-    Copyright (c) 2016 Pim Schellart <[email protected]>
-
-    All rights reserved. Use of this source code is governed by a
-    BSD-style license that can be found in the LICENSE file.
-*/
-
-#include "pybind11_tests.h"
-
-// A type that should be raised as an exception in Python
-class MyException : public std::exception {
-public:
-    explicit MyException(const char * m) : message{m} {}
-    virtual const char * what() const noexcept override {return message.c_str();}
-private:
-    std::string message = "";
-};
-
-// A type that should be translated to a standard Python exception
-class MyException2 : public std::exception {
-public:
-    explicit MyException2(const char * m) : message{m} {}
-    virtual const char * what() const noexcept override {return message.c_str();}
-private:
-    std::string message = "";
-};
-
-// A type that is not derived from std::exception (and is thus unknown)
-class MyException3 {
-public:
-    explicit MyException3(const char * m) : message{m} {}
-    virtual const char * what() const noexcept {return message.c_str();}
-private:
-    std::string message = "";
-};
-
-// A type that should be translated to MyException
-// and delegated to its exception translator
-class MyException4 : public std::exception {
-public:
-    explicit MyException4(const char * m) : message{m} {}
-    virtual const char * what() const noexcept override {return message.c_str();}
-private:
-    std::string message = "";
-};
-
-
-// Like the above, but declared via the helper function
-class MyException5 : public std::logic_error {
-public:
-    explicit MyException5(const std::string &what) : std::logic_error(what) {}
-};
-
-// Inherits from MyException5
-class MyException5_1 : public MyException5 {
-    using MyException5::MyException5;
-};
-
-struct PythonCallInDestructor {
-    PythonCallInDestructor(const py::dict &d) : d(d) {}
-    ~PythonCallInDestructor() { d["good"] = true; }
-
-    py::dict d;
-};
-
-
-
-struct PythonAlreadySetInDestructor {
-    PythonAlreadySetInDestructor(const py::str &s) : s(s) {}
-    ~PythonAlreadySetInDestructor() {
-        py::dict foo;
-        try {
-            // Assign to a py::object to force read access of nonexistent dict entry
-            py::object o = foo["bar"];
-        }
-        catch (py::error_already_set& ex) {
-            ex.discard_as_unraisable(s);
-        }
-    }
-
-    py::str s;
-};
-
-
-TEST_SUBMODULE(exceptions, m) {
-    m.def("throw_std_exception", []() {
-        throw std::runtime_error("This exception was intentionally thrown.");
-    });
-
-    // make a new custom exception and use it as a translation target
-    static py::exception<MyException> ex(m, "MyException");
-    py::register_exception_translator([](std::exception_ptr p) {
-        try {
-            if (p) std::rethrow_exception(p);
-        } catch (const MyException &e) {
-            // Set MyException as the active python error
-            ex(e.what());
-        }
-    });
-
-    // register new translator for MyException2
-    // no need to store anything here because this type will
-    // never by visible from Python
-    py::register_exception_translator([](std::exception_ptr p) {
-        try {
-            if (p) std::rethrow_exception(p);
-        } catch (const MyException2 &e) {
-            // Translate this exception to a standard RuntimeError
-            PyErr_SetString(PyExc_RuntimeError, e.what());
-        }
-    });
-
-    // register new translator for MyException4
-    // which will catch it and delegate to the previously registered
-    // translator for MyException by throwing a new exception
-    py::register_exception_translator([](std::exception_ptr p) {
-        try {
-            if (p) std::rethrow_exception(p);
-        } catch (const MyException4 &e) {
-            throw MyException(e.what());
-        }
-    });
-
-    // A simple exception translation:
-    auto ex5 = py::register_exception<MyException5>(m, "MyException5");
-    // A slightly more complicated one that declares MyException5_1 as a subclass of MyException5
-    py::register_exception<MyException5_1>(m, "MyException5_1", ex5.ptr());
-
-    m.def("throws1", []() { throw MyException("this error should go to a custom type"); });
-    m.def("throws2", []() { throw MyException2("this error should go to a standard Python exception"); });
-    m.def("throws3", []() { throw MyException3("this error cannot be translated"); });
-    m.def("throws4", []() { throw MyException4("this error is rethrown"); });
-    m.def("throws5", []() { throw MyException5("this is a helper-defined translated exception"); });
-    m.def("throws5_1", []() { throw MyException5_1("MyException5 subclass"); });
-    m.def("throws_logic_error", []() { throw std::logic_error("this error should fall through to the standard handler"); });
-    m.def("throws_overflow_error", []() {throw std::overflow_error(""); });
-    m.def("exception_matches", []() {
-        py::dict foo;
-        try {
-            // Assign to a py::object to force read access of nonexistent dict entry
-            py::object o = foo["bar"];
-        }
-        catch (py::error_already_set& ex) {
-            if (!ex.matches(PyExc_KeyError)) throw;
-            return true;
-        }
-        return false;
-    });
-    m.def("exception_matches_base", []() {
-        py::dict foo;
-        try {
-            // Assign to a py::object to force read access of nonexistent dict entry
-            py::object o = foo["bar"];
-        }
-        catch (py::error_already_set &ex) {
-            if (!ex.matches(PyExc_Exception)) throw;
-            return true;
-        }
-        return false;
-    });
-    m.def("modulenotfound_exception_matches_base", []() {
-        try {
-            // On Python >= 3.6, this raises a ModuleNotFoundError, a subclass of ImportError
-            py::module::import("nonexistent");
-        }
-        catch (py::error_already_set &ex) {
-            if (!ex.matches(PyExc_ImportError)) throw;
-            return true;
-        }
-        return false;
-    });
-
-    m.def("throw_already_set", [](bool err) {
-        if (err)
-            PyErr_SetString(PyExc_ValueError, "foo");
-        try {
-            throw py::error_already_set();
-        } catch (const std::runtime_error& e) {
-            if ((err && e.what() != std::string("ValueError: foo")) ||
-                (!err && e.what() != std::string("Unknown internal error occurred")))
-            {
-                PyErr_Clear();
-                throw std::runtime_error("error message mismatch");
-            }
-        }
-        PyErr_Clear();
-        if (err)
-            PyErr_SetString(PyExc_ValueError, "foo");
-        throw py::error_already_set();
-    });
-
-    m.def("python_call_in_destructor", [](py::dict d) {
-        try {
-            PythonCallInDestructor set_dict_in_destructor(d);
-            PyErr_SetString(PyExc_ValueError, "foo");
-            throw py::error_already_set();
-        } catch (const py::error_already_set&) {
-            return true;
-        }
-        return false;
-    });
-
-    m.def("python_alreadyset_in_destructor", [](py::str s) {
-        PythonAlreadySetInDestructor alreadyset_in_destructor(s);
-        return true;
-    });
-
-    // test_nested_throws
-    m.def("try_catch", [m](py::object exc_type, py::function f, py::args args) {
-        try { f(*args); }
-        catch (py::error_already_set &ex) {
-            if (ex.matches(exc_type))
-                py::print(ex.what());
-            else
-                throw;
-        }
-    });
-
-    // Test repr that cannot be displayed
-    m.def("simple_bool_passthrough", [](bool x) {return x;});
-
-}
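On the Python side, each translator above decides which `except` clause fires. A sketch of exercising the built test module (assuming the pybind11 test suite has been compiled so that `pybind11_tests.exceptions` is importable):

from pybind11_tests import exceptions as m

try:
    m.throws1()  # MyException is translated to the custom m.MyException type
except m.MyException as e:
    print("custom:", e)

try:
    m.throws2()  # MyException2 is translated to a standard RuntimeError
except RuntimeError as e:
    print("runtime:", e)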
spaces/CVPR/LIVE/thrust/thrust/random/linear_congruential_engine.h
DELETED
@@ -1,295 +0,0 @@
-/*
- *  Copyright 2008-2013 NVIDIA Corporation
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-
-/*! \file linear_congruential_engine.h
- *  \brief A linear congruential pseudorandom number engine.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <iostream>
-#include <thrust/detail/cstdint.h>
-#include <thrust/random/detail/random_core_access.h>
-#include <thrust/random/detail/linear_congruential_engine_discard.h>
-
-namespace thrust
-{
-
-namespace random
-{
-
-/*! \addtogroup random_number_engine_templates Random Number Engine Class Templates
- *  \ingroup random
- *  \{
- */
-
-/*! \class linear_congruential_engine
- *  \brief A \p linear_congruential_engine random number engine produces unsigned integer
- *         random numbers using a linear congruential random number generation algorithm.
- *
- *         The generation algorithm has the form <tt>x_i = (a * x_{i-1} + c) mod m</tt>.
- *
- *  \tparam UIntType The type of unsigned integer to produce.
- *  \tparam a The multiplier used in the generation algorithm.
- *  \tparam c The increment used in the generation algorithm.
- *  \tparam m The modulus used in the generation algorithm.
- *
- *  \note Inexperienced users should not use this class template directly.  Instead, use
- *  \p minstd_rand or \p minstd_rand0.
- *
- *  The following code snippet shows examples of use of a \p linear_congruential_engine instance:
- *
- *  \code
- *  #include <thrust/random/linear_congruential_engine.h>
- *  #include <iostream>
- *
- *  int main(void)
- *  {
- *    // create a minstd_rand object, which is an instance of linear_congruential_engine
- *    thrust::minstd_rand rng1;
- *
- *    // output some random values to cout
- *    std::cout << rng1() << std::endl;
- *
- *    // a random value is printed
- *
- *    // create a new minstd_rand from a seed
- *    thrust::minstd_rand rng2(13);
- *
- *    // discard some random values
- *    rng2.discard(13);
- *
- *    // stream the object to an iostream
- *    std::cout << rng2 << std::endl;
- *
- *    // rng2's current state is printed
- *
- *    // print the minimum and maximum values that minstd_rand can produce
- *    std::cout << thrust::minstd_rand::min << std::endl;
- *    std::cout << thrust::minstd_rand::max << std::endl;
- *
- *    // the range of minstd_rand is printed
- *
- *    // save the state of rng2 to a different object
- *    thrust::minstd_rand rng3 = rng2;
- *
- *    // compare rng2 and rng3
- *    std::cout << (rng2 == rng3) << std::endl;
- *
- *    // 1 is printed
- *
- *    // re-seed rng2 with a different seed
- *    rng2.seed(7);
- *
- *    // compare rng2 and rng3
- *    std::cout << (rng2 == rng3) << std::endl;
- *
- *    // 0 is printed
- *
- *    return 0;
- *  }
- *
- *  \endcode
- *
- *  \see thrust::random::minstd_rand
- *  \see thrust::random::minstd_rand0
- */
-template<typename UIntType, UIntType a, UIntType c, UIntType m>
-  class linear_congruential_engine
-{
-  public:
-    // types
-
-    /*! \typedef result_type
-     *  \brief The type of the unsigned integer produced by this \p linear_congruential_engine.
-     */
-    typedef UIntType result_type;
-
-    // engine characteristics
-
-    /*! The multiplier used in the generation algorithm.
-     */
-    static const result_type multiplier = a;
-
-    /*! The increment used in the generation algorithm.
-     */
-    static const result_type increment = c;
-
-    /*! The modulus used in the generation algorithm.
-     */
-    static const result_type modulus = m;
-
-    /*! The smallest value this \p linear_congruential_engine may potentially produce.
-     */
-    static const result_type min = c == 0u ? 1u : 0u;
-
-    /*! The largest value this \p linear_congruential_engine may potentially produce.
-     */
-    static const result_type max = m - 1u;
-
-    /*! The default seed of this \p linear_congruential_engine.
-     */
-    static const result_type default_seed = 1u;
-
-    // constructors and seeding functions
-
-    /*! This constructor, which optionally accepts a seed, initializes a new
-     *  \p linear_congruential_engine.
-     *
-     *  \param s The seed used to intialize this \p linear_congruential_engine's state.
-     */
-    __host__ __device__
-    explicit linear_congruential_engine(result_type s = default_seed);
-
-    /*! This method initializes this \p linear_congruential_engine's state, and optionally accepts
-     *  a seed value.
-     *
-     *  \param s The seed used to initializes this \p linear_congruential_engine's state.
-     */
-    __host__ __device__
-    void seed(result_type s = default_seed);
-
-    // generating functions
-
-    /*! This member function produces a new random value and updates this \p linear_congruential_engine's state.
-     *  \return A new random number.
-     */
-    __host__ __device__
-    result_type operator()(void);
-
-    /*! This member function advances this \p linear_congruential_engine's state a given number of times
-     *  and discards the results.
-     *
-     *  \param z The number of random values to discard.
-     *  \note This function is provided because an implementation may be able to accelerate it.
-     */
-    __host__ __device__
-    void discard(unsigned long long z);
-
-    /*! \cond
-     */
-  private:
-    result_type m_x;
-
-    static void transition(result_type &state);
-
-    friend struct thrust::random::detail::random_core_access;
-
-    friend struct thrust::random::detail::linear_congruential_engine_discard;
-
-    __host__ __device__
-    bool equal(const linear_congruential_engine &rhs) const;
-
-    template<typename CharT, typename Traits>
-    std::basic_ostream<CharT,Traits>& stream_out(std::basic_ostream<CharT,Traits> &os) const;
-
-    template<typename CharT, typename Traits>
-    std::basic_istream<CharT,Traits>& stream_in(std::basic_istream<CharT,Traits> &is);
-
-    /*! \endcond
-     */
-}; // end linear_congruential_engine
-
-
-/*! This function checks two \p linear_congruential_engines for equality.
- *  \param lhs The first \p linear_congruential_engine to test.
- *  \param rhs The second \p linear_congruential_engine to test.
- *  \return \c true if \p lhs is equal to \p rhs; \c false, otherwise.
- */
-template<typename UIntType_, UIntType_ a_, UIntType_ c_, UIntType_ m_>
-__host__ __device__
-bool operator==(const linear_congruential_engine<UIntType_,a_,c_,m_> &lhs,
-                const linear_congruential_engine<UIntType_,a_,c_,m_> &rhs);
-
-
-/*! This function checks two \p linear_congruential_engines for inequality.
- *  \param lhs The first \p linear_congruential_engine to test.
- *  \param rhs The second \p linear_congruential_engine to test.
- *  \return \c true if \p lhs is not equal to \p rhs; \c false, otherwise.
- */
-template<typename UIntType_, UIntType_ a_, UIntType_ c_, UIntType_ m_>
-__host__ __device__
-bool operator!=(const linear_congruential_engine<UIntType_,a_,c_,m_> &lhs,
-                const linear_congruential_engine<UIntType_,a_,c_,m_> &rhs);
-
-
-/*! This function streams a linear_congruential_engine to a \p std::basic_ostream.
- *  \param os The \p basic_ostream to stream out to.
- *  \param e The \p linear_congruential_engine to stream out.
- *  \return \p os
- */
-template<typename UIntType_, UIntType_ a_, UIntType_ c_, UIntType_ m_,
-         typename CharT, typename Traits>
-std::basic_ostream<CharT,Traits>&
-operator<<(std::basic_ostream<CharT,Traits> &os,
-           const linear_congruential_engine<UIntType_,a_,c_,m_> &e);
-
-
-/*! This function streams a linear_congruential_engine in from a std::basic_istream.
- *  \param is The \p basic_istream to stream from.
- *  \param e The \p linear_congruential_engine to stream in.
- *  \return \p is
- */
-template<typename UIntType_, UIntType_ a_, UIntType_ c_, UIntType_ m_,
-         typename CharT, typename Traits>
-std::basic_istream<CharT,Traits>&
-operator>>(std::basic_istream<CharT,Traits> &is,
-           linear_congruential_engine<UIntType_,a_,c_,m_> &e);
-
-
-/*! \} // random_number_engine_templates
- */
-
-
-/*! \addtogroup predefined_random
- *  \{
- */
-
-// XXX the type N2111 used here was uint_fast32_t
-
-/*! \typedef minstd_rand0
- *  \brief A random number engine with predefined parameters which implements a version of
- *         the Minimal Standard random number generation algorithm.
- *  \note The 10000th consecutive invocation of a default-constructed object of type \p minstd_rand0
- *        shall produce the value \c 1043618065 .
- */
-typedef linear_congruential_engine<thrust::detail::uint32_t, 16807, 0, 2147483647> minstd_rand0;
-
-
-/*! \typedef minstd_rand
- *  \brief A random number engine with predefined parameters which implements a version of
- *         the Minimal Standard random number generation algorithm.
- *  \note The 10000th consecutive invocation of a default-constructed object of type \p minstd_rand
- *        shall produce the value \c 399268537 .
- */
-typedef linear_congruential_engine<thrust::detail::uint32_t, 48271, 0, 2147483647> minstd_rand;
-
-/*! \} // predefined_random
- */
-
-} // end random
-
-// import names into thrust::
-using random::linear_congruential_engine;
-using random::minstd_rand;
-using random::minstd_rand0;
-
-} // end thrust
-
-#include <thrust/random/detail/linear_congruential_engine.inl>
-
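The recurrence documented above, x_i = (a * x_{i-1} + c) mod m, is easy to sanity-check against the predefined engines. A pure-Python sketch (not Thrust code) reproducing minstd_rand0 and its documented 10000th output:

def lcg(a, c, m, seed=1):
    # x_i = (a * x_{i-1} + c) mod m
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

rng = lcg(16807, 0, 2147483647)            # minstd_rand0 parameters, default seed 1
value = [next(rng) for _ in range(10000)][-1]
assert value == 1043618065                 # matches the \note for minstd_rand0 above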
spaces/ChristopherMarais/Andrew_AI-BB_classification-beta/mysite/andrew_alpha/apps.py
DELETED
@@ -1,6 +0,0 @@
-from django.apps import AppConfig
-
-
-class AndrewAlphaConfig(AppConfig):
-    default_auto_field = 'django.db.models.BigAutoField'
-    name = 'andrew_alpha'
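This config class only takes effect once the app is listed in the project settings; with Django 3.2+ it is auto-discovered from andrew_alpha/apps.py. A minimal settings sketch (illustrative):

# settings.py
INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.contenttypes",
    "andrew_alpha",  # Django discovers AndrewAlphaConfig automatically
]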
spaces/Clebersla/RVC_V2_Huggingface_Version/lib/infer_pack/models_onnx.py
DELETED
@@ -1,819 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from lib.infer_pack import modules
-from lib.infer_pack import attentions
-from lib.infer_pack import commons
-from lib.infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from lib.infer_pack.commons import init_weights
-import numpy as np
-from lib.infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
-    def __init__(
-        self,
-        out_channels,
-        hidden_channels,
-        filter_channels,
-        n_heads,
-        n_layers,
-        kernel_size,
-        p_dropout,
-        f0=True,
-    ):
-        super().__init__()
-        self.out_channels = out_channels
-        self.hidden_channels = hidden_channels
-        self.filter_channels = filter_channels
-        self.n_heads = n_heads
-        self.n_layers = n_layers
-        self.kernel_size = kernel_size
-        self.p_dropout = p_dropout
-        self.emb_phone = nn.Linear(256, hidden_channels)
-        self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0 == True:
-            self.emb_pitch = nn.Embedding(256, hidden_channels)  # pitch 256
-        self.encoder = attentions.Encoder(
-            hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
-        )
-        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
-    def forward(self, phone, pitch, lengths):
-        if pitch == None:
-            x = self.emb_phone(phone)
-        else:
-            x = self.emb_phone(phone) + self.emb_pitch(pitch)
-        x = x * math.sqrt(self.hidden_channels)  # [b, t, h]
-        x = self.lrelu(x)
-        x = torch.transpose(x, 1, -1)  # [b, h, t]
-        x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
-            x.dtype
-        )
-        x = self.encoder(x * x_mask, x_mask)
-        stats = self.proj(x) * x_mask
-
-        m, logs = torch.split(stats, self.out_channels, dim=1)
-        return m, logs, x_mask
-
-
-class TextEncoder768(nn.Module):
-    def __init__(
-        self,
-        out_channels,
-        hidden_channels,
-        filter_channels,
-        n_heads,
-        n_layers,
-        kernel_size,
-        p_dropout,
-        f0=True,
-    ):
-        super().__init__()
-        self.out_channels = out_channels
-        self.hidden_channels = hidden_channels
-        self.filter_channels = filter_channels
-        self.n_heads = n_heads
-        self.n_layers = n_layers
-        self.kernel_size = kernel_size
-        self.p_dropout = p_dropout
-        self.emb_phone = nn.Linear(768, hidden_channels)
-        self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0 == True:
-            self.emb_pitch = nn.Embedding(256, hidden_channels)  # pitch 256
-        self.encoder = attentions.Encoder(
-            hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
-        )
-        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
-    def forward(self, phone, pitch, lengths):
-        if pitch == None:
-            x = self.emb_phone(phone)
-        else:
-            x = self.emb_phone(phone) + self.emb_pitch(pitch)
-        x = x * math.sqrt(self.hidden_channels)  # [b, t, h]
-        x = self.lrelu(x)
-        x = torch.transpose(x, 1, -1)  # [b, h, t]
-        x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
-            x.dtype
-        )
-        x = self.encoder(x * x_mask, x_mask)
-        stats = self.proj(x) * x_mask
-
-        m, logs = torch.split(stats, self.out_channels, dim=1)
-        return m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
-    def __init__(
-        self,
-        channels,
-        hidden_channels,
-        kernel_size,
-        dilation_rate,
-        n_layers,
-        n_flows=4,
-        gin_channels=0,
-    ):
-        super().__init__()
-        self.channels = channels
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
-        self.dilation_rate = dilation_rate
-        self.n_layers = n_layers
-        self.n_flows = n_flows
-        self.gin_channels = gin_channels
-
-        self.flows = nn.ModuleList()
-        for i in range(n_flows):
-            self.flows.append(
-                modules.ResidualCouplingLayer(
-                    channels,
-                    hidden_channels,
-                    kernel_size,
-                    dilation_rate,
-                    n_layers,
-                    gin_channels=gin_channels,
-                    mean_only=True,
-                )
-            )
-            self.flows.append(modules.Flip())
-
-    def forward(self, x, x_mask, g=None, reverse=False):
-        if not reverse:
-            for flow in self.flows:
-                x, _ = flow(x, x_mask, g=g, reverse=reverse)
-        else:
-            for flow in reversed(self.flows):
-                x = flow(x, x_mask, g=g, reverse=reverse)
-        return x
-
-    def remove_weight_norm(self):
-        for i in range(self.n_flows):
-            self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
-    def __init__(
-        self,
-        in_channels,
-        out_channels,
-        hidden_channels,
-        kernel_size,
-        dilation_rate,
-        n_layers,
-        gin_channels=0,
-    ):
-        super().__init__()
-        self.in_channels = in_channels
-        self.out_channels = out_channels
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
-        self.dilation_rate = dilation_rate
-        self.n_layers = n_layers
-        self.gin_channels = gin_channels
-
-        self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
-        self.enc = modules.WN(
-            hidden_channels,
-            kernel_size,
-            dilation_rate,
-            n_layers,
-            gin_channels=gin_channels,
-        )
-        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
-    def forward(self, x, x_lengths, g=None):
-        x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
-            x.dtype
-        )
-        x = self.pre(x) * x_mask
-        x = self.enc(x, x_mask, g=g)
-        stats = self.proj(x) * x_mask
-        m, logs = torch.split(stats, self.out_channels, dim=1)
-        z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
-        return z, m, logs, x_mask
-
-    def remove_weight_norm(self):
-        self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
-    def __init__(
-        self,
-        initial_channel,
-        resblock,
-        resblock_kernel_sizes,
-        resblock_dilation_sizes,
-        upsample_rates,
-        upsample_initial_channel,
-        upsample_kernel_sizes,
-        gin_channels=0,
-    ):
-        super(Generator, self).__init__()
-        self.num_kernels = len(resblock_kernel_sizes)
-        self.num_upsamples = len(upsample_rates)
-        self.conv_pre = Conv1d(
-            initial_channel, upsample_initial_channel, 7, 1, padding=3
-        )
-        resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
-        self.ups = nn.ModuleList()
-        for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
-            self.ups.append(
-                weight_norm(
-                    ConvTranspose1d(
-                        upsample_initial_channel // (2**i),
-                        upsample_initial_channel // (2 ** (i + 1)),
-                        k,
-                        u,
-                        padding=(k - u) // 2,
-                    )
-                )
-            )
-
-        self.resblocks = nn.ModuleList()
-        for i in range(len(self.ups)):
-            ch = upsample_initial_channel // (2 ** (i + 1))
-            for j, (k, d) in enumerate(
-                zip(resblock_kernel_sizes, resblock_dilation_sizes)
-            ):
-                self.resblocks.append(resblock(ch, k, d))
-
-        self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
-        self.ups.apply(init_weights)
-
-        if gin_channels != 0:
-            self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
-    def forward(self, x, g=None):
-        x = self.conv_pre(x)
-        if g is not None:
-            x = x + self.cond(g)
-
-        for i in range(self.num_upsamples):
-            x = F.leaky_relu(x, modules.LRELU_SLOPE)
-            x = self.ups[i](x)
-            xs = None
-            for j in range(self.num_kernels):
-                if xs is None:
-                    xs = self.resblocks[i * self.num_kernels + j](x)
-                else:
-                    xs += self.resblocks[i * self.num_kernels + j](x)
-            x = xs / self.num_kernels
-        x = F.leaky_relu(x)
-        x = self.conv_post(x)
-        x = torch.tanh(x)
-
-        return x
-
-    def remove_weight_norm(self):
-        for l in self.ups:
-            remove_weight_norm(l)
-        for l in self.resblocks:
-            l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
-    """Definition of sine generator
-    SineGen(samp_rate, harmonic_num = 0,
-            sine_amp = 0.1, noise_std = 0.003,
-            voiced_threshold = 0,
-            flag_for_pulse=False)
-    samp_rate: sampling rate in Hz
-    harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of sine-wavefrom (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_thoreshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SinGen is used inside PulseGen (default False)
-    Note: when flag_for_pulse is True, the first time step of a voiced
-    segment is always sin(np.pi) or cos(0)
-    """
-
-    def __init__(
-        self,
-        samp_rate,
-        harmonic_num=0,
-        sine_amp=0.1,
-        noise_std=0.003,
-        voiced_threshold=0,
-        flag_for_pulse=False,
-    ):
-        super(SineGen, self).__init__()
-        self.sine_amp = sine_amp
-        self.noise_std = noise_std
-        self.harmonic_num = harmonic_num
-        self.dim = self.harmonic_num + 1
-        self.sampling_rate = samp_rate
-        self.voiced_threshold = voiced_threshold
-
-    def _f02uv(self, f0):
-        # generate uv signal
-        uv = torch.ones_like(f0)
-        uv = uv * (f0 > self.voiced_threshold)
-        return uv
-
-    def forward(self, f0, upp):
-        """sine_tensor, uv = forward(f0)
-        input F0: tensor(batchsize=1, length, dim=1)
-        f0 for unvoiced steps should be 0
-        output sine_tensor: tensor(batchsize=1, length, dim)
-        output uv: tensor(batchsize=1, length, 1)
-        """
-        with torch.no_grad():
-            f0 = f0[:, None].transpose(1, 2)
-            f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
-            # fundamental component
-            f0_buf[:, :, 0] = f0[:, :, 0]
-            for idx in np.arange(self.harmonic_num):
-                f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
-                    idx + 2
-                )  # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            rad_values = (f0_buf / self.sampling_rate) % 1  ### the % 1 means the n_har product cannot be optimized away afterwards
-            rand_ini = torch.rand(
-                f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
-            )
-            rand_ini[:, 0] = 0
-            rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  ##### a % 1 here would prevent the following cumsum from being optimized
-            tmp_over_one *= upp
-            tmp_over_one = F.interpolate(
-                tmp_over_one.transpose(2, 1),
-                scale_factor=upp,
-                mode="linear",
-                align_corners=True,
-            ).transpose(2, 1)
-            rad_values = F.interpolate(
-                rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
-            ).transpose(
-                2, 1
-            )  #######
-            tmp_over_one %= 1
-            tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
-            cumsum_shift = torch.zeros_like(rad_values)
-            cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
-            sine_waves = torch.sin(
-                torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
-            )
-            sine_waves = sine_waves * self.sine_amp
-            uv = self._f02uv(f0)
-            uv = F.interpolate(
-                uv.transpose(2, 1), scale_factor=upp, mode="nearest"
-            ).transpose(2, 1)
-            noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
-            noise = noise_amp * torch.randn_like(sine_waves)
-            sine_waves = sine_waves * uv + noise
-        return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
-    """SourceModule for hn-nsf
-    SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
-                 add_noise_std=0.003, voiced_threshod=0)
-    sampling_rate: sampling_rate in Hz
-    harmonic_num: number of harmonic above F0 (default: 0)
-    sine_amp: amplitude of sine source signal (default: 0.1)
-    add_noise_std: std of additive Gaussian noise (default: 0.003)
-        note that amplitude of noise in unvoiced is decided
-        by sine_amp
-    voiced_threshold: threhold to set U/V given F0 (default: 0)
-    Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
-    F0_sampled (batchsize, length, 1)
-    Sine_source (batchsize, length, 1)
-    noise_source (batchsize, length 1)
-    uv (batchsize, length, 1)
-    """
-
-    def __init__(
-        self,
-        sampling_rate,
-        harmonic_num=0,
-        sine_amp=0.1,
-        add_noise_std=0.003,
-        voiced_threshod=0,
-        is_half=True,
-    ):
-        super(SourceModuleHnNSF, self).__init__()
-
-        self.sine_amp = sine_amp
-        self.noise_std = add_noise_std
-        self.is_half = is_half
-        # to produce sine waveforms
-        self.l_sin_gen = SineGen(
-            sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
-        )
-
-        # to merge source harmonics into a single excitation
-        self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
-        self.l_tanh = torch.nn.Tanh()
-
-    def forward(self, x, upp=None):
-        sine_wavs, uv, _ = self.l_sin_gen(x, upp)
-        if self.is_half:
-            sine_wavs = sine_wavs.half()
-        sine_merge = self.l_tanh(self.l_linear(sine_wavs))
-        return sine_merge, None, None  # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
-    def __init__(
-        self,
-        initial_channel,
-        resblock,
-        resblock_kernel_sizes,
-        resblock_dilation_sizes,
-        upsample_rates,
-        upsample_initial_channel,
-        upsample_kernel_sizes,
-        gin_channels,
-        sr,
-        is_half=False,
-    ):
-        super(GeneratorNSF, self).__init__()
-        self.num_kernels = len(resblock_kernel_sizes)
-        self.num_upsamples = len(upsample_rates)
-
-        self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
-        self.m_source = SourceModuleHnNSF(
-            sampling_rate=sr, harmonic_num=0, is_half=is_half
-        )
-        self.noise_convs = nn.ModuleList()
-        self.conv_pre = Conv1d(
-            initial_channel, upsample_initial_channel, 7, 1, padding=3
-        )
-        resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
-        self.ups = nn.ModuleList()
-        for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
-            c_cur = upsample_initial_channel // (2 ** (i + 1))
-            self.ups.append(
-                weight_norm(
-                    ConvTranspose1d(
-                        upsample_initial_channel // (2**i),
-                        upsample_initial_channel // (2 ** (i + 1)),
-                        k,
-                        u,
-                        padding=(k - u) // 2,
-                    )
-                )
-            )
-            if i + 1 < len(upsample_rates):
-                stride_f0 = np.prod(upsample_rates[i + 1 :])
-                self.noise_convs.append(
-                    Conv1d(
-                        1,
-                        c_cur,
-                        kernel_size=stride_f0 * 2,
-                        stride=stride_f0,
-                        padding=stride_f0 // 2,
-                    )
-                )
-            else:
-                self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
-        self.resblocks = nn.ModuleList()
-        for i in range(len(self.ups)):
-            ch = upsample_initial_channel // (2 ** (i + 1))
-            for j, (k, d) in enumerate(
-                zip(resblock_kernel_sizes, resblock_dilation_sizes)
-            ):
-                self.resblocks.append(resblock(ch, k, d))
-
-        self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
-        self.ups.apply(init_weights)
-
-        if gin_channels != 0:
-            self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
-        self.upp = np.prod(upsample_rates)
-
-    def forward(self, x, f0, g=None):
-        har_source, noi_source, uv = self.m_source(f0, self.upp)
|
496 |
-
har_source = har_source.transpose(1, 2)
|
497 |
-
x = self.conv_pre(x)
|
498 |
-
if g is not None:
|
499 |
-
x = x + self.cond(g)
|
500 |
-
|
501 |
-
for i in range(self.num_upsamples):
|
502 |
-
x = F.leaky_relu(x, modules.LRELU_SLOPE)
|
503 |
-
x = self.ups[i](x)
|
504 |
-
x_source = self.noise_convs[i](har_source)
|
505 |
-
x = x + x_source
|
506 |
-
xs = None
|
507 |
-
for j in range(self.num_kernels):
|
508 |
-
if xs is None:
|
509 |
-
xs = self.resblocks[i * self.num_kernels + j](x)
|
510 |
-
else:
|
511 |
-
xs += self.resblocks[i * self.num_kernels + j](x)
|
512 |
-
x = xs / self.num_kernels
|
513 |
-
x = F.leaky_relu(x)
|
514 |
-
x = self.conv_post(x)
|
515 |
-
x = torch.tanh(x)
|
516 |
-
return x
|
517 |
-
|
518 |
-
def remove_weight_norm(self):
|
519 |
-
for l in self.ups:
|
520 |
-
remove_weight_norm(l)
|
521 |
-
for l in self.resblocks:
|
522 |
-
l.remove_weight_norm()
|
523 |
-
|
524 |
-
|
525 |
-
sr2sr = {
|
526 |
-
"32k": 32000,
|
527 |
-
"40k": 40000,
|
528 |
-
"48k": 48000,
|
529 |
-
}
|
530 |
-
|
531 |
-
|
532 |
-
class SynthesizerTrnMsNSFsidM(nn.Module):
|
533 |
-
def __init__(
|
534 |
-
self,
|
535 |
-
spec_channels,
|
536 |
-
segment_size,
|
537 |
-
inter_channels,
|
538 |
-
hidden_channels,
|
539 |
-
filter_channels,
|
540 |
-
n_heads,
|
541 |
-
n_layers,
|
542 |
-
kernel_size,
|
543 |
-
p_dropout,
|
544 |
-
resblock,
|
545 |
-
resblock_kernel_sizes,
|
546 |
-
resblock_dilation_sizes,
|
547 |
-
upsample_rates,
|
548 |
-
upsample_initial_channel,
|
549 |
-
upsample_kernel_sizes,
|
550 |
-
spk_embed_dim,
|
551 |
-
gin_channels,
|
552 |
-
sr,
|
553 |
-
version,
|
554 |
-
**kwargs
|
555 |
-
):
|
556 |
-
super().__init__()
|
557 |
-
if type(sr) == type("strr"):
|
558 |
-
sr = sr2sr[sr]
|
559 |
-
self.spec_channels = spec_channels
|
560 |
-
self.inter_channels = inter_channels
|
561 |
-
self.hidden_channels = hidden_channels
|
562 |
-
self.filter_channels = filter_channels
|
563 |
-
self.n_heads = n_heads
|
564 |
-
self.n_layers = n_layers
|
565 |
-
self.kernel_size = kernel_size
|
566 |
-
self.p_dropout = p_dropout
|
567 |
-
self.resblock = resblock
|
568 |
-
self.resblock_kernel_sizes = resblock_kernel_sizes
|
569 |
-
self.resblock_dilation_sizes = resblock_dilation_sizes
|
570 |
-
self.upsample_rates = upsample_rates
|
571 |
-
self.upsample_initial_channel = upsample_initial_channel
|
572 |
-
self.upsample_kernel_sizes = upsample_kernel_sizes
|
573 |
-
self.segment_size = segment_size
|
574 |
-
self.gin_channels = gin_channels
|
575 |
-
# self.hop_length = hop_length#
|
576 |
-
self.spk_embed_dim = spk_embed_dim
|
577 |
-
if version == "v1":
|
578 |
-
self.enc_p = TextEncoder256(
|
579 |
-
inter_channels,
|
580 |
-
hidden_channels,
|
581 |
-
filter_channels,
|
582 |
-
n_heads,
|
583 |
-
n_layers,
|
584 |
-
kernel_size,
|
585 |
-
p_dropout,
|
586 |
-
)
|
587 |
-
else:
|
588 |
-
self.enc_p = TextEncoder768(
|
589 |
-
inter_channels,
|
590 |
-
hidden_channels,
|
591 |
-
filter_channels,
|
592 |
-
n_heads,
|
593 |
-
n_layers,
|
594 |
-
kernel_size,
|
595 |
-
p_dropout,
|
596 |
-
)
|
597 |
-
self.dec = GeneratorNSF(
|
598 |
-
inter_channels,
|
599 |
-
resblock,
|
600 |
-
resblock_kernel_sizes,
|
601 |
-
resblock_dilation_sizes,
|
602 |
-
upsample_rates,
|
603 |
-
upsample_initial_channel,
|
604 |
-
upsample_kernel_sizes,
|
605 |
-
gin_channels=gin_channels,
|
606 |
-
sr=sr,
|
607 |
-
is_half=kwargs["is_half"],
|
608 |
-
)
|
609 |
-
self.enc_q = PosteriorEncoder(
|
610 |
-
spec_channels,
|
611 |
-
inter_channels,
|
612 |
-
hidden_channels,
|
613 |
-
5,
|
614 |
-
1,
|
615 |
-
16,
|
616 |
-
gin_channels=gin_channels,
|
617 |
-
)
|
618 |
-
self.flow = ResidualCouplingBlock(
|
619 |
-
inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
|
620 |
-
)
|
621 |
-
self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
|
622 |
-
self.speaker_map = None
|
623 |
-
print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
|
624 |
-
|
625 |
-
def remove_weight_norm(self):
|
626 |
-
self.dec.remove_weight_norm()
|
627 |
-
self.flow.remove_weight_norm()
|
628 |
-
self.enc_q.remove_weight_norm()
|
629 |
-
|
630 |
-
def construct_spkmixmap(self, n_speaker):
|
631 |
-
self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels))
|
632 |
-
for i in range(n_speaker):
|
633 |
-
self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]]))
|
634 |
-
self.speaker_map = self.speaker_map.unsqueeze(0)
|
635 |
-
|
636 |
-
def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None):
|
637 |
-
if self.speaker_map is not None: # [N, S] * [S, B, 1, H]
|
638 |
-
g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1]
|
639 |
-
g = g * self.speaker_map # [N, S, B, 1, H]
|
640 |
-
g = torch.sum(g, dim=1) # [N, 1, B, 1, H]
|
641 |
-
g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N]
|
642 |
-
else:
|
643 |
-
g = g.unsqueeze(0)
|
644 |
-
g = self.emb_g(g).transpose(1, 2)
|
645 |
-
|
646 |
-
m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
|
647 |
-
z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask
|
648 |
-
z = self.flow(z_p, x_mask, g=g, reverse=True)
|
649 |
-
o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
|
650 |
-
return o
|
651 |
-
|
652 |
-
|
653 |
-
class MultiPeriodDiscriminator(torch.nn.Module):
|
654 |
-
def __init__(self, use_spectral_norm=False):
|
655 |
-
super(MultiPeriodDiscriminator, self).__init__()
|
656 |
-
periods = [2, 3, 5, 7, 11, 17]
|
657 |
-
# periods = [3, 5, 7, 11, 17, 23, 37]
|
658 |
-
|
659 |
-
discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
|
660 |
-
discs = discs + [
|
661 |
-
DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
|
662 |
-
]
|
663 |
-
self.discriminators = nn.ModuleList(discs)
|
664 |
-
|
665 |
-
def forward(self, y, y_hat):
|
666 |
-
y_d_rs = [] #
|
667 |
-
y_d_gs = []
|
668 |
-
fmap_rs = []
|
669 |
-
fmap_gs = []
|
670 |
-
for i, d in enumerate(self.discriminators):
|
671 |
-
y_d_r, fmap_r = d(y)
|
672 |
-
y_d_g, fmap_g = d(y_hat)
|
673 |
-
# for j in range(len(fmap_r)):
|
674 |
-
# print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
|
675 |
-
y_d_rs.append(y_d_r)
|
676 |
-
y_d_gs.append(y_d_g)
|
677 |
-
fmap_rs.append(fmap_r)
|
678 |
-
fmap_gs.append(fmap_g)
|
679 |
-
|
680 |
-
return y_d_rs, y_d_gs, fmap_rs, fmap_gs
|
681 |
-
|
682 |
-
|
683 |
-
class MultiPeriodDiscriminatorV2(torch.nn.Module):
|
684 |
-
def __init__(self, use_spectral_norm=False):
|
685 |
-
super(MultiPeriodDiscriminatorV2, self).__init__()
|
686 |
-
# periods = [2, 3, 5, 7, 11, 17]
|
687 |
-
periods = [2, 3, 5, 7, 11, 17, 23, 37]
|
688 |
-
|
689 |
-
discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
|
690 |
-
discs = discs + [
|
691 |
-
DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
|
692 |
-
]
|
693 |
-
self.discriminators = nn.ModuleList(discs)
|
694 |
-
|
695 |
-
def forward(self, y, y_hat):
|
696 |
-
y_d_rs = [] #
|
697 |
-
y_d_gs = []
|
698 |
-
fmap_rs = []
|
699 |
-
fmap_gs = []
|
700 |
-
for i, d in enumerate(self.discriminators):
|
701 |
-
y_d_r, fmap_r = d(y)
|
702 |
-
y_d_g, fmap_g = d(y_hat)
|
703 |
-
# for j in range(len(fmap_r)):
|
704 |
-
# print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
|
705 |
-
y_d_rs.append(y_d_r)
|
706 |
-
y_d_gs.append(y_d_g)
|
707 |
-
fmap_rs.append(fmap_r)
|
708 |
-
fmap_gs.append(fmap_g)
|
709 |
-
|
710 |
-
return y_d_rs, y_d_gs, fmap_rs, fmap_gs
|
711 |
-
|
712 |
-
|
713 |
-
class DiscriminatorS(torch.nn.Module):
|
714 |
-
def __init__(self, use_spectral_norm=False):
|
715 |
-
super(DiscriminatorS, self).__init__()
|
716 |
-
norm_f = weight_norm if use_spectral_norm == False else spectral_norm
|
717 |
-
self.convs = nn.ModuleList(
|
718 |
-
[
|
719 |
-
norm_f(Conv1d(1, 16, 15, 1, padding=7)),
|
720 |
-
norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
|
721 |
-
norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
|
722 |
-
norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
|
723 |
-
norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
|
724 |
-
norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
|
725 |
-
]
|
726 |
-
)
|
727 |
-
self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
|
728 |
-
|
729 |
-
def forward(self, x):
|
730 |
-
fmap = []
|
731 |
-
|
732 |
-
for l in self.convs:
|
733 |
-
x = l(x)
|
734 |
-
x = F.leaky_relu(x, modules.LRELU_SLOPE)
|
735 |
-
fmap.append(x)
|
736 |
-
x = self.conv_post(x)
|
737 |
-
fmap.append(x)
|
738 |
-
x = torch.flatten(x, 1, -1)
|
739 |
-
|
740 |
-
return x, fmap
|
741 |
-
|
742 |
-
|
743 |
-
class DiscriminatorP(torch.nn.Module):
|
744 |
-
def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
|
745 |
-
super(DiscriminatorP, self).__init__()
|
746 |
-
self.period = period
|
747 |
-
self.use_spectral_norm = use_spectral_norm
|
748 |
-
norm_f = weight_norm if use_spectral_norm == False else spectral_norm
|
749 |
-
self.convs = nn.ModuleList(
|
750 |
-
[
|
751 |
-
norm_f(
|
752 |
-
Conv2d(
|
753 |
-
1,
|
754 |
-
32,
|
755 |
-
(kernel_size, 1),
|
756 |
-
(stride, 1),
|
757 |
-
padding=(get_padding(kernel_size, 1), 0),
|
758 |
-
)
|
759 |
-
),
|
760 |
-
norm_f(
|
761 |
-
Conv2d(
|
762 |
-
32,
|
763 |
-
128,
|
764 |
-
(kernel_size, 1),
|
765 |
-
(stride, 1),
|
766 |
-
padding=(get_padding(kernel_size, 1), 0),
|
767 |
-
)
|
768 |
-
),
|
769 |
-
norm_f(
|
770 |
-
Conv2d(
|
771 |
-
128,
|
772 |
-
512,
|
773 |
-
(kernel_size, 1),
|
774 |
-
(stride, 1),
|
775 |
-
padding=(get_padding(kernel_size, 1), 0),
|
776 |
-
)
|
777 |
-
),
|
778 |
-
norm_f(
|
779 |
-
Conv2d(
|
780 |
-
512,
|
781 |
-
1024,
|
782 |
-
(kernel_size, 1),
|
783 |
-
(stride, 1),
|
784 |
-
padding=(get_padding(kernel_size, 1), 0),
|
785 |
-
)
|
786 |
-
),
|
787 |
-
norm_f(
|
788 |
-
Conv2d(
|
789 |
-
1024,
|
790 |
-
1024,
|
791 |
-
(kernel_size, 1),
|
792 |
-
1,
|
793 |
-
padding=(get_padding(kernel_size, 1), 0),
|
794 |
-
)
|
795 |
-
),
|
796 |
-
]
|
797 |
-
)
|
798 |
-
self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
|
799 |
-
|
800 |
-
def forward(self, x):
|
801 |
-
fmap = []
|
802 |
-
|
803 |
-
# 1d to 2d
|
804 |
-
b, c, t = x.shape
|
805 |
-
if t % self.period != 0: # pad first
|
806 |
-
n_pad = self.period - (t % self.period)
|
807 |
-
x = F.pad(x, (0, n_pad), "reflect")
|
808 |
-
t = t + n_pad
|
809 |
-
x = x.view(b, c, t // self.period, self.period)
|
810 |
-
|
811 |
-
for l in self.convs:
|
812 |
-
x = l(x)
|
813 |
-
x = F.leaky_relu(x, modules.LRELU_SLOPE)
|
814 |
-
fmap.append(x)
|
815 |
-
x = self.conv_post(x)
|
816 |
-
fmap.append(x)
|
817 |
-
x = torch.flatten(x, 1, -1)
|
818 |
-
|
819 |
-
return x, fmap
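
For orientation, the core of SineGen.forward above is phase-accumulation synthesis: per-sample phase increments f0 / sampling_rate are cumulatively summed and passed through sin(2*pi*phase), with a voiced/unvoiced mask silencing f0 == 0 regions. A minimal self-contained sketch of the same idea in plain PyTorch (illustrative names, not the class above; no harmonics, upsampling, or noise mixing):

import torch

def sine_from_f0(f0, sampling_rate=16000, amp=0.1):
    """Phase-accumulation sine synthesis. f0: 1-D tensor of per-sample
    frequencies in Hz; unvoiced samples should be 0."""
    rad = f0 / sampling_rate            # per-sample phase increment, in cycles
    phase = torch.cumsum(rad, dim=0)    # accumulated phase, in cycles
    sine = amp * torch.sin(2 * torch.pi * phase)
    uv = (f0 > 0).float()               # voiced/unvoiced mask, as in _f02uv
    return sine * uv                    # silence unvoiced samples

wave = sine_from_f0(torch.full((16000,), 220.0))  # one second of A3

The random initial phase and the cumsum_shift bookkeeping in the class above serve the same purpose at scale: they keep the accumulated phase numerically bounded while preserving continuity across frames.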
spaces/Cong723/gpt-academic-public/crazy_functions/test_project/cpp/cppipc/shm.cpp
DELETED
@@ -1,103 +0,0 @@
-
-#include <string>
-#include <utility>
-
-#include "libipc/shm.h"
-
-#include "libipc/utility/pimpl.h"
-#include "libipc/memory/resource.h"
-
-namespace ipc {
-namespace shm {
-
-class handle::handle_ : public pimpl<handle_> {
-public:
-    shm::id_t id_ = nullptr;
-    void*     m_  = nullptr;
-
-    ipc::string n_;
-    std::size_t s_ = 0;
-};
-
-handle::handle()
-    : p_(p_->make()) {
-}
-
-handle::handle(char const * name, std::size_t size, unsigned mode)
-    : handle() {
-    acquire(name, size, mode);
-}
-
-handle::handle(handle&& rhs)
-    : handle() {
-    swap(rhs);
-}
-
-handle::~handle() {
-    release();
-    p_->clear();
-}
-
-void handle::swap(handle& rhs) {
-    std::swap(p_, rhs.p_);
-}
-
-handle& handle::operator=(handle rhs) {
-    swap(rhs);
-    return *this;
-}
-
-bool handle::valid() const noexcept {
-    return impl(p_)->m_ != nullptr;
-}
-
-std::size_t handle::size() const noexcept {
-    return impl(p_)->s_;
-}
-
-char const * handle::name() const noexcept {
-    return impl(p_)->n_.c_str();
-}
-
-std::int32_t handle::ref() const noexcept {
-    return shm::get_ref(impl(p_)->id_);
-}
-
-void handle::sub_ref() noexcept {
-    shm::sub_ref(impl(p_)->id_);
-}
-
-bool handle::acquire(char const * name, std::size_t size, unsigned mode) {
-    release();
-    impl(p_)->id_ = shm::acquire((impl(p_)->n_ = name).c_str(), size, mode);
-    impl(p_)->m_  = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_));
-    return valid();
-}
-
-std::int32_t handle::release() {
-    if (impl(p_)->id_ == nullptr) return -1;
-    return shm::release(detach());
-}
-
-void* handle::get() const {
-    return impl(p_)->m_;
-}
-
-void handle::attach(id_t id) {
-    if (id == nullptr) return;
-    release();
-    impl(p_)->id_ = id;
-    impl(p_)->m_  = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_));
-}
-
-id_t handle::detach() {
-    auto old = impl(p_)->id_;
-    impl(p_)->id_ = nullptr;
-    impl(p_)->m_  = nullptr;
-    impl(p_)->s_  = 0;
-    impl(p_)->n_.clear();
-    return old;
-}
-
-} // namespace shm
-} // namespace ipc
spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/vision.cpp
DELETED
@@ -1,21 +0,0 @@
-// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-#include "nms.h"
-#include "ROIAlign.h"
-#include "ROIPool.h"
-#include "SigmoidFocalLoss.h"
-#include "dcn_v2.h"
-
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
-  m.def("nms", &nms, "non-maximum suppression");
-  m.def("roi_align_forward", &ROIAlign_forward, "ROIAlign_forward");
-  m.def("roi_align_backward", &ROIAlign_backward, "ROIAlign_backward");
-  m.def("roi_pool_forward", &ROIPool_forward, "ROIPool_forward");
-  m.def("roi_pool_backward", &ROIPool_backward, "ROIPool_backward");
-  m.def("sigmoid_focalloss_forward", &SigmoidFocalLoss_forward, "SigmoidFocalLoss_forward");
-  m.def("sigmoid_focalloss_backward", &SigmoidFocalLoss_backward, "SigmoidFocalLoss_backward");
-  m.def("dcn_v2_forward", &dcn_v2_forward, "dcn_v2_forward");
-  m.def("dcn_v2_backward", &dcn_v2_backward, "dcn_v2_backward");
-  m.def("dcn_v2_psroi_pooling_forward", &dcn_v2_psroi_pooling_forward, "dcn_v2_psroi_pooling_forward");
-  m.def("dcn_v2_psroi_pooling_backward", &dcn_v2_psroi_pooling_backward, "dcn_v2_psroi_pooling_backward");
-}
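
Once compiled as a torch extension, these bindings are callable from Python. In maskrcnn_benchmark the extension is conventionally built as the submodule _C, so usage looks roughly like the following sketch (this assumes that build layout and a successfully compiled extension; the tensors are illustrative):

import torch
from maskrcnn_benchmark import _C  # compiled extension defined by vision.cpp

boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0],
                      [1.0, 1.0, 11.0, 11.0]])
scores = torch.tensor([0.9, 0.8])
keep = _C.nms(boxes, scores, 0.5)  # indices of boxes kept at IoU threshold 0.5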
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/aiohttp/base_protocol.py
DELETED
@@ -1,90 +0,0 @@
-import asyncio
-from typing import Optional, cast
-
-from .tcp_helpers import tcp_nodelay
-
-
-class BaseProtocol(asyncio.Protocol):
-    __slots__ = (
-        "_loop",
-        "_paused",
-        "_drain_waiter",
-        "_connection_lost",
-        "_reading_paused",
-        "transport",
-    )
-
-    def __init__(self, loop: asyncio.AbstractEventLoop) -> None:
-        self._loop: asyncio.AbstractEventLoop = loop
-        self._paused = False
-        self._drain_waiter: Optional[asyncio.Future[None]] = None
-        self._reading_paused = False
-
-        self.transport: Optional[asyncio.Transport] = None
-
-    @property
-    def connected(self) -> bool:
-        """Return True if the connection is open."""
-        return self.transport is not None
-
-    def pause_writing(self) -> None:
-        assert not self._paused
-        self._paused = True
-
-    def resume_writing(self) -> None:
-        assert self._paused
-        self._paused = False
-
-        waiter = self._drain_waiter
-        if waiter is not None:
-            self._drain_waiter = None
-            if not waiter.done():
-                waiter.set_result(None)
-
-    def pause_reading(self) -> None:
-        if not self._reading_paused and self.transport is not None:
-            try:
-                self.transport.pause_reading()
-            except (AttributeError, NotImplementedError, RuntimeError):
-                pass
-            self._reading_paused = True
-
-    def resume_reading(self) -> None:
-        if self._reading_paused and self.transport is not None:
-            try:
-                self.transport.resume_reading()
-            except (AttributeError, NotImplementedError, RuntimeError):
-                pass
-            self._reading_paused = False
-
-    def connection_made(self, transport: asyncio.BaseTransport) -> None:
-        tr = cast(asyncio.Transport, transport)
-        tcp_nodelay(tr, True)
-        self.transport = tr
-
-    def connection_lost(self, exc: Optional[BaseException]) -> None:
-        # Wake up the writer if currently paused.
-        self.transport = None
-        if not self._paused:
-            return
-        waiter = self._drain_waiter
-        if waiter is None:
-            return
-        self._drain_waiter = None
-        if waiter.done():
-            return
-        if exc is None:
-            waiter.set_result(None)
-        else:
-            waiter.set_exception(exc)
-
-    async def _drain_helper(self) -> None:
-        if not self.connected:
-            raise ConnectionResetError("Connection lost")
-        if not self._paused:
-            return
-        waiter = self._drain_waiter
-        if waiter is None:
-            waiter = self._loop.create_future()
-            self._drain_waiter = waiter
-        await asyncio.shield(waiter)
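
The pause_writing / resume_writing / _drain_helper trio above is asyncio's standard write flow-control handshake: when the transport's buffer crosses its high-water mark the event loop calls pause_writing(), and writers then block on a future until resume_writing() resolves it. A stripped-down, self-contained illustration of the same drain-waiter pattern (illustrative names, not aiohttp code):

import asyncio

class FlowControl:
    """Minimal drain-waiter: writers await drain(); pause/resume toggle it."""
    def __init__(self):
        self._paused = False
        self._waiter = None

    def pause_writing(self):
        self._paused = True

    def resume_writing(self):
        self._paused = False
        if self._waiter is not None and not self._waiter.done():
            self._waiter.set_result(None)  # wake every pending drain()
        self._waiter = None

    async def drain(self):
        if not self._paused:
            return                          # fast path: nothing to wait for
        if self._waiter is None:
            self._waiter = asyncio.get_running_loop().create_future()
        await asyncio.shield(self._waiter)  # shield: cancellation of one
                                            # writer must not cancel the rest

async def main():
    fc = FlowControl()
    fc.pause_writing()
    asyncio.get_running_loop().call_later(0.1, fc.resume_writing)
    await fc.drain()  # returns once resume_writing() fires

asyncio.run(main())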
spaces/Dinoking/Guccio-AI-Designer/netdissect/runningstats.py
DELETED
@@ -1,773 +0,0 @@
|
|
1 |
-
'''
|
2 |
-
Running statistics on the GPU using pytorch.
|
3 |
-
|
4 |
-
RunningTopK maintains top-k statistics for a set of channels in parallel.
|
5 |
-
RunningQuantile maintains (sampled) quantile statistics for a set of channels.
|
6 |
-
'''
|
7 |
-
|
8 |
-
import torch, math, numpy
|
9 |
-
from collections import defaultdict
|
10 |
-
|
11 |
-
class RunningTopK:
|
12 |
-
'''
|
13 |
-
A class to keep a running tally of the the top k values (and indexes)
|
14 |
-
of any number of torch feature components. Will work on the GPU if
|
15 |
-
the data is on the GPU.
|
16 |
-
|
17 |
-
This version flattens all arrays to avoid crashes.
|
18 |
-
'''
|
19 |
-
def __init__(self, k=100, state=None):
|
20 |
-
if state is not None:
|
21 |
-
self.set_state_dict(state)
|
22 |
-
return
|
23 |
-
self.k = k
|
24 |
-
self.count = 0
|
25 |
-
# This version flattens all data internally to 2-d tensors,
|
26 |
-
# to avoid crashes with the current pytorch topk implementation.
|
27 |
-
# The data is puffed back out to arbitrary tensor shapes on ouput.
|
28 |
-
self.data_shape = None
|
29 |
-
self.top_data = None
|
30 |
-
self.top_index = None
|
31 |
-
self.next = 0
|
32 |
-
self.linear_index = 0
|
33 |
-
self.perm = None
|
34 |
-
|
35 |
-
def add(self, data):
|
36 |
-
'''
|
37 |
-
Adds a batch of data to be considered for the running top k.
|
38 |
-
The zeroth dimension enumerates the observations. All other
|
39 |
-
dimensions enumerate different features.
|
40 |
-
'''
|
41 |
-
if self.top_data is None:
|
42 |
-
# Allocation: allocate a buffer of size 5*k, at least 10, for each.
|
43 |
-
self.data_shape = data.shape[1:]
|
44 |
-
feature_size = int(numpy.prod(self.data_shape))
|
45 |
-
self.top_data = torch.zeros(
|
46 |
-
feature_size, max(10, self.k * 5), out=data.new())
|
47 |
-
self.top_index = self.top_data.clone().long()
|
48 |
-
self.linear_index = 0 if len(data.shape) == 1 else torch.arange(
|
49 |
-
feature_size, out=self.top_index.new()).mul_(
|
50 |
-
self.top_data.shape[-1])[:,None]
|
51 |
-
size = data.shape[0]
|
52 |
-
sk = min(size, self.k)
|
53 |
-
if self.top_data.shape[-1] < self.next + sk:
|
54 |
-
# Compression: if full, keep topk only.
|
55 |
-
self.top_data[:,:self.k], self.top_index[:,:self.k] = (
|
56 |
-
self.result(sorted=False, flat=True))
|
57 |
-
self.next = self.k
|
58 |
-
free = self.top_data.shape[-1] - self.next
|
59 |
-
# Pick: copy the top sk of the next batch into the buffer.
|
60 |
-
# Currently strided topk is slow. So we clone after transpose.
|
61 |
-
# TODO: remove the clone() if it becomes faster.
|
62 |
-
cdata = data.contiguous().view(size, -1).t().clone()
|
63 |
-
td, ti = cdata.topk(sk, sorted=False)
|
64 |
-
self.top_data[:,self.next:self.next+sk] = td
|
65 |
-
self.top_index[:,self.next:self.next+sk] = (ti + self.count)
|
66 |
-
self.next += sk
|
67 |
-
self.count += size
|
68 |
-
|
69 |
-
def result(self, sorted=True, flat=False):
|
70 |
-
'''
|
71 |
-
Returns top k data items and indexes in each dimension,
|
72 |
-
with channels in the first dimension and k in the last dimension.
|
73 |
-
'''
|
74 |
-
k = min(self.k, self.next)
|
75 |
-
# bti are top indexes relative to buffer array.
|
76 |
-
td, bti = self.top_data[:,:self.next].topk(k, sorted=sorted)
|
77 |
-
# we want to report top indexes globally, which is ti.
|
78 |
-
ti = self.top_index.view(-1)[
|
79 |
-
(bti + self.linear_index).view(-1)
|
80 |
-
].view(*bti.shape)
|
81 |
-
if flat:
|
82 |
-
return td, ti
|
83 |
-
else:
|
84 |
-
return (td.view(*(self.data_shape + (-1,))),
|
85 |
-
ti.view(*(self.data_shape + (-1,))))
|
86 |
-
|
87 |
-
def to_(self, device):
|
88 |
-
self.top_data = self.top_data.to(device)
|
89 |
-
self.top_index = self.top_index.to(device)
|
90 |
-
if isinstance(self.linear_index, torch.Tensor):
|
91 |
-
self.linear_index = self.linear_index.to(device)
|
92 |
-
|
93 |
-
def state_dict(self):
|
94 |
-
return dict(
|
95 |
-
constructor=self.__module__ + '.' +
|
96 |
-
self.__class__.__name__ + '()',
|
97 |
-
k=self.k,
|
98 |
-
count=self.count,
|
99 |
-
data_shape=tuple(self.data_shape),
|
100 |
-
top_data=self.top_data.cpu().numpy(),
|
101 |
-
top_index=self.top_index.cpu().numpy(),
|
102 |
-
next=self.next,
|
103 |
-
linear_index=(self.linear_index.cpu().numpy()
|
104 |
-
if isinstance(self.linear_index, torch.Tensor)
|
105 |
-
else self.linear_index),
|
106 |
-
perm=self.perm)
|
107 |
-
|
108 |
-
def set_state_dict(self, dic):
|
109 |
-
self.k = dic['k'].item()
|
110 |
-
self.count = dic['count'].item()
|
111 |
-
self.data_shape = tuple(dic['data_shape'])
|
112 |
-
self.top_data = torch.from_numpy(dic['top_data'])
|
113 |
-
self.top_index = torch.from_numpy(dic['top_index'])
|
114 |
-
self.next = dic['next'].item()
|
115 |
-
self.linear_index = (torch.from_numpy(dic['linear_index'])
|
116 |
-
if len(dic['linear_index'].shape) > 0
|
117 |
-
else dic['linear_index'].item())
|
118 |
-
|
119 |
-
class RunningQuantile:
|
120 |
-
"""
|
121 |
-
Streaming randomized quantile computation for torch.
|
122 |
-
|
123 |
-
Add any amount of data repeatedly via add(data). At any time,
|
124 |
-
quantile estimates (or old-style percentiles) can be read out using
|
125 |
-
quantiles(q) or percentiles(p).
|
126 |
-
|
127 |
-
Accuracy scales according to resolution: the default is to
|
128 |
-
set resolution to be accurate to better than 0.1%,
|
129 |
-
while limiting storage to about 50,000 samples.
|
130 |
-
|
131 |
-
Good for computing quantiles of huge data without using much memory.
|
132 |
-
Works well on arbitrary data with probability near 1.
|
133 |
-
|
134 |
-
Based on the optimal KLL quantile algorithm by Karnin, Lang, and Liberty
|
135 |
-
from FOCS 2016. http://ieee-focs.org/FOCS-2016-Papers/3933a071.pdf
|
136 |
-
"""
|
137 |
-
|
138 |
-
def __init__(self, resolution=6 * 1024, buffersize=None, seed=None,
|
139 |
-
state=None):
|
140 |
-
if state is not None:
|
141 |
-
self.set_state_dict(state)
|
142 |
-
return
|
143 |
-
self.depth = None
|
144 |
-
self.dtype = None
|
145 |
-
self.device = None
|
146 |
-
self.resolution = resolution
|
147 |
-
# Default buffersize: 128 samples (and smaller than resolution).
|
148 |
-
if buffersize is None:
|
149 |
-
buffersize = min(128, (resolution + 7) // 8)
|
150 |
-
self.buffersize = buffersize
|
151 |
-
self.samplerate = 1.0
|
152 |
-
self.data = None
|
153 |
-
self.firstfree = [0]
|
154 |
-
self.randbits = torch.ByteTensor(resolution)
|
155 |
-
self.currentbit = len(self.randbits) - 1
|
156 |
-
self.extremes = None
|
157 |
-
self.size = 0
|
158 |
-
|
159 |
-
def _lazy_init(self, incoming):
|
160 |
-
self.depth = incoming.shape[1]
|
161 |
-
self.dtype = incoming.dtype
|
162 |
-
self.device = incoming.device
|
163 |
-
self.data = [torch.zeros(self.depth, self.resolution,
|
164 |
-
dtype=self.dtype, device=self.device)]
|
165 |
-
self.extremes = torch.zeros(self.depth, 2,
|
166 |
-
dtype=self.dtype, device=self.device)
|
167 |
-
self.extremes[:,0] = float('inf')
|
168 |
-
self.extremes[:,-1] = -float('inf')
|
169 |
-
|
170 |
-
def to_(self, device):
|
171 |
-
"""Switches internal storage to specified device."""
|
172 |
-
if device != self.device:
|
173 |
-
old_data = self.data
|
174 |
-
old_extremes = self.extremes
|
175 |
-
self.data = [d.to(device) for d in self.data]
|
176 |
-
self.extremes = self.extremes.to(device)
|
177 |
-
self.device = self.extremes.device
|
178 |
-
del old_data
|
179 |
-
del old_extremes
|
180 |
-
|
181 |
-
def add(self, incoming):
|
182 |
-
if self.depth is None:
|
183 |
-
self._lazy_init(incoming)
|
184 |
-
assert len(incoming.shape) == 2
|
185 |
-
assert incoming.shape[1] == self.depth, (incoming.shape[1], self.depth)
|
186 |
-
self.size += incoming.shape[0]
|
187 |
-
# Convert to a flat torch array.
|
188 |
-
if self.samplerate >= 1.0:
|
189 |
-
self._add_every(incoming)
|
190 |
-
return
|
191 |
-
# If we are sampling, then subsample a large chunk at a time.
|
192 |
-
self._scan_extremes(incoming)
|
193 |
-
chunksize = int(math.ceil(self.buffersize / self.samplerate))
|
194 |
-
for index in range(0, len(incoming), chunksize):
|
195 |
-
batch = incoming[index:index+chunksize]
|
196 |
-
sample = sample_portion(batch, self.samplerate)
|
197 |
-
if len(sample):
|
198 |
-
self._add_every(sample)
|
199 |
-
|
200 |
-
def _add_every(self, incoming):
|
201 |
-
supplied = len(incoming)
|
202 |
-
index = 0
|
203 |
-
while index < supplied:
|
204 |
-
ff = self.firstfree[0]
|
205 |
-
available = self.data[0].shape[1] - ff
|
206 |
-
if available == 0:
|
207 |
-
if not self._shift():
|
208 |
-
# If we shifted by subsampling, then subsample.
|
209 |
-
incoming = incoming[index:]
|
210 |
-
if self.samplerate >= 0.5:
|
211 |
-
# First time sampling - the data source is very large.
|
212 |
-
self._scan_extremes(incoming)
|
213 |
-
incoming = sample_portion(incoming, self.samplerate)
|
214 |
-
index = 0
|
215 |
-
supplied = len(incoming)
|
216 |
-
ff = self.firstfree[0]
|
217 |
-
available = self.data[0].shape[1] - ff
|
218 |
-
copycount = min(available, supplied - index)
|
219 |
-
self.data[0][:,ff:ff + copycount] = torch.t(
|
220 |
-
incoming[index:index + copycount,:])
|
221 |
-
self.firstfree[0] += copycount
|
222 |
-
index += copycount
|
223 |
-
|
224 |
-
def _shift(self):
|
225 |
-
index = 0
|
226 |
-
# If remaining space at the current layer is less than half prev
|
227 |
-
# buffer size (rounding up), then we need to shift it up to ensure
|
228 |
-
# enough space for future shifting.
|
229 |
-
while self.data[index].shape[1] - self.firstfree[index] < (
|
230 |
-
-(-self.data[index-1].shape[1] // 2) if index else 1):
|
231 |
-
if index + 1 >= len(self.data):
|
232 |
-
return self._expand()
|
233 |
-
data = self.data[index][:,0:self.firstfree[index]]
|
234 |
-
data = data.sort()[0]
|
235 |
-
if index == 0 and self.samplerate >= 1.0:
|
236 |
-
self._update_extremes(data[:,0], data[:,-1])
|
237 |
-
offset = self._randbit()
|
238 |
-
position = self.firstfree[index + 1]
|
239 |
-
subset = data[:,offset::2]
|
240 |
-
self.data[index + 1][:,position:position + subset.shape[1]] = subset
|
241 |
-
self.firstfree[index] = 0
|
242 |
-
self.firstfree[index + 1] += subset.shape[1]
|
243 |
-
index += 1
|
244 |
-
return True
|
245 |
-
|
246 |
-
def _scan_extremes(self, incoming):
|
247 |
-
# When sampling, we need to scan every item still to get extremes
|
248 |
-
self._update_extremes(
|
249 |
-
torch.min(incoming, dim=0)[0],
|
250 |
-
torch.max(incoming, dim=0)[0])
|
251 |
-
|
252 |
-
def _update_extremes(self, minr, maxr):
|
253 |
-
self.extremes[:,0] = torch.min(
|
254 |
-
torch.stack([self.extremes[:,0], minr]), dim=0)[0]
|
255 |
-
self.extremes[:,-1] = torch.max(
|
256 |
-
torch.stack([self.extremes[:,-1], maxr]), dim=0)[0]
|
257 |
-
|
258 |
-
def _randbit(self):
|
259 |
-
self.currentbit += 1
|
260 |
-
if self.currentbit >= len(self.randbits):
|
261 |
-
self.randbits.random_(to=2)
|
262 |
-
self.currentbit = 0
|
263 |
-
return self.randbits[self.currentbit]
|
264 |
-
|
265 |
-
def state_dict(self):
|
266 |
-
return dict(
|
267 |
-
constructor=self.__module__ + '.' +
|
268 |
-
self.__class__.__name__ + '()',
|
269 |
-
resolution=self.resolution,
|
270 |
-
depth=self.depth,
|
271 |
-
buffersize=self.buffersize,
|
272 |
-
samplerate=self.samplerate,
|
273 |
-
data=[d.cpu().numpy()[:,:f].T
|
274 |
-
for d, f in zip(self.data, self.firstfree)],
|
275 |
-
sizes=[d.shape[1] for d in self.data],
|
276 |
-
extremes=self.extremes.cpu().numpy(),
|
277 |
-
size=self.size)
|
278 |
-
|
279 |
-
def set_state_dict(self, dic):
|
280 |
-
self.resolution = int(dic['resolution'])
|
281 |
-
self.randbits = torch.ByteTensor(self.resolution)
|
282 |
-
self.currentbit = len(self.randbits) - 1
|
283 |
-
self.depth = int(dic['depth'])
|
284 |
-
self.buffersize = int(dic['buffersize'])
|
285 |
-
self.samplerate = float(dic['samplerate'])
|
286 |
-
firstfree = []
|
287 |
-
buffers = []
|
288 |
-
for d, s in zip(dic['data'], dic['sizes']):
|
289 |
-
firstfree.append(d.shape[0])
|
290 |
-
buf = numpy.zeros((d.shape[1], s), dtype=d.dtype)
|
291 |
-
buf[:,:d.shape[0]] = d.T
|
292 |
-
buffers.append(torch.from_numpy(buf))
|
293 |
-
self.firstfree = firstfree
|
294 |
-
self.data = buffers
|
295 |
-
self.extremes = torch.from_numpy((dic['extremes']))
|
296 |
-
self.size = int(dic['size'])
|
297 |
-
self.dtype = self.extremes.dtype
|
298 |
-
self.device = self.extremes.device
|
299 |
-
|
300 |
-
def minmax(self):
|
301 |
-
if self.firstfree[0]:
|
302 |
-
self._scan_extremes(self.data[0][:,:self.firstfree[0]].t())
|
303 |
-
return self.extremes.clone()
|
304 |
-
|
305 |
-
def median(self):
|
306 |
-
return self.quantiles([0.5])[:,0]
|
307 |
-
|
308 |
-
def mean(self):
|
309 |
-
return self.integrate(lambda x: x) / self.size
|
310 |
-
|
311 |
-
def variance(self):
|
312 |
-
mean = self.mean()[:,None]
|
313 |
-
return self.integrate(lambda x: (x - mean).pow(2)) / (self.size - 1)
|
314 |
-
|
315 |
-
def stdev(self):
|
316 |
-
return self.variance().sqrt()
|
317 |
-
|
318 |
-
def _expand(self):
|
319 |
-
cap = self._next_capacity()
|
320 |
-
if cap > 0:
|
321 |
-
# First, make a new layer of the proper capacity.
|
322 |
-
self.data.insert(0, torch.zeros(self.depth, cap,
|
323 |
-
dtype=self.dtype, device=self.device))
|
324 |
-
self.firstfree.insert(0, 0)
|
325 |
-
else:
|
326 |
-
# Unless we're so big we are just subsampling.
|
327 |
-
assert self.firstfree[0] == 0
|
328 |
-
self.samplerate *= 0.5
|
329 |
-
for index in range(1, len(self.data)):
|
330 |
-
# Scan for existing data that needs to be moved down a level.
|
331 |
-
amount = self.firstfree[index]
|
332 |
-
if amount == 0:
|
333 |
-
continue
|
334 |
-
position = self.firstfree[index-1]
|
335 |
-
# Move data down if it would leave enough empty space there
|
336 |
-
# This is the key invariant: enough empty space to fit half
|
337 |
-
# of the previous level's buffer size (rounding up)
|
338 |
-
if self.data[index-1].shape[1] - (amount + position) >= (
|
339 |
-
-(-self.data[index-2].shape[1] // 2) if (index-1) else 1):
|
340 |
-
self.data[index-1][:,position:position + amount] = (
|
341 |
-
self.data[index][:,:amount])
|
342 |
-
self.firstfree[index-1] += amount
|
343 |
-
self.firstfree[index] = 0
|
344 |
-
else:
|
345 |
-
# Scrunch the data if it would not.
|
346 |
-
data = self.data[index][:,:amount]
|
347 |
-
data = data.sort()[0]
|
348 |
-
if index == 1:
|
349 |
-
self._update_extremes(data[:,0], data[:,-1])
|
350 |
-
offset = self._randbit()
|
351 |
-
scrunched = data[:,offset::2]
|
352 |
-
self.data[index][:,:scrunched.shape[1]] = scrunched
|
353 |
-
self.firstfree[index] = scrunched.shape[1]
|
354 |
-
return cap > 0
|
355 |
-
|
356 |
-
def _next_capacity(self):
|
357 |
-
cap = int(math.ceil(self.resolution * (0.67 ** len(self.data))))
|
358 |
-
if cap < 2:
|
359 |
-
return 0
|
360 |
-
# Round up to the nearest multiple of 8 for better GPU alignment.
|
361 |
-
cap = -8 * (-cap // 8)
|
362 |
-
return max(self.buffersize, cap)
|
363 |
-
|
364 |
-
def _weighted_summary(self, sort=True):
|
365 |
-
if self.firstfree[0]:
|
366 |
-
self._scan_extremes(self.data[0][:,:self.firstfree[0]].t())
|
367 |
-
size = sum(self.firstfree) + 2
|
368 |
-
weights = torch.FloatTensor(size) # Floating point
|
369 |
-
summary = torch.zeros(self.depth, size,
|
370 |
-
dtype=self.dtype, device=self.device)
|
371 |
-
weights[0:2] = 0
|
372 |
-
summary[:,0:2] = self.extremes
|
373 |
-
index = 2
|
374 |
-
for level, ff in enumerate(self.firstfree):
|
375 |
-
if ff == 0:
|
376 |
-
continue
|
377 |
-
summary[:,index:index + ff] = self.data[level][:,:ff]
|
378 |
-
weights[index:index + ff] = 2.0 ** level
|
379 |
-
index += ff
|
380 |
-
assert index == summary.shape[1]
|
381 |
-
if sort:
|
382 |
-
summary, order = torch.sort(summary, dim=-1)
|
383 |
-
weights = weights[order.view(-1).cpu()].view(order.shape)
|
384 |
-
return (summary, weights)
|
385 |
-
|
386 |
-
def quantiles(self, quantiles, old_style=False):
|
387 |
-
if self.size == 0:
|
388 |
-
return torch.full((self.depth, len(quantiles)), torch.nan)
|
389 |
-
summary, weights = self._weighted_summary()
|
390 |
-
cumweights = torch.cumsum(weights, dim=-1) - weights / 2
|
391 |
-
if old_style:
|
392 |
-
# To be convenient with torch.percentile
|
393 |
-
cumweights -= cumweights[:,0:1].clone()
|
394 |
-
cumweights /= cumweights[:,-1:].clone()
|
395 |
-
else:
|
396 |
-
cumweights /= torch.sum(weights, dim=-1, keepdim=True)
|
397 |
-
result = torch.zeros(self.depth, len(quantiles),
|
398 |
-
dtype=self.dtype, device=self.device)
|
399 |
-
# numpy is needed for interpolation
|
400 |
-
if not hasattr(quantiles, 'cpu'):
|
401 |
-
quantiles = torch.Tensor(quantiles)
|
402 |
-
nq = quantiles.cpu().numpy()
|
403 |
-
ncw = cumweights.cpu().numpy()
|
404 |
-
nsm = summary.cpu().numpy()
|
405 |
-
for d in range(self.depth):
|
406 |
-
result[d] = torch.tensor(numpy.interp(nq, ncw[d], nsm[d]),
|
407 |
-
dtype=self.dtype, device=self.device)
|
408 |
-
return result
|
409 |
-
|
410 |
-
def integrate(self, fun):
|
411 |
-
result = None
|
412 |
-
for level, ff in enumerate(self.firstfree):
|
413 |
-
if ff == 0:
|
414 |
-
continue
|
415 |
-
term = torch.sum(
|
416 |
-
fun(self.data[level][:,:ff]) * (2.0 ** level),
|
417 |
-
dim=-1)
|
418 |
-
if result is None:
|
419 |
-
result = term
|
420 |
-
else:
|
421 |
-
result += term
|
422 |
-
if result is not None:
|
423 |
-
result /= self.samplerate
|
424 |
-
return result
|
425 |
-
|
426 |
-
def percentiles(self, percentiles):
|
427 |
-
return self.quantiles(percentiles, old_style=True)
|
428 |
-
|
429 |
-
def readout(self, count=1001, old_style=True):
|
430 |
-
return self.quantiles(
|
431 |
-
torch.linspace(0.0, 1.0, count), old_style=old_style)
|
432 |
-
|
433 |
-
def normalize(self, data):
|
434 |
-
'''
|
435 |
-
Given input data as taken from the training distirbution,
|
436 |
-
normalizes every channel to reflect quantile values,
|
437 |
-
uniformly distributed, within [0, 1].
|
438 |
-
'''
|
439 |
-
assert self.size > 0
|
440 |
-
assert data.shape[0] == self.depth
|
441 |
-
summary, weights = self._weighted_summary()
|
442 |
-
cumweights = torch.cumsum(weights, dim=-1) - weights / 2
|
443 |
-
cumweights /= torch.sum(weights, dim=-1, keepdim=True)
|
444 |
-
result = torch.zeros_like(data).float()
|
445 |
-
# numpy is needed for interpolation
|
446 |
-
ndata = data.cpu().numpy().reshape((data.shape[0], -1))
|
447 |
-
ncw = cumweights.cpu().numpy()
|
448 |
-
nsm = summary.cpu().numpy()
|
449 |
-
for d in range(self.depth):
|
450 |
-
normed = torch.tensor(numpy.interp(ndata[d], nsm[d], ncw[d]),
|
451 |
-
dtype=torch.float, device=data.device).clamp_(0.0, 1.0)
|
452 |
-
if len(data.shape) > 1:
|
453 |
-
normed = normed.view(*(data.shape[1:]))
|
454 |
-
result[d] = normed
|
455 |
-
return result
|
456 |
-
|
457 |
-
|
458 |
-
class RunningConditionalQuantile:
|
459 |
-
'''
|
460 |
-
Equivalent to a map from conditions (any python hashable type)
|
461 |
-
to RunningQuantiles. The reason for the type is to allow limited
|
462 |
-
GPU memory to be exploited while counting quantile stats on many
|
463 |
-
different conditions, a few of which are common and which benefit
|
464 |
-
from GPU, but most of which are rare and would not all fit into
|
465 |
-
GPU RAM.
|
466 |
-
|
467 |
-
To move a set of conditions to a device, use rcq.to_(device, conds).
|
468 |
-
Then in the future, move the tallied data to the device before
|
469 |
-
calling rcq.add, that is, rcq.add(cond, data.to(device)).
|
470 |
-
|
471 |
-
To allow the caller to decide which conditions to allow to use GPU,
|
472 |
-
rcq.most_common_conditions(n) returns a list of the n most commonly
|
473 |
-
added conditions so far.
|
474 |
-
'''
|
475 |
-
def __init__(self, resolution=6 * 1024, buffersize=None, seed=None,
|
476 |
-
state=None):
|
477 |
-
self.first_rq = None
|
478 |
-
self.call_stats = defaultdict(int)
|
479 |
-
self.running_quantiles = {}
|
480 |
-
if state is not None:
|
481 |
-
self.set_state_dict(state)
|
482 |
-
return
|
483 |
-
self.rq_args = dict(resolution=resolution, buffersize=buffersize,
|
484 |
-
seed=seed)
|
485 |
-
|
486 |
-
def add(self, condition, incoming):
|
487 |
-
if condition not in self.running_quantiles:
|
488 |
-
self.running_quantiles[condition] = RunningQuantile(**self.rq_args)
|
489 |
-
if self.first_rq is None:
|
490 |
-
self.first_rq = self.running_quantiles[condition]
|
491 |
-
self.call_stats[condition] += 1
|
492 |
-
rq = self.running_quantiles[condition]
|
493 |
-
# For performance reasons, the caller can move some conditions to
|
494 |
-
# the CPU if they are not among the most common conditions.
|
495 |
-
if rq.device is not None and (rq.device != incoming.device):
|
496 |
-
rq.to_(incoming.device)
|
497 |
-
self.running_quantiles[condition].add(incoming)
|
498 |
-
|
499 |
-
def most_common_conditions(self, n):
|
500 |
-
return sorted(self.call_stats.keys(),
|
501 |
-
key=lambda c: -self.call_stats[c])[:n]
|
502 |
-
|
503 |
-
def collected_add(self, conditions, incoming):
|
504 |
-
for c in conditions:
|
505 |
-
self.add(c, incoming)
|
506 |
-
|
507 |
-
def conditional(self, c):
|
508 |
-
return self.running_quantiles[c]
|
509 |
-
|
510 |
-
def collected_quantiles(self, conditions, quantiles, old_style=False):
|
511 |
-
result = torch.zeros(
|
512 |
-
size=(len(conditions), self.first_rq.depth, len(quantiles)),
|
513 |
-
dtype=self.first_rq.dtype,
|
514 |
-
device=self.first_rq.device)
|
515 |
-
for i, c in enumerate(conditions):
|
516 |
-
if c in self.running_quantiles:
|
517 |
-
result[i] = self.running_quantiles[c].quantiles(
|
518 |
-
quantiles, old_style)
|
519 |
-
return result
|
520 |
-
|
521 |
-
def collected_normalize(self, conditions, values):
|
522 |
-
result = torch.zeros(
|
523 |
-
size=(len(conditions), values.shape[0], values.shape[1]),
|
524 |
-
dtype=torch.float,
|
525 |
-
device=self.first_rq.device)
|
526 |
-
for i, c in enumerate(conditions):
|
527 |
-
if c in self.running_quantiles:
|
528 |
-
result[i] = self.running_quantiles[c].normalize(values)
|
529 |
-
return result
|
530 |
-
|
531 |
-
def to_(self, device, conditions=None):
|
532 |
-
if conditions is None:
|
533 |
-
conditions = self.running_quantiles.keys()
|
534 |
-
for cond in conditions:
|
535 |
-
if cond in self.running_quantiles:
|
536 |
-
self.running_quantiles[cond].to_(device)
|
537 |
-
|
538 |
-
def state_dict(self):
|
539 |
-
conditions = sorted(self.running_quantiles.keys())
|
540 |
-
result = dict(
|
541 |
-
constructor=self.__module__ + '.' +
|
542 |
-
self.__class__.__name__ + '()',
|
543 |
-
rq_args=self.rq_args,
|
544 |
-
conditions=conditions)
|
545 |
-
for i, c in enumerate(conditions):
|
546 |
-
result.update({
|
547 |
-
'%d.%s' % (i, k): v
|
548 |
-
for k, v in self.running_quantiles[c].state_dict().items()})
|
549 |
-
return result
|
550 |
-
|
551 |
-
def set_state_dict(self, dic):
|
552 |
-
self.rq_args = dic['rq_args'].item()
|
553 |
-
conditions = list(dic['conditions'])
|
554 |
-
subdicts = defaultdict(dict)
|
555 |
-
for k, v in dic.items():
|
556 |
-
if '.' in k:
|
557 |
-
p, s = k.split('.', 1)
|
558 |
-
subdicts[p][s] = v
|
559 |
-
self.running_quantiles = {
|
560 |
-
c: RunningQuantile(state=subdicts[str(i)])
|
561 |
-
for i, c in enumerate(conditions)}
|
562 |
-
if conditions:
|
563 |
-
self.first_rq = self.running_quantiles[conditions[0]]
|
564 |
-
|
565 |
-
# example usage:
|
566 |
-
# levels = rqc.conditional(()).quantiles(1 - fracs)
|
567 |
-
# denoms = 1 - rqc.collected_normalize(cats, levels)
|
568 |
-
# isects = 1 - rqc.collected_normalize(labels, levels)
|
569 |
-
# unions = fracs + denoms[cats] - isects
|
570 |
-
# iou = isects / unions
|
571 |
-
|
572 |
-
|
573 |
-
|
574 |
-
|
575 |
-
class RunningCrossCovariance:
|
576 |
-
'''
|
577 |
-
Running computation. Use this when an off-diagonal block of the
|
578 |
-
covariance matrix is needed (e.g., when the whole covariance matrix
|
579 |
-
does not fit in the GPU).
|
580 |
-
|
581 |
-
Chan-style numerically stable update of mean and full covariance matrix.
|
582 |
-
Chan, Golub. LeVeque. 1983. http://www.jstor.org/stable/2683386
|
583 |
-
'''
|
584 |
-
def __init__(self, state=None):
|
585 |
-
if state is not None:
|
586 |
-
self.set_state_dict(state)
|
587 |
-
return
|
588 |
-
self.count = 0
|
589 |
-
self._mean = None
|
590 |
-
self.cmom2 = None
|
591 |
-
self.v_cmom2 = None
|
592 |
-
|
593 |
-
def add(self, a, b):
|
594 |
-
if len(a.shape) == 1:
|
595 |
-
a = a[None, :]
|
596 |
-
b = b[None, :]
|
597 |
-
assert(a.shape[0] == b.shape[0])
|
598 |
-
if len(a.shape) > 2:
|
599 |
-
a, b = [d.view(d.shape[0], d.shape[1], -1).permute(0, 2, 1
|
600 |
-
).contiguous().view(-1, d.shape[1]) for d in [a, b]]
|
601 |
-
batch_count = a.shape[0]
|
602 |
-
batch_mean = [d.sum(0) / batch_count for d in [a, b]]
|
603 |
-
centered = [d - bm for d, bm in zip([a, b], batch_mean)]
|
604 |
-
# If more than 10 billion operations, divide into batches.
|
605 |
-
sub_batch = -(-(10 << 30) // (a.shape[1] * b.shape[1]))
|
606 |
-
# Initial batch.
|
607 |
-
if self._mean is None:
|
608 |
-
self.count = batch_count
|
609 |
-
self._mean = batch_mean
|
610 |
-
self.v_cmom2 = [c.pow(2).sum(0) for c in centered]
|
611 |
-
self.cmom2 = a.new(a.shape[1], b.shape[1]).zero_()
|
612 |
-
progress_addbmm(self.cmom2, centered[0][:,:,None],
|
613 |
-
centered[1][:,None,:], sub_batch)
|
614 |
-
return
|
615 |
-
# Update a batch using Chan-style update for numerical stability.
|
616 |
-
oldcount = self.count
|
617 |
-
self.count += batch_count
|
618 |
-
new_frac = float(batch_count) / self.count
|
619 |
-
# Update the mean according to the batch deviation from the old mean.
|
620 |
-
delta = [bm.sub_(m).mul_(new_frac)
|
621 |
-
for bm, m in zip(batch_mean, self._mean)]
|
622 |
-
for m, d in zip(self._mean, delta):
|
623 |
-
m.add_(d)
|
624 |
-
# Update the cross-covariance using the batch deviation
|
625 |
-
progress_addbmm(self.cmom2, centered[0][:,:,None],
|
626 |
-
centered[1][:,None,:], sub_batch)
|
627 |
-
self.cmom2.addmm_(alpha=new_frac * oldcount,
|
628 |
-
mat1=delta[0][:,None], mat2=delta[1][None,:])
|
629 |
-
# Update the variance using the batch deviation
|
630 |
-
for c, vc2, d in zip(centered, self.v_cmom2, delta):
|
631 |
-
vc2.add_(c.pow(2).sum(0))
|
632 |
-
vc2.add_(d.pow_(2).mul_(new_frac * oldcount))
|
633 |
-
|
634 |
-
def mean(self):
|
635 |
-
return self._mean
|
636 |
-
|
637 |
-
def variance(self):
|
638 |
-
return [vc2 / (self.count - 1) for vc2 in self.v_cmom2]
|
639 |
-
|
640 |
-
def stdev(self):
|
641 |
-
return [v.sqrt() for v in self.variance()]
|
642 |
-
|
643 |
-
def covariance(self):
|
644 |
-
return self.cmom2 / (self.count - 1)
|
645 |
-
|
646 |
-
def correlation(self):
|
647 |
-
covariance = self.covariance()
|
648 |
-
rstdev = [s.reciprocal() for s in self.stdev()]
|
649 |
-
cor = rstdev[0][:,None] * covariance * rstdev[1][None,:]
|
650 |
-
# Remove NaNs
|
651 |
-
cor[torch.isnan(cor)] = 0
|
652 |
-
return cor
|
653 |
-
|
654 |
-
def to_(self, device):
|
655 |
-
self._mean = [m.to(device) for m in self._mean]
|
656 |
-
self.v_cmom2 = [vcs.to(device) for vcs in self.v_cmom2]
|
657 |
-
self.cmom2 = self.cmom2.to(device)
|
658 |
-
|
659 |
-
def state_dict(self):
|
660 |
-
return dict(
|
661 |
-
constructor=self.__module__ + '.' +
|
662 |
-
self.__class__.__name__ + '()',
|
663 |
-
count=self.count,
|
664 |
-
mean_a=self._mean[0].cpu().numpy(),
|
665 |
-
mean_b=self._mean[1].cpu().numpy(),
|
666 |
-
cmom2_a=self.v_cmom2[0].cpu().numpy(),
|
667 |
-
cmom2_b=self.v_cmom2[1].cpu().numpy(),
|
668 |
-
cmom2=self.cmom2.cpu().numpy())
|
669 |
-
|
670 |
-
def set_state_dict(self, dic):
|
671 |
-
self.count = dic['count'].item()
|
672 |
-
self._mean = [torch.from_numpy(dic[k]) for k in ['mean_a', 'mean_b']]
|
673 |
-
self.v_cmom2 = [torch.from_numpy(dic[k])
|
674 |
-
for k in ['cmom2_a', 'cmom2_b']]
|
675 |
-
self.cmom2 = torch.from_numpy(dic['cmom2'])
|
676 |
-
|
677 |
-
def progress_addbmm(accum, x, y, batch_size):
    '''
    Break up very large addbmm operations into batches so progress can be seen.
    '''
    from .progress import default_progress
    if x.shape[0] <= batch_size:
        return accum.addbmm_(x, y)
    progress = default_progress(None)
    for i in progress(range(0, x.shape[0], batch_size), desc='bmm'):
        accum.addbmm_(x[i:i+batch_size], y[i:i+batch_size])
    return accum


def sample_portion(vec, p=0.5):
    bits = torch.bernoulli(torch.zeros(vec.shape[0], dtype=torch.uint8,
                                       device=vec.device), p)
    return vec[bits]

if __name__ == '__main__':
    import warnings
    warnings.filterwarnings("error")
    import time
    import argparse
    parser = argparse.ArgumentParser(
        description='Test things out')
    parser.add_argument('--mode', default='cpu', help='cpu or cuda')
    parser.add_argument('--test_size', type=int, default=1000000)
    args = parser.parse_args()

    # An adversarial case: we keep finding more numbers in the middle
    # as the stream goes on.
    amount = args.test_size
    quantiles = 1000
    data = numpy.arange(float(amount))
    data[1::2] = data[-1::-2] + (len(data) - 1)
    data /= 2
    depth = 50
    test_cuda = torch.cuda.is_available()
    alldata = data[:,None] + (numpy.arange(depth) * amount)[None, :]
    actual_sum = torch.FloatTensor(numpy.sum(alldata * alldata, axis=0))
    amt = amount // depth
    for r in range(depth):
        numpy.random.shuffle(alldata[r*amt:r*amt+amt,r])
    if args.mode == 'cuda':
        alldata = torch.cuda.FloatTensor(alldata)
        dtype = torch.float
        device = torch.device('cuda')
    else:
        alldata = torch.FloatTensor(alldata)
        dtype = torch.float
        device = None
    starttime = time.time()
    qc = RunningQuantile(resolution=6 * 1024)
    qc.add(alldata)
    # Test state dict
    saved = qc.state_dict()
    # numpy.savez('foo.npz', **saved)
    # saved = numpy.load('foo.npz')
    qc = RunningQuantile(state=saved)
    assert not qc.device.type == 'cuda'
    qc.add(alldata)
    actual_sum *= 2
    ro = qc.readout(1001).cpu()
    endtime = time.time()
    gt = torch.linspace(0, amount, quantiles+1)[None,:] + (
        torch.arange(qc.depth, dtype=torch.float) * amount)[:,None]
    maxreldev = torch.max(torch.abs(ro - gt) / amount) * quantiles
    print("Maximum relative deviation among %d percentiles: %f" % (
        quantiles, maxreldev))
    minerr = torch.max(torch.abs(qc.minmax().cpu()[:,0] -
        torch.arange(qc.depth, dtype=torch.float) * amount))
    maxerr = torch.max(torch.abs((qc.minmax().cpu()[:, -1] + 1) -
        (torch.arange(qc.depth, dtype=torch.float) + 1) * amount))
    print("Minmax error %f, %f" % (minerr, maxerr))
    interr = torch.max(torch.abs(qc.integrate(lambda x: x * x).cpu()
        - actual_sum) / actual_sum)
    print("Integral error: %f" % interr)
    medianerr = torch.max(torch.abs(qc.median() -
        alldata.median(0)[0]) / alldata.median(0)[0]).cpu()
    print("Median error: %f" % medianerr)
    meanerr = torch.max(
        torch.abs(qc.mean() - alldata.mean(0)) / alldata.mean(0)).cpu()
    print("Mean error: %f" % meanerr)
    varerr = torch.max(
        torch.abs(qc.variance() - alldata.var(0)) / alldata.var(0)).cpu()
    print("Variance error: %f" % varerr)
    counterr = ((qc.integrate(lambda x: torch.ones(x.shape[-1]).cpu())
        - qc.size) / (0.0 + qc.size)).item()
    print("Count error: %f" % counterr)
    print("Time %f" % (endtime - starttime))
    # Algorithm is randomized, so some of these will fail with low probability.
    assert maxreldev < 1.0
    assert minerr == 0.0
    assert maxerr == 0.0
    assert interr < 0.01
    assert abs(counterr) < 0.001
    print("OK")
spaces/DrBenjamin/AI_Demo/AI_Demo.py
DELETED
@@ -1,291 +0,0 @@
##### `AI_Demo.py`
##### AI Demo, hosted on https://huggingface.co/spaces/DrBenjamin/AI_Demo
##### Please reach out to [email protected] for any questions
#### Loading needed Python libraries
import streamlit as st
import numpy as np
import audio2numpy as a2n
from pydub import AudioSegment
import cv2
from PIL import Image
import torch
from diffusers import StableDiffusionPipeline
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
from transformers import pipeline, set_seed
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer
import os

os.environ['COMMANDLINE_ARGS'] = '--skip-torch-cuda-test --precision full --no-half'
os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'


#### Functions
### Function predict_step = Image to Text recognition
def predict_step(image):
    if image.mode != "RGB":
        image = image.convert(mode = "RGB")
    pixel_values = feature_extractor(images = image, return_tensors = "pt").pixel_values
    pixel_values = pixel_values.to(device)
    output_ids = model.generate(pixel_values, **gen_kwargs)
    preds = tokenizer.batch_decode(output_ids, skip_special_tokens = True)
    preds = [pred.strip() for pred in preds]
    return str(preds[0]).capitalize() + '.'


#### Models
st.header('🤗 Hugging Face Diffusers')
st.write('State-of-the-art diffusion models for image, text and audio generation in PyTorch.')
devices = ["mps", "cpu", "cuda"]
device = st.selectbox(label = 'Select device', options = devices, index = 1, disabled = True)
st.write(':orange[MPS for Mac (Metal Performance Shaders), CPU for all systems and CUDA for systems with NVIDIA GPU.]')
models = ["runwayml/stable-diffusion-v1-5", "stabilityai/stable-diffusion-2-1", "hakurei/waifu-diffusion", "stabilityai/stable-diffusion-2-base",
          "nlpconnect/vit-gpt2-image-captioning", "openai-gpt", "gpt2-large", "openai/whisper-large-v2"]
model_id_or_path = st.selectbox(label = 'Select model', options = models, index = 5, disabled = True)
if model_id_or_path == "runwayml/stable-diffusion-v1-5":
    st.write(':orange[Stable Diffusion v1-5 is the state of the art text-to-image model.]')
elif model_id_or_path == "stabilityai/stable-diffusion-2-1":
    st.write(':orange[New stable diffusion text-to-image model at 768x768 resolution.]')
elif model_id_or_path == "stabilityai/stable-diffusion-2-base":
    st.write(':orange[New stable diffusion text-to-image model at 512x512 resolution.]')
elif model_id_or_path == "hakurei/waifu-diffusion":
    st.write(
        ':orange[waifu-diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality anime images through fine-tuning.]')
elif model_id_or_path == "nlpconnect/vit-gpt2-image-captioning":
    st.write(':orange[vit-gpt2 is an image captioning model.]')
elif model_id_or_path == "openai-gpt":
    st.write(
        ':orange[openai-gpt is a transformer-based language model created and released by OpenAI. The model is a causal (unidirectional) transformer pre-trained using language modeling on a large corpus with long range dependencies.]')
elif model_id_or_path == "gpt2-large":
    st.write(
        ':orange[GPT-2 Large is the 774M parameter version of GPT-2, a transformer-based language model created and released by OpenAI. The model is a pretrained model on English language using a causal language modeling (CLM) objective.]')
elif model_id_or_path == "openai/whisper-large-v2":
    st.write(':orange[Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation.]')

control_net_models = ["None", "lllyasviel/sd-controlnet-canny", "lllyasviel/sd-controlnet-scribble"]
if model_id_or_path == "runwayml/stable-diffusion-v1-5":
    disable = False
else:
    disable = True
control_net_model = st.selectbox(label = 'Select control net model', options = control_net_models, disabled = disable)
if control_net_model == "lllyasviel/sd-controlnet-canny":
    st.write(
        ':orange[ControlNet is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on Canny edges.]')
elif control_net_model == "lllyasviel/sd-controlnet-scribble":
    st.write(
        ':orange[ControlNet is a neural network structure to control diffusion models by adding extra conditions. This checkpoint corresponds to the ControlNet conditioned on Scribble images.]')
if model_id_or_path != "runwayml/stable-diffusion-v1-5":
    control_net_model = "None"

#### Stable diffusion image 2 image with Control Net
if model_id_or_path == "runwayml/stable-diffusion-v1-5" and control_net_model != "None":
    with st.form('img2img (Control Net)'):
        st.subheader('Image 2 Image (Control Net)')
        st.write('Create an image from text input with an image as template.')
        image = ''
        uploaded_file = st.file_uploader(label = "Upload a picture", type = 'png')
        prompt = st.text_input(label = 'Prompt',
                               value = 'A picture in comic style, bright colours, a house with red bricks, a dark sky with a full yellow moon, best quality, extremely detailed.')
        submitted = st.form_submit_button('Submit')
        if submitted:
            # Check for image data
            if uploaded_file is not None:
                image = cv2.imdecode(np.frombuffer(uploaded_file.getvalue(), np.uint8), cv2.COLOR_GRAY2BGR)
                image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

                # Resize the image if it is not 768x640 / 640x768 pixels
                h, w = image.shape
                if not (h == 768 and w == 640) and not (h == 640 and w == 768):
                    # Image is bigger in height than width
                    if h > w:
                        # Resize cropped image to standard dimensions
                        image = cv2.resize(image, (640, 768), interpolation = cv2.INTER_AREA)

                    # Image is smaller in height than width
                    else:
                        # Resize cropped image to standard dimensions
                        image = cv2.resize(image, (768, 640), interpolation = cv2.INTER_AREA)

                # Get canny image
                image = cv2.Canny(image, 100, 200)
                image = image[:, :, None]
                image = np.concatenate([image, image, image], axis = 2)
                canny_image = Image.fromarray(image)
                st.subheader('Preview annotator result')
                st.image(canny_image)

                # Load control net and stable diffusion v1-5
                controlnet = ControlNetModel.from_pretrained(control_net_model, torch_dtype = torch.float32)
                pipe = StableDiffusionControlNetPipeline.from_pretrained(model_id_or_path, controlnet = controlnet, torch_dtype = torch.float32)
                pipe = pipe.to(device)

                # Recommended if your computer has < 64 GB of RAM
                pipe.enable_attention_slicing()

                # Speed up diffusion process with faster scheduler and memory optimization
                pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

                # Generate image
                generator = torch.manual_seed(0)
                image = pipe(prompt = prompt, negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality", num_inference_steps = 30,
                             generator = generator, image = canny_image).images[0]
                st.subheader('Diffuser result')
                st.write('Model :orange[' + model_id_or_path + '] + :red[' + control_net_model + ']')
                st.image(image)

## Stable-Diffusion
if model_id_or_path == "runwayml/stable-diffusion-v1-5" and control_net_model == "None":
    with st.form('img2img'):
        st.subheader('Image 2 Image')
        st.write('Create an image from text input with an image as template.')
        image = ''
        uploaded_file = st.file_uploader(label = "Upload a picture", type = 'png')
        prompt = st.text_input(label = 'Prompt',
                               value = 'A picture in comic style, bright colours, a house with red bricks, a dark sky with a full yellow moon, best quality, extremely detailed.')
        submitted = st.form_submit_button('Submit')
        if submitted:
            # Check for image data
            if uploaded_file is not None:
                image = cv2.imdecode(np.frombuffer(uploaded_file.getvalue(), np.uint8), cv2.IMREAD_COLOR)

                # Resize the image if it is not 768x640 / 640x768 pixels
                h, w, _ = image.shape
                if not (h == 768 and w == 640) and not (h == 640 and w == 768):
                    # Image is bigger in height than width
                    if h > w:
                        # Resize cropped image to standard dimensions
                        image = cv2.resize(image, (640, 768), interpolation = cv2.INTER_AREA)

                    # Image is smaller in height than width
                    else:
                        # Resize cropped image to standard dimensions
                        image = cv2.resize(image, (768, 640), interpolation = cv2.INTER_AREA)
                image = Image.fromarray(image)

                # Load the pipeline
                pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype = torch.float32)
                pipe = pipe.to(device)

                # Recommended if your computer has < 64 GB of RAM
                pipe.enable_attention_slicing()

                # Speed up diffusion process with faster scheduler and memory optimization
                pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

                # Create new image
                images = pipe(prompt = prompt, negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality", num_inference_steps = 30,
                              image = image, strength = 0.75, guidance_scale = 7.5).images

                # Show image
                st.subheader('Diffuser result')
                st.write('Model :orange[' + model_id_or_path + ']')
                st.image(images[0])

#### Stable diffusion txt 2 image
if control_net_model == "None" and model_id_or_path != "nlpconnect/vit-gpt2-image-captioning" and model_id_or_path != "openai-gpt" and model_id_or_path != "gpt2-large" and model_id_or_path != "openai/whisper-large-v2":
    with st.form('txt2img'):
        st.subheader('Text 2 Image')
        st.write('Create an image from text input.')
        if model_id_or_path == "runwayml/stable-diffusion-v1-5" or model_id_or_path == "stabilityai/stable-diffusion-2-1":
            value = 'A picture in comic style, bright colours, a house with red bricks, a dark sky with a full yellow moon, best quality, extremely detailed.'
        if model_id_or_path == "hakurei/waifu-diffusion":
            value = 'A picture in Anime style, bright colours, a house with red bricks, a dark sky with a full yellow moon, best quality, extremely detailed.'
        if model_id_or_path == "stabilityai/stable-diffusion-2-base":
            value = 'A picture in comic style, a castle with grey bricks in the background, a river is going through, a blue sky with a full yellow sun, best quality, extremely detailed.'

        prompt = st.text_input(label = 'Prompt', value = value)
        submitted = st.form_submit_button('Submit')
        if submitted:
            # Make sure you're logged in with `huggingface-cli login`
            pipe = StableDiffusionPipeline.from_pretrained(model_id_or_path)
            pipe = pipe.to(device)

            # Recommended if your computer has < 64 GB of RAM
            pipe.enable_attention_slicing()

            # Speed up diffusion process with faster scheduler and memory optimization
            pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

            # Results
            if model_id_or_path == "hakurei/waifu-diffusion":
                negative = "several scenes, more than one image, split picture"
            else:
                negative = "monochrome, lowres, bad anatomy, worst quality, low quality"
            image = pipe(prompt = prompt, negative_prompt = negative, num_inference_steps = 30, guidance_scale = 7.5).images[0]
            st.subheader('Diffuser result')
            st.write('Model :orange[' + model_id_or_path + ']')
            st.image(image)

#### Text (OpenAI gpt models)
if model_id_or_path == "openai-gpt" or model_id_or_path == "gpt2-large":
    with st.form('GPT'):
        st.subheader('Text generation')
        st.write('Create text which is generated from text input.')
        text_input = st.text_input(label = 'Give a start of a sentence', value = 'This is a test ')
        submitted = st.form_submit_button('Submit')
        if submitted:
            generator = pipeline('text-generation', model = model_id_or_path)
            set_seed(42)
            generated = generator(text_input, max_length = 150, num_return_sequences = 1)
            st.subheader('Diffuser result')
            st.write('Model :orange[' + model_id_or_path + ']')
            st.markdown('Text: ":green[' + str(generated[0]['generated_text']) + ']"')

#### Image to text
if model_id_or_path == "nlpconnect/vit-gpt2-image-captioning":
    with st.form('Image2Text'):
        st.subheader('Image 2 Text')
        st.write('Create a description of an image.')
        image = ''
        uploaded_file = st.file_uploader(label = "Upload a picture", type = 'png')
        submitted = st.form_submit_button('Submit')
        if submitted:
            # Check for image data
            if uploaded_file is not None:
                image = cv2.imdecode(np.frombuffer(uploaded_file.getvalue(), np.uint8), cv2.IMREAD_COLOR)
                image = Image.fromarray(image)
                model = VisionEncoderDecoderModel.from_pretrained(model_id_or_path)
                feature_extractor = ViTImageProcessor.from_pretrained(model_id_or_path)
                tokenizer = AutoTokenizer.from_pretrained(model_id_or_path)
                device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
                model.to(device)
                max_length = 16
                num_beams = 4
                gen_kwargs = {"max_length": max_length, "num_beams": num_beams}
                output = predict_step(image)
                st.subheader('Diffuser result')
                st.write('Model :orange[nlpconnect/vit-gpt2-image-captioning]')
                st.write('Description: ":green[' + str(output) + ']"')

#### Whisper Model
if model_id_or_path == "openai/whisper-large-v2":
    with st.form('Audio2Text'):
        st.subheader('Audio 2 Text')
        st.write('Create a transcription of an audio file.')
        audio_file = st.file_uploader(label = "Upload an audio file", type = 'mp3')
        submitted = st.form_submit_button('Submit')
        if submitted:
            if audio_file is not None:
                audio = audio_file.getvalue()
                with open("temp.mp3", "wb") as binary_file:
                    # Write bytes to file
                    binary_file.write(audio)

                # Calling the split_to_mono method on the stereo audio file
                stereo_audio = AudioSegment.from_file("temp.mp3", format = "mp3")
                mono_audios = stereo_audio.split_to_mono()
                mono_audios[0].export("temp.mp3", format = "mp3")

                # Mp3 file to numpy array
                audio, sr = a2n.audio_from_file('temp.mp3')
                st.audio('temp.mp3')
                if os.path.exists("temp.mp3"):
                    os.remove("temp.mp3")

                # Load model and processor
                pipe = pipeline("automatic-speech-recognition", model = "openai/whisper-large-v2", chunk_length_s = 30, device = "cpu",
                                ignore_warning = True)
                prediction = pipe(audio, sampling_rate = sr)["text"]
                st.subheader('Preview used audio')
                st.write('Model :orange[' + model_id_or_path + ']')
                st.write('Transcript: ":green[' + str(prediction) + ']"')
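For reference, the ControlNet branch above boils down to a simple preprocessing contract: the conditioning image is a Canny edge map replicated to three channels. A minimal standalone sketch, assuming a hypothetical local file input.png:

import cv2
import numpy as np
from PIL import Image

gray = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # load as single channel
edges = cv2.Canny(gray, 100, 200)                     # same thresholds as the app uses
canny_image = Image.fromarray(np.stack([edges] * 3, axis=2))  # H x W x 3 conditioning image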
spaces/DragGan/DragGan/stylegan_human/openpose/src/__init__.py
DELETED
File without changes
spaces/ECCV2022/PSG/OpenPSG/configs/_base_/models/psgtr_r50.py
DELETED
@@ -1,82 +0,0 @@
model = dict(
    type='PSGTr',
    backbone=dict(type='ResNet',
                  depth=50,
                  num_stages=4,
                  out_indices=(0, 1, 2, 3),
                  frozen_stages=1,
                  norm_cfg=dict(type='BN', requires_grad=False),
                  norm_eval=True,
                  style='pytorch',
                  init_cfg=dict(type='Pretrained',
                                checkpoint='torchvision://resnet50')),
    bbox_head=dict(type='PSGTrHead',
                   num_classes=80,
                   num_relations=117,
                   in_channels=2048,
                   transformer=dict(
                       type='Transformer',
                       encoder=dict(type='DetrTransformerEncoder',
                                    num_layers=6,
                                    transformerlayers=dict(
                                        type='BaseTransformerLayer',
                                        attn_cfgs=[
                                            dict(type='MultiheadAttention',
                                                 embed_dims=256,
                                                 num_heads=8,
                                                 dropout=0.1)
                                        ],
                                        feedforward_channels=2048,
                                        ffn_dropout=0.1,
                                        operation_order=('self_attn', 'norm',
                                                         'ffn', 'norm'))),
                       decoder=dict(
                           type='DetrTransformerDecoder',
                           return_intermediate=True,
                           num_layers=6,
                           transformerlayers=dict(
                               type='DetrTransformerDecoderLayer',
                               attn_cfgs=dict(type='MultiheadAttention',
                                              embed_dims=256,
                                              num_heads=8,
                                              dropout=0.1),
                               feedforward_channels=2048,
                               ffn_dropout=0.1,
                               operation_order=('self_attn', 'norm',
                                                'cross_attn', 'norm', 'ffn',
                                                'norm')),
                       )),
                   positional_encoding=dict(type='SinePositionalEncoding',
                                            num_feats=128,
                                            normalize=True),
                   sub_loss_cls=dict(type='CrossEntropyLoss',
                                     use_sigmoid=False,
                                     loss_weight=1.0,
                                     class_weight=1.0),
                   sub_loss_bbox=dict(type='L1Loss', loss_weight=5.0),
                   sub_loss_iou=dict(type='GIoULoss', loss_weight=2.0),
                   sub_focal_loss=dict(type='BCEFocalLoss', loss_weight=1.0),
                   sub_dice_loss=dict(type='psgtrDiceLoss', loss_weight=1.0),
                   obj_loss_cls=dict(type='CrossEntropyLoss',
                                     use_sigmoid=False,
                                     loss_weight=1.0,
                                     class_weight=1.0),
                   obj_loss_bbox=dict(type='L1Loss', loss_weight=5.0),
                   obj_loss_iou=dict(type='GIoULoss', loss_weight=2.0),
                   obj_focal_loss=dict(type='BCEFocalLoss', loss_weight=1.0),
                   obj_dice_loss=dict(type='psgtrDiceLoss', loss_weight=1.0),
                   rel_loss_cls=dict(type='CrossEntropyLoss',
                                     use_sigmoid=False,
                                     loss_weight=2.0,
                                     class_weight=1.0)),
    # training and testing settings
    train_cfg=dict(assigner=dict(
        type='HTriMatcher',
        s_cls_cost=dict(type='ClassificationCost', weight=1.),
        s_reg_cost=dict(type='BBoxL1Cost', weight=5.0),
        s_iou_cost=dict(type='IoUCost', iou_mode='giou', weight=2.0),
        o_cls_cost=dict(type='ClassificationCost', weight=1.),
        o_reg_cost=dict(type='BBoxL1Cost', weight=5.0),
        o_iou_cost=dict(type='IoUCost', iou_mode='giou', weight=2.0),
        r_cls_cost=dict(type='ClassificationCost', weight=2.))),
    test_cfg=dict(max_per_img=100))
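As a rough sketch of how a _base_ model config like this one is typically consumed (assuming an mmcv 1.x installation and an OpenPSG checkout; the path is a placeholder, not a claim about this repository's layout):

from mmcv import Config

cfg = Config.fromfile('configs/_base_/models/psgtr_r50.py')
print(cfg.model.bbox_head.num_relations)  # 117, per the config above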
spaces/ECCV2022/bytetrack/tutorials/motr/evaluation.py
DELETED
@@ -1,207 +0,0 @@
# ------------------------------------------------------------------------
# Copyright (c) 2021 megvii-model. All Rights Reserved.
# ------------------------------------------------------------------------
# Modified from Deformable DETR (https://github.com/fundamentalvision/Deformable-DETR)
# Copyright (c) 2020 SenseTime. All Rights Reserved.
# ------------------------------------------------------------------------
# Modified from DETR (https://github.com/facebookresearch/detr)
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
# ------------------------------------------------------------------------


import copy
import logging
import os
from typing import Dict

import numpy as np
import motmetrics as mm

mm.lap.default_solver = 'lap'


def read_results(filename, data_type: str, is_gt=False, is_ignore=False):
    if data_type in ('mot', 'lab'):
        read_fun = read_mot_results
    else:
        raise ValueError('Unknown data type: {}'.format(data_type))

    return read_fun(filename, is_gt, is_ignore)

# def read_mot_results(filename, is_gt, is_ignore):
#     results_dict = dict()
#     if os.path.isfile(filename):
#         with open(filename, 'r') as f:
#             for line in f.readlines():
#                 linelist = line.split(',')
#                 if len(linelist) < 7:
#                     continue
#                 fid = int(linelist[0])
#                 if fid < 1:
#                     continue
#                 results_dict.setdefault(fid, list())
#
#                 if is_gt:
#                     mark = int(float(linelist[6]))
#                     if mark == 0 :
#                         continue
#                     score = 1
#                 elif is_ignore:
#                     score = 1
#                 else:
#                     score = float(linelist[6])
#
#                 tlwh = tuple(map(float, linelist[2:6]))
#                 target_id = int(float(linelist[1]))
#                 results_dict[fid].append((tlwh, target_id, score))
#
#     return results_dict

def read_mot_results(filename, is_gt, is_ignore):
    valid_labels = {1}
    ignore_labels = {0, 2, 7, 8, 12}
    results_dict = dict()
    if os.path.isfile(filename):
        with open(filename, 'r') as f:
            for line in f.readlines():
                linelist = line.split(',')
                if len(linelist) < 7:
                    continue
                fid = int(linelist[0])
                if fid < 1:
                    continue
                results_dict.setdefault(fid, list())

                if is_gt:
                    if 'MOT16-' in filename or 'MOT17-' in filename:
                        label = int(float(linelist[7]))
                        mark = int(float(linelist[6]))
                        if mark == 0 or label not in valid_labels:
                            continue
                    score = 1
                elif is_ignore:
                    if 'MOT16-' in filename or 'MOT17-' in filename:
                        label = int(float(linelist[7]))
                        vis_ratio = float(linelist[8])
                        if label not in ignore_labels and vis_ratio >= 0:
                            continue
                    elif 'MOT15' in filename:
                        label = int(float(linelist[6]))
                        if label not in ignore_labels:
                            continue
                    else:
                        continue
                    score = 1
                else:
                    score = float(linelist[6])

                tlwh = tuple(map(float, linelist[2:6]))
                target_id = int(linelist[1])

                results_dict[fid].append((tlwh, target_id, score))

    return results_dict

def unzip_objs(objs):
    if len(objs) > 0:
        tlwhs, ids, scores = zip(*objs)
    else:
        tlwhs, ids, scores = [], [], []
    tlwhs = np.asarray(tlwhs, dtype=float).reshape(-1, 4)
    return tlwhs, ids, scores


class Evaluator(object):
    def __init__(self, data_root, seq_name, data_type='mot'):

        self.data_root = data_root
        self.seq_name = seq_name
        self.data_type = data_type

        self.load_annotations()
        self.reset_accumulator()

    def load_annotations(self):
        assert self.data_type == 'mot'

        gt_filename = os.path.join(self.data_root, self.seq_name, 'gt', 'gt.txt')
        self.gt_frame_dict = read_results(gt_filename, self.data_type, is_gt=True)
        self.gt_ignore_frame_dict = read_results(gt_filename, self.data_type, is_ignore=True)

    def reset_accumulator(self):
        self.acc = mm.MOTAccumulator(auto_id=True)

    def eval_frame(self, frame_id, trk_tlwhs, trk_ids, rtn_events=False):
        # results
        trk_tlwhs = np.copy(trk_tlwhs)
        trk_ids = np.copy(trk_ids)

        # gts
        gt_objs = self.gt_frame_dict.get(frame_id, [])
        gt_tlwhs, gt_ids = unzip_objs(gt_objs)[:2]

        # ignore boxes
        ignore_objs = self.gt_ignore_frame_dict.get(frame_id, [])
        ignore_tlwhs = unzip_objs(ignore_objs)[0]
        # remove ignored results
        keep = np.ones(len(trk_tlwhs), dtype=bool)
        iou_distance = mm.distances.iou_matrix(ignore_tlwhs, trk_tlwhs, max_iou=0.5)
        if len(iou_distance) > 0:
            match_is, match_js = mm.lap.linear_sum_assignment(iou_distance)
            match_is, match_js = map(lambda a: np.asarray(a, dtype=int), [match_is, match_js])
            match_ious = iou_distance[match_is, match_js]

            match_js = np.asarray(match_js, dtype=int)
            match_js = match_js[np.logical_not(np.isnan(match_ious))]
            keep[match_js] = False
            trk_tlwhs = trk_tlwhs[keep]
            trk_ids = trk_ids[keep]

        # get distance matrix
        iou_distance = mm.distances.iou_matrix(gt_tlwhs, trk_tlwhs, max_iou=0.5)

        # acc
        self.acc.update(gt_ids, trk_ids, iou_distance)

        if rtn_events and iou_distance.size > 0 and hasattr(self.acc, 'last_mot_events'):
            events = self.acc.last_mot_events  # only supported by https://github.com/longcw/py-motmetrics
        else:
            events = None
        return events

    def eval_file(self, filename):
        self.reset_accumulator()

        result_frame_dict = read_results(filename, self.data_type, is_gt=False)
        # frames = sorted(list(set(self.gt_frame_dict.keys()) | set(result_frame_dict.keys())))
        frames = sorted(list(set(result_frame_dict.keys())))

        for frame_id in frames:
            trk_objs = result_frame_dict.get(frame_id, [])
            trk_tlwhs, trk_ids = unzip_objs(trk_objs)[:2]
            self.eval_frame(frame_id, trk_tlwhs, trk_ids, rtn_events=False)

        return self.acc

    @staticmethod
    def get_summary(accs, names, metrics=('mota', 'num_switches', 'idp', 'idr', 'idf1', 'precision', 'recall')):
        names = copy.deepcopy(names)
        if metrics is None:
            metrics = mm.metrics.motchallenge_metrics
        metrics = copy.deepcopy(metrics)

        mh = mm.metrics.create()
        summary = mh.compute_many(
            accs,
            metrics=metrics,
            names=names,
            generate_overall=True
        )

        return summary

    @staticmethod
    def save_summary(summary, filename):
        import pandas as pd
        writer = pd.ExcelWriter(filename)
        summary.to_excel(writer)
        writer.save()
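A minimal usage sketch for the Evaluator above (the data root, sequence name, and results path are hypothetical placeholders; it assumes MOT-format ground-truth and result files exist at those locations):

evaluator = Evaluator('datasets/mot', 'MOT17-02', data_type='mot')
acc = evaluator.eval_file('results/MOT17-02.txt')          # accumulate one sequence
print(Evaluator.get_summary([acc], ['MOT17-02']))          # MOTA, IDF1, etc.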
spaces/EDGAhab/Paimon-Talking/modules.py
DELETED
@@ -1,390 +0,0 @@
import copy
import math
import numpy as np
import scipy
import torch
from torch import nn
from torch.nn import functional as F

from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
from torch.nn.utils import weight_norm, remove_weight_norm

import commons
from commons import init_weights, get_padding
from transforms import piecewise_rational_quadratic_transform


LRELU_SLOPE = 0.1


class LayerNorm(nn.Module):
    def __init__(self, channels, eps=1e-5):
        super().__init__()
        self.channels = channels
        self.eps = eps

        self.gamma = nn.Parameter(torch.ones(channels))
        self.beta = nn.Parameter(torch.zeros(channels))

    def forward(self, x):
        x = x.transpose(1, -1)
        x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
        return x.transpose(1, -1)


class ConvReluNorm(nn.Module):
    def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
        super().__init__()
        self.in_channels = in_channels
        self.hidden_channels = hidden_channels
        self.out_channels = out_channels
        self.kernel_size = kernel_size
        self.n_layers = n_layers
        self.p_dropout = p_dropout
        assert n_layers > 1, "Number of layers should be larger than 0."

        self.conv_layers = nn.ModuleList()
        self.norm_layers = nn.ModuleList()
        self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
        self.norm_layers.append(LayerNorm(hidden_channels))
        self.relu_drop = nn.Sequential(
            nn.ReLU(),
            nn.Dropout(p_dropout))
        for _ in range(n_layers-1):
            self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
            self.norm_layers.append(LayerNorm(hidden_channels))
        self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
        self.proj.weight.data.zero_()
        self.proj.bias.data.zero_()

    def forward(self, x, x_mask):
        x_org = x
        for i in range(self.n_layers):
            x = self.conv_layers[i](x * x_mask)
            x = self.norm_layers[i](x)
            x = self.relu_drop(x)
        x = x_org + self.proj(x)
        return x * x_mask


class DDSConv(nn.Module):
    """
    Dilated and Depth-Separable Convolution
    """
    def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
        super().__init__()
        self.channels = channels
        self.kernel_size = kernel_size
        self.n_layers = n_layers
        self.p_dropout = p_dropout

        self.drop = nn.Dropout(p_dropout)
        self.convs_sep = nn.ModuleList()
        self.convs_1x1 = nn.ModuleList()
        self.norms_1 = nn.ModuleList()
        self.norms_2 = nn.ModuleList()
        for i in range(n_layers):
            dilation = kernel_size ** i
            padding = (kernel_size * dilation - dilation) // 2
            self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
                groups=channels, dilation=dilation, padding=padding
            ))
            self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
            self.norms_1.append(LayerNorm(channels))
            self.norms_2.append(LayerNorm(channels))

    def forward(self, x, x_mask, g=None):
        if g is not None:
            x = x + g
        for i in range(self.n_layers):
            y = self.convs_sep[i](x * x_mask)
            y = self.norms_1[i](y)
            y = F.gelu(y)
            y = self.convs_1x1[i](y)
            y = self.norms_2[i](y)
            y = F.gelu(y)
            y = self.drop(y)
            x = x + y
        return x * x_mask


class WN(torch.nn.Module):
    def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
        super(WN, self).__init__()
        assert(kernel_size % 2 == 1)
        self.hidden_channels = hidden_channels
        self.kernel_size = kernel_size
        self.dilation_rate = dilation_rate
        self.n_layers = n_layers
        self.gin_channels = gin_channels
        self.p_dropout = p_dropout

        self.in_layers = torch.nn.ModuleList()
        self.res_skip_layers = torch.nn.ModuleList()
        self.drop = nn.Dropout(p_dropout)

        if gin_channels != 0:
            cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
            self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')

        for i in range(n_layers):
            dilation = dilation_rate ** i
            padding = int((kernel_size * dilation - dilation) / 2)
            in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
                                       dilation=dilation, padding=padding)
            in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
            self.in_layers.append(in_layer)

            # last one is not necessary
            if i < n_layers - 1:
                res_skip_channels = 2 * hidden_channels
            else:
                res_skip_channels = hidden_channels

            res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
            res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
            self.res_skip_layers.append(res_skip_layer)

    def forward(self, x, x_mask, g=None, **kwargs):
        output = torch.zeros_like(x)
        n_channels_tensor = torch.IntTensor([self.hidden_channels])

        if g is not None:
            g = self.cond_layer(g)

        for i in range(self.n_layers):
            x_in = self.in_layers[i](x)
            if g is not None:
                cond_offset = i * 2 * self.hidden_channels
                g_l = g[:, cond_offset:cond_offset+2*self.hidden_channels, :]
            else:
                g_l = torch.zeros_like(x_in)

            acts = commons.fused_add_tanh_sigmoid_multiply(
                x_in,
                g_l,
                n_channels_tensor)
            acts = self.drop(acts)

            res_skip_acts = self.res_skip_layers[i](acts)
            if i < self.n_layers - 1:
                res_acts = res_skip_acts[:, :self.hidden_channels, :]
                x = (x + res_acts) * x_mask
                output = output + res_skip_acts[:, self.hidden_channels:, :]
            else:
                output = output + res_skip_acts
        return output * x_mask

    def remove_weight_norm(self):
        if self.gin_channels != 0:
            torch.nn.utils.remove_weight_norm(self.cond_layer)
        for l in self.in_layers:
            torch.nn.utils.remove_weight_norm(l)
        for l in self.res_skip_layers:
            torch.nn.utils.remove_weight_norm(l)


class ResBlock1(torch.nn.Module):
    def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
        super(ResBlock1, self).__init__()
        self.convs1 = nn.ModuleList([
            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
                               padding=get_padding(kernel_size, dilation[0]))),
            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
                               padding=get_padding(kernel_size, dilation[1]))),
            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
                               padding=get_padding(kernel_size, dilation[2])))
        ])
        self.convs1.apply(init_weights)

        self.convs2 = nn.ModuleList([
            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
                               padding=get_padding(kernel_size, 1))),
            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
                               padding=get_padding(kernel_size, 1))),
            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
                               padding=get_padding(kernel_size, 1)))
        ])
        self.convs2.apply(init_weights)

    def forward(self, x, x_mask=None):
        for c1, c2 in zip(self.convs1, self.convs2):
            xt = F.leaky_relu(x, LRELU_SLOPE)
            if x_mask is not None:
                xt = xt * x_mask
            xt = c1(xt)
            xt = F.leaky_relu(xt, LRELU_SLOPE)
            if x_mask is not None:
                xt = xt * x_mask
            xt = c2(xt)
            x = xt + x
        if x_mask is not None:
            x = x * x_mask
        return x

    def remove_weight_norm(self):
        for l in self.convs1:
            remove_weight_norm(l)
        for l in self.convs2:
            remove_weight_norm(l)


class ResBlock2(torch.nn.Module):
    def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
        super(ResBlock2, self).__init__()
        self.convs = nn.ModuleList([
            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
                               padding=get_padding(kernel_size, dilation[0]))),
            weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
                               padding=get_padding(kernel_size, dilation[1])))
        ])
        self.convs.apply(init_weights)

    def forward(self, x, x_mask=None):
        for c in self.convs:
            xt = F.leaky_relu(x, LRELU_SLOPE)
            if x_mask is not None:
                xt = xt * x_mask
            xt = c(xt)
            x = xt + x
        if x_mask is not None:
            x = x * x_mask
        return x

    def remove_weight_norm(self):
        for l in self.convs:
            remove_weight_norm(l)


class Log(nn.Module):
    def forward(self, x, x_mask, reverse=False, **kwargs):
        if not reverse:
            y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
            logdet = torch.sum(-y, [1, 2])
            return y, logdet
        else:
            x = torch.exp(x) * x_mask
            return x


class Flip(nn.Module):
    def forward(self, x, *args, reverse=False, **kwargs):
        x = torch.flip(x, [1])
        if not reverse:
            logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
            return x, logdet
        else:
            return x


class ElementwiseAffine(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.channels = channels
        self.m = nn.Parameter(torch.zeros(channels, 1))
        self.logs = nn.Parameter(torch.zeros(channels, 1))

    def forward(self, x, x_mask, reverse=False, **kwargs):
        if not reverse:
            y = self.m + torch.exp(self.logs) * x
            y = y * x_mask
            logdet = torch.sum(self.logs * x_mask, [1, 2])
            return y, logdet
        else:
            x = (x - self.m) * torch.exp(-self.logs) * x_mask
            return x


class ResidualCouplingLayer(nn.Module):
    def __init__(self,
                 channels,
                 hidden_channels,
                 kernel_size,
                 dilation_rate,
                 n_layers,
                 p_dropout=0,
                 gin_channels=0,
                 mean_only=False):
        assert channels % 2 == 0, "channels should be divisible by 2"
        super().__init__()
        self.channels = channels
        self.hidden_channels = hidden_channels
        self.kernel_size = kernel_size
        self.dilation_rate = dilation_rate
        self.n_layers = n_layers
        self.half_channels = channels // 2
        self.mean_only = mean_only

        self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
        self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
        self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
        self.post.weight.data.zero_()
        self.post.bias.data.zero_()

    def forward(self, x, x_mask, g=None, reverse=False):
        x0, x1 = torch.split(x, [self.half_channels]*2, 1)
        h = self.pre(x0) * x_mask
        h = self.enc(h, x_mask, g=g)
        stats = self.post(h) * x_mask
        if not self.mean_only:
            m, logs = torch.split(stats, [self.half_channels]*2, 1)
        else:
            m = stats
            logs = torch.zeros_like(m)

        if not reverse:
            x1 = m + x1 * torch.exp(logs) * x_mask
            x = torch.cat([x0, x1], 1)
            logdet = torch.sum(logs, [1, 2])
            return x, logdet
        else:
            x1 = (x1 - m) * torch.exp(-logs) * x_mask
            x = torch.cat([x0, x1], 1)
            return x


class ConvFlow(nn.Module):
    def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
        super().__init__()
        self.in_channels = in_channels
        self.filter_channels = filter_channels
        self.kernel_size = kernel_size
        self.n_layers = n_layers
        self.num_bins = num_bins
        self.tail_bound = tail_bound
        self.half_channels = in_channels // 2

        self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
        self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
        self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
        self.proj.weight.data.zero_()
        self.proj.bias.data.zero_()

    def forward(self, x, x_mask, g=None, reverse=False):
        x0, x1 = torch.split(x, [self.half_channels]*2, 1)
        h = self.pre(x0)
        h = self.convs(h, x_mask, g=g)
        h = self.proj(h) * x_mask

        b, c, t = x0.shape
        h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2)  # [b, cx?, t] -> [b, c, t, ?]

        unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
        unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
        unnormalized_derivatives = h[..., 2 * self.num_bins:]

        x1, logabsdet = piecewise_rational_quadratic_transform(x1,
            unnormalized_widths,
            unnormalized_heights,
            unnormalized_derivatives,
            inverse=reverse,
            tails='linear',
            tail_bound=self.tail_bound
        )

        x = torch.cat([x0, x1], 1) * x_mask
        logdet = torch.sum(logabsdet * x_mask, [1, 2])
        if not reverse:
            return x, logdet
        else:
            return x
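For reference, the DDSConv block above is built on depthwise-separable convolutions. A minimal standalone illustration of one separable step (shapes are placeholders, not the model's real sizes): a per-channel convolution with groups=channels, followed by a 1x1 pointwise convolution that mixes channels.

import torch
from torch import nn

channels, kernel_size = 8, 3
depthwise = nn.Conv1d(channels, channels, kernel_size,
                      groups=channels, padding=kernel_size // 2)  # per-channel filtering
pointwise = nn.Conv1d(channels, channels, 1)                      # 1x1 channel mixing
x = torch.randn(2, channels, 16)
y = pointwise(depthwise(x))  # shape preserved: [2, 8, 16]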
spaces/Enterprisium/Easy_GUI/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
DELETED
@@ -1,97 +0,0 @@
from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
import parselmouth
import numpy as np


class PMF0Predictor(F0Predictor):
    def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
        self.hop_length = hop_length
        self.f0_min = f0_min
        self.f0_max = f0_max
        self.sampling_rate = sampling_rate

    def interpolate_f0(self, f0):
        """
        Interpolate the F0 contour across unvoiced frames.
        """

        data = np.reshape(f0, (f0.size, 1))

        vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
        vuv_vector[data > 0.0] = 1.0
        vuv_vector[data <= 0.0] = 0.0

        ip_data = data

        frame_number = data.size
        last_value = 0.0
        for i in range(frame_number):
            if data[i] <= 0.0:
                j = i + 1
                for j in range(i + 1, frame_number):
                    if data[j] > 0.0:
                        break
                if j < frame_number - 1:
                    if last_value > 0.0:
                        step = (data[j] - data[i - 1]) / float(j - i)
                        for k in range(i, j):
                            ip_data[k] = data[i - 1] + step * (k - i + 1)
                    else:
                        for k in range(i, j):
                            ip_data[k] = data[j]
                else:
                    for k in range(i, frame_number):
                        ip_data[k] = last_value
            else:
                ip_data[i] = data[i]  # this copy may be unnecessary
                last_value = data[i]

        return ip_data[:, 0], vuv_vector[:, 0]

    def compute_f0(self, wav, p_len=None):
        x = wav
        if p_len is None:
            p_len = x.shape[0] // self.hop_length
        else:
            assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
        time_step = self.hop_length / self.sampling_rate * 1000
        f0 = (
            parselmouth.Sound(x, self.sampling_rate)
            .to_pitch_ac(
                time_step=time_step / 1000,
                voicing_threshold=0.6,
                pitch_floor=self.f0_min,
                pitch_ceiling=self.f0_max,
            )
            .selected_array["frequency"]
        )

        pad_size = (p_len - len(f0) + 1) // 2
        if pad_size > 0 or p_len - len(f0) - pad_size > 0:
            f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
        f0, uv = self.interpolate_f0(f0)
        return f0

    def compute_f0_uv(self, wav, p_len=None):
        x = wav
        if p_len is None:
            p_len = x.shape[0] // self.hop_length
        else:
            assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
        time_step = self.hop_length / self.sampling_rate * 1000
        f0 = (
            parselmouth.Sound(x, self.sampling_rate)
            .to_pitch_ac(
                time_step=time_step / 1000,
                voicing_threshold=0.6,
                pitch_floor=self.f0_min,
                pitch_ceiling=self.f0_max,
            )
            .selected_array["frequency"]
        )

        pad_size = (p_len - len(f0) + 1) // 2
        if pad_size > 0 or p_len - len(f0) - pad_size > 0:
            f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
        f0, uv = self.interpolate_f0(f0)
        return f0, uv
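A minimal usage sketch for the predictor above, assuming parselmouth is installed and using random noise as a stand-in waveform:

import numpy as np

predictor = PMF0Predictor(hop_length=512, sampling_rate=44100)
wav = np.random.randn(44100).astype(np.float32)  # 1 second stand-in signal
f0, uv = predictor.compute_f0_uv(wav)            # per-hop F0 and voiced/unvoiced mask
print(f0.shape, uv.shape)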
spaces/EronSamez/RVC_HFmeu/lib/infer_pack/models_dml.py
DELETED
@@ -1,1124 +0,0 @@
-import math, pdb, os
-from time import time as ttime
-import torch
-from torch import nn
-from torch.nn import functional as F
-from lib.infer_pack import modules
-from lib.infer_pack import attentions
-from lib.infer_pack import commons
-from lib.infer_pack.commons import init_weights, get_padding
-from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
-from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm
-from lib.infer_pack.commons import init_weights
-import numpy as np
-from lib.infer_pack import commons
-
-
-class TextEncoder256(nn.Module):
-    def __init__(
-        self,
-        out_channels,
-        hidden_channels,
-        filter_channels,
-        n_heads,
-        n_layers,
-        kernel_size,
-        p_dropout,
-        f0=True,
-    ):
-        super().__init__()
-        self.out_channels = out_channels
-        self.hidden_channels = hidden_channels
-        self.filter_channels = filter_channels
-        self.n_heads = n_heads
-        self.n_layers = n_layers
-        self.kernel_size = kernel_size
-        self.p_dropout = p_dropout
-        self.emb_phone = nn.Linear(256, hidden_channels)
-        self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0 == True:
-            self.emb_pitch = nn.Embedding(256, hidden_channels)  # pitch 256
-        self.encoder = attentions.Encoder(
-            hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
-        )
-        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
-    def forward(self, phone, pitch, lengths):
-        if pitch == None:
-            x = self.emb_phone(phone)
-        else:
-            x = self.emb_phone(phone) + self.emb_pitch(pitch)
-        x = x * math.sqrt(self.hidden_channels)  # [b, t, h]
-        x = self.lrelu(x)
-        x = torch.transpose(x, 1, -1)  # [b, h, t]
-        x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
-            x.dtype
-        )
-        x = self.encoder(x * x_mask, x_mask)
-        stats = self.proj(x) * x_mask
-
-        m, logs = torch.split(stats, self.out_channels, dim=1)
-        return m, logs, x_mask
-
-
-class TextEncoder768(nn.Module):
-    def __init__(
-        self,
-        out_channels,
-        hidden_channels,
-        filter_channels,
-        n_heads,
-        n_layers,
-        kernel_size,
-        p_dropout,
-        f0=True,
-    ):
-        super().__init__()
-        self.out_channels = out_channels
-        self.hidden_channels = hidden_channels
-        self.filter_channels = filter_channels
-        self.n_heads = n_heads
-        self.n_layers = n_layers
-        self.kernel_size = kernel_size
-        self.p_dropout = p_dropout
-        self.emb_phone = nn.Linear(768, hidden_channels)
-        self.lrelu = nn.LeakyReLU(0.1, inplace=True)
-        if f0 == True:
-            self.emb_pitch = nn.Embedding(256, hidden_channels)  # pitch 256
-        self.encoder = attentions.Encoder(
-            hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout
-        )
-        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
-    def forward(self, phone, pitch, lengths):
-        if pitch == None:
-            x = self.emb_phone(phone)
-        else:
-            x = self.emb_phone(phone) + self.emb_pitch(pitch)
-        x = x * math.sqrt(self.hidden_channels)  # [b, t, h]
-        x = self.lrelu(x)
-        x = torch.transpose(x, 1, -1)  # [b, h, t]
-        x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to(
-            x.dtype
-        )
-        x = self.encoder(x * x_mask, x_mask)
-        stats = self.proj(x) * x_mask
-
-        m, logs = torch.split(stats, self.out_channels, dim=1)
-        return m, logs, x_mask
-
-
-class ResidualCouplingBlock(nn.Module):
-    def __init__(
-        self,
-        channels,
-        hidden_channels,
-        kernel_size,
-        dilation_rate,
-        n_layers,
-        n_flows=4,
-        gin_channels=0,
-    ):
-        super().__init__()
-        self.channels = channels
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
-        self.dilation_rate = dilation_rate
-        self.n_layers = n_layers
-        self.n_flows = n_flows
-        self.gin_channels = gin_channels
-
-        self.flows = nn.ModuleList()
-        for i in range(n_flows):
-            self.flows.append(
-                modules.ResidualCouplingLayer(
-                    channels,
-                    hidden_channels,
-                    kernel_size,
-                    dilation_rate,
-                    n_layers,
-                    gin_channels=gin_channels,
-                    mean_only=True,
-                )
-            )
-            self.flows.append(modules.Flip())
-
-    def forward(self, x, x_mask, g=None, reverse=False):
-        if not reverse:
-            for flow in self.flows:
-                x, _ = flow(x, x_mask, g=g, reverse=reverse)
-        else:
-            for flow in reversed(self.flows):
-                x = flow(x, x_mask, g=g, reverse=reverse)
-        return x
-
-    def remove_weight_norm(self):
-        for i in range(self.n_flows):
-            self.flows[i * 2].remove_weight_norm()
-
-
-class PosteriorEncoder(nn.Module):
-    def __init__(
-        self,
-        in_channels,
-        out_channels,
-        hidden_channels,
-        kernel_size,
-        dilation_rate,
-        n_layers,
-        gin_channels=0,
-    ):
-        super().__init__()
-        self.in_channels = in_channels
-        self.out_channels = out_channels
-        self.hidden_channels = hidden_channels
-        self.kernel_size = kernel_size
-        self.dilation_rate = dilation_rate
-        self.n_layers = n_layers
-        self.gin_channels = gin_channels
-
-        self.pre = nn.Conv1d(in_channels, hidden_channels, 1)
-        self.enc = modules.WN(
-            hidden_channels,
-            kernel_size,
-            dilation_rate,
-            n_layers,
-            gin_channels=gin_channels,
-        )
-        self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1)
-
-    def forward(self, x, x_lengths, g=None):
-        x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(
-            x.dtype
-        )
-        x = self.pre(x) * x_mask
-        x = self.enc(x, x_mask, g=g)
-        stats = self.proj(x) * x_mask
-        m, logs = torch.split(stats, self.out_channels, dim=1)
-        z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask
-        return z, m, logs, x_mask
-
-    def remove_weight_norm(self):
-        self.enc.remove_weight_norm()
-
-
-class Generator(torch.nn.Module):
-    def __init__(
-        self,
-        initial_channel,
-        resblock,
-        resblock_kernel_sizes,
-        resblock_dilation_sizes,
-        upsample_rates,
-        upsample_initial_channel,
-        upsample_kernel_sizes,
-        gin_channels=0,
-    ):
-        super(Generator, self).__init__()
-        self.num_kernels = len(resblock_kernel_sizes)
-        self.num_upsamples = len(upsample_rates)
-        self.conv_pre = Conv1d(
-            initial_channel, upsample_initial_channel, 7, 1, padding=3
-        )
-        resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
-        self.ups = nn.ModuleList()
-        for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
-            self.ups.append(
-                weight_norm(
-                    ConvTranspose1d(
-                        upsample_initial_channel // (2**i),
-                        upsample_initial_channel // (2 ** (i + 1)),
-                        k,
-                        u,
-                        padding=(k - u) // 2,
-                    )
-                )
-            )
-
-        self.resblocks = nn.ModuleList()
-        for i in range(len(self.ups)):
-            ch = upsample_initial_channel // (2 ** (i + 1))
-            for j, (k, d) in enumerate(
-                zip(resblock_kernel_sizes, resblock_dilation_sizes)
-            ):
-                self.resblocks.append(resblock(ch, k, d))
-
-        self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
-        self.ups.apply(init_weights)
-
-        if gin_channels != 0:
-            self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
-    def forward(self, x, g=None):
-        x = self.conv_pre(x)
-        if g is not None:
-            x = x + self.cond(g)
-
-        for i in range(self.num_upsamples):
-            x = F.leaky_relu(x, modules.LRELU_SLOPE)
-            x = self.ups[i](x)
-            xs = None
-            for j in range(self.num_kernels):
-                if xs is None:
-                    xs = self.resblocks[i * self.num_kernels + j](x)
-                else:
-                    xs += self.resblocks[i * self.num_kernels + j](x)
-            x = xs / self.num_kernels
-        x = F.leaky_relu(x)
-        x = self.conv_post(x)
-        x = torch.tanh(x)
-
-        return x
-
-    def remove_weight_norm(self):
-        for l in self.ups:
-            remove_weight_norm(l)
-        for l in self.resblocks:
-            l.remove_weight_norm()
-
-
-class SineGen(torch.nn.Module):
-    """Definition of sine generator
-    SineGen(samp_rate, harmonic_num = 0,
-            sine_amp = 0.1, noise_std = 0.003,
-            voiced_threshold = 0,
-            flag_for_pulse=False)
-    samp_rate: sampling rate in Hz
-    harmonic_num: number of harmonic overtones (default 0)
-    sine_amp: amplitude of sine waveform (default 0.1)
-    noise_std: std of Gaussian noise (default 0.003)
-    voiced_threshold: F0 threshold for U/V classification (default 0)
-    flag_for_pulse: this SineGen is used inside PulseGen (default False)
-    Note: when flag_for_pulse is True, the first time step of a voiced
-        segment is always sin(np.pi) or cos(0)
-    """
-
-    def __init__(
-        self,
-        samp_rate,
-        harmonic_num=0,
-        sine_amp=0.1,
-        noise_std=0.003,
-        voiced_threshold=0,
-        flag_for_pulse=False,
-    ):
-        super(SineGen, self).__init__()
-        self.sine_amp = sine_amp
-        self.noise_std = noise_std
-        self.harmonic_num = harmonic_num
-        self.dim = self.harmonic_num + 1
-        self.sampling_rate = samp_rate
-        self.voiced_threshold = voiced_threshold
-
-    def _f02uv(self, f0):
-        # generate uv signal
-        uv = torch.ones_like(f0)
-        uv = uv * (f0 > self.voiced_threshold)
-        return uv.float()
-
-    def forward(self, f0, upp):
-        """sine_tensor, uv = forward(f0)
-        input F0: tensor(batchsize=1, length, dim=1)
-            f0 for unvoiced steps should be 0
-        output sine_tensor: tensor(batchsize=1, length, dim)
-        output uv: tensor(batchsize=1, length, 1)
-        """
-        with torch.no_grad():
-            f0 = f0[:, None].transpose(1, 2)
-            f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device)
-            # fundamental component
-            f0_buf[:, :, 0] = f0[:, :, 0]
-            for idx in np.arange(self.harmonic_num):
-                f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * (
-                    idx + 2
-                )  # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic
-            rad_values = (f0_buf / self.sampling_rate) % 1  ### the % 1 means the per-harmonic products cannot be optimized away afterwards
-            rand_ini = torch.rand(
-                f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device
-            )
-            rand_ini[:, 0] = 0
-            rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
-            tmp_over_one = torch.cumsum(rad_values, 1)  # % 1  ##### taking % 1 here would keep the following cumsum from being optimized
-            tmp_over_one *= upp
-            tmp_over_one = F.interpolate(
-                tmp_over_one.transpose(2, 1),
-                scale_factor=upp,
-                mode="linear",
-                align_corners=True,
-            ).transpose(2, 1)
-            rad_values = F.interpolate(
-                rad_values.transpose(2, 1), scale_factor=upp, mode="nearest"
-            ).transpose(
-                2, 1
-            )  #######
-            tmp_over_one %= 1
-            tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0
-            cumsum_shift = torch.zeros_like(rad_values)
-            cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0
-            sine_waves = torch.sin(
-                torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi
-            )
-            sine_waves = sine_waves * self.sine_amp
-            uv = self._f02uv(f0)
-            uv = F.interpolate(
-                uv.transpose(2, 1), scale_factor=upp, mode="nearest"
-            ).transpose(2, 1)
-            noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
-            noise = noise_amp * torch.randn_like(sine_waves)
-            sine_waves = sine_waves * uv + noise
-        return sine_waves, uv, noise
-
-
-class SourceModuleHnNSF(torch.nn.Module):
-    """SourceModule for hn-nsf
-    SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
-                 add_noise_std=0.003, voiced_threshod=0)
-    sampling_rate: sampling_rate in Hz
-    harmonic_num: number of harmonic above F0 (default: 0)
-    sine_amp: amplitude of sine source signal (default: 0.1)
-    add_noise_std: std of additive Gaussian noise (default: 0.003)
-        note that amplitude of noise in unvoiced is decided
-        by sine_amp
-    voiced_threshold: threshold to set U/V given F0 (default: 0)
-    Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
-    F0_sampled (batchsize, length, 1)
-    Sine_source (batchsize, length, 1)
-    noise_source (batchsize, length 1)
-    uv (batchsize, length, 1)
-    """
-
-    def __init__(
-        self,
-        sampling_rate,
-        harmonic_num=0,
-        sine_amp=0.1,
-        add_noise_std=0.003,
-        voiced_threshod=0,
-        is_half=True,
-    ):
-        super(SourceModuleHnNSF, self).__init__()
-
-        self.sine_amp = sine_amp
-        self.noise_std = add_noise_std
-        self.is_half = is_half
-        # to produce sine waveforms
-        self.l_sin_gen = SineGen(
-            sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod
-        )
-
-        # to merge source harmonics into a single excitation
-        self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
-        self.l_tanh = torch.nn.Tanh()
-
-    def forward(self, x, upp=None):
-        sine_wavs, uv, _ = self.l_sin_gen(x, upp)
-        if self.is_half:
-            sine_wavs = sine_wavs.half()
-        sine_merge = self.l_tanh(self.l_linear(sine_wavs))
-        return sine_merge, None, None  # noise, uv
-
-
-class GeneratorNSF(torch.nn.Module):
-    def __init__(
-        self,
-        initial_channel,
-        resblock,
-        resblock_kernel_sizes,
-        resblock_dilation_sizes,
-        upsample_rates,
-        upsample_initial_channel,
-        upsample_kernel_sizes,
-        gin_channels,
-        sr,
-        is_half=False,
-    ):
-        super(GeneratorNSF, self).__init__()
-        self.num_kernels = len(resblock_kernel_sizes)
-        self.num_upsamples = len(upsample_rates)
-
-        self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates))
-        self.m_source = SourceModuleHnNSF(
-            sampling_rate=sr, harmonic_num=0, is_half=is_half
-        )
-        self.noise_convs = nn.ModuleList()
-        self.conv_pre = Conv1d(
-            initial_channel, upsample_initial_channel, 7, 1, padding=3
-        )
-        resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2
-
-        self.ups = nn.ModuleList()
-        for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
-            c_cur = upsample_initial_channel // (2 ** (i + 1))
-            self.ups.append(
-                weight_norm(
-                    ConvTranspose1d(
-                        upsample_initial_channel // (2**i),
-                        upsample_initial_channel // (2 ** (i + 1)),
-                        k,
-                        u,
-                        padding=(k - u) // 2,
-                    )
-                )
-            )
-            if i + 1 < len(upsample_rates):
-                stride_f0 = np.prod(upsample_rates[i + 1 :])
-                self.noise_convs.append(
-                    Conv1d(
-                        1,
-                        c_cur,
-                        kernel_size=stride_f0 * 2,
-                        stride=stride_f0,
-                        padding=stride_f0 // 2,
-                    )
-                )
-            else:
-                self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1))
-
-        self.resblocks = nn.ModuleList()
-        for i in range(len(self.ups)):
-            ch = upsample_initial_channel // (2 ** (i + 1))
-            for j, (k, d) in enumerate(
-                zip(resblock_kernel_sizes, resblock_dilation_sizes)
-            ):
-                self.resblocks.append(resblock(ch, k, d))
-
-        self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False)
-        self.ups.apply(init_weights)
-
-        if gin_channels != 0:
-            self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1)
-
-        self.upp = np.prod(upsample_rates)
-
-    def forward(self, x, f0, g=None):
-        har_source, noi_source, uv = self.m_source(f0, self.upp)
-        har_source = har_source.transpose(1, 2)
-        x = self.conv_pre(x)
-        if g is not None:
-            x = x + self.cond(g)
-
-        for i in range(self.num_upsamples):
-            x = F.leaky_relu(x, modules.LRELU_SLOPE)
-            x = self.ups[i](x)
-            x_source = self.noise_convs[i](har_source)
-            x = x + x_source
-            xs = None
-            for j in range(self.num_kernels):
-                if xs is None:
-                    xs = self.resblocks[i * self.num_kernels + j](x)
-                else:
-                    xs += self.resblocks[i * self.num_kernels + j](x)
-            x = xs / self.num_kernels
-        x = F.leaky_relu(x)
-        x = self.conv_post(x)
-        x = torch.tanh(x)
-        return x
-
-    def remove_weight_norm(self):
-        for l in self.ups:
-            remove_weight_norm(l)
-        for l in self.resblocks:
-            l.remove_weight_norm()
-
-
-sr2sr = {
-    "32k": 32000,
-    "40k": 40000,
-    "48k": 48000,
-}
-
-
-class SynthesizerTrnMs256NSFsid(nn.Module):
-    def __init__(
-        self,
-        spec_channels,
-        segment_size,
-        inter_channels,
-        hidden_channels,
-        filter_channels,
-        n_heads,
-        n_layers,
-        kernel_size,
-        p_dropout,
-        resblock,
-        resblock_kernel_sizes,
-        resblock_dilation_sizes,
-        upsample_rates,
-        upsample_initial_channel,
-        upsample_kernel_sizes,
-        spk_embed_dim,
-        gin_channels,
-        sr,
-        **kwargs
-    ):
-        super().__init__()
-        if type(sr) == type("strr"):
-            sr = sr2sr[sr]
-        self.spec_channels = spec_channels
-        self.inter_channels = inter_channels
-        self.hidden_channels = hidden_channels
-        self.filter_channels = filter_channels
-        self.n_heads = n_heads
-        self.n_layers = n_layers
-        self.kernel_size = kernel_size
-        self.p_dropout = p_dropout
-        self.resblock = resblock
-        self.resblock_kernel_sizes = resblock_kernel_sizes
-        self.resblock_dilation_sizes = resblock_dilation_sizes
-        self.upsample_rates = upsample_rates
-        self.upsample_initial_channel = upsample_initial_channel
-        self.upsample_kernel_sizes = upsample_kernel_sizes
-        self.segment_size = segment_size
-        self.gin_channels = gin_channels
-        # self.hop_length = hop_length#
-        self.spk_embed_dim = spk_embed_dim
-        self.enc_p = TextEncoder256(
-            inter_channels,
-            hidden_channels,
-            filter_channels,
-            n_heads,
-            n_layers,
-            kernel_size,
-            p_dropout,
-        )
-        self.dec = GeneratorNSF(
-            inter_channels,
-            resblock,
-            resblock_kernel_sizes,
-            resblock_dilation_sizes,
-            upsample_rates,
-            upsample_initial_channel,
-            upsample_kernel_sizes,
-            gin_channels=gin_channels,
-            sr=sr,
-            is_half=kwargs["is_half"],
-        )
-        self.enc_q = PosteriorEncoder(
-            spec_channels,
-            inter_channels,
-            hidden_channels,
-            5,
-            1,
-            16,
-            gin_channels=gin_channels,
-        )
-        self.flow = ResidualCouplingBlock(
-            inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
-        )
-        self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
-        print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
-    def remove_weight_norm(self):
-        self.dec.remove_weight_norm()
-        self.flow.remove_weight_norm()
-        self.enc_q.remove_weight_norm()
-
-    def forward(
-        self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
-    ):  # ds is the speaker id here, [bs, 1]
-        # print(1,pitch.shape)#[bs,t]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]  ## the 1 is t, broadcast over time
-        m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
-        z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
-        z_p = self.flow(z, y_mask, g=g)
-        z_slice, ids_slice = commons.rand_slice_segments(
-            z, y_lengths, self.segment_size
-        )
-        # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
-        pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
-        # print(-2,pitchf.shape,z_slice.shape)
-        o = self.dec(z_slice, pitchf, g=g)
-        return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
-    def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
-        g = self.emb_g(sid).unsqueeze(-1)
-        m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
-        z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
-        z = self.flow(z_p, x_mask, g=g, reverse=True)
-        o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
-        return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs768NSFsid(nn.Module):
-    def __init__(
-        self,
-        spec_channels,
-        segment_size,
-        inter_channels,
-        hidden_channels,
-        filter_channels,
-        n_heads,
-        n_layers,
-        kernel_size,
-        p_dropout,
-        resblock,
-        resblock_kernel_sizes,
-        resblock_dilation_sizes,
-        upsample_rates,
-        upsample_initial_channel,
-        upsample_kernel_sizes,
-        spk_embed_dim,
-        gin_channels,
-        sr,
-        **kwargs
-    ):
-        super().__init__()
-        if type(sr) == type("strr"):
-            sr = sr2sr[sr]
-        self.spec_channels = spec_channels
-        self.inter_channels = inter_channels
-        self.hidden_channels = hidden_channels
-        self.filter_channels = filter_channels
-        self.n_heads = n_heads
-        self.n_layers = n_layers
-        self.kernel_size = kernel_size
-        self.p_dropout = p_dropout
-        self.resblock = resblock
-        self.resblock_kernel_sizes = resblock_kernel_sizes
-        self.resblock_dilation_sizes = resblock_dilation_sizes
-        self.upsample_rates = upsample_rates
-        self.upsample_initial_channel = upsample_initial_channel
-        self.upsample_kernel_sizes = upsample_kernel_sizes
-        self.segment_size = segment_size
-        self.gin_channels = gin_channels
-        # self.hop_length = hop_length#
-        self.spk_embed_dim = spk_embed_dim
-        self.enc_p = TextEncoder768(
-            inter_channels,
-            hidden_channels,
-            filter_channels,
-            n_heads,
-            n_layers,
-            kernel_size,
-            p_dropout,
-        )
-        self.dec = GeneratorNSF(
-            inter_channels,
-            resblock,
-            resblock_kernel_sizes,
-            resblock_dilation_sizes,
-            upsample_rates,
-            upsample_initial_channel,
-            upsample_kernel_sizes,
-            gin_channels=gin_channels,
-            sr=sr,
-            is_half=kwargs["is_half"],
-        )
-        self.enc_q = PosteriorEncoder(
-            spec_channels,
-            inter_channels,
-            hidden_channels,
-            5,
-            1,
-            16,
-            gin_channels=gin_channels,
-        )
-        self.flow = ResidualCouplingBlock(
-            inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
-        )
-        self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
-        print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
-    def remove_weight_norm(self):
-        self.dec.remove_weight_norm()
-        self.flow.remove_weight_norm()
-        self.enc_q.remove_weight_norm()
-
-    def forward(
-        self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds
-    ):  # ds is the speaker id here, [bs, 1]
-        # print(1,pitch.shape)#[bs,t]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]  ## the 1 is t, broadcast over time
-        m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
-        z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
-        z_p = self.flow(z, y_mask, g=g)
-        z_slice, ids_slice = commons.rand_slice_segments(
-            z, y_lengths, self.segment_size
-        )
-        # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length)
-        pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size)
-        # print(-2,pitchf.shape,z_slice.shape)
-        o = self.dec(z_slice, pitchf, g=g)
-        return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
-    def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None):
-        g = self.emb_g(sid).unsqueeze(-1)
-        m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths)
-        z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
-        z = self.flow(z_p, x_mask, g=g, reverse=True)
-        o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g)
-        return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs256NSFsid_nono(nn.Module):
-    def __init__(
-        self,
-        spec_channels,
-        segment_size,
-        inter_channels,
-        hidden_channels,
-        filter_channels,
-        n_heads,
-        n_layers,
-        kernel_size,
-        p_dropout,
-        resblock,
-        resblock_kernel_sizes,
-        resblock_dilation_sizes,
-        upsample_rates,
-        upsample_initial_channel,
-        upsample_kernel_sizes,
-        spk_embed_dim,
-        gin_channels,
-        sr=None,
-        **kwargs
-    ):
-        super().__init__()
-        self.spec_channels = spec_channels
-        self.inter_channels = inter_channels
-        self.hidden_channels = hidden_channels
-        self.filter_channels = filter_channels
-        self.n_heads = n_heads
-        self.n_layers = n_layers
-        self.kernel_size = kernel_size
-        self.p_dropout = p_dropout
-        self.resblock = resblock
-        self.resblock_kernel_sizes = resblock_kernel_sizes
-        self.resblock_dilation_sizes = resblock_dilation_sizes
-        self.upsample_rates = upsample_rates
-        self.upsample_initial_channel = upsample_initial_channel
-        self.upsample_kernel_sizes = upsample_kernel_sizes
-        self.segment_size = segment_size
-        self.gin_channels = gin_channels
-        # self.hop_length = hop_length#
-        self.spk_embed_dim = spk_embed_dim
-        self.enc_p = TextEncoder256(
-            inter_channels,
-            hidden_channels,
-            filter_channels,
-            n_heads,
-            n_layers,
-            kernel_size,
-            p_dropout,
-            f0=False,
-        )
-        self.dec = Generator(
-            inter_channels,
-            resblock,
-            resblock_kernel_sizes,
-            resblock_dilation_sizes,
-            upsample_rates,
-            upsample_initial_channel,
-            upsample_kernel_sizes,
-            gin_channels=gin_channels,
-        )
-        self.enc_q = PosteriorEncoder(
-            spec_channels,
-            inter_channels,
-            hidden_channels,
-            5,
-            1,
-            16,
-            gin_channels=gin_channels,
-        )
-        self.flow = ResidualCouplingBlock(
-            inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
-        )
-        self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
-        print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
-    def remove_weight_norm(self):
-        self.dec.remove_weight_norm()
-        self.flow.remove_weight_norm()
-        self.enc_q.remove_weight_norm()
-
-    def forward(self, phone, phone_lengths, y, y_lengths, ds):  # ds is the speaker id here, [bs, 1]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]  ## the 1 is t, broadcast over time
-        m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
-        z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
-        z_p = self.flow(z, y_mask, g=g)
-        z_slice, ids_slice = commons.rand_slice_segments(
-            z, y_lengths, self.segment_size
-        )
-        o = self.dec(z_slice, g=g)
-        return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
-    def infer(self, phone, phone_lengths, sid, max_len=None):
-        g = self.emb_g(sid).unsqueeze(-1)
-        m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
-        z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
-        z = self.flow(z_p, x_mask, g=g, reverse=True)
-        o = self.dec((z * x_mask)[:, :, :max_len], g=g)
-        return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class SynthesizerTrnMs768NSFsid_nono(nn.Module):
-    def __init__(
-        self,
-        spec_channels,
-        segment_size,
-        inter_channels,
-        hidden_channels,
-        filter_channels,
-        n_heads,
-        n_layers,
-        kernel_size,
-        p_dropout,
-        resblock,
-        resblock_kernel_sizes,
-        resblock_dilation_sizes,
-        upsample_rates,
-        upsample_initial_channel,
-        upsample_kernel_sizes,
-        spk_embed_dim,
-        gin_channels,
-        sr=None,
-        **kwargs
-    ):
-        super().__init__()
-        self.spec_channels = spec_channels
-        self.inter_channels = inter_channels
-        self.hidden_channels = hidden_channels
-        self.filter_channels = filter_channels
-        self.n_heads = n_heads
-        self.n_layers = n_layers
-        self.kernel_size = kernel_size
-        self.p_dropout = p_dropout
-        self.resblock = resblock
-        self.resblock_kernel_sizes = resblock_kernel_sizes
-        self.resblock_dilation_sizes = resblock_dilation_sizes
-        self.upsample_rates = upsample_rates
-        self.upsample_initial_channel = upsample_initial_channel
-        self.upsample_kernel_sizes = upsample_kernel_sizes
-        self.segment_size = segment_size
-        self.gin_channels = gin_channels
-        # self.hop_length = hop_length#
-        self.spk_embed_dim = spk_embed_dim
-        self.enc_p = TextEncoder768(
-            inter_channels,
-            hidden_channels,
-            filter_channels,
-            n_heads,
-            n_layers,
-            kernel_size,
-            p_dropout,
-            f0=False,
-        )
-        self.dec = Generator(
-            inter_channels,
-            resblock,
-            resblock_kernel_sizes,
-            resblock_dilation_sizes,
-            upsample_rates,
-            upsample_initial_channel,
-            upsample_kernel_sizes,
-            gin_channels=gin_channels,
-        )
-        self.enc_q = PosteriorEncoder(
-            spec_channels,
-            inter_channels,
-            hidden_channels,
-            5,
-            1,
-            16,
-            gin_channels=gin_channels,
-        )
-        self.flow = ResidualCouplingBlock(
-            inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels
-        )
-        self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels)
-        print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim)
-
-    def remove_weight_norm(self):
-        self.dec.remove_weight_norm()
-        self.flow.remove_weight_norm()
-        self.enc_q.remove_weight_norm()
-
-    def forward(self, phone, phone_lengths, y, y_lengths, ds):  # ds is the speaker id here, [bs, 1]
-        g = self.emb_g(ds).unsqueeze(-1)  # [b, 256, 1]  ## the 1 is t, broadcast over time
-        m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
-        z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g)
-        z_p = self.flow(z, y_mask, g=g)
-        z_slice, ids_slice = commons.rand_slice_segments(
-            z, y_lengths, self.segment_size
-        )
-        o = self.dec(z_slice, g=g)
-        return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q)
-
-    def infer(self, phone, phone_lengths, sid, max_len=None):
-        g = self.emb_g(sid).unsqueeze(-1)
-        m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths)
-        z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask
-        z = self.flow(z_p, x_mask, g=g, reverse=True)
-        o = self.dec((z * x_mask)[:, :, :max_len], g=g)
-        return o, x_mask, (z, z_p, m_p, logs_p)
-
-
-class MultiPeriodDiscriminator(torch.nn.Module):
-    def __init__(self, use_spectral_norm=False):
-        super(MultiPeriodDiscriminator, self).__init__()
-        periods = [2, 3, 5, 7, 11, 17]
-        # periods = [3, 5, 7, 11, 17, 23, 37]
-
-        discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
-        discs = discs + [
-            DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
-        ]
-        self.discriminators = nn.ModuleList(discs)
-
-    def forward(self, y, y_hat):
-        y_d_rs = []  #
-        y_d_gs = []
-        fmap_rs = []
-        fmap_gs = []
-        for i, d in enumerate(self.discriminators):
-            y_d_r, fmap_r = d(y)
-            y_d_g, fmap_g = d(y_hat)
-            # for j in range(len(fmap_r)):
-            #     print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
-            y_d_rs.append(y_d_r)
-            y_d_gs.append(y_d_g)
-            fmap_rs.append(fmap_r)
-            fmap_gs.append(fmap_g)
-
-        return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class MultiPeriodDiscriminatorV2(torch.nn.Module):
-    def __init__(self, use_spectral_norm=False):
-        super(MultiPeriodDiscriminatorV2, self).__init__()
-        # periods = [2, 3, 5, 7, 11, 17]
-        periods = [2, 3, 5, 7, 11, 17, 23, 37]
-
-        discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)]
-        discs = discs + [
-            DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods
-        ]
-        self.discriminators = nn.ModuleList(discs)
-
-    def forward(self, y, y_hat):
-        y_d_rs = []  #
-        y_d_gs = []
-        fmap_rs = []
-        fmap_gs = []
-        for i, d in enumerate(self.discriminators):
-            y_d_r, fmap_r = d(y)
-            y_d_g, fmap_g = d(y_hat)
-            # for j in range(len(fmap_r)):
-            #     print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape)
-            y_d_rs.append(y_d_r)
-            y_d_gs.append(y_d_g)
-            fmap_rs.append(fmap_r)
-            fmap_gs.append(fmap_g)
-
-        return y_d_rs, y_d_gs, fmap_rs, fmap_gs
-
-
-class DiscriminatorS(torch.nn.Module):
-    def __init__(self, use_spectral_norm=False):
-        super(DiscriminatorS, self).__init__()
-        norm_f = weight_norm if use_spectral_norm == False else spectral_norm
-        self.convs = nn.ModuleList(
-            [
-                norm_f(Conv1d(1, 16, 15, 1, padding=7)),
-                norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)),
-                norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)),
-                norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)),
-                norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)),
-                norm_f(Conv1d(1024, 1024, 5, 1, padding=2)),
-            ]
-        )
-        self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1))
-
-    def forward(self, x):
-        fmap = []
-
-        for l in self.convs:
-            x = l(x)
-            x = F.leaky_relu(x, modules.LRELU_SLOPE)
-            fmap.append(x)
-        x = self.conv_post(x)
-        fmap.append(x)
-        x = torch.flatten(x, 1, -1)
-
-        return x, fmap
-
-
-class DiscriminatorP(torch.nn.Module):
-    def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False):
-        super(DiscriminatorP, self).__init__()
-        self.period = period
-        self.use_spectral_norm = use_spectral_norm
-        norm_f = weight_norm if use_spectral_norm == False else spectral_norm
-        self.convs = nn.ModuleList(
-            [
-                norm_f(
-                    Conv2d(
-                        1,
-                        32,
-                        (kernel_size, 1),
-                        (stride, 1),
-                        padding=(get_padding(kernel_size, 1), 0),
-                    )
-                ),
-                norm_f(
-                    Conv2d(
-                        32,
-                        128,
-                        (kernel_size, 1),
-                        (stride, 1),
-                        padding=(get_padding(kernel_size, 1), 0),
-                    )
-                ),
-                norm_f(
-                    Conv2d(
-                        128,
-                        512,
-                        (kernel_size, 1),
-                        (stride, 1),
-                        padding=(get_padding(kernel_size, 1), 0),
-                    )
-                ),
-                norm_f(
-                    Conv2d(
-                        512,
-                        1024,
-                        (kernel_size, 1),
-                        (stride, 1),
-                        padding=(get_padding(kernel_size, 1), 0),
-                    )
-                ),
-                norm_f(
-                    Conv2d(
-                        1024,
-                        1024,
-                        (kernel_size, 1),
-                        1,
-                        padding=(get_padding(kernel_size, 1), 0),
-                    )
-                ),
-            ]
-        )
-        self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0)))
-
-    def forward(self, x):
-        fmap = []
-
-        # 1d to 2d
-        b, c, t = x.shape
-        if t % self.period != 0:  # pad first
-            n_pad = self.period - (t % self.period)
-            x = F.pad(x, (0, n_pad), "reflect")
-            t = t + n_pad
-        x = x.view(b, c, t // self.period, self.period)
-
-        for l in self.convs:
-            x = l(x)
-            x = F.leaky_relu(x, modules.LRELU_SLOPE)
-            fmap.append(x)
-        x = self.conv_post(x)
-        fmap.append(x)
-        x = torch.flatten(x, 1, -1)
-
-        return x, fmap
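
The deleted module above is self-contained enough to smoke-test. The sketch below instantiates SynthesizerTrnMs768NSFsid and runs infer on random inputs. It assumes lib.infer_pack is importable from the repo root; the hyperparameters are hypothetical stand-ins in the style of a common RVC v2 40k config, not values taken from this commit.

import torch

from lib.infer_pack.models_dml import SynthesizerTrnMs768NSFsid

# Hypothetical hyperparameters; the real values live in the repo's config
# JSONs, so treat everything below as placeholders for illustration.
net_g = SynthesizerTrnMs768NSFsid(
    spec_channels=1025,
    segment_size=32,
    inter_channels=192,
    hidden_channels=192,
    filter_channels=768,
    n_heads=2,
    n_layers=6,
    kernel_size=3,
    p_dropout=0,
    resblock="1",
    resblock_kernel_sizes=[3, 7, 11],
    resblock_dilation_sizes=[[1, 3, 5], [1, 3, 5], [1, 3, 5]],
    upsample_rates=[10, 10, 2, 2],
    upsample_initial_channel=512,
    upsample_kernel_sizes=[16, 16, 4, 4],
    spk_embed_dim=109,
    gin_channels=256,
    sr="40k",
    is_half=False,  # keeps SourceModuleHnNSF in fp32 so this runs on CPU
).eval()

T = 100  # frames of content features
phone = torch.randn(1, T, 768)             # HuBERT-like features for TextEncoder768
phone_lengths = torch.LongTensor([T])
pitch = torch.randint(1, 256, (1, T))      # coarse pitch ids for the 256-way embedding
nsff0 = torch.rand(1, T) * 200.0 + 100.0   # f0 in Hz driving the NSF sine source
sid = torch.LongTensor([0])                # speaker id looked up by emb_g
with torch.no_grad():
    audio, x_mask, _ = net_g.infer(phone, phone_lengths, pitch, nsff0, sid)
print(audio.shape)  # (1, 1, T * prod(upsample_rates)) = (1, 1, 40000)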