parquet-converter committed
Commit 878e5b1 · 1 Parent(s): b9cb6cc

Update parquet files (step 78 of 397)

This view is limited to 50 files because it contains too many changes. See the raw diff for the complete changeset.
Files changed (50)
  1. spaces/1gistliPinn/ChatGPT4/Examples/Bad Piggies 1.3.0 Crack !!TOP!! [PC] Hack Tooll.md +0 -6
  2. spaces/1gistliPinn/ChatGPT4/Examples/DOWNLOAD ADOBE ILLUSTRATOR CC 2019 64-BIT 23.1 PRE-ACTIVATED High Quality.md +0 -90
  3. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Dawn AI APK The Most Powerful App for Selfies Portraits and Headshots.md +0 -112
  4. spaces/1phancelerku/anime-remove-background/Download Driving Zone Russia Mod APK with Unlimited Coins and Cash.md +0 -149
  5. spaces/1phancelerku/anime-remove-background/Download Plants vs Zombies 2 for PC and Protect Your Garden from the Undead.md +0 -53
  6. spaces/2ndelement/voicevox/voicevox_engine/kana_parser.py +0 -146
  7. spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/GenerSpeech/model/prosody_util.py +0 -385
  8. spaces/AIGC-Audio/Make_An_Audio_inpaint/README.md +0 -12
  9. spaces/ASJMO/freegpt/g4f/Provider/Providers/Bard.py +0 -74
  10. spaces/AbeShinzo0708/AI_Kishida_Fumio_speaker/pre-fix/librosa/__init__.py +0 -10
  11. spaces/Abhilashvj/planogram-compliance/classify/predict.py +0 -345
  12. spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Vitalentum.py +0 -69
  13. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/customshapes/CustomShapes.d.ts +0 -2
  14. spaces/AjulorC/question_answering_bot_deployed_with_Gradio/README.md +0 -12
  15. spaces/Aki004/herta-so-vits/modules/__init__.py +0 -0
  16. spaces/Akmyradov/TurkmenTTSweSTT/vits/train_ms.py +0 -294
  17. spaces/Alpaca233/SadTalker/src/test_audio2coeff.py +0 -123
  18. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_flax_objects.py +0 -197
  19. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/pndm/__init__.py +0 -0
  20. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_image_variation.py +0 -314
  21. spaces/Andy1621/uniformer_image_detection/configs/groie/mask_rcnn_r50_fpn_syncbn-backbone_r4_gcb_c3-c5_groie_1x_coco.py +0 -45
  22. spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/apcnet_r50-d8.py +0 -44
  23. spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r101-d8_512x512_160k_ade20k.py +0 -2
  24. spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/stable_diffusion_video/stable_diffusion_pipeline.py +0 -848
  25. spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cuda.h +0 -33
  26. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/metadata/_json.py +0 -84
  27. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_pick.py +0 -17
  28. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/py36compat.py +0 -134
  29. spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ.py +0 -29
  30. spaces/BREWDAcademy/Brewd-Diffusion/style.css +0 -77
  31. spaces/Benson/text-generation/Examples/Coche De Carreras Juego De Configuracin Para Pc Windows 7.md +0 -85
  32. spaces/Benson/text-generation/Examples/Descargar Entre Nosotros Gamejolt.md +0 -58
  33. spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/langhungarianmodel.py +0 -0
  34. spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/build_ext.py +0 -383
  35. spaces/BigSalmon/AbstractTwst/README.md +0 -12
  36. spaces/Bingsu/color_textual_inversion/app.py +0 -128
  37. spaces/BridgeEight/internlm-20B-chat-w4-turbomind/app.py +0 -128
  38. spaces/CVPR/LIVE/thrust/thrust/host_vector.h +0 -514
  39. spaces/CVPR/ml-talking-face/app.py +0 -202
  40. spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/resources/admin/index.css +0 -75
  41. spaces/CikeyQI/meme-api/meme_generator/memes/dianzhongdian/__init__.py +0 -65
  42. spaces/CikeyQI/meme-api/meme_generator/memes/look_flat/__init__.py +0 -58
  43. spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/registry.py +0 -12
  44. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/voltLib/ast.py +0 -448
  45. spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Column-2853eb31.css +0 -1
  46. spaces/Daextream/Whisper-Auto-Subtitled-Video-Generator/pages/02_📼_Upload_Video_File.py +0 -230
  47. spaces/DemoLou/moe-tts/modules.py +0 -390
  48. spaces/Devaholic/fruit-demo/README.md +0 -12
  49. spaces/Dinoking/Guccio-AI-Designer/models/stylegan2/stylegan2-pytorch/convert_weight.py +0 -283
  50. spaces/DragGan/DragGan/stylegan_human/docs/Dataset.md +0 -74
spaces/1gistliPinn/ChatGPT4/Examples/Bad Piggies 1.3.0 Crack !!TOP!! [PC] Hack Tooll.md DELETED
@@ -1,6 +0,0 @@
1
- <h2>Bad Piggies 1.3.0 Crack [PC] Hack Tooll</h2><br /><p><b><b>Download File</b> &#9734;&#9734;&#9734;&#9734;&#9734; <a href="https://imgfil.com/2uxYLS">https://imgfil.com/2uxYLS</a></b></p><br /><br />
2
-
3
- The Renegades Of Orion 2.0 Download For Pc [addons]l · Download Film The ... Previous · Bad Piggies 1.3.0 Crack [PC] Hack Tooll · Next. 1fdad05405<br />
4
- <br />
5
- <br />
6
- <p></p>
spaces/1gistliPinn/ChatGPT4/Examples/DOWNLOAD ADOBE ILLUSTRATOR CC 2019 64-BIT 23.1 PRE-ACTIVATED High Quality.md DELETED
@@ -1,90 +0,0 @@
1
-
2
- <h1>How to Download Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated</h1>
3
- <p>If you are looking for a way to download Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated, you have come to the right place. In this article, we will show you how to get this powerful vector graphics software for free, without any crack or patch needed.</p>
4
- <p>Adobe Illustrator CC 2019 is the industry-standard vector graphics app that lets you create logos, icons, drawings, typography, and illustrations for print, web, video, and mobile. It has all the drawing tools you need to turn simple shapes and colors into sophisticated logos, icons, and graphics. It also has amazing typography features that let you create stunning text designs. You can also use Illustrator to create freehand drawings, trace and recolor imported photos, and use your illustrations anywhere.</p>
5
- <h2>DOWNLOAD ADOBE ILLUSTRATOR CC 2019 64-BIT 23.1 PRE-ACTIVATED</h2><br /><p><b><b>Download File</b> &raquo; <a href="https://imgfil.com/2uy0NS">https://imgfil.com/2uy0NS</a></b></p><br /><br />
6
- <h2>Why Download Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated?</h2>
7
- <p>There are many reasons why you might want to download Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated. Here are some of them:</p>
8
- <ul>
9
- <li>You can save money by not paying for a subscription or a license.</li>
10
- <li>You can use the software offline without any internet connection.</li>
11
- <li>You can enjoy all the features and updates of the latest version of Illustrator.</li>
12
- <li>You can avoid any virus or malware that might come with cracked or patched versions.</li>
13
- <li>You can install and use the software easily and quickly.</li>
14
- </ul>
15
- <h2>How to Download Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated?</h2>
16
- <p>The process of downloading Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated is very simple and straightforward. Here are the steps you need to follow:</p>
17
- <p></p>
18
- <ol>
19
- <li>Click on the link below to download the software from Google Drive.</li>
20
- <li>Extract the zip file using WinRAR or any other software.</li>
21
- <li>Double click on the installer and wait for the installation completed notification.</li>
22
- <li>The software will activate itself with built in crack, no additional cracking or patching needed.</li>
23
- <li>Launch the software from the start menu or taskbar and enjoy!</li>
24
- </ol>
25
- <p><a href="https://drive.google.com/drive/folders/1Ov-bpDL0YPrUweIHvlxe2Gnf_W8tcTZg">Download Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated</a></p>
26
- <h2>Conclusion</h2>
27
- <p>Adobe Illustrator CC 2019 is a great software for creating vector graphics and typography. You can download it for free from the link above and use it without any hassle. We hope this article was helpful and informative for you. If you have any questions or problems, feel free to leave a comment below or contact us through our website.</p>
28
- <h2>What are the Features of Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated?</h2>
29
- <p>Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated is not only free to download and use, but also comes with many amazing features that make it a powerful vector graphics software. Here are some of the features that you can enjoy with this version:</p>
30
- <ul>
31
- <li>Freeform Gradients: This feature allows you to create natural and photorealistic gradients by dropping points of color on your object. You can blend multiple colors and create complex color transitions that look stunning.</li>
32
- <li>Global Editing: This feature allows you to edit similar objects across multiple artboards at once. You can change colors, shapes, rotation, and more with just a few clicks. This saves you time and ensures consistency in your design.</li>
33
- <li>Customizable Toolbar: This feature allows you to organize your workspace the way you want it. You can add or remove tools from the toolbar and group them according to your preference. You can also access Adobe Fonts directly from the toolbar and preview different fonts in your design.</li>
34
- <li>Content-Aware Crop: This feature uses Adobe Sensei technology to provide suggested crops for your images. You can crop your images without losing any important details or content.</li>
35
- <li>Puppet Warp Enhancement: This feature also uses Adobe Sensei technology to automatically suggest pins for your objects. You can use these pins to warp and transform your objects in a realistic way.</li>
36
- </ul>
37
- <h2>How to Use Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated?</h2>
38
- <p>Using Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated is easy and fun. You can create any kind of vector graphics and typography with this software. Here are some tips and tricks to help you get started:</p>
39
- <ol>
40
- <li>Create a new document or open an existing one from the File menu.</li>
41
- <li>Select a tool from the toolbar or use the touch shortcuts on your screen.</li>
42
- <li>Draw, edit, and manipulate your objects on the artboard using the tool options and panels.</li>
43
- <li>Add effects, styles, and text to your objects using the Appearance panel and the Type tool.</li>
44
- <li>Export or save your artwork in various formats from the File menu.</li>
45
- </ol>
46
- <h2>Conclusion</h2>
47
- <p>Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated is a great software for creating vector graphics and typography. You can download it for free from the link above and use it without any hassle. We hope this article was helpful and informative for you. If you have any questions or problems, feel free to leave a comment below or contact us through our website.</p>
48
- <h2>How to Learn Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated?</h2>
49
- <p>If you want to master Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated, you need to learn how to use its features and tools effectively. Fortunately, there are many resources available online that can help you learn Illustrator at your own pace and level. Here are some of the best ones:</p>
50
- <ul>
51
- <li>Adobe Illustrator Tutorials: This is the official website of Adobe that offers hundreds of tutorials for beginners, intermediate, and advanced users. You can learn the basics, new features, tips and techniques, and more. You can also watch video tutorials, hands-on projects, and live streams from experts.</li>
52
- <li>View All Adobe Illustrator Tutorials: This is another website by Adobe that shows you all the tutorials available for Illustrator. You can browse by topic, skill level, or product version. You can also filter by type, such as video, article, or project.</li>
53
- <li>Illustrator User Guide: This is the comprehensive online manual for Illustrator that covers everything you need to know about the software. You can read about the features, functions, workflows, and best practices of Illustrator. You can also find troubleshooting tips and answers to frequently asked questions.</li>
54
- </ul>
55
- <h2>What are the Benefits of Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated?</h2>
56
- <p>Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated is not only a powerful vector graphics software, but also a versatile and creative tool that can help you achieve your design goals. Here are some of the benefits of using this software:</p>
57
- <ul>
58
- <li>You can create stunning vector graphics that are scalable, editable, and resolution-independent.</li>
59
- <li>You can design logos, icons, illustrations, typography, and more for any kind of project.</li>
60
- <li>You can use a variety of tools and effects to enhance your artwork and express your style.</li>
61
- <li>You can work with multiple artboards, layers, masks, guides, and grids to organize your content and layout.</li>
62
- <li>You can import and export your artwork in various formats and share it with others easily.</li>
63
- </ul>
64
- <h2>Conclusion</h2>
65
- <p>Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated is a great software for creating vector graphics and typography. You can download it for free from the link above and use it without any hassle. We hope this article was helpful and informative for you. If you have any questions or problems, feel free to leave a comment below or contact us through our website.</p>
66
- <h2>What are the Reviews of Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated?</h2>
67
- <p>Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated has received many positive reviews from users and experts alike. Here are some of the highlights of what they have to say about this software:</p>
68
- <ul>
69
- <li>PCMag gave Illustrator a rating of 4.5 out of 5 stars, praising its vector design tools, touch type feature, free transform tool, puppet warp feature, and Adobe Fonts integration. They also noted that Illustrator is "the best vector-graphics editing program around, and it just keeps getting better."</li>
70
- <li>Creative Bloq gave Illustrator a rating of 4 out of 5 stars, commending its new 3D functions, elegant vector editing, and cloud teamwork. They also noted that Illustrator is "the standard for a reason – it’s the best."</li>
71
- <li>TrustRadius gave Illustrator a rating of 8.8 out of 10, based on 1,147 ratings from verified users. They highlighted its features such as freeform gradients, global editing, customizable toolbar, content-aware crop, and puppet warp enhancement. They also noted that Illustrator is "the best software for vector graphics and professional photographs."</li>
72
- <li>GetApp gave Illustrator a rating of 4.6 out of 5 stars, based on 1,012 reviews from verified users. They emphasized its features such as drawing tools, typography tools, artboards, effects, and export options. They also noted that Illustrator is "very expansive" and "a mainstay within my design workflow."</li>
73
- </ul>
74
- <h2>Conclusion</h2>
75
- <p>Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated is a great software for creating vector graphics and typography. You can download it for free from the link above and use it without any hassle. We hope this article was helpful and informative for you. If you have any questions or problems, feel free to leave a comment below or contact us through our website.</p>
76
- <h2>What are the Alternatives to Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated?</h2>
77
- <p>Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated is not the only software that can create and edit vector graphics. There are many other alternatives that you can try, depending on your needs and preferences. Here are some of the most popular ones:</p>
78
- <ul>
79
- <li>Inkscape: This is a free and open-source vector editor that runs on Windows, Mac, and Linux. It has many features similar to Illustrator, such as gradients, paths, shapes, text, filters, and more. It also supports SVG format and can import and export various file types.</li>
80
- <li>Affinity Designer: This is a paid vector editor that runs on Windows, Mac, and iPad. It has a sleek and intuitive interface and offers many features such as artboards, layers, symbols, styles, brushes, effects, and more. It also supports PSD format and can import and export various file types.</li>
81
- <li>CorelDRAW: This is a paid vector editor that runs on Windows and Mac. It has a long history and a loyal fan base in the graphic design industry. It offers many features such as drawing tools, typography tools, bitmap-to-vector tracing, photo editing, web graphics, and more. It also supports CDR format and can import and export various file types.</li>
82
- <li>Sketch: This is a paid vector editor that runs on Mac and iPad. It is mainly designed for UI/UX design and web design. It offers many features such as artboards, symbols, styles, plugins, prototyping, collaboration, and more. It also supports SKETCH format and can import and export various file types.</li>
83
- <li>Vectr: This is a free vector editor that runs on Windows, Mac, Linux, Chrome OS, and web browser. It is simple and easy to use for beginners and casual users. It offers basic features such as shapes, text, paths, filters, gradients, and more. It also supports PNG format and can import and export various file types.</li>
84
- </ul>
85
- <h2>Conclusion</h2>
86
- <p>Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated is a great software for creating vector graphics and typography. You can download it for free from the link above and use it without any hassle. We hope this article was helpful and informative for you. If you have any questions or problems, feel free to leave a comment below or contact us through our website.</p>
87
- <h2>Conclusion</h2>
88
- <p>Adobe Illustrator CC 2019 64-Bit 23.1 Pre-Activated is a great software for creating vector graphics and typography. You can download it for free from the link above and use it without any hassle. We hope this article was helpful and informative for you. If you have any questions or problems, feel free to leave a comment below or contact us through our website.</p> 3cee63e6c2<br />
89
- <br />
90
- <br />
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Dawn AI APK The Most Powerful App for Selfies Portraits and Headshots.md DELETED
@@ -1,112 +0,0 @@
1
- <br />
2
- <h1>Download Dawn AI APK Mod: How to Create Stunning Avatars with AI</h1>
3
- <p>Do you want to transform your selfies into amazing avatars using the latest AI technology? Do you want to explore endless styles and settings and generate fun and unique images with AI? Do you want to have unlimited access to all the features and styles of Dawn AI without paying anything? If you answered yes to any of these questions, then you should download Dawn AI APK Mod, a free avatars and stickers creator app that lets you create unlimited stylish avatars. It uses cutting-edge AI technology to convert your photos into exceptionally detailed sticker art.</p>
4
- <p>In this article, we will tell you what is Dawn AI APK Mod, how to download and install it, how to use it, what are the benefits of using it, and what are some alternatives to it. By the end of this article, you will be able to create stunning avatars with AI using Dawn AI APK Mod.</p>
5
- <h2>download dawn ai apk mod</h2><br /><p><b><b>Download Zip</b> &#9745; <a href="https://urlin.us/2uSV5H">https://urlin.us/2uSV5H</a></b></p><br /><br />
6
- <h2>What is Dawn AI APK Mod?</h2>
7
- <p>Dawn AI APK Mod is a modified version of Dawn AI, an app that allows users to turn their words into art using the latest AI technology. With just a text prompt, the app generates an entirely new and beautiful image that users can save and share with others. The app offers a variety of creative styles and possibilities, from photorealism to fantasy, oil painting to anime, and more.</p>
8
- <p>Dawn AI APK Mod is a free version of Dawn AI that unlocks all the premium features and styles of the app. Users can enjoy unlimited access to all the filters, overlays, themes, packs, and settings of the app without paying anything. Moreover, users can also remove the watermarks and ads from the generated images, making them more professional and attractive.</p>
9
- <h3>Features of Dawn AI APK Mod</h3>
10
- <p>Dawn AI APK Mod has many features that make it a powerful and versatile app for creating avatars with AI. Some of these features are:</p>
11
- <p>How to download dawn ai apk mod for free<br />
12
- Dawn ai apk mod latest version download<br />
13
- Dawn ai premium apk mod unlocked all features<br />
14
- Download dawn ai apk mod and create amazing avatars<br />
15
- Dawn ai apk mod no watermark download<br />
16
- Best dawn ai apk mod alternatives in 2023<br />
17
- Dawn ai apk mod review and tutorial<br />
18
- Download dawn ai apk mod and get unlimited stickers<br />
19
- Dawn ai apk mod vs other avatar creator apps<br />
20
- Dawn ai apk mod features and benefits<br />
21
- Download dawn ai apk mod and enjoy AI-powered photo editing<br />
22
- Dawn ai apk mod download link and installation guide<br />
23
- Dawn ai apk mod compatibility and requirements<br />
24
- Download dawn ai apk mod and share your sticker art with friends<br />
25
- Dawn ai apk mod pros and cons<br />
26
- Download dawn ai apk mod and customize your avatars<br />
27
- Dawn ai apk mod tips and tricks<br />
28
- Download dawn ai apk mod and join the community<br />
29
- Dawn ai apk mod FAQs and answers<br />
30
- Download dawn ai apk mod and have fun with your photos<br />
31
- Dawn ai apk mod update and changelog<br />
32
- Download dawn ai apk mod and access premium content<br />
33
- Dawn ai apk mod feedback and ratings<br />
34
- Download dawn ai apk mod and discover new styles<br />
35
- Dawn ai apk mod bugs and fixes</p>
36
- <h4>Image recreation and enhancement</h4>
37
- <p>Dawn AI APK Mod takes your pictures and recreates them to virtually anything you want. You can enter a text prompt or a sketch and let the app generate an image based on your input. You can also use the @me tag feature to create your own avatars based on your selfies. The app uses advanced AI technology to analyze your photos and learn what you look like, then produces stunning portraits with thousands of possible styles.</p>
38
- <h4>Gender switch</h4>
39
- <p>Think you could be better as the other gender? With Dawn AI APK Mod, you can find out how you would look as a different gender. Just upload your photo and select the gender switch option. The app will transform your face into a male or female version, depending on your choice. You can also adjust the intensity of the transformation using a slider.</p>
40
- <h4>Large database for styles</h4>
41
- <p>The app (dawn AI mod apk) obviously has a large database to contain almost any profession and look you can think of. You can choose from a variety of styles and settings, such as 3D render, fine art, pen sketch, black and white, cartoon, anime, and more. You can also mix and match different styles and settings to create your own unique combinations. The app constantly updates its database with new styles and themes, so you will never run out of options.</p>
42
- <h4>Filters and overlays</h4>
43
- <p>Dawn AI APK Mod also allows you to apply various filters and overlays to your images to enhance their appearance and mood. You can choose from a range of filters, such as vintage, sepia, noir, retro, and more. You can also add overlays, such as stickers, emojis, text, frames, and more. You can adjust the opacity and size of the filters and overlays to suit your preferences.</p>
44
- <h4>Ease of usage</h4>
45
- <p>The app is very easy to use and does not require any technical skills or knowledge. You just need to enter a text prompt or a sketch and let the app do the rest. You can also use the @me tag feature to create your own avatars based on your selfies. The app has a simple and intuitive interface that guides you through the process of creating avatars with AI.</p>
46
- <h4>Share with friends</h4>
47
- <p>Once you have created your avatars with AI, you can easily save and share them with your friends and family. You can export your images in high resolution and quality. You can also share them on social media platforms, such as Facebook, Instagram, Twitter, WhatsApp, and more. You can also use your avatars as profile pictures, wallpapers, stickers, memes, and more.</p>
48
- <h2>How to download and install Dawn AI APK Mod?</h2>
49
- <p>If you want to download and install Dawn AI APK Mod on your Android device, you need to follow these simple steps:</p>
50
- <h3>Step 1: Download the APK file</h3>
51
- <p>The first step is to download the APK file of Dawn AI APK Mod from a reliable source. You can use the link below to download the latest version of the app:</p>
52
- <p><a href="">Download Dawn AI APK Mod</a></p>
53
- <h3>Step 2: Enable unknown sources</h3>
54
- <p>The next step is to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.</p>
55
- <h3>Step 3: Install the APK file</h3>
56
- <p>The third step is to install the APK file on your device. To do this, locate the downloaded file in your file manager and tap on it. Follow the instructions on the screen to complete the installation.</p>
57
- <h3>Step 4: Launch the app and enjoy</h3>
58
- <p>The final step is to launch the app and enjoy creating stunning avatars with AI. You can access all the features and styles of the app without any limitations or restrictions.</p>
59
- <h2>How to use Dawn AI APK Mod?</h2>
60
- <p>Dawn AI APK Mod is very easy to use and does not require any technical skills or knowledge. You can use it in three different modes: text mode, sketch mode, and @me tag mode.</p>
61
- <h3>Text mode</h3>
62
- <p>In text mode, you can enter a text prompt and let the app generate an image based on your input. For example, you can enter "a beautiful woman with long blonde hair wearing a red dress" and see what the app produces. You can also enter a genre or a style, such as "fantasy", "anime", "oil painting", etc., and see what the app creates.</p>
63
- <h3>Sketch mode</h3>
64
- <p>In sketch mode, you can draw a sketch on the screen and let the app generate an image based on your drawing. For example, you can draw a face or a body shape and see what the app produces. You can also use different colors and brushes to create more detailed sketches.</p>
65
- <h3>@me tag mode</h3>
66
- <p>In @me tag mode, you can create your own avatars based on your selfies. To do this, you need to upload your photo and add the @me tag at the end of your text prompt. For example, you can enter "a handsome man wearing glasses @me" and see what the app produces. The app will analyze your photo and learn what you look like, then generate a portrait with thousands of possible styles.</p>
67
- <h2>What are the benefits of using Dawn AI APK Mod?</h2>
68
- <p>Dawn AI APK Mod has many benefits that make it a great app for creating avatars with AI. Some of these benefits are:</p>
69
- <h3>Unlimited access to all features and styles</h3>
70
- <p>Dawn AI APK Mod unlocks all the premium features and styles of Dawn AI without paying anything. You can enjoy unlimited access to all the filters, overlays, themes, packs, and settings of the app and create stunning avatars with AI. You can also remove the watermarks and ads from the generated images, making them more professional and attractive.</p>
71
- <h3>No watermarks or ads</h3>
72
- <p>Dawn AI APK Mod also removes the watermarks and ads from the generated images, making them more professional and attractive. You can save and share your images without any distractions or interruptions. You can also enjoy a smooth and fast performance of the app without any lags or crashes.</p>
73
- <h3>High-quality image generation</h3>
74
- <p>Dawn AI APK Mod uses cutting-edge AI technology to generate high-quality images that are exceptionally detailed and realistic. The app analyzes your photos and learns what you look like, then produces stunning portraits with thousands of possible styles. The app also recreates and enhances your images to virtually anything you want, from photorealism to fantasy, oil painting to anime, and more.</p>
75
- <h3>Fun and creative content creation</h3>
76
- <p>Dawn AI APK Mod allows you to have fun and be creative with your content creation. You can explore endless styles and settings and generate fun and unique images with AI. You can also mix and match different styles and settings to create your own unique combinations. You can also use your avatars as profile pictures, wallpapers, stickers, memes, and more.</p>
77
- <h2>What are some alternatives to Dawn AI APK Mod?</h2>
78
- <p>If you are looking for some alternatives to Dawn AI APK Mod, you can try these apps that also allow you to create avatars with AI:</p>
79
- <h3>Arible AI</h3>
80
- <p>Arible AI is an app that allows you to create realistic 3D avatars from your photos. You can customize your avatars with different hairstyles, outfits, accessories, and backgrounds. You can also animate your avatars with different expressions, poses, and movements. You can also chat with other users using your avatars.</p>
81
- <h3>Reface NEOCORTEXT</h3>
82
- <p>Reface NEOCORTEXT is an app that allows you to swap your face with celebrities, movie characters, cartoons, and more. You can use your photos or videos and choose from a huge library of faces to create hilarious and amazing face swaps. You can also share your creations on social media platforms.</p>
83
- <h3>TheDream.AI</h3>
84
- <p>TheDream.AI is an app that allows you to create dream-like images from your photos. You can use different filters, effects, stickers, and texts to transform your photos into surreal and artistic creations. You can also use the app's AI to generate images based on your text prompts or sketches.</p>
85
- <h2>Conclusion</h2>
86
- <p>Dawn AI APK Mod is a free avatars and stickers creator app that lets you create unlimited stylish avatars using the latest AI technology. It uses cutting-edge AI technology to convert your photos into exceptionally detailed sticker art. You can enjoy unlimited access to all the features and styles of the app without paying anything. Moreover, you can also remove the watermarks and ads from the generated images, making them more professional and attractive.</p>
87
- <p>If you want to download Dawn AI APK Mod on your Android device, you can follow the steps mentioned in this article. You can also use the app in three different modes: text mode, sketch mode, and @me tag mode. You can also try some alternatives to Dawn AI APK Mod if you want to explore more options for creating avatars with AI.</p>
88
- <p>We hope this article was helpful for you. If you have any questions or feedback, please feel free to leave a comment below.</p>
89
- <h2>FAQs</h2>
90
- <p>Here are some frequently asked questions about Dawn AI APK Mod:</p>
91
- <ol>
92
- <li>Is Dawn AI APK Mod safe to use?</li>
93
- <p>Dawn AI APK Mod is safe to use as long as you download it from a reliable source. However, since it is a modified version of Dawn AI, it may not be compatible with some devices or updates. Therefore, use it at your own risk.</p>
94
- <li>How do I update Dawn AI APK Mod?</li>
95
- <p>To update Dawn AI APK Mod, you need to download the latest version of the APK file from a reliable source and install it on your device. However, since it is a modified version of Dawn AI, it may not be compatible with some devices or updates. Therefore, check the compatibility before updating.</p>
96
- <li>Can I use Dawn AI APK Mod on PC?</li>
97
- <p>To use Dawn AI APK Mod on PC, you need to use an Android emulator software that allows you to run Android apps on PC. Some popular Android emulators are BlueStacks, NoxPlayer, MEmu, etc. However, since it is a modified version of Dawn AI, it may not work properly on some emulators.</p>
98
- <li>Can I use Dawn AI APK Mod offline?</li>
99
- <p>Dawn AI APK Mod requires an internet connection to work properly. The app uses AI technology to generate images based on your input, which requires a lot of data and processing power. Therefore, you cannot use the app offline.</p>
100
- <li>What are some tips and tricks for using Dawn AI APK Mod?</li>
101
- <p>Some tips and tricks for using Dawn AI APK Mod are:</p>
102
- <ul>
103
- <li>Use clear and descriptive text prompts or sketches to get better results.</li>
104
- <li>Experiment with different styles and settings to create unique combinations.</li>
105
- <li>Use the @me tag feature to create your own avatars based on your selfies.</li>
106
- <li>Use the gender switch feature to see how you would look as a different gender.</li>
107
- <li>Use the filters and overlays to enhance the appearance and mood of your images.</li>
108
- <li>Save and share your images with your friends and family.</li>
109
- </ul>
110
- </ol></p> 197e85843d<br />
111
- <br />
112
- <br />
spaces/1phancelerku/anime-remove-background/Download Driving Zone Russia Mod APK with Unlimited Coins and Cash.md DELETED
@@ -1,149 +0,0 @@
1
- <br />
2
- <h1>Driving Zone Russia Mod APK Unlimited Money: A Review</h1>
3
- <p>If you are a fan of racing games, especially those that feature cars from Russia, then you might want to check out Driving Zone Russia. This is a simulator of street racing on the cars produced in Russia, with online and offline game modes. You can choose from classic cars produced in Russia, and the most modern models, each with its own character and a real engine sound. You can also customize your car with different colors, rims, spoilers, body kits, and liveries.</p>
4
- <h2>driving zone russia mod apk unlimited money</h2><br /><p><b><b>Download</b> &rarr; <a href="https://jinyurl.com/2uNRWI">https://jinyurl.com/2uNRWI</a></b></p><br /><br />
5
- <p>But what if you want to enjoy the game without any limitations or restrictions? What if you want to have unlimited money to buy any car you want, or unlock all the features of the game? Well, there is a way to do that, and that is by downloading and installing Driving Zone Russia Mod APK Unlimited Money. This is a modified version of the original game that gives you access to unlimited money, coins, and other resources. You can use them to buy any car you want, upgrade your car's performance, or change the appearance of your car. You can also enjoy all the tracks, modes, and settings of the game without any ads or interruptions.</p>
6
- <p>In this article, we will review Driving Zone Russia Mod APK Unlimited Money and tell you everything you need to know about it. We will cover its features, how to download and install it, tips and tricks for playing it, and its pros and cons. By the end of this article, you will be able to decide if this mod apk is worth trying or not.</p>
7
- <h2>Features of Driving Zone Russia Mod APK Unlimited Money</h2>
8
- <p>Driving Zone Russia Mod APK Unlimited Money has many features that make it an exciting and enjoyable racing game. Here are some of them:</p>
9
- <p>driving zone russia mod apk unlimited coins<br />
10
- driving zone russia mod apk download<br />
11
- driving zone russia mod apk latest version<br />
12
- driving zone russia mod apk android 1<br />
13
- driving zone russia mod apk revdl<br />
14
- driving zone russia mod apk hack<br />
15
- driving zone russia mod apk free shopping<br />
16
- driving zone russia mod apk all cars unlocked<br />
17
- driving zone russia mod apk rexdl<br />
18
- driving zone russia mod apk no ads<br />
19
- driving zone russia mod apk unlimited everything<br />
20
- driving zone russia mod apk obb<br />
21
- driving zone russia mod apk offline<br />
22
- driving zone russia mod apk old version<br />
23
- driving zone russia mod apk 2023<br />
24
- driving zone russia mod apk 2022<br />
25
- driving zone russia mod apk 2021<br />
26
- driving zone russia mod apk 2020<br />
27
- driving zone russia mod apk 2019<br />
28
- driving zone russia mod apk 2018<br />
29
- driving zone russia mod apk 2017<br />
30
- driving zone russia mod apk 2016<br />
31
- driving zone russia mod apk 2015<br />
32
- driving zone russia hack apk download<br />
33
- driving zone russia hack apk unlimited money and coins<br />
34
- driving zone russia hack apk latest version<br />
35
- driving zone russia hack apk android 1<br />
36
- driving zone russia hack apk revdl<br />
37
- driving zone russia hack apk rexdl<br />
38
- driving zone russia hack apk no ads<br />
39
- download game driving zone russia mod apk unlimited money<br />
40
- download game driving zone russia mod apk unlimited coins<br />
41
- download game driving zone russia mod apk latest version<br />
42
- download game driving zone russia mod apk android 1<br />
43
- download game driving zone russia mod apk revdl<br />
44
- download game driving zone russia mod apk rexdl<br />
45
- download game driving zone russia mod apk no ads<br />
46
- how to install driving zone russia mod apk unlimited money<br />
47
- how to install driving zone russia mod apk unlimited coins<br />
48
- how to install driving zone russia mod apk latest version<br />
49
- how to install driving zone russia mod apk android 1<br />
50
- how to install driving zone russia mod apk revdl<br />
51
- how to install driving zone russia mod apk rexdl<br />
52
- how to install driving zone russia mod apk no ads<br />
53
- how to play driving zone russia mod apk unlimited money<br />
54
- how to play driving zone russia mod apk unlimited coins<br />
55
- how to play driving zone russia mod apk latest version<br />
56
- how to play driving zone russia mod apk android 1</p>
57
- <h3>Modern graphics and realistic physics</h3>
58
- <p>The game has modern beautiful graphics that create a realistic atmosphere of street racing. The cars are detailed and well-designed, with accurate body and interior models. The tracks are also diverse and immersive, with different scenery and weather conditions. The game also has realistic car physics that simulate the behavior of real cars on different surfaces and situations. You can feel the speed, acceleration, braking, steering, and drifting of your car as you race along the busy highway or the challenging track.</p>
59
- <h3>Qualitatively modeled Russian cars</h3>
60
- <p>The game features a variety of cars produced in Russia, from classic models to modern ones. Each car has its own character and a real engine sound that matches its performance. You can choose from sedans, hatchbacks, coupes, SUVs, sports cars, or even trucks. Some of the cars available in the game are Lada Priora, Lada Granta, Lada Kalina, Lada Niva, Volga GAZ-24, Volga GAZ-3110, Moskvich-2141 AZLK-2141-02 Svyatogor Turbo Plus (Moskvich-2141), Moskvich-412 IE AZLK-412-028 (Moskvich-412), VAZ-2106 Zhiguli (Lada Riva), VAZ-2107 Zhiguli (Lada Riva), VAZ-2108 Sputnik (L ada Samara), VAZ-2109 Sputnik (Lada Samara), VAZ-2110 Lada (Lada 110), VAZ-2112 Lada (Lada 112), VAZ-2113 Lada (Lada 113), VAZ-2114 Lada (Lada 114), VAZ-2115 Lada (Lada 115), VAZ-2121 Niva (Lada Niva), GAZ-66, GAZ-3302 Gazelle, GAZ-3307, GAZ-3309, GAZ-3310 Valdai, GAZ-2330 Tigr, UAZ-469, UAZ-452, UAZ Patriot, UAZ Hunter, UAZ Cargo, Kamaz-4310, Kamaz-43118, Kamaz-5350 Mustang, Kamaz-6520, Kamaz-6522, ZIL-130, ZIL-131, ZIL-133, ZIL-157, ZIL-4331, MAZ-504, MAZ-5337, MAZ-5432, MAZ-6422, MAZ-7310 Uragan (MAZ 543M), and many more.</p>
61
- <h3>Four tracks with different weather conditions</h3>
62
- <p>The game offers four different tracks to race on, each with its own weather conditions and time of day. You can choose from a city track with busy traffic and pedestrians, a suburban track with a quieter atmosphere and a scenic view, a winter track with snow and ice on the road, or a desert track with sand and dust. Each track has its own challenges and surprises that will test your driving skills and reflexes. You can also change the weather conditions and the time of day in real-time to create your own unique racing experience.</p>
63
- <h3>Change the time of day in real-time</h3>
64
- <p>One of the most impressive features of the game is the ability to change the time of day in real-time. You can switch from day to night or vice versa with a simple swipe on the screen. This will affect the lighting and the visibility of the track, as well as the behavior of other drivers and pedestrians. You can also adjust the speed of time to make the day or night last longer or shorter. This feature adds more realism and variety to the game and allows you to enjoy different scenarios and atmospheres.</p>
65
- <h3>First person view / interior camera</h3>
66
- <p>The game also gives you the option to switch from the third person view to the first person view or the interior camera. This will let you see the road from the driver's perspective and feel more immersed in the game. You can also see the dashboard and the steering wheel of your car, as well as the mirrors and the indicators. The interior camera also shows you the damage and the wear of your car parts, such as the tires, the engine, or the brakes. You can use this feature to enhance your driving experience and challenge yourself more.</p>
67
- <h2>How to download and install Driving Zone Russia Mod APK Unlimited Money</h2>
68
- <p>If you are interested in trying Driving Zone Russia Mod APK Unlimited Money, you will need to follow these steps to download and install it on your device:</p>
69
- <h3>Requirements and compatibility</h3>
70
- <p>Before you download and install Driving Zone Russia Mod APK Unlimited Money, you will need to make sure that your device meets these requirements:</p>
71
- <ul>
72
- <li>Your device must have Android 4.1 or higher operating system.</li>
73
- <li>Your device must have at least 100 MB of free storage space.</li>
74
- <li>Your device must have a stable internet connection.</li>
75
- <li>You must enable unknown sources on your device settings to allow installation of apps from third-party sources.</li>
76
- </ul>
77
- <p>Driving Zone Russia Mod APK Unlimited Money is compatible with most Android devices, including smartphones and tablets. However, some devices may not support some features or functions of the game due to hardware limitations or software issues.</p>
78
- <h3>Steps to download and install</h3>
79
- <p>Once you have checked that your device meets the requirements and compatibility, you can proceed to download and install Driving Zone Russia Mod APK Unlimited Money by following these steps:</p>
80
- <ol>
81
- <li>Click on this link to download Driving Zone Russia Mod APK Unlimited Money file on your device.</li>
82
- <li>Locate the downloaded file on your device storage and tap on it to start the installation process.</li>
83
- <li>Follow the instructions on the screen to complete the installation process.</li>
84
- <li>Launch the game from your app drawer or home screen and enjoy unlimited money and other mod features.</li>
85
- </ol>
86
- <h3>How to use the mod features</h3>
87
- <p>After you have successfully installed Driving Zone Russia Mod APK Unlimited Money on your device, you can start using the mod features to enhance your gameplay. Here are some of the mod features and how to use them:</p>
88
- <ul>
89
- <li>Unlimited money: You can use unlimited money to buy any car you want, upgrade your car's performance, or change the appearance of your car. You can also use it to unlock all the tracks, modes, and settings of the game. To use this feature, simply go to the shop or the garage and select the item you want to buy or upgrade. You will see that the price is zero and you can buy or upgrade it without spending any money.</li>
90
- <li>No ads: You can enjoy the game without any ads or interruptions. This feature will remove all the ads that pop up on the screen or play before or after the game. To use this feature, simply launch the game and play as usual. You will not see any ads on the screen or hear any ads on the sound.</li>
91
- <li>Other resources: You can also use other resources such as coins, gems, diamonds, or tokens to access some special features or items in the game. These resources are also unlimited and you can use them as much as you want. To use this feature, simply go to the shop or the menu and select the feature or item you want to use or buy. You will see that you have enough resources to use or buy it without any problem.</li>
92
- </ul>
93
- <h2>Tips and tricks for playing Driving Zone Russia Mod APK Unlimited Money</h2>
94
- <p>Driving Zone Russia Mod APK Unlimited Money is a fun and addictive racing game that will keep you entertained for hours. However, if you want to master the game and become a pro racer, you will need some tips and tricks to help you out. Here are some of them:</p>
95
- <h3>Choose the right car for each track</h3>
96
- <p>The game offers a variety of cars to choose from, each with its own strengths and weaknesses. Some cars are faster, some are more agile, some are more durable, and some are more balanced. You will need to choose the right car for each track depending on the weather conditions, the road surface, and the traffic situation. For example, if you are racing on a winter track with snow and ice, you might want to choose a car with good traction and stability, such as an SUV or a truck. If you are racing on a desert track with sand and dust, you might want to choose a car with good speed and acceleration, such as a sports car or a coupe.</p>
97
- <h3>Adjust the settings to suit your preference</h3>
98
- <p>The game also allows you to adjust the settings to suit your preference and play style. You can change the difficulty level, the camera angle, the control method, the sound volume, and the graphics quality. You can also enable or disable some features such as traffic rules, damage system, police chase, or online mode. You can experiment with different settings until you find the ones that work best for you.</p>
99
- <h3>Use the brake and accelerator wisely</h3>
100
- <p>The game has realistic car physics that require you to use the brake and accelerator wisely. You cannot just press the gas pedal all the time and expect to win the race. You will need to slow down when approaching a turn, a curve, an obstacle, or a traffic jam. You will also need to accelerate when exiting a turn, a curve, an obstacle, or a traffic jam. You will need to balance between speed and control to avoid crashing or losing control of your car.</p>
101
- <h3>Record and share your gameplay videos</h3>
102
- <p>The game also has a feature that allows you to record and share your gameplay videos with other players online. You can capture your best moments in the game, such as your fastest lap time, your most epic drift, your most daring overtaking, or your most spectacular crash. You can then share your videos with other players on social media platforms such as Facebook, Twitter, Instagram, YouTube, or TikTok. You can also watch other players' videos and learn from their skills and strategies.</p>
103
- <h2>Pros and cons of Driving Zone Russia Mod APK Unlimited Money</h2>
104
- <p>Driving Zone Russia Mod APK Unlimited Money is not a perfect game and it has its pros and cons. Here are some of them:</p>
105
- <h3>Pros</h3>
106
- <ul>
107
- <li>The game has modern graphics and realistic physics that create a realistic atmosphere of street racing.</li>
108
- <li>The game features a variety of cars produced in Russia, each with its own character and a real engine sound.</li>
109
- <li>The game offers four different tracks with different weather conditions and time of day.</li>
110
- <li>The game allows you to change the time of day in real-time.</li>
111
- <li>The game gives you the option to switch from the third person view to the first person view or the interior camera.</li>
112
- <li>The game has unlimited money and other resources that allow you to buy any car you want, upgrade your car's performance, or change the appearance of your car.</li>
113
- <li>The game has no ads or interruptions that can ruin your gameplay.</li>
114
- <li>The game has a feature that allows you to record and share your gameplay videos with other players online.</li>
115
- </ul>
116
- <h3>Cons</h3>
117
- <ul>
118
- <li>The game can be too easy or boring for some players who prefer more challenge or variety in their racing games.</li>
119
- <li>The game can be too hard or frustrating for some players who are not used to the realistic car physics or the traffic rules.</li>
120
- <li>The game can be too repetitive or monotonous for some players who want more tracks, modes, or features in their racing games.</li>
121
- <li>The game can have some bugs or glitches that can affect the gameplay or the graphics quality.</li>
122
- <li>The game can be risky or illegal to download and install from third-party sources that may contain viruses or malware.</li>
123
- </ul>
124
- <h2>Conclusion</h2>
125
- <p>Driving Zone Russia Mod APK Unlimited Money is a simulator of street racing on the cars produced in Russia, with online and offline game modes. The game has modern graphics and realistic physics that create a realistic atmosphere of street racing. The game features a variety of cars produced in Russia, each with its own character and a real engine sound. The game offers four different tracks with different weather conditions and time of day. The game allows you to change the time of day in real-time. The game gives you the option to switch from the third person view to the first person view or the interior camera. The game has unlimited money and other resources that allow you to buy any car you want, upgrade your car's performance, or change the appearance of your car. The game has no ads or interruptions that can ruin your gameplay. The game has a feature that allows you to record and share your gameplay videos with other players online.</p>
126
- <p>However, the game also has some drawbacks that may affect your enjoyment of the game. The game can be too easy or boring for some players who prefer more challenge or variety in their racing games. The game can be too hard or frustrating for some players who are not used to the realistic car physics or the traffic rules. The game can be too repetitive or monotonous for some players who want more tracks, modes, or features in their racing games. The game can have some bugs or glitches that can affect the gameplay or the graphics quality. The game can be risky or illegal to download and install from third-party sources that may contain viruses or malware.</p>
127
- <p>Therefore, we recommend Driving Zone Russia Mod APK Unlimited Money to anyone who loves racing games, especially those that feature cars from Russia. This game will give you a realistic and immersive experience of street racing on Russian cars, with unlimited money and other mod features. However, we also advise you to be careful when downloading and installing this mod apk from third-party sources, as they may not be safe or legal. We also suggest you to try the original version of Driving Zone Russia first before trying this mod apk, as it may suit your preference better.</p>
128
- <h2>FAQs</h2>
129
- <p>Here are some frequently asked questions about Driving Zone Russia Mod APK Unlimited Money:</p>
130
- <ol>
131
- <li>Q: Is Driving Zone Russia Mod APK Unlimited Money free?</li>
132
- <li>A: Yes, Driving Zone Russia Mod APK Unlimited Money is free to download and play. However, you may need to pay for some in-app purchases or subscriptions if you want to access some premium features or items in the game.</li>
133
- <li>Q: Is Driving Zone Russia Mod APK Unlimited Money safe?</li>
134
- <li>A: Driving Zone Russia Mod APK Unlimited Money is not officially endorsed by the developers of Driving Zone Russia, so it may not be safe or legal to download and install from third-party sources. You may encounter some viruses or malware that can harm your device or compromise your privacy. You may also face some legal issues if you violate the terms and conditions of Driving Zone Russia.</li>
135
- <li>Q: How do I update Driving Zone Russia Mod APK Unlimited Money?</li>
136
- <li>A: Driving Zone Russia Mod APK Unlimited Money may not be compatible with the latest version of Driving Zone Russia, so you may need to update it manually from time to time. You will need to check for updates from the source where you downloaded it from, and follow the instructions on how to download and install the updated version.</li>
137
- <li>Q: How do I uninstall Driving Zone Russia Mod APK Unlimited Money?</li>
138
- <li>A: If you want to uninstall Driving Zone Russia Mod APK Unlimited Money from your device, you will need to follow these steps:</li>
139
- <ul>
140
- <li>Go to your device settings and select apps or applications.</li>
141
- <li>Find and tap on Driving Zone Russia Mod APK Unlimited Money and tap on uninstall.</li>
142
- <li>Confirm your action and wait for the process to finish.</li>
143
- <li>Alternatively, you can also delete the Driving Zone Russia Mod APK Unlimited Money file from your device storage.</li>
144
- </ul>
145
- <li>Q: Can I play Driving Zone Russia Mod APK Unlimited Money offline?</li>
146
- <li>A: Yes, you can play Driving Zone Russia Mod APK Unlimited Money offline without an internet connection. However, you may not be able to access some online features or modes, such as multiplayer mode, leaderboards, or achievements.</li>
147
- </ol></p> 197e85843d<br />
148
- <br />
149
- <br />
spaces/1phancelerku/anime-remove-background/Download Plants vs Zombies 2 for PC and Protect Your Garden from the Undead.md DELETED
@@ -1,53 +0,0 @@
1
-
2
- <h1>How to Download Plants vs Zombies 2 on PC</h1>
3
- <p>Plants vs Zombies 2 is a popular tower defense game that pits you against hordes of zombies who want to eat your brains. You have to use various plants with different abilities to stop them from reaching your house. The game features hundreds of levels across different worlds and time periods, as well as endless modes, mini-games, daily events, and online competitions.</p>
4
- <p>While Plants vs Zombies 2 is primarily designed for mobile devices, you might want to play it on your PC for various reasons. Maybe you want to enjoy the game on a bigger screen, or use a keyboard and mouse or a controller for better control. Maybe you want to save battery life on your phone or tablet, or avoid interruptions from calls and notifications. Whatever your reason, there are several ways to download Plants vs Zombies 2 on PC. In this article, we will show you three methods that you can use to play this fun and addictive game on your PC.</p>
5
- <h2>download plants vs zombies 2 in pc</h2><br /><p><b><b>Download</b> &#9889; <a href="https://jinyurl.com/2uNPYz">https://jinyurl.com/2uNPYz</a></b></p><br /><br />
6
- <h2>Method 1: Use Windows 11 and native Android emulation</h2>
7
- <p>One of the easiest ways to download Plants vs Zombies 2 on PC is to use Windows 11, the latest version of Microsoft's operating system. Windows 11 comes with a built-in feature that allows you to run Android apps natively on your PC, without the need for any third-party software. Here are the steps to follow:</p>
8
- <ol>
9
- <li>Check if your PC meets the minimum requirements for Windows 11 and Android emulation. You will need a 64-bit processor, 4 GB of RAM, 64 GB of storage, a TPM 2.0 chip, and a DirectX 12 compatible graphics card. You can use the <a href="">PC Health Check app</a> to see if your PC is eligible for the upgrade.</li>
10
- <li>Update your PC to Windows 11 and enable the Windows Subsystem for Android. You can download Windows 11 from the <a href="">Windows Update</a> section in the Settings app. To enable the Windows Subsystem for Android, install it from the <a href="">Microsoft Store</a>. You will also need to install the <a href="">Amazon Appstore</a> app, which will allow you to access Android apps on your PC.</li>
11
- <li>Install Plants vs Zombies 2 from the Amazon Appstore or the Google Play Store. You can launch the Amazon Appstore app from the Start menu or the Taskbar and search for Plants vs Zombies 2. Alternatively, you can also install the Google Play Store app from the Amazon Appstore and use it to download Plants vs Zombies 2. You will need to sign in with your Google account to access the Google Play Store.</li>
12
- <li>Launch the game and enjoy playing it on your PC. You can find Plants vs Zombies 2 in the Start menu or the Taskbar, along with other Android apps. You can use your mouse or touchpad to interact with the game, or connect a keyboard or a controller for better control. You can also adjust the window size and orientation of the game according to your preference.</li>
13
- </ol>
14
- <h2>Method 2: Use an Android emulator such as Bluestacks or Nox Player</h2>
15
- <p>Another way to download Plants vs Zombies 2 on PC is to use an Android emulator, which is a software that simulates an Android device on your PC. There are many Android emulators available online, but some of the most popular ones are Bluestacks and Nox Player. Here are the steps to follow:</p>
16
- <ol>
17
- <li>Download and install an Android emulator of your choice from their official websites. You can visit <a href="">Bluestacks.com</a> or <a href="">Bignox.com</a> to download Bluestacks or Nox Player respectively. Follow the instructions on the screen to install and set up the emulator on your PC.</li>
18
- <li>Sign in with your Google account and access the Google Play Store. Once you launch the emulator, you will need to sign in with your Google account to access the Google Play Store and other Google services. You can use an existing account or create a new one.</li>
19
- <li>Search for Plants vs Zombies 2 and install it on your emulator. You can use the search bar in the Google Play Store to find Plants vs Zombies 2 and click on the Install button to download it on your emulator.</li>
20
- <li>Launch the game and customize the controls according to your preference. You can find Plants vs Zombies 2 on the home screen or in the app drawer of your emulator. You can use your mouse or touchpad to interact with the game, or connect a keyboard or a controller for better control. You can also customize the controls by using the settings menu of your emulator.</li>
21
- </ol>
22
- <h2>Method 3: Use Parsec to stream the game from your PC to your Android device</h2>
23
- <p>A third way to download Plants vs Zombies 2 on PC is to use Parsec, which is a software that allows you to stream games from your PC to your Android device. This way, you can play Plants vs Zombies 2 on your PC without installing it, as long as you have a good internet connection and a compatible device. Here are the steps to follow:</p>
24
- <ol>
25
- <li>Download and install Parsec on both your PC and your Android device. You can visit <a href="">Parsecgaming.com</a> to download Parsec for free. Follow the instructions on the screen to install and set up Parsec on your PC and your Android device.</li>
26
- <li>Create a Parsec account and sign in on both devices. You will need to create a Parsec account and sign in on both your PC and your Android device to use the streaming service. You can use an existing account or create a new one.</li>
27
- <li>Launch Plants vs Zombies 2 on your PC and start hosting a game session. You can launch Plants vs Zombies 2 on your PC from the Start menu or the Taskbar, or from any other source that you have installed it from. Once the game is running, open Parsec on your PC and click on the Host tab. You will see a code that you can use to invite other devices to join your game session.</li>
28
- <li>Connect to your PC from your Android device using Parsec and start playing the game. Open Parsec on your Android device and click on the Friends tab. You will see a list of devices that you can connect to, including your PC. Enter the code that you got from your PC and click on the Connect button. You will see the game screen on your Android device and you can start playing the game.</li>
29
- </ol>
30
- <h2>Conclusion</h2>
31
- <p>Plants vs Zombies 2 is a fun and addictive game that you can enjoy on your PC using any of the methods mentioned above. Whether you use Windows 11, an Android emulator, or Parsec, you can experience the game on a bigger screen, with better control and performance. Here are some tips and tricks for playing the game on PC:</p>
32
- <ul>
33
- <li>Use a keyboard and mouse or a controller for better control and performance. You can use the default controls or customize them according to your preference. You can also use hotkeys to quickly access certain functions such as Plant Food, Power Ups, or Pause.</li>
34
- <li>Adjust the graphics settings and resolution to suit your PC's capabilities. You can change the graphics quality, resolution, frame rate, and full screen mode of the game from the settings menu. You can also use the zoom function to get a closer or wider view of the lawn.</li>
35
- <li>Join the online community and compete with other players in Arena mode. You can access Arena mode from the main menu of the game and participate in weekly tournaments that test your skills and strategy. You can earn rewards such as coins, gems, mints, gauntlets, and trophies by ranking high on the leaderboards.</li>
36
- </ul>
37
- <h3>FAQs</h3>
38
- <p>Here are some common questions that readers might have about downloading Plants vs Zombies 2 on PC:</p>
39
- <dl>
40
- <dt>Q1: Is Plants vs Zombies 2 free to play?</dt>
41
- <dd>A1: Yes, Plants vs Zombies 2 is free to play, but it contains ads and in-app purchases that can enhance your gaming experience.</dd>
42
- <dt>Q2: Can I play Plants vs Zombies 2 offline?</dt>
43
- <dd>A2: Yes, you can play Plants vs Zombies 2 offline, but you will need an internet connection to access some features such as daily events, leaderboards, and cloud save.</dd>
44
- <dt>Q3: How many worlds are there in Plants vs Zombies 2?</dt>
45
- <dd>A3: There are currently 11 worlds in Plants vs Zombies 2, each with its own theme, plants, zombies, and challenges. You can unlock them by collecting stars or paying with gems.</dd>
46
- <dt>Q4: What is Plant Food and how do I use it?</dt>
47
- <dd>A4: Plant Food is a special item that can boost your plants' abilities for a short time. You can get Plant Food by killing glowing zombies, planting Power Lily, or buying it with coins. To use Plant Food, just drag it onto any plant on the lawn.</dd>
48
- <dt>Q5: What is the best strategy for playing Plants vs Zombies 2?</dt>
49
- <dd>A5: There is no definitive answer to this question, as different strategies work for different levels, modes, and preferences. However, some general tips are to plan ahead, use a variety of plants, upgrade your plants regularly, and experiment with different combinations.</dd>
50
- </dl>
51
- <p>I hope you found this article helpful and informative. If you have any questions or feedback, please leave a comment below. Happy gaming!</p>
spaces/2ndelement/voicevox/voicevox_engine/kana_parser.py DELETED
@@ -1,146 +0,0 @@
1
- from typing import List, Optional
2
-
3
- from .model import AccentPhrase, Mora, ParseKanaError, ParseKanaErrorCode
4
- from .mora_list import openjtalk_text2mora
5
-
6
- LOOP_LIMIT = 300
7
- UNVOICE_SYMBOL = "_"
8
- ACCENT_SYMBOL = "'"
9
- NOPAUSE_DELIMITER = "/"
10
- PAUSE_DELIMITER = "、"
11
- WIDE_INTERROGATION_MARK = "?"
12
-
13
- text2mora_with_unvoice = {}
14
- for text, (consonant, vowel) in openjtalk_text2mora.items():
15
- text2mora_with_unvoice[text] = Mora(
16
- text=text,
17
- consonant=consonant if len(consonant) > 0 else None,
18
- consonant_length=0 if len(consonant) > 0 else None,
19
- vowel=vowel,
20
- vowel_length=0,
21
- pitch=0,
22
- is_interrogative=False,
23
- )
24
- if vowel in ["a", "i", "u", "e", "o"]:
25
- text2mora_with_unvoice[UNVOICE_SYMBOL + text] = Mora(
26
- text=text,
27
- consonant=consonant if len(consonant) > 0 else None,
28
- consonant_length=0 if len(consonant) > 0 else None,
29
- vowel=vowel.upper(),
30
- vowel_length=0,
31
- pitch=0,
32
- is_interrogative=False,
33
- )
34
-
35
-
36
- def _text_to_accent_phrase(phrase: str) -> AccentPhrase:
37
- """
38
- Generates an AccentPhrase from a kana reading by longest match.
39
- O(N^2) time complexity for input length N.
40
- """
41
- accent_index: Optional[int] = None
42
- moras: List[Mora] = []
43
-
44
- base_index = 0 # parse start position; characters from here to the right are accumulated into stack
45
- stack = "" # string currently held pending a match
46
- matched_text: Optional[str] = None # the kana that last matched within the pending string
47
-
48
- outer_loop = 0
49
- while base_index < len(phrase):
50
- outer_loop += 1
51
- if phrase[base_index] == ACCENT_SYMBOL:
52
- if len(moras) == 0:
53
- raise ParseKanaError(ParseKanaErrorCode.ACCENT_TOP, text=phrase)
54
- if accent_index is not None:
55
- raise ParseKanaError(ParseKanaErrorCode.ACCENT_TWICE, text=phrase)
56
- accent_index = len(moras)
57
- base_index += 1
58
- continue
59
- for watch_index in range(base_index, len(phrase)):
60
- if phrase[watch_index] == ACCENT_SYMBOL:
61
- break
62
- # case of an ordinary character
63
- stack += phrase[watch_index]
64
- if stack in text2mora_with_unvoice:
65
- matched_text = stack
66
- # push mora
67
- if matched_text is None:
68
- raise ParseKanaError(ParseKanaErrorCode.UNKNOWN_TEXT, text=stack)
69
- else:
70
- moras.append(text2mora_with_unvoice[matched_text].copy(deep=True))
71
- base_index += len(matched_text)
72
- stack = ""
73
- matched_text = None
74
- if outer_loop > LOOP_LIMIT:
75
- raise ParseKanaError(ParseKanaErrorCode.INFINITE_LOOP)
76
- if accent_index is None:
77
- raise ParseKanaError(ParseKanaErrorCode.ACCENT_NOTFOUND, text=phrase)
78
- else:
79
- return AccentPhrase(moras=moras, accent=accent_index, pause_mora=None)
80
-
81
-
82
- def parse_kana(text: str) -> List[AccentPhrase]:
83
- """
84
- Parses an AquesTalk-like kana reading into accent phrases with duration and pitch left unspecified.
85
- """
86
-
87
- parsed_results: List[AccentPhrase] = []
88
- phrase_base = 0
89
- if len(text) == 0:
90
- raise ParseKanaError(ParseKanaErrorCode.EMPTY_PHRASE, position=1)
91
-
92
- for i in range(len(text) + 1):
93
- if i == len(text) or text[i] in [PAUSE_DELIMITER, NOPAUSE_DELIMITER]:
94
- phrase = text[phrase_base:i]
95
- if len(phrase) == 0:
96
- raise ParseKanaError(
97
- ParseKanaErrorCode.EMPTY_PHRASE,
98
- position=str(len(parsed_results) + 1),
99
- )
100
- phrase_base = i + 1
101
-
102
- is_interrogative = WIDE_INTERROGATION_MARK in phrase
103
- if is_interrogative:
104
- if WIDE_INTERROGATION_MARK in phrase[:-1]:
105
- raise ParseKanaError(
106
- ParseKanaErrorCode.INTERROGATION_MARK_NOT_AT_END, text=phrase
107
- )
108
- phrase = phrase.replace(WIDE_INTERROGATION_MARK, "")
109
-
110
- accent_phrase: AccentPhrase = _text_to_accent_phrase(phrase)
111
- if i < len(text) and text[i] == PAUSE_DELIMITER:
112
- accent_phrase.pause_mora = Mora(
113
- text="、",
114
- consonant=None,
115
- consonant_length=None,
116
- vowel="pau",
117
- vowel_length=0,
118
- pitch=0,
119
- )
120
- accent_phrase.is_interrogative = is_interrogative
121
-
122
- parsed_results.append(accent_phrase)
123
-
124
- return parsed_results
125
-
126
-
127
- def create_kana(accent_phrases: List[AccentPhrase]) -> str:
128
- text = ""
129
- for i, phrase in enumerate(accent_phrases):
130
- for j, mora in enumerate(phrase.moras):
131
- if mora.vowel in ["A", "I", "U", "E", "O"]:
132
- text += UNVOICE_SYMBOL
133
-
134
- text += mora.text
135
- if j + 1 == phrase.accent:
136
- text += ACCENT_SYMBOL
137
-
138
- if phrase.is_interrogative:
139
- text += WIDE_INTERROGATION_MARK
140
-
141
- if i < len(accent_phrases) - 1:
142
- if phrase.pause_mora is None:
143
- text += NOPAUSE_DELIMITER
144
- else:
145
- text += PAUSE_DELIMITER
146
- return text
spaces/AIGC-Audio/AudioGPT/NeuralSeq/modules/GenerSpeech/model/prosody_util.py DELETED
@@ -1,385 +0,0 @@
1
- from torch import nn
2
- import copy
3
- import torch
4
- from utils.hparams import hparams
5
- from modules.GenerSpeech.model.wavenet import WN
6
- import math
7
-
8
- from modules.fastspeech.tts_modules import LayerNorm
9
- import torch.nn.functional as F
10
- from utils.tts_utils import group_hidden_by_segs, sequence_mask
11
-
12
- from scipy.cluster.vq import kmeans2
13
- from torch.nn import functional as F
14
-
15
-
16
- class VQEmbeddingEMA(nn.Module):
17
- def __init__(self, n_embeddings, embedding_dim, commitment_cost=0.25, decay=0.999, epsilon=1e-5,
18
- print_vq_prob=False):
19
- super(VQEmbeddingEMA, self).__init__()
20
- self.commitment_cost = commitment_cost
21
- self.n_embeddings = n_embeddings
22
- self.decay = decay
23
- self.epsilon = epsilon
24
- self.print_vq_prob = print_vq_prob
25
- self.register_buffer('data_initialized', torch.zeros(1))
26
- init_bound = 1 / 512
27
- embedding = torch.Tensor(n_embeddings, embedding_dim)
28
- embedding.uniform_(-init_bound, init_bound)
29
- self.register_buffer("embedding", embedding)
30
- self.register_buffer("ema_count", torch.zeros(n_embeddings))
31
- self.register_buffer("ema_weight", self.embedding.clone())
32
-
33
- def encode(self, x):
34
- B, T, _ = x.shape
35
- M, D = self.embedding.size()
36
- x_flat = x.detach().reshape(-1, D)
37
-
38
- distances = torch.addmm(torch.sum(self.embedding ** 2, dim=1) +
39
- torch.sum(x_flat ** 2, dim=1, keepdim=True),
40
- x_flat, self.embedding.t(),
41
- alpha=-2.0, beta=1.0) # [B*T_mel, N_vq]
42
- indices = torch.argmin(distances.float(), dim=-1) # [B*T_mel]
43
- quantized = F.embedding(indices, self.embedding)
44
- quantized = quantized.view_as(x)
45
- return x_flat, quantized, indices
46
-
47
- def forward(self, x):
48
- """
49
-
50
- :param x: [B, T, D]
51
- :return: [B, T, D]
52
- """
53
- B, T, _ = x.shape
54
- M, D = self.embedding.size()
55
- if self.training and self.data_initialized.item() == 0:
56
- print('| running kmeans in VQVAE') # data driven initialization for the embeddings
57
- x_flat = x.detach().reshape(-1, D)
58
- rp = torch.randperm(x_flat.size(0))
59
- kd = kmeans2(x_flat[rp].data.cpu().numpy(), self.n_embeddings, minit='points')
60
- self.embedding.copy_(torch.from_numpy(kd[0]))
61
- x_flat, quantized, indices = self.encode(x)
62
- encodings = F.one_hot(indices, M).float()
63
- self.ema_weight.copy_(torch.matmul(encodings.t(), x_flat))
64
- self.ema_count.copy_(torch.sum(encodings, dim=0))
65
-
66
- x_flat, quantized, indices = self.encode(x)
67
- encodings = F.one_hot(indices, M).float()
68
- indices = indices.reshape(B, T)
69
-
70
- if self.training and self.data_initialized.item() != 0:
71
- self.ema_count = self.decay * self.ema_count + (1 - self.decay) * torch.sum(encodings, dim=0)
72
-
73
- n = torch.sum(self.ema_count)
74
- self.ema_count = (self.ema_count + self.epsilon) / (n + M * self.epsilon) * n
75
-
76
- dw = torch.matmul(encodings.t(), x_flat)
77
- self.ema_weight = self.decay * self.ema_weight + (1 - self.decay) * dw
78
-
79
- self.embedding = self.ema_weight / self.ema_count.unsqueeze(-1)
80
- self.data_initialized.fill_(1)
81
-
82
- e_latent_loss = F.mse_loss(x, quantized.detach(), reduction='none')
83
- nonpadding = (x.abs().sum(-1) > 0).float()
84
- e_latent_loss = (e_latent_loss.mean(-1) * nonpadding).sum() / nonpadding.sum()
85
- loss = self.commitment_cost * e_latent_loss
86
-
87
- quantized = x + (quantized - x).detach()
88
-
89
- avg_probs = torch.mean(encodings, dim=0)
90
- perplexity = torch.exp(-torch.sum(avg_probs * torch.log(avg_probs + 1e-10)))
91
- if self.print_vq_prob:
92
- print("| VQ code avg_probs: ", avg_probs)
93
- return quantized, loss, indices, perplexity
94
-
95
- class CrossAttenLayer(nn.Module):
96
- def __init__(self, d_model, nhead, dim_feedforward=2048, dropout=0.1):
97
- super(CrossAttenLayer, self).__init__()
98
- self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout)
99
- self.linear1 = nn.Linear(d_model, dim_feedforward)
100
- self.dropout1 = nn.Dropout(dropout)
101
- self.norm1 = nn.LayerNorm(d_model)
102
- self.linear2 = nn.Linear(dim_feedforward, d_model)
103
- self.dropout2 = nn.Dropout(dropout)
104
- self.norm2 = nn.LayerNorm(d_model)
105
- self.activation = nn.ReLU()
106
-
107
- def forward(self, src, local_emotion, emotion_key_padding_mask=None, forcing=False):
108
- # src: (Tph, B, 256) local_emotion: (Temo, B, 256) emotion_key_padding_mask: (B, Temo)
109
- if forcing:
110
- maxlength = src.shape[0]
111
- k = local_emotion.shape[0] / src.shape[0]
112
- lengths1 = torch.ceil(torch.tensor([i for i in range(maxlength)]).to(src.device) * k) + 1
113
- lengths2 = torch.floor(torch.tensor([i for i in range(maxlength)]).to(src.device) * k) - 1
114
- mask1 = sequence_mask(lengths1, local_emotion.shape[0])
115
- mask2 = sequence_mask(lengths2, local_emotion.shape[0])
116
- mask = mask1.float() - mask2.float()
117
- attn_emo = mask.repeat(src.shape[1], 1, 1) # (B, Tph, Temo)
118
- src2 = torch.matmul(local_emotion.permute(1, 2, 0), attn_emo.float().transpose(1, 2)).permute(2, 0, 1)
119
- else:
120
- src2, attn_emo = self.multihead_attn(src, local_emotion, local_emotion, key_padding_mask=emotion_key_padding_mask)
121
- src = src + self.dropout1(src2)
122
- src = self.norm1(src)
123
- src2 = self.linear2(self.activation(self.linear1(src)))
124
- src = src + self.dropout2(src2)
125
- src = self.norm2(src)
126
- return src, attn_emo
127
-
128
-
129
- class ProsodyAligner(nn.Module):
130
- def __init__(self, num_layers, guided_sigma=0.3, guided_layers=None, norm=None):
131
- super(ProsodyAligner, self).__init__()
132
- self.layers = nn.ModuleList([CrossAttenLayer(d_model=hparams['hidden_size'], nhead=2) for _ in range(num_layers)])
133
- self.num_layers = num_layers
134
- self.norm = norm
135
- self.guided_sigma = guided_sigma
136
- self.guided_layers = guided_layers if guided_layers is not None else num_layers
137
-
138
- def forward(self, src, local_emotion, src_key_padding_mask=None, emotion_key_padding_mask=None, forcing=False):
139
- output = src
140
- guided_loss = 0
141
- attn_emo_list = []
142
- for i, mod in enumerate(self.layers):
143
- # output: (Tph, B, 256), global_emotion: (1, B, 256), local_emotion: (Temo, B, 256) mask: None, src_key_padding_mask: (B, Tph),
144
- # emotion_key_padding_mask: (B, Temo)
145
- output, attn_emo = mod(output, local_emotion, emotion_key_padding_mask=emotion_key_padding_mask, forcing=forcing)
146
- attn_emo_list.append(attn_emo.unsqueeze(1))
147
- # attn_emo: (B, Tph, Temo) attn: (B, Tph, Tph)
148
- if i < self.guided_layers and src_key_padding_mask is not None:
149
- s_length = (~src_key_padding_mask).float().sum(-1) # B
150
- emo_length = (~emotion_key_padding_mask).float().sum(-1)
151
- attn_w_emo = _make_guided_attention_mask(src_key_padding_mask.size(-1), s_length, emotion_key_padding_mask.size(-1), emo_length, self.guided_sigma)
152
-
153
- g_loss_emo = attn_emo * attn_w_emo # N, L, S
154
- non_padding_mask = (~src_key_padding_mask).unsqueeze(-1) & (~emotion_key_padding_mask).unsqueeze(1)
155
- guided_loss = g_loss_emo[non_padding_mask].mean() + guided_loss
156
-
157
- if self.norm is not None:
158
- output = self.norm(output)
159
-
160
- return output, guided_loss, attn_emo_list
161
-
162
- def _make_guided_attention_mask(ilen, rilen, olen, rolen, sigma):
163
- grid_x, grid_y = torch.meshgrid(torch.arange(ilen, device=rilen.device), torch.arange(olen, device=rolen.device))
164
- grid_x = grid_x.unsqueeze(0).expand(rilen.size(0), -1, -1)
165
- grid_y = grid_y.unsqueeze(0).expand(rolen.size(0), -1, -1)
166
- rilen = rilen.unsqueeze(1).unsqueeze(1)
167
- rolen = rolen.unsqueeze(1).unsqueeze(1)
168
- return 1.0 - torch.exp(
169
- -((grid_y.float() / rolen - grid_x.float() / rilen) ** 2) / (2 * (sigma ** 2))
170
- )
171
-
172
- class LocalStyleAdaptor(nn.Module):
173
- def __init__(self, hidden_size, num_vq_codes=64, padding_idx=0):
174
- super(LocalStyleAdaptor, self).__init__()
175
- self.encoder = ConvBlocks(80, hidden_size, [1] * 5, 5, dropout=hparams['vae_dropout'])
176
- self.n_embed = num_vq_codes
177
- self.vqvae = VQEmbeddingEMA(self.n_embed, hidden_size, commitment_cost=hparams['lambda_commit'])
178
- self.wavenet = WN(hidden_channels=80, gin_channels=80, kernel_size=3, dilation_rate=1, n_layers=4)
179
- self.padding_idx = padding_idx
180
- self.hidden_size = hidden_size
181
-
182
- def forward(self, ref_mels, mel2ph=None, no_vq=False):
183
- """
184
-
185
- :param ref_mels: [B, T, 80]
186
- :return: [B, 1, H]
187
- """
188
- padding_mask = ref_mels[:, :, 0].eq(self.padding_idx).data
189
- ref_mels = self.wavenet(ref_mels.transpose(1, 2), x_mask=(~padding_mask).unsqueeze(1).repeat([1, 80, 1])).transpose(1, 2)
190
- if mel2ph is not None:
191
- ref_ph, _ = group_hidden_by_segs(ref_mels, mel2ph, torch.max(mel2ph))
192
- else:
193
- ref_ph = ref_mels
194
- prosody = self.encoder(ref_ph)
195
- if no_vq:
196
- return prosody
197
- z, vq_loss, vq_tokens, ppl = self.vqvae(prosody)
198
- vq_loss = vq_loss.mean()
199
- return z, vq_loss, ppl
200
-
201
-
202
-
203
-
204
- class LambdaLayer(nn.Module):
205
- def __init__(self, lambd):
206
- super(LambdaLayer, self).__init__()
207
- self.lambd = lambd
208
-
209
- def forward(self, x):
210
- return self.lambd(x)
211
-
212
-
213
- class Conv1d(nn.Conv1d):
214
- """A wrapper around nn.Conv1d, that works on (batch, time, channels)"""
215
-
216
- def __init__(self, in_channels, out_channels, kernel_size=1, stride=1, dilation=1, groups=1, bias=True, padding=0):
217
- super(Conv1d, self).__init__(in_channels=in_channels, out_channels=out_channels,
218
- kernel_size=kernel_size, stride=stride, dilation=dilation,
219
- groups=groups, bias=bias, padding=padding)
220
-
221
- def forward(self, x):
222
- return super().forward(x.transpose(2, 1)).transpose(2, 1)
223
-
224
-
225
- def init_weights_func(m):
226
- classname = m.__class__.__name__
227
- if classname.find("Conv1d") != -1:
228
- torch.nn.init.xavier_uniform_(m.weight)
229
-
230
-
231
- class ResidualBlock(nn.Module):
232
- """Implements conv->PReLU->norm n-times"""
233
-
234
- def __init__(self, channels, kernel_size, dilation, n=2, norm_type='bn', dropout=0.0,
235
- c_multiple=2, ln_eps=1e-12):
236
- super(ResidualBlock, self).__init__()
237
-
238
- if norm_type == 'bn':
239
- norm_builder = lambda: nn.BatchNorm1d(channels)
240
- elif norm_type == 'in':
241
- norm_builder = lambda: nn.InstanceNorm1d(channels, affine=True)
242
- elif norm_type == 'gn':
243
- norm_builder = lambda: nn.GroupNorm(8, channels)
244
- elif norm_type == 'ln':
245
- norm_builder = lambda: LayerNorm(channels, dim=1, eps=ln_eps)
246
- else:
247
- norm_builder = lambda: nn.Identity()
248
-
249
- self.blocks = [
250
- nn.Sequential(
251
- norm_builder(),
252
- nn.Conv1d(channels, c_multiple * channels, kernel_size, dilation=dilation,
253
- padding=(dilation * (kernel_size - 1)) // 2),
254
- LambdaLayer(lambda x: x * kernel_size ** -0.5),
255
- nn.GELU(),
256
- nn.Conv1d(c_multiple * channels, channels, 1, dilation=dilation),
257
- )
258
- for i in range(n)
259
- ]
260
-
261
- self.blocks = nn.ModuleList(self.blocks)
262
- self.dropout = dropout
263
-
264
- def forward(self, x):
265
- nonpadding = (x.abs().sum(1) > 0).float()[:, None, :]
266
- for b in self.blocks:
267
- x_ = b(x)
268
- if self.dropout > 0 and self.training:
269
- x_ = F.dropout(x_, self.dropout, training=self.training)
270
- x = x + x_
271
- x = x * nonpadding
272
- return x
273
-
274
-
275
- class Pad(nn.ZeroPad2d):
276
- def __init__(self, kernel_size, dilation):
277
- pad_total = dilation * (kernel_size - 1)
278
- begin = pad_total // 2
279
- end = pad_total - begin
280
-
281
- super(Pad, self).__init__((begin, end, begin, end))
282
-
283
-
284
- class ZeroTemporalPad(nn.ZeroPad2d):
285
- """Pad sequences to equal length in the temporal dimension"""
286
-
287
- def __init__(self, kernel_size, dilation, causal=False):
288
- total_pad = (dilation * (kernel_size - 1))
289
-
290
- if causal:
291
- super(ZeroTemporalPad, self).__init__((total_pad, 0))
292
- else:
293
- begin = total_pad // 2
294
- end = total_pad - begin
295
- super(ZeroTemporalPad, self).__init__((begin, end))
296
-
297
-
298
- class ConvBlocks(nn.Module):
299
- """Decodes the expanded phoneme encoding into spectrograms"""
300
-
301
- def __init__(self, channels, out_dims, dilations, kernel_size,
302
- norm_type='ln', layers_in_block=2, c_multiple=2,
303
- dropout=0.0, ln_eps=1e-5, init_weights=True):
304
- super(ConvBlocks, self).__init__()
305
- self.res_blocks = nn.Sequential(
306
- *[ResidualBlock(channels, kernel_size, d,
307
- n=layers_in_block, norm_type=norm_type, c_multiple=c_multiple,
308
- dropout=dropout, ln_eps=ln_eps)
309
- for d in dilations],
310
- )
311
- if norm_type == 'bn':
312
- norm = nn.BatchNorm1d(channels)
313
- elif norm_type == 'in':
314
- norm = nn.InstanceNorm1d(channels, affine=True)
315
- elif norm_type == 'gn':
316
- norm = nn.GroupNorm(8, channels)
317
- elif norm_type == 'ln':
318
- norm = LayerNorm(channels, dim=1, eps=ln_eps)
319
- self.last_norm = norm
320
- self.post_net1 = nn.Conv1d(channels, out_dims, kernel_size=3, padding=1)
321
- if init_weights:
322
- self.apply(init_weights_func)
323
-
324
- def forward(self, x):
325
- """
326
-
327
- :param x: [B, T, H]
328
- :return: [B, T, H]
329
- """
330
- x = x.transpose(1, 2)
331
- nonpadding = (x.abs().sum(1) > 0).float()[:, None, :]
332
- x = self.res_blocks(x) * nonpadding
333
- x = self.last_norm(x) * nonpadding
334
- x = self.post_net1(x) * nonpadding
335
- return x.transpose(1, 2)
336
-
337
-
338
- class TextConvEncoder(ConvBlocks):
339
- def __init__(self, embed_tokens, channels, out_dims, dilations, kernel_size,
340
- norm_type='ln', layers_in_block=2, c_multiple=2,
341
- dropout=0.0, ln_eps=1e-5, init_weights=True):
342
- super().__init__(channels, out_dims, dilations, kernel_size,
343
- norm_type, layers_in_block, c_multiple,
344
- dropout, ln_eps, init_weights)
345
- self.embed_tokens = embed_tokens
346
- self.embed_scale = math.sqrt(channels)
347
-
348
- def forward(self, txt_tokens):
349
- """
350
-
351
- :param txt_tokens: [B, T]
352
- :return: {
353
- 'encoder_out': [B x T x C]
354
- }
355
- """
356
- x = self.embed_scale * self.embed_tokens(txt_tokens)
357
- return super().forward(x)
358
-
359
-
360
- class ConditionalConvBlocks(ConvBlocks):
361
- def __init__(self, channels, g_channels, out_dims, dilations, kernel_size,
362
- norm_type='ln', layers_in_block=2, c_multiple=2,
363
- dropout=0.0, ln_eps=1e-5, init_weights=True, is_BTC=True):
364
- super().__init__(channels, out_dims, dilations, kernel_size,
365
- norm_type, layers_in_block, c_multiple,
366
- dropout, ln_eps, init_weights)
367
- self.g_prenet = nn.Conv1d(g_channels, channels, 3, padding=1)
368
- self.is_BTC = is_BTC
369
- if init_weights:
370
- self.g_prenet.apply(init_weights_func)
371
-
372
- def forward(self, x, g, x_mask):
373
- if self.is_BTC:
374
- x = x.transpose(1, 2)
375
- g = g.transpose(1, 2)
376
- x_mask = x_mask.transpose(1, 2)
377
- x = x + self.g_prenet(g)
378
- x = x * x_mask
379
-
380
- if not self.is_BTC:
381
- x = x.transpose(1, 2)
382
- x = super(ConditionalConvBlocks, self).forward(x) # input needs to be BTC
383
- if not self.is_BTC:
384
- x = x.transpose(1, 2)
385
- return x
spaces/AIGC-Audio/Make_An_Audio_inpaint/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: Make An Audio Inpaint
3
- emoji: 🔥
4
- colorFrom: green
5
- colorTo: pink
6
- sdk: gradio
7
- sdk_version: 3.17.0
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/ASJMO/freegpt/g4f/Provider/Providers/Bard.py DELETED
@@ -1,74 +0,0 @@
1
- import os, requests, json, browser_cookie3, re, random
2
- from ...typing import sha256, Dict, get_type_hints
3
-
4
- url = 'https://bard.google.com'
5
- model = ['Palm2']
6
- supports_stream = False
7
- needs_auth = True
8
-
9
- def _create_completion(model: str, messages: list, stream: bool, **kwargs):
10
- psid = {cookie.name: cookie.value for cookie in browser_cookie3.chrome(
11
- domain_name='.google.com')}['__Secure-1PSID']
12
-
13
- formatted = '\n'.join([
14
- '%s: %s' % (message['role'], message['content']) for message in messages
15
- ])
16
- prompt = f'{formatted}\nAssistant:'
17
-
18
- proxy = kwargs.get('proxy', False)
19
- if proxy == False:
20
- print('warning!, you did not give a proxy, a lot of countries are banned from Google Bard, so it may not work')
21
-
22
- snlm0e = None
23
- conversation_id = None
24
- response_id = None
25
- choice_id = None
26
-
27
- client = requests.Session()
28
- client.proxies = {
29
- 'http': f'http://{proxy}',
30
- 'https': f'http://{proxy}'} if proxy else None
31
-
32
- client.headers = {
33
- 'authority': 'bard.google.com',
34
- 'content-type': 'application/x-www-form-urlencoded;charset=UTF-8',
35
- 'origin': 'https://bard.google.com',
36
- 'referer': 'https://bard.google.com/',
37
- 'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36',
38
- 'x-same-domain': '1',
39
- 'cookie': f'__Secure-1PSID={psid}'
40
- }
41
-
42
- snlm0e = re.search(r'SNlM0e\":\"(.*?)\"',
43
- client.get('https://bard.google.com/').text).group(1) if not snlm0e else snlm0e
44
-
45
- params = {
46
- 'bl': 'boq_assistant-bard-web-server_20230326.21_p0',
47
- '_reqid': random.randint(1111, 9999),
48
- 'rt': 'c'
49
- }
50
-
51
- data = {
52
- 'at': snlm0e,
53
- 'f.req': json.dumps([None, json.dumps([[prompt], None, [conversation_id, response_id, choice_id]])])}
54
-
55
- intents = '.'.join([
56
- 'assistant',
57
- 'lamda',
58
- 'BardFrontendService'
59
- ])
60
-
61
- response = client.post(f'https://bard.google.com/_/BardChatUi/data/{intents}/StreamGenerate',
62
- data=data, params=params)
63
-
64
- chat_data = json.loads(response.content.splitlines()[3])[0][2]
65
- if chat_data:
66
- json_chat_data = json.loads(chat_data)
67
-
68
- yield json_chat_data[0][0]
69
-
70
- else:
71
- yield 'error'
72
-
73
- params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
74
- '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
spaces/AbeShinzo0708/AI_Kishida_Fumio_speaker/pre-fix/librosa/__init__.py DELETED
@@ -1,10 +0,0 @@
1
- # https://github.com/librosa/librosa/issues/1682
2
-
3
- import lazy_loader as lazy
4
- from .version import version as __version__
5
-
6
- _filename = __file__
7
- if _filename.endswith('.pyc'):
8
- _filename = _filename[:-1]
9
-
10
- __getattr__, __dir__, __all__ = lazy.attach_stub(__name__, _filename)
spaces/Abhilashvj/planogram-compliance/classify/predict.py DELETED
@@ -1,345 +0,0 @@
1
- # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
2
- """
3
- Run YOLOv5 classification inference on images, videos, directories, globs, YouTube, webcam, streams, etc.
4
-
5
- Usage - sources:
6
- $ python classify/predict.py --weights yolov5s-cls.pt --source 0 # webcam
7
- img.jpg # image
8
- vid.mp4 # video
9
- screen # screenshot
10
- path/ # directory
11
- list.txt # list of images
12
- list.streams # list of streams
13
- 'path/*.jpg' # glob
14
- 'https://youtu.be/Zgi9g1ksQHc' # YouTube
15
- 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
16
-
17
- Usage - formats:
18
- $ python classify/predict.py --weights yolov5s-cls.pt # PyTorch
19
- yolov5s-cls.torchscript # TorchScript
20
- yolov5s-cls.onnx # ONNX Runtime or OpenCV DNN with --dnn
21
- yolov5s-cls_openvino_model # OpenVINO
22
- yolov5s-cls.engine # TensorRT
23
- yolov5s-cls.mlmodel # CoreML (macOS-only)
24
- yolov5s-cls_saved_model # TensorFlow SavedModel
25
- yolov5s-cls.pb # TensorFlow GraphDef
26
- yolov5s-cls.tflite # TensorFlow Lite
27
- yolov5s-cls_edgetpu.tflite # TensorFlow Edge TPU
28
- yolov5s-cls_paddle_model # PaddlePaddle
29
- """
30
-
31
- import argparse
32
- import os
33
- import platform
34
- import sys
35
- from pathlib import Path
36
-
37
- import torch
38
- import torch.nn.functional as F
39
-
40
- FILE = Path(__file__).resolve()
41
- ROOT = FILE.parents[1] # YOLOv5 root directory
42
- if str(ROOT) not in sys.path:
43
- sys.path.append(str(ROOT)) # add ROOT to PATH
44
- ROOT = Path(os.path.relpath(ROOT, Path.cwd())) # relative
45
-
46
- from models.common import DetectMultiBackend
47
- from utils.augmentations import classify_transforms
48
- from utils.dataloaders import (
49
- IMG_FORMATS,
50
- VID_FORMATS,
51
- LoadImages,
52
- LoadScreenshots,
53
- LoadStreams,
54
- )
55
- from utils.general import (
56
- LOGGER,
57
- Profile,
58
- check_file,
59
- check_img_size,
60
- check_imshow,
61
- check_requirements,
62
- colorstr,
63
- cv2,
64
- increment_path,
65
- print_args,
66
- strip_optimizer,
67
- )
68
- from utils.plots import Annotator
69
- from utils.torch_utils import select_device, smart_inference_mode
70
-
71
-
72
- @smart_inference_mode()
73
- def run(
74
- weights=ROOT / "yolov5s-cls.pt", # model.pt path(s)
75
- source=ROOT / "data/images", # file/dir/URL/glob/screen/0(webcam)
76
- data=ROOT / "data/coco128.yaml", # dataset.yaml path
77
- imgsz=(224, 224), # inference size (height, width)
78
- device="", # cuda device, i.e. 0 or 0,1,2,3 or cpu
79
- view_img=False, # show results
80
- save_txt=False, # save results to *.txt
81
- nosave=False, # do not save images/videos
82
- augment=False, # augmented inference
83
- visualize=False, # visualize features
84
- update=False, # update all models
85
- project=ROOT / "runs/predict-cls", # save results to project/name
86
- name="exp", # save results to project/name
87
- exist_ok=False, # existing project/name ok, do not increment
88
- half=False, # use FP16 half-precision inference
89
- dnn=False, # use OpenCV DNN for ONNX inference
90
- vid_stride=1, # video frame-rate stride
91
- ):
92
- source = str(source)
93
- save_img = not nosave and not source.endswith(
94
- ".txt"
95
- ) # save inference images
96
- is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
97
- is_url = source.lower().startswith(
98
- ("rtsp://", "rtmp://", "http://", "https://")
99
- )
100
- webcam = (
101
- source.isnumeric()
102
- or source.endswith(".streams")
103
- or (is_url and not is_file)
104
- )
105
- screenshot = source.lower().startswith("screen")
106
- if is_url and is_file:
107
- source = check_file(source) # download
108
-
109
- # Directories
110
- save_dir = increment_path(
111
- Path(project) / name, exist_ok=exist_ok
112
- ) # increment run
113
- (save_dir / "labels" if save_txt else save_dir).mkdir(
114
- parents=True, exist_ok=True
115
- ) # make dir
116
-
117
- # Load model
118
- device = select_device(device)
119
- model = DetectMultiBackend(
120
- weights, device=device, dnn=dnn, data=data, fp16=half
121
- )
122
- stride, names, pt = model.stride, model.names, model.pt
123
- imgsz = check_img_size(imgsz, s=stride) # check image size
124
-
125
- # Dataloader
126
- bs = 1 # batch_size
127
- if webcam:
128
- view_img = check_imshow(warn=True)
129
- dataset = LoadStreams(
130
- source,
131
- img_size=imgsz,
132
- transforms=classify_transforms(imgsz[0]),
133
- vid_stride=vid_stride,
134
- )
135
- bs = len(dataset)
136
- elif screenshot:
137
- dataset = LoadScreenshots(
138
- source, img_size=imgsz, stride=stride, auto=pt
139
- )
140
- else:
141
- dataset = LoadImages(
142
- source,
143
- img_size=imgsz,
144
- transforms=classify_transforms(imgsz[0]),
145
- vid_stride=vid_stride,
146
- )
147
- vid_path, vid_writer = [None] * bs, [None] * bs
148
-
149
- # Run inference
150
- model.warmup(imgsz=(1 if pt else bs, 3, *imgsz)) # warmup
151
- seen, windows, dt = 0, [], (Profile(), Profile(), Profile())
152
- for path, im, im0s, vid_cap, s in dataset:
153
- with dt[0]:
154
- im = torch.Tensor(im).to(model.device)
155
- im = im.half() if model.fp16 else im.float() # uint8 to fp16/32
156
- if len(im.shape) == 3:
157
- im = im[None] # expand for batch dim
158
-
159
- # Inference
160
- with dt[1]:
161
- results = model(im)
162
-
163
- # Post-process
164
- with dt[2]:
165
- pred = F.softmax(results, dim=1) # probabilities
166
-
167
- # Process predictions
168
- for i, prob in enumerate(pred): # per image
169
- seen += 1
170
- if webcam: # batch_size >= 1
171
- p, im0, frame = path[i], im0s[i].copy(), dataset.count
172
- s += f"{i}: "
173
- else:
174
- p, im0, frame = path, im0s.copy(), getattr(dataset, "frame", 0)
175
-
176
- p = Path(p) # to Path
177
- save_path = str(save_dir / p.name) # im.jpg
178
- txt_path = str(save_dir / "labels" / p.stem) + (
179
- "" if dataset.mode == "image" else f"_{frame}"
180
- ) # im.txt
181
-
182
- s += "%gx%g " % im.shape[2:] # print string
183
- annotator = Annotator(im0, example=str(names), pil=True)
184
-
185
- # Print results
186
- top5i = prob.argsort(0, descending=True)[
187
- :5
188
- ].tolist() # top 5 indices
189
- s += f"{', '.join(f'{names[j]} {prob[j]:.2f}' for j in top5i)}, "
190
-
191
- # Write results
192
- text = "\n".join(f"{prob[j]:.2f} {names[j]}" for j in top5i)
193
- if save_img or view_img: # Add bbox to image
194
- annotator.text((32, 32), text, txt_color=(255, 255, 255))
195
- if save_txt: # Write to file
196
- with open(f"{txt_path}.txt", "a") as f:
197
- f.write(text + "\n")
198
-
199
- # Stream results
200
- im0 = annotator.result()
201
- if view_img:
202
- if platform.system() == "Linux" and p not in windows:
203
- windows.append(p)
204
- cv2.namedWindow(
205
- str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO
206
- ) # allow window resize (Linux)
207
- cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0])
208
- cv2.imshow(str(p), im0)
209
- cv2.waitKey(1) # 1 millisecond
210
-
211
- # Save results (image with detections)
212
- if save_img:
213
- if dataset.mode == "image":
214
- cv2.imwrite(save_path, im0)
215
- else: # 'video' or 'stream'
216
- if vid_path[i] != save_path: # new video
217
- vid_path[i] = save_path
218
- if isinstance(vid_writer[i], cv2.VideoWriter):
219
- vid_writer[
220
- i
221
- ].release() # release previous video writer
222
- if vid_cap: # video
223
- fps = vid_cap.get(cv2.CAP_PROP_FPS)
224
- w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
225
- h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
226
- else: # stream
227
- fps, w, h = 30, im0.shape[1], im0.shape[0]
228
- save_path = str(
229
- Path(save_path).with_suffix(".mp4")
230
- ) # force *.mp4 suffix on results videos
231
- vid_writer[i] = cv2.VideoWriter(
232
- save_path,
233
- cv2.VideoWriter_fourcc(*"mp4v"),
234
- fps,
235
- (w, h),
236
- )
237
- vid_writer[i].write(im0)
238
-
239
- # Print time (inference-only)
240
- LOGGER.info(f"{s}{dt[1].dt * 1E3:.1f}ms")
241
-
242
- # Print results
243
- t = tuple(x.t / seen * 1e3 for x in dt) # speeds per image
244
- LOGGER.info(
245
- f"Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}"
246
- % t
247
- )
248
- if save_txt or save_img:
249
- s = (
250
- f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}"
251
- if save_txt
252
- else ""
253
- )
254
- LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}")
255
- if update:
256
- strip_optimizer(
257
- weights[0]
258
- ) # update model (to fix SourceChangeWarning)
259
-
260
-
261
- def parse_opt():
262
- parser = argparse.ArgumentParser()
263
- parser.add_argument(
264
- "--weights",
265
- nargs="+",
266
- type=str,
267
- default=ROOT / "yolov5s-cls.pt",
268
- help="model path(s)",
269
- )
270
- parser.add_argument(
271
- "--source",
272
- type=str,
273
- default=ROOT / "data/images",
274
- help="file/dir/URL/glob/screen/0(webcam)",
275
- )
276
- parser.add_argument(
277
- "--data",
278
- type=str,
279
- default=ROOT / "data/coco128.yaml",
280
- help="(optional) dataset.yaml path",
281
- )
282
- parser.add_argument(
283
- "--imgsz",
284
- "--img",
285
- "--img-size",
286
- nargs="+",
287
- type=int,
288
- default=[224],
289
- help="inference size h,w",
290
- )
291
- parser.add_argument(
292
- "--device", default="", help="cuda device, i.e. 0 or 0,1,2,3 or cpu"
293
- )
294
- parser.add_argument("--view-img", action="store_true", help="show results")
295
- parser.add_argument(
296
- "--save-txt", action="store_true", help="save results to *.txt"
297
- )
298
- parser.add_argument(
299
- "--nosave", action="store_true", help="do not save images/videos"
300
- )
301
- parser.add_argument(
302
- "--augment", action="store_true", help="augmented inference"
303
- )
304
- parser.add_argument(
305
- "--visualize", action="store_true", help="visualize features"
306
- )
307
- parser.add_argument(
308
- "--update", action="store_true", help="update all models"
309
- )
310
- parser.add_argument(
311
- "--project",
312
- default=ROOT / "runs/predict-cls",
313
- help="save results to project/name",
314
- )
315
- parser.add_argument(
316
- "--name", default="exp", help="save results to project/name"
317
- )
318
- parser.add_argument(
319
- "--exist-ok",
320
- action="store_true",
321
- help="existing project/name ok, do not increment",
322
- )
323
- parser.add_argument(
324
- "--half", action="store_true", help="use FP16 half-precision inference"
325
- )
326
- parser.add_argument(
327
- "--dnn", action="store_true", help="use OpenCV DNN for ONNX inference"
328
- )
329
- parser.add_argument(
330
- "--vid-stride", type=int, default=1, help="video frame-rate stride"
331
- )
332
- opt = parser.parse_args()
333
- opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1 # expand
334
- print_args(vars(opt))
335
- return opt
336
-
337
-
338
- def main(opt):
339
- check_requirements(exclude=("tensorboard", "thop"))
340
- run(**vars(opt))
341
-
342
-
343
- if __name__ == "__main__":
344
- opt = parse_opt()
345
- main(opt)
spaces/AchyuthGamer/OpenGPT/g4f/Provider/Providers/Vitalentum.py DELETED
@@ -1,69 +0,0 @@
1
- from __future__ import annotations
2
-
3
- import json
4
- from aiohttp import ClientSession
5
-
6
- from .base_provider import AsyncGeneratorProvider
7
- from ..typing import AsyncResult, Messages
8
-
9
- class Vitalentum(AsyncGeneratorProvider):
10
- url = "https://app.vitalentum.io"
11
- working = True
12
- supports_gpt_35_turbo = True
13
-
14
-
15
- @classmethod
16
- async def create_async_generator(
17
- cls,
18
- model: str,
19
- messages: Messages,
20
- proxy: str = None,
21
- **kwargs
22
- ) -> AsyncResult:
23
- headers = {
24
- "User-Agent" : "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36",
25
- "Accept" : "text/event-stream",
26
- "Accept-language" : "de,en-US;q=0.7,en;q=0.3",
27
- "Origin" : cls.url,
28
- "Referer" : cls.url + "/",
29
- "Sec-Fetch-Dest" : "empty",
30
- "Sec-Fetch-Mode" : "cors",
31
- "Sec-Fetch-Site" : "same-origin",
32
- }
33
- conversation = json.dumps({"history": [{
34
- "speaker": "human" if message["role"] == "user" else "bot",
35
- "text": message["content"],
36
- } for message in messages]})
37
- data = {
38
- "conversation": conversation,
39
- "temperature": 0.7,
40
- **kwargs
41
- }
42
- async with ClientSession(
43
- headers=headers
44
- ) as session:
45
- async with session.post(cls.url + "/api/converse-edge", json=data, proxy=proxy) as response:
46
- response.raise_for_status()
47
- async for line in response.content:
48
- line = line.decode()
49
- if line.startswith("data: "):
50
- if line.startswith("data: [DONE]"):
51
- break
52
- line = json.loads(line[6:-1])
53
- content = line["choices"][0]["delta"].get("content")
54
- if content:
55
- yield content
56
-
57
-
58
- @classmethod
59
- @property
60
- def params(cls):
61
- params = [
62
- ("model", "str"),
63
- ("messages", "list[dict[str, str]]"),
64
- ("stream", "bool"),
65
- ("proxy", "str"),
66
- ("temperature", "float"),
67
- ]
68
- param = ", ".join([": ".join(p) for p in params])
69
- return f"g4f.provider.{cls.__name__} supports: ({param})"
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/customshapes/CustomShapes.d.ts DELETED
@@ -1,2 +0,0 @@
1
- import CustomShapes from '../../../plugins/customshapes';
2
- export default CustomShapes;
spaces/AjulorC/question_answering_bot_deployed_with_Gradio/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: Question_answering_bot_deployed_with_Gradio
3
- emoji: 🦀
4
- colorFrom: green
5
- colorTo: gray
6
- sdk: gradio
7
- sdk_version: 2.8.11
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
spaces/Aki004/herta-so-vits/modules/__init__.py DELETED
File without changes
spaces/Akmyradov/TurkmenTTSweSTT/vits/train_ms.py DELETED
@@ -1,294 +0,0 @@
1
- import os
2
- import json
3
- import argparse
4
- import itertools
5
- import math
6
- import torch
7
- from torch import nn, optim
8
- from torch.nn import functional as F
9
- from torch.utils.data import DataLoader
10
- from torch.utils.tensorboard import SummaryWriter
11
- import torch.multiprocessing as mp
12
- import torch.distributed as dist
13
- from torch.nn.parallel import DistributedDataParallel as DDP
14
- from torch.cuda.amp import autocast, GradScaler
15
-
16
- import commons
17
- import utils
18
- from data_utils import (
19
- TextAudioSpeakerLoader,
20
- TextAudioSpeakerCollate,
21
- DistributedBucketSampler
22
- )
23
- from models import (
24
- SynthesizerTrn,
25
- MultiPeriodDiscriminator,
26
- )
27
- from losses import (
28
- generator_loss,
29
- discriminator_loss,
30
- feature_loss,
31
- kl_loss
32
- )
33
- from mel_processing import mel_spectrogram_torch, spec_to_mel_torch
34
- from text.symbols import symbols
35
-
36
-
37
- torch.backends.cudnn.benchmark = True
38
- global_step = 0
39
-
40
-
41
- def main():
42
- """Assume Single Node Multi GPUs Training Only"""
43
- assert torch.cuda.is_available(), "CPU training is not allowed."
44
-
45
- n_gpus = torch.cuda.device_count()
46
- os.environ['MASTER_ADDR'] = 'localhost'
47
- os.environ['MASTER_PORT'] = '80000'
48
-
49
- hps = utils.get_hparams()
50
- mp.spawn(run, nprocs=n_gpus, args=(n_gpus, hps,))
51
-
52
-
53
- def run(rank, n_gpus, hps):
54
- global global_step
55
- if rank == 0:
56
- logger = utils.get_logger(hps.model_dir)
57
- logger.info(hps)
58
- utils.check_git_hash(hps.model_dir)
59
- writer = SummaryWriter(log_dir=hps.model_dir)
60
- writer_eval = SummaryWriter(log_dir=os.path.join(hps.model_dir, "eval"))
61
-
62
- dist.init_process_group(backend='nccl', init_method='env://', world_size=n_gpus, rank=rank)
63
- torch.manual_seed(hps.train.seed)
64
- torch.cuda.set_device(rank)
65
-
66
- train_dataset = TextAudioSpeakerLoader(hps.data.training_files, hps.data)
67
- train_sampler = DistributedBucketSampler(
68
- train_dataset,
69
- hps.train.batch_size,
70
- [32,300,400,500,600,700,800,900,1000],
71
- num_replicas=n_gpus,
72
- rank=rank,
73
- shuffle=True)
74
- collate_fn = TextAudioSpeakerCollate()
75
- train_loader = DataLoader(train_dataset, num_workers=8, shuffle=False, pin_memory=True,
76
- collate_fn=collate_fn, batch_sampler=train_sampler)
77
- if rank == 0:
78
- eval_dataset = TextAudioSpeakerLoader(hps.data.validation_files, hps.data)
79
- eval_loader = DataLoader(eval_dataset, num_workers=8, shuffle=False,
80
- batch_size=hps.train.batch_size, pin_memory=True,
81
- drop_last=False, collate_fn=collate_fn)
82
-
83
- net_g = SynthesizerTrn(
84
- len(symbols),
85
- hps.data.filter_length // 2 + 1,
86
- hps.train.segment_size // hps.data.hop_length,
87
- n_speakers=hps.data.n_speakers,
88
- **hps.model).cuda(rank)
89
- net_d = MultiPeriodDiscriminator(hps.model.use_spectral_norm).cuda(rank)
90
- optim_g = torch.optim.AdamW(
91
- net_g.parameters(),
92
- hps.train.learning_rate,
93
- betas=hps.train.betas,
94
- eps=hps.train.eps)
95
- optim_d = torch.optim.AdamW(
96
- net_d.parameters(),
97
- hps.train.learning_rate,
98
- betas=hps.train.betas,
99
- eps=hps.train.eps)
100
- net_g = DDP(net_g, device_ids=[rank])
101
- net_d = DDP(net_d, device_ids=[rank])
102
-
103
- try:
104
- _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "G_*.pth"), net_g, optim_g)
105
- _, _, _, epoch_str = utils.load_checkpoint(utils.latest_checkpoint_path(hps.model_dir, "D_*.pth"), net_d, optim_d)
106
- global_step = (epoch_str - 1) * len(train_loader)
107
- except:
108
- epoch_str = 1
109
- global_step = 0
110
-
111
- scheduler_g = torch.optim.lr_scheduler.ExponentialLR(optim_g, gamma=hps.train.lr_decay, last_epoch=epoch_str-2)
112
- scheduler_d = torch.optim.lr_scheduler.ExponentialLR(optim_d, gamma=hps.train.lr_decay, last_epoch=epoch_str-2)
113
-
114
- scaler = GradScaler(enabled=hps.train.fp16_run)
115
-
116
- for epoch in range(epoch_str, hps.train.epochs + 1):
117
- if rank==0:
118
- train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, eval_loader], logger, [writer, writer_eval])
119
- else:
120
- train_and_evaluate(rank, epoch, hps, [net_g, net_d], [optim_g, optim_d], [scheduler_g, scheduler_d], scaler, [train_loader, None], None, None)
121
- scheduler_g.step()
122
- scheduler_d.step()
123
-
124
-
125
- def train_and_evaluate(rank, epoch, hps, nets, optims, schedulers, scaler, loaders, logger, writers):
126
- net_g, net_d = nets
127
- optim_g, optim_d = optims
128
- scheduler_g, scheduler_d = schedulers
129
- train_loader, eval_loader = loaders
130
- if writers is not None:
131
- writer, writer_eval = writers
132
-
133
- train_loader.batch_sampler.set_epoch(epoch)
134
- global global_step
135
-
136
- net_g.train()
137
- net_d.train()
138
- for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers) in enumerate(train_loader):
139
- x, x_lengths = x.cuda(rank, non_blocking=True), x_lengths.cuda(rank, non_blocking=True)
140
- spec, spec_lengths = spec.cuda(rank, non_blocking=True), spec_lengths.cuda(rank, non_blocking=True)
141
- y, y_lengths = y.cuda(rank, non_blocking=True), y_lengths.cuda(rank, non_blocking=True)
142
- speakers = speakers.cuda(rank, non_blocking=True)
143
-
144
- with autocast(enabled=hps.train.fp16_run):
145
- y_hat, l_length, attn, ids_slice, x_mask, z_mask,\
146
- (z, z_p, m_p, logs_p, m_q, logs_q) = net_g(x, x_lengths, spec, spec_lengths, speakers)
147
-
148
- mel = spec_to_mel_torch(
149
- spec,
150
- hps.data.filter_length,
151
- hps.data.n_mel_channels,
152
- hps.data.sampling_rate,
153
- hps.data.mel_fmin,
154
- hps.data.mel_fmax)
155
- y_mel = commons.slice_segments(mel, ids_slice, hps.train.segment_size // hps.data.hop_length)
156
- y_hat_mel = mel_spectrogram_torch(
157
- y_hat.squeeze(1),
158
- hps.data.filter_length,
159
- hps.data.n_mel_channels,
160
- hps.data.sampling_rate,
161
- hps.data.hop_length,
162
- hps.data.win_length,
163
- hps.data.mel_fmin,
164
- hps.data.mel_fmax
165
- )
166
-
167
- y = commons.slice_segments(y, ids_slice * hps.data.hop_length, hps.train.segment_size) # slice
168
-
169
- # Discriminator
170
- y_d_hat_r, y_d_hat_g, _, _ = net_d(y, y_hat.detach())
171
- with autocast(enabled=False):
172
- loss_disc, losses_disc_r, losses_disc_g = discriminator_loss(y_d_hat_r, y_d_hat_g)
173
- loss_disc_all = loss_disc
174
- optim_d.zero_grad()
175
- scaler.scale(loss_disc_all).backward()
176
- scaler.unscale_(optim_d)
177
- grad_norm_d = commons.clip_grad_value_(net_d.parameters(), None)
178
- scaler.step(optim_d)
179
-
180
- with autocast(enabled=hps.train.fp16_run):
181
- # Generator
182
- y_d_hat_r, y_d_hat_g, fmap_r, fmap_g = net_d(y, y_hat)
183
- with autocast(enabled=False):
184
- loss_dur = torch.sum(l_length.float())
185
- loss_mel = F.l1_loss(y_mel, y_hat_mel) * hps.train.c_mel
186
- loss_kl = kl_loss(z_p, logs_q, m_p, logs_p, z_mask) * hps.train.c_kl
187
-
188
- loss_fm = feature_loss(fmap_r, fmap_g)
189
- loss_gen, losses_gen = generator_loss(y_d_hat_g)
190
- loss_gen_all = loss_gen + loss_fm + loss_mel + loss_dur + loss_kl
191
- optim_g.zero_grad()
192
- scaler.scale(loss_gen_all).backward()
193
- scaler.unscale_(optim_g)
194
- grad_norm_g = commons.clip_grad_value_(net_g.parameters(), None)
195
- scaler.step(optim_g)
196
- scaler.update()
197
-
198
- if rank==0:
199
- if global_step % hps.train.log_interval == 0:
200
- lr = optim_g.param_groups[0]['lr']
201
- losses = [loss_disc, loss_gen, loss_fm, loss_mel, loss_dur, loss_kl]
202
- logger.info('Train Epoch: {} [{:.0f}%]'.format(
203
- epoch,
204
- 100. * batch_idx / len(train_loader)))
205
- logger.info([x.item() for x in losses] + [global_step, lr])
206
-
207
- scalar_dict = {"loss/g/total": loss_gen_all, "loss/d/total": loss_disc_all, "learning_rate": lr, "grad_norm_d": grad_norm_d, "grad_norm_g": grad_norm_g}
208
- scalar_dict.update({"loss/g/fm": loss_fm, "loss/g/mel": loss_mel, "loss/g/dur": loss_dur, "loss/g/kl": loss_kl})
209
-
210
- scalar_dict.update({"loss/g/{}".format(i): v for i, v in enumerate(losses_gen)})
211
- scalar_dict.update({"loss/d_r/{}".format(i): v for i, v in enumerate(losses_disc_r)})
212
- scalar_dict.update({"loss/d_g/{}".format(i): v for i, v in enumerate(losses_disc_g)})
213
- image_dict = {
214
- "slice/mel_org": utils.plot_spectrogram_to_numpy(y_mel[0].data.cpu().numpy()),
215
- "slice/mel_gen": utils.plot_spectrogram_to_numpy(y_hat_mel[0].data.cpu().numpy()),
216
- "all/mel": utils.plot_spectrogram_to_numpy(mel[0].data.cpu().numpy()),
217
- "all/attn": utils.plot_alignment_to_numpy(attn[0,0].data.cpu().numpy())
218
- }
219
- utils.summarize(
220
- writer=writer,
221
- global_step=global_step,
222
- images=image_dict,
223
- scalars=scalar_dict)
224
-
225
- if global_step % hps.train.eval_interval == 0:
226
- evaluate(hps, net_g, eval_loader, writer_eval)
227
- utils.save_checkpoint(net_g, optim_g, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "G_{}.pth".format(global_step)))
228
- utils.save_checkpoint(net_d, optim_d, hps.train.learning_rate, epoch, os.path.join(hps.model_dir, "D_{}.pth".format(global_step)))
229
- global_step += 1
230
-
231
- if rank == 0:
232
- logger.info('====> Epoch: {}'.format(epoch))
233
-
234
-
235
- def evaluate(hps, generator, eval_loader, writer_eval):
236
- generator.eval()
237
- with torch.no_grad():
238
- for batch_idx, (x, x_lengths, spec, spec_lengths, y, y_lengths, speakers) in enumerate(eval_loader):
239
- x, x_lengths = x.cuda(0), x_lengths.cuda(0)
240
- spec, spec_lengths = spec.cuda(0), spec_lengths.cuda(0)
241
- y, y_lengths = y.cuda(0), y_lengths.cuda(0)
242
- speakers = speakers.cuda(0)
243
-
244
- # remove else
245
- x = x[:1]
246
- x_lengths = x_lengths[:1]
247
- spec = spec[:1]
248
- spec_lengths = spec_lengths[:1]
249
- y = y[:1]
250
- y_lengths = y_lengths[:1]
251
- speakers = speakers[:1]
252
- break
253
- y_hat, attn, mask, *_ = generator.module.infer(x, x_lengths, speakers, max_len=1000)
254
- y_hat_lengths = mask.sum([1,2]).long() * hps.data.hop_length
255
-
256
- mel = spec_to_mel_torch(
257
- spec,
258
- hps.data.filter_length,
259
- hps.data.n_mel_channels,
260
- hps.data.sampling_rate,
261
- hps.data.mel_fmin,
262
- hps.data.mel_fmax)
263
- y_hat_mel = mel_spectrogram_torch(
264
- y_hat.squeeze(1).float(),
265
- hps.data.filter_length,
266
- hps.data.n_mel_channels,
267
- hps.data.sampling_rate,
268
- hps.data.hop_length,
269
- hps.data.win_length,
270
- hps.data.mel_fmin,
271
- hps.data.mel_fmax
272
- )
273
- image_dict = {
274
- "gen/mel": utils.plot_spectrogram_to_numpy(y_hat_mel[0].cpu().numpy())
275
- }
276
- audio_dict = {
277
- "gen/audio": y_hat[0,:,:y_hat_lengths[0]]
278
- }
279
- if global_step == 0:
280
- image_dict.update({"gt/mel": utils.plot_spectrogram_to_numpy(mel[0].cpu().numpy())})
281
- audio_dict.update({"gt/audio": y[0,:,:y_lengths[0]]})
282
-
283
- utils.summarize(
284
- writer=writer_eval,
285
- global_step=global_step,
286
- images=image_dict,
287
- audios=audio_dict,
288
- audio_sampling_rate=hps.data.sampling_rate
289
- )
290
- generator.train()
291
-
292
-
293
- if __name__ == "__main__":
294
- main()
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Alpaca233/SadTalker/src/test_audio2coeff.py DELETED
@@ -1,123 +0,0 @@
1
- import os
2
- import torch
3
- import numpy as np
4
- from scipy.io import savemat, loadmat
5
- from yacs.config import CfgNode as CN
6
- from scipy.signal import savgol_filter
7
-
8
- import safetensors
9
- import safetensors.torch
10
-
11
- from src.audio2pose_models.audio2pose import Audio2Pose
12
- from src.audio2exp_models.networks import SimpleWrapperV2
13
- from src.audio2exp_models.audio2exp import Audio2Exp
14
- from src.utils.safetensor_helper import load_x_from_safetensor
15
-
16
- def load_cpk(checkpoint_path, model=None, optimizer=None, device="cpu"):
17
- checkpoint = torch.load(checkpoint_path, map_location=torch.device(device))
18
- if model is not None:
19
- model.load_state_dict(checkpoint['model'])
20
- if optimizer is not None:
21
- optimizer.load_state_dict(checkpoint['optimizer'])
22
-
23
- return checkpoint['epoch']
24
-
25
- class Audio2Coeff():
26
-
27
- def __init__(self, sadtalker_path, device):
28
- #load config
29
- fcfg_pose = open(sadtalker_path['audio2pose_yaml_path'])
30
- cfg_pose = CN.load_cfg(fcfg_pose)
31
- cfg_pose.freeze()
32
- fcfg_exp = open(sadtalker_path['audio2exp_yaml_path'])
33
- cfg_exp = CN.load_cfg(fcfg_exp)
34
- cfg_exp.freeze()
35
-
36
- # load audio2pose_model
37
- self.audio2pose_model = Audio2Pose(cfg_pose, None, device=device)
38
- self.audio2pose_model = self.audio2pose_model.to(device)
39
- self.audio2pose_model.eval()
40
- for param in self.audio2pose_model.parameters():
41
- param.requires_grad = False
42
-
43
- try:
44
- if sadtalker_path['use_safetensor']:
45
- checkpoints = safetensors.torch.load_file(sadtalker_path['checkpoint'])
46
- self.audio2pose_model.load_state_dict(load_x_from_safetensor(checkpoints, 'audio2pose'))
47
- else:
48
- load_cpk(sadtalker_path['audio2pose_checkpoint'], model=self.audio2pose_model, device=device)
49
- except:
50
- raise Exception("Failed in loading audio2pose_checkpoint")
51
-
52
- # load audio2exp_model
53
- netG = SimpleWrapperV2()
54
- netG = netG.to(device)
55
- for param in netG.parameters():
56
- netG.requires_grad = False
57
- netG.eval()
58
- try:
59
- if sadtalker_path['use_safetensor']:
60
- checkpoints = safetensors.torch.load_file(sadtalker_path['checkpoint'])
61
- netG.load_state_dict(load_x_from_safetensor(checkpoints, 'audio2exp'))
62
- else:
63
- load_cpk(sadtalker_path['audio2exp_checkpoint'], model=netG, device=device)
64
- except:
65
- raise Exception("Failed in loading audio2exp_checkpoint")
66
- self.audio2exp_model = Audio2Exp(netG, cfg_exp, device=device, prepare_training_loss=False)
67
- self.audio2exp_model = self.audio2exp_model.to(device)
68
- for param in self.audio2exp_model.parameters():
69
- param.requires_grad = False
70
- self.audio2exp_model.eval()
71
-
72
- self.device = device
73
-
74
- def generate(self, batch, coeff_save_dir, pose_style, ref_pose_coeff_path=None):
75
-
76
- with torch.no_grad():
77
- #test
78
- results_dict_exp= self.audio2exp_model.test(batch)
79
- exp_pred = results_dict_exp['exp_coeff_pred'] #bs T 64
80
-
81
- #for class_id in range(1):
82
- #class_id = 0#(i+10)%45
83
- #class_id = random.randint(0,46) #46 styles can be selected
84
- batch['class'] = torch.LongTensor([pose_style]).to(self.device)
85
- results_dict_pose = self.audio2pose_model.test(batch)
86
- pose_pred = results_dict_pose['pose_pred'] #bs T 6
87
-
88
- pose_len = pose_pred.shape[1]
89
- if pose_len<13:
90
- pose_len = int((pose_len-1)/2)*2+1
91
- pose_pred = torch.Tensor(savgol_filter(np.array(pose_pred.cpu()), pose_len, 2, axis=1)).to(self.device)
92
- else:
93
- pose_pred = torch.Tensor(savgol_filter(np.array(pose_pred.cpu()), 13, 2, axis=1)).to(self.device)
94
-
95
- coeffs_pred = torch.cat((exp_pred, pose_pred), dim=-1) #bs T 70
96
-
97
- coeffs_pred_numpy = coeffs_pred[0].clone().detach().cpu().numpy()
98
-
99
- if ref_pose_coeff_path is not None:
100
- coeffs_pred_numpy = self.using_refpose(coeffs_pred_numpy, ref_pose_coeff_path)
101
-
102
- savemat(os.path.join(coeff_save_dir, '%s##%s.mat'%(batch['pic_name'], batch['audio_name'])),
103
- {'coeff_3dmm': coeffs_pred_numpy})
104
-
105
- return os.path.join(coeff_save_dir, '%s##%s.mat'%(batch['pic_name'], batch['audio_name']))
106
-
107
- def using_refpose(self, coeffs_pred_numpy, ref_pose_coeff_path):
108
- num_frames = coeffs_pred_numpy.shape[0]
109
- refpose_coeff_dict = loadmat(ref_pose_coeff_path)
110
- refpose_coeff = refpose_coeff_dict['coeff_3dmm'][:,64:70]
111
- refpose_num_frames = refpose_coeff.shape[0]
112
- if refpose_num_frames<num_frames:
113
- div = num_frames//refpose_num_frames
114
- re = num_frames%refpose_num_frames
115
- refpose_coeff_list = [refpose_coeff for i in range(div)]
116
- refpose_coeff_list.append(refpose_coeff[:re, :])
117
- refpose_coeff = np.concatenate(refpose_coeff_list, axis=0)
118
-
119
- #### relative head pose
120
- coeffs_pred_numpy[:, 64:70] = coeffs_pred_numpy[:, 64:70] + ( refpose_coeff[:num_frames, :] - refpose_coeff[0:1, :] )
121
- return coeffs_pred_numpy
122
-
123
-
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_flax_objects.py DELETED
@@ -1,197 +0,0 @@
1
- # This file is autogenerated by the command `make fix-copies`, do not edit.
2
- from ..utils import DummyObject, requires_backends
3
-
4
-
5
- class FlaxControlNetModel(metaclass=DummyObject):
6
- _backends = ["flax"]
7
-
8
- def __init__(self, *args, **kwargs):
9
- requires_backends(self, ["flax"])
10
-
11
- @classmethod
12
- def from_config(cls, *args, **kwargs):
13
- requires_backends(cls, ["flax"])
14
-
15
- @classmethod
16
- def from_pretrained(cls, *args, **kwargs):
17
- requires_backends(cls, ["flax"])
18
-
19
-
20
- class FlaxModelMixin(metaclass=DummyObject):
21
- _backends = ["flax"]
22
-
23
- def __init__(self, *args, **kwargs):
24
- requires_backends(self, ["flax"])
25
-
26
- @classmethod
27
- def from_config(cls, *args, **kwargs):
28
- requires_backends(cls, ["flax"])
29
-
30
- @classmethod
31
- def from_pretrained(cls, *args, **kwargs):
32
- requires_backends(cls, ["flax"])
33
-
34
-
35
- class FlaxUNet2DConditionModel(metaclass=DummyObject):
36
- _backends = ["flax"]
37
-
38
- def __init__(self, *args, **kwargs):
39
- requires_backends(self, ["flax"])
40
-
41
- @classmethod
42
- def from_config(cls, *args, **kwargs):
43
- requires_backends(cls, ["flax"])
44
-
45
- @classmethod
46
- def from_pretrained(cls, *args, **kwargs):
47
- requires_backends(cls, ["flax"])
48
-
49
-
50
- class FlaxAutoencoderKL(metaclass=DummyObject):
51
- _backends = ["flax"]
52
-
53
- def __init__(self, *args, **kwargs):
54
- requires_backends(self, ["flax"])
55
-
56
- @classmethod
57
- def from_config(cls, *args, **kwargs):
58
- requires_backends(cls, ["flax"])
59
-
60
- @classmethod
61
- def from_pretrained(cls, *args, **kwargs):
62
- requires_backends(cls, ["flax"])
63
-
64
-
65
- class FlaxDiffusionPipeline(metaclass=DummyObject):
66
- _backends = ["flax"]
67
-
68
- def __init__(self, *args, **kwargs):
69
- requires_backends(self, ["flax"])
70
-
71
- @classmethod
72
- def from_config(cls, *args, **kwargs):
73
- requires_backends(cls, ["flax"])
74
-
75
- @classmethod
76
- def from_pretrained(cls, *args, **kwargs):
77
- requires_backends(cls, ["flax"])
78
-
79
-
80
- class FlaxDDIMScheduler(metaclass=DummyObject):
81
- _backends = ["flax"]
82
-
83
- def __init__(self, *args, **kwargs):
84
- requires_backends(self, ["flax"])
85
-
86
- @classmethod
87
- def from_config(cls, *args, **kwargs):
88
- requires_backends(cls, ["flax"])
89
-
90
- @classmethod
91
- def from_pretrained(cls, *args, **kwargs):
92
- requires_backends(cls, ["flax"])
93
-
94
-
95
- class FlaxDDPMScheduler(metaclass=DummyObject):
96
- _backends = ["flax"]
97
-
98
- def __init__(self, *args, **kwargs):
99
- requires_backends(self, ["flax"])
100
-
101
- @classmethod
102
- def from_config(cls, *args, **kwargs):
103
- requires_backends(cls, ["flax"])
104
-
105
- @classmethod
106
- def from_pretrained(cls, *args, **kwargs):
107
- requires_backends(cls, ["flax"])
108
-
109
-
110
- class FlaxDPMSolverMultistepScheduler(metaclass=DummyObject):
111
- _backends = ["flax"]
112
-
113
- def __init__(self, *args, **kwargs):
114
- requires_backends(self, ["flax"])
115
-
116
- @classmethod
117
- def from_config(cls, *args, **kwargs):
118
- requires_backends(cls, ["flax"])
119
-
120
- @classmethod
121
- def from_pretrained(cls, *args, **kwargs):
122
- requires_backends(cls, ["flax"])
123
-
124
-
125
- class FlaxKarrasVeScheduler(metaclass=DummyObject):
126
- _backends = ["flax"]
127
-
128
- def __init__(self, *args, **kwargs):
129
- requires_backends(self, ["flax"])
130
-
131
- @classmethod
132
- def from_config(cls, *args, **kwargs):
133
- requires_backends(cls, ["flax"])
134
-
135
- @classmethod
136
- def from_pretrained(cls, *args, **kwargs):
137
- requires_backends(cls, ["flax"])
138
-
139
-
140
- class FlaxLMSDiscreteScheduler(metaclass=DummyObject):
141
- _backends = ["flax"]
142
-
143
- def __init__(self, *args, **kwargs):
144
- requires_backends(self, ["flax"])
145
-
146
- @classmethod
147
- def from_config(cls, *args, **kwargs):
148
- requires_backends(cls, ["flax"])
149
-
150
- @classmethod
151
- def from_pretrained(cls, *args, **kwargs):
152
- requires_backends(cls, ["flax"])
153
-
154
-
155
- class FlaxPNDMScheduler(metaclass=DummyObject):
156
- _backends = ["flax"]
157
-
158
- def __init__(self, *args, **kwargs):
159
- requires_backends(self, ["flax"])
160
-
161
- @classmethod
162
- def from_config(cls, *args, **kwargs):
163
- requires_backends(cls, ["flax"])
164
-
165
- @classmethod
166
- def from_pretrained(cls, *args, **kwargs):
167
- requires_backends(cls, ["flax"])
168
-
169
-
170
- class FlaxSchedulerMixin(metaclass=DummyObject):
171
- _backends = ["flax"]
172
-
173
- def __init__(self, *args, **kwargs):
174
- requires_backends(self, ["flax"])
175
-
176
- @classmethod
177
- def from_config(cls, *args, **kwargs):
178
- requires_backends(cls, ["flax"])
179
-
180
- @classmethod
181
- def from_pretrained(cls, *args, **kwargs):
182
- requires_backends(cls, ["flax"])
183
-
184
-
185
- class FlaxScoreSdeVeScheduler(metaclass=DummyObject):
186
- _backends = ["flax"]
187
-
188
- def __init__(self, *args, **kwargs):
189
- requires_backends(self, ["flax"])
190
-
191
- @classmethod
192
- def from_config(cls, *args, **kwargs):
193
- requires_backends(cls, ["flax"])
194
-
195
- @classmethod
196
- def from_pretrained(cls, *args, **kwargs):
197
- requires_backends(cls, ["flax"])
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/pndm/__init__.py DELETED
File without changes
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion/test_stable_diffusion_image_variation.py DELETED
@@ -1,314 +0,0 @@
1
- # coding=utf-8
2
- # Copyright 2023 HuggingFace Inc.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
-
16
- import gc
17
- import random
18
- import unittest
19
-
20
- import numpy as np
21
- import torch
22
- from PIL import Image
23
- from transformers import CLIPImageProcessor, CLIPVisionConfig, CLIPVisionModelWithProjection
24
-
25
- from diffusers import (
26
- AutoencoderKL,
27
- DPMSolverMultistepScheduler,
28
- PNDMScheduler,
29
- StableDiffusionImageVariationPipeline,
30
- UNet2DConditionModel,
31
- )
32
- from diffusers.utils import floats_tensor, load_image, load_numpy, nightly, slow, torch_device
33
- from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
34
-
35
- from ..pipeline_params import IMAGE_VARIATION_BATCH_PARAMS, IMAGE_VARIATION_PARAMS
36
- from ..test_pipelines_common import PipelineKarrasSchedulerTesterMixin, PipelineLatentTesterMixin, PipelineTesterMixin
37
-
38
-
39
- enable_full_determinism()
40
-
41
-
42
- class StableDiffusionImageVariationPipelineFastTests(
43
- PipelineLatentTesterMixin, PipelineKarrasSchedulerTesterMixin, PipelineTesterMixin, unittest.TestCase
44
- ):
45
- pipeline_class = StableDiffusionImageVariationPipeline
46
- params = IMAGE_VARIATION_PARAMS
47
- batch_params = IMAGE_VARIATION_BATCH_PARAMS
48
- image_params = frozenset([])
49
- # TO-DO: update image_params once pipeline is refactored with VaeImageProcessor.preprocess
50
- image_latents_params = frozenset([])
51
-
52
- def get_dummy_components(self):
53
- torch.manual_seed(0)
54
- unet = UNet2DConditionModel(
55
- block_out_channels=(32, 64),
56
- layers_per_block=2,
57
- sample_size=32,
58
- in_channels=4,
59
- out_channels=4,
60
- down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
61
- up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
62
- cross_attention_dim=32,
63
- )
64
- scheduler = PNDMScheduler(skip_prk_steps=True)
65
- torch.manual_seed(0)
66
- vae = AutoencoderKL(
67
- block_out_channels=[32, 64],
68
- in_channels=3,
69
- out_channels=3,
70
- down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
71
- up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
72
- latent_channels=4,
73
- )
74
- torch.manual_seed(0)
75
- image_encoder_config = CLIPVisionConfig(
76
- hidden_size=32,
77
- projection_dim=32,
78
- intermediate_size=37,
79
- layer_norm_eps=1e-05,
80
- num_attention_heads=4,
81
- num_hidden_layers=5,
82
- image_size=32,
83
- patch_size=4,
84
- )
85
- image_encoder = CLIPVisionModelWithProjection(image_encoder_config)
86
- feature_extractor = CLIPImageProcessor(crop_size=32, size=32)
87
-
88
- components = {
89
- "unet": unet,
90
- "scheduler": scheduler,
91
- "vae": vae,
92
- "image_encoder": image_encoder,
93
- "feature_extractor": feature_extractor,
94
- "safety_checker": None,
95
- }
96
- return components
97
-
98
- def get_dummy_inputs(self, device, seed=0):
99
- image = floats_tensor((1, 3, 32, 32), rng=random.Random(seed))
100
- image = image.cpu().permute(0, 2, 3, 1)[0]
101
- image = Image.fromarray(np.uint8(image)).convert("RGB").resize((32, 32))
102
- if str(device).startswith("mps"):
103
- generator = torch.manual_seed(seed)
104
- else:
105
- generator = torch.Generator(device=device).manual_seed(seed)
106
- inputs = {
107
- "image": image,
108
- "generator": generator,
109
- "num_inference_steps": 2,
110
- "guidance_scale": 6.0,
111
- "output_type": "numpy",
112
- }
113
- return inputs
114
-
115
- def test_stable_diffusion_img_variation_default_case(self):
116
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
117
- components = self.get_dummy_components()
118
- sd_pipe = StableDiffusionImageVariationPipeline(**components)
119
- sd_pipe = sd_pipe.to(device)
120
- sd_pipe.set_progress_bar_config(disable=None)
121
-
122
- inputs = self.get_dummy_inputs(device)
123
- image = sd_pipe(**inputs).images
124
- image_slice = image[0, -3:, -3:, -1]
125
-
126
- assert image.shape == (1, 64, 64, 3)
127
- expected_slice = np.array([0.5239, 0.5723, 0.4796, 0.5049, 0.5550, 0.4685, 0.5329, 0.4891, 0.4921])
128
-
129
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3
130
-
131
- def test_stable_diffusion_img_variation_multiple_images(self):
132
- device = "cpu" # ensure determinism for the device-dependent torch.Generator
133
- components = self.get_dummy_components()
134
- sd_pipe = StableDiffusionImageVariationPipeline(**components)
135
- sd_pipe = sd_pipe.to(device)
136
- sd_pipe.set_progress_bar_config(disable=None)
137
-
138
- inputs = self.get_dummy_inputs(device)
139
- inputs["image"] = 2 * [inputs["image"]]
140
- output = sd_pipe(**inputs)
141
-
142
- image = output.images
143
-
144
- image_slice = image[-1, -3:, -3:, -1]
145
-
146
- assert image.shape == (2, 64, 64, 3)
147
- expected_slice = np.array([0.6892, 0.5637, 0.5836, 0.5771, 0.6254, 0.6409, 0.5580, 0.5569, 0.5289])
148
-
149
- assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-3
150
-
151
- def test_inference_batch_single_identical(self):
152
- super().test_inference_batch_single_identical(expected_max_diff=3e-3)
153
-
154
-
155
- @slow
156
- @require_torch_gpu
157
- class StableDiffusionImageVariationPipelineSlowTests(unittest.TestCase):
158
- def tearDown(self):
159
- super().tearDown()
160
- gc.collect()
161
- torch.cuda.empty_cache()
162
-
163
- def get_inputs(self, device, generator_device="cpu", dtype=torch.float32, seed=0):
164
- generator = torch.Generator(device=generator_device).manual_seed(seed)
165
- init_image = load_image(
166
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
167
- "/stable_diffusion_imgvar/input_image_vermeer.png"
168
- )
169
- latents = np.random.RandomState(seed).standard_normal((1, 4, 64, 64))
170
- latents = torch.from_numpy(latents).to(device=device, dtype=dtype)
171
- inputs = {
172
- "image": init_image,
173
- "latents": latents,
174
- "generator": generator,
175
- "num_inference_steps": 3,
176
- "guidance_scale": 7.5,
177
- "output_type": "numpy",
178
- }
179
- return inputs
180
-
181
- def test_stable_diffusion_img_variation_pipeline_default(self):
182
- sd_pipe = StableDiffusionImageVariationPipeline.from_pretrained(
183
- "lambdalabs/sd-image-variations-diffusers", safety_checker=None
184
- )
185
- sd_pipe = sd_pipe.to(torch_device)
186
- sd_pipe.set_progress_bar_config(disable=None)
187
-
188
- inputs = self.get_inputs(torch_device)
189
- image = sd_pipe(**inputs).images
190
- image_slice = image[0, -3:, -3:, -1].flatten()
191
-
192
- assert image.shape == (1, 512, 512, 3)
193
- expected_slice = np.array([0.84491, 0.90789, 0.75708, 0.78734, 0.83485, 0.70099, 0.66938, 0.68727, 0.61379])
194
- assert np.abs(image_slice - expected_slice).max() < 6e-3
195
-
196
- def test_stable_diffusion_img_variation_intermediate_state(self):
197
- number_of_steps = 0
198
-
199
- def callback_fn(step: int, timestep: int, latents: torch.FloatTensor) -> None:
200
- callback_fn.has_been_called = True
201
- nonlocal number_of_steps
202
- number_of_steps += 1
203
- if step == 1:
204
- latents = latents.detach().cpu().numpy()
205
- assert latents.shape == (1, 4, 64, 64)
206
- latents_slice = latents[0, -3:, -3:, -1]
207
- expected_slice = np.array(
208
- [-0.1621, 0.2837, -0.7979, -0.1221, -1.3057, 0.7681, -2.1191, 0.0464, 1.6309]
209
- )
210
-
211
- assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2
212
- elif step == 2:
213
- latents = latents.detach().cpu().numpy()
214
- assert latents.shape == (1, 4, 64, 64)
215
- latents_slice = latents[0, -3:, -3:, -1]
216
- expected_slice = np.array([0.6299, 1.7500, 1.1992, -2.1582, -1.8994, 0.7334, -0.7090, 1.0137, 1.5273])
217
-
218
- assert np.abs(latents_slice.flatten() - expected_slice).max() < 5e-2
219
-
220
- callback_fn.has_been_called = False
221
-
222
- pipe = StableDiffusionImageVariationPipeline.from_pretrained(
223
- "fusing/sd-image-variations-diffusers",
224
- safety_checker=None,
225
- torch_dtype=torch.float16,
226
- )
227
- pipe.to(torch_device)
228
- pipe.set_progress_bar_config(disable=None)
229
- pipe.enable_attention_slicing()
230
-
231
- inputs = self.get_inputs(torch_device, dtype=torch.float16)
232
- pipe(**inputs, callback=callback_fn, callback_steps=1)
233
- assert callback_fn.has_been_called
234
- assert number_of_steps == inputs["num_inference_steps"]
235
-
236
- def test_stable_diffusion_pipeline_with_sequential_cpu_offloading(self):
237
- torch.cuda.empty_cache()
238
- torch.cuda.reset_max_memory_allocated()
239
- torch.cuda.reset_peak_memory_stats()
240
-
241
- model_id = "fusing/sd-image-variations-diffusers"
242
- pipe = StableDiffusionImageVariationPipeline.from_pretrained(
243
- model_id, safety_checker=None, torch_dtype=torch.float16
244
- )
245
- pipe = pipe.to(torch_device)
246
- pipe.set_progress_bar_config(disable=None)
247
- pipe.enable_attention_slicing(1)
248
- pipe.enable_sequential_cpu_offload()
249
-
250
- inputs = self.get_inputs(torch_device, dtype=torch.float16)
251
- _ = pipe(**inputs)
252
-
253
- mem_bytes = torch.cuda.max_memory_allocated()
254
- # make sure that less than 2.6 GB is allocated
255
- assert mem_bytes < 2.6 * 10**9
256
-
257
-
258
- @nightly
259
- @require_torch_gpu
260
- class StableDiffusionImageVariationPipelineNightlyTests(unittest.TestCase):
261
- def tearDown(self):
262
- super().tearDown()
263
- gc.collect()
264
- torch.cuda.empty_cache()
265
-
266
- def get_inputs(self, device, generator_device="cpu", dtype=torch.float32, seed=0):
267
- generator = torch.Generator(device=generator_device).manual_seed(seed)
268
- init_image = load_image(
269
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
270
- "/stable_diffusion_imgvar/input_image_vermeer.png"
271
- )
272
- latents = np.random.RandomState(seed).standard_normal((1, 4, 64, 64))
273
- latents = torch.from_numpy(latents).to(device=device, dtype=dtype)
274
- inputs = {
275
- "image": init_image,
276
- "latents": latents,
277
- "generator": generator,
278
- "num_inference_steps": 50,
279
- "guidance_scale": 7.5,
280
- "output_type": "numpy",
281
- }
282
- return inputs
283
-
284
- def test_img_variation_pndm(self):
285
- sd_pipe = StableDiffusionImageVariationPipeline.from_pretrained("fusing/sd-image-variations-diffusers")
286
- sd_pipe.to(torch_device)
287
- sd_pipe.set_progress_bar_config(disable=None)
288
-
289
- inputs = self.get_inputs(torch_device)
290
- image = sd_pipe(**inputs).images[0]
291
-
292
- expected_image = load_numpy(
293
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
294
- "/stable_diffusion_imgvar/lambdalabs_variations_pndm.npy"
295
- )
296
- max_diff = np.abs(expected_image - image).max()
297
- assert max_diff < 1e-3
298
-
299
- def test_img_variation_dpm(self):
300
- sd_pipe = StableDiffusionImageVariationPipeline.from_pretrained("fusing/sd-image-variations-diffusers")
301
- sd_pipe.scheduler = DPMSolverMultistepScheduler.from_config(sd_pipe.scheduler.config)
302
- sd_pipe.to(torch_device)
303
- sd_pipe.set_progress_bar_config(disable=None)
304
-
305
- inputs = self.get_inputs(torch_device)
306
- inputs["num_inference_steps"] = 25
307
- image = sd_pipe(**inputs).images[0]
308
-
309
- expected_image = load_numpy(
310
- "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main"
311
- "/stable_diffusion_imgvar/lambdalabs_variations_dpm_multi.npy"
312
- )
313
- max_diff = np.abs(expected_image - image).max()
314
- assert max_diff < 1e-3
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Andy1621/uniformer_image_detection/configs/groie/mask_rcnn_r50_fpn_syncbn-backbone_r4_gcb_c3-c5_groie_1x_coco.py DELETED
@@ -1,45 +0,0 @@
1
- _base_ = '../gcnet/mask_rcnn_r50_fpn_syncbn-backbone_r4_gcb_c3-c5_1x_coco.py'
2
- # model settings
3
- model = dict(
4
- roi_head=dict(
5
- bbox_roi_extractor=dict(
6
- type='GenericRoIExtractor',
7
- aggregation='sum',
8
- roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=2),
9
- out_channels=256,
10
- featmap_strides=[4, 8, 16, 32],
11
- pre_cfg=dict(
12
- type='ConvModule',
13
- in_channels=256,
14
- out_channels=256,
15
- kernel_size=5,
16
- padding=2,
17
- inplace=False,
18
- ),
19
- post_cfg=dict(
20
- type='GeneralizedAttention',
21
- in_channels=256,
22
- spatial_range=-1,
23
- num_heads=6,
24
- attention_type='0100',
25
- kv_stride=2)),
26
- mask_roi_extractor=dict(
27
- type='GenericRoIExtractor',
28
- roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=2),
29
- out_channels=256,
30
- featmap_strides=[4, 8, 16, 32],
31
- pre_cfg=dict(
32
- type='ConvModule',
33
- in_channels=256,
34
- out_channels=256,
35
- kernel_size=5,
36
- padding=2,
37
- inplace=False,
38
- ),
39
- post_cfg=dict(
40
- type='GeneralizedAttention',
41
- in_channels=256,
42
- spatial_range=-1,
43
- num_heads=6,
44
- attention_type='0100',
45
- kv_stride=2))))
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Andy1621/uniformer_image_segmentation/configs/_base_/models/apcnet_r50-d8.py DELETED
@@ -1,44 +0,0 @@
1
- # model settings
2
- norm_cfg = dict(type='SyncBN', requires_grad=True)
3
- model = dict(
4
- type='EncoderDecoder',
5
- pretrained='open-mmlab://resnet50_v1c',
6
- backbone=dict(
7
- type='ResNetV1c',
8
- depth=50,
9
- num_stages=4,
10
- out_indices=(0, 1, 2, 3),
11
- dilations=(1, 1, 2, 4),
12
- strides=(1, 2, 1, 1),
13
- norm_cfg=norm_cfg,
14
- norm_eval=False,
15
- style='pytorch',
16
- contract_dilation=True),
17
- decode_head=dict(
18
- type='APCHead',
19
- in_channels=2048,
20
- in_index=3,
21
- channels=512,
22
- pool_scales=(1, 2, 3, 6),
23
- dropout_ratio=0.1,
24
- num_classes=19,
25
- norm_cfg=dict(type='SyncBN', requires_grad=True),
26
- align_corners=False,
27
- loss_decode=dict(
28
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
29
- auxiliary_head=dict(
30
- type='FCNHead',
31
- in_channels=1024,
32
- in_index=2,
33
- channels=256,
34
- num_convs=1,
35
- concat_input=False,
36
- dropout_ratio=0.1,
37
- num_classes=19,
38
- norm_cfg=norm_cfg,
39
- align_corners=False,
40
- loss_decode=dict(
41
- type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
42
- # model training and testing settings
43
- train_cfg=dict(),
44
- test_cfg=dict(mode='whole'))
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
spaces/Andy1621/uniformer_image_segmentation/configs/gcnet/gcnet_r101-d8_512x512_160k_ade20k.py DELETED
@@ -1,2 +0,0 @@
1
- _base_ = './gcnet_r50-d8_512x512_160k_ade20k.py'
2
- model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
 
 
 
spaces/ArtGAN/Video-Diffusion-WebUI/video_diffusion/stable_diffusion_video/stable_diffusion_pipeline.py DELETED
@@ -1,848 +0,0 @@
1
- import inspect
2
- import json
3
- import math
4
- import time
5
- from pathlib import Path
6
- from typing import Callable, List, Optional, Tuple, Union
7
-
8
- import numpy as np
9
- import torch
10
- from diffusers.configuration_utils import FrozenDict
11
- from diffusers.models import AutoencoderKL, UNet2DConditionModel
12
- from diffusers.pipeline_utils import DiffusionPipeline
13
- from diffusers.pipelines.stable_diffusion import StableDiffusionPipelineOutput
14
- from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
15
- from diffusers.schedulers import (
16
- DDIMScheduler,
17
- DPMSolverMultistepScheduler,
18
- EulerAncestralDiscreteScheduler,
19
- EulerDiscreteScheduler,
20
- LMSDiscreteScheduler,
21
- PNDMScheduler,
22
- )
23
- from diffusers.utils import deprecate, logging
24
- from packaging import version
25
- from torch import nn
26
- from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
27
-
28
- from .upsampling import RealESRGANModel
29
- from .utils import get_timesteps_arr, make_video_pyav, slerp
30
-
31
- logging.set_verbosity_info()
32
- logger = logging.get_logger(__name__)
33
-
34
-
35
- class StableDiffusionWalkPipeline(DiffusionPipeline):
36
- r"""
37
- Pipeline for generating videos by interpolating Stable Diffusion's latent space.
38
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
39
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
40
- Args:
41
- vae ([`AutoencoderKL`]):
42
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
43
- text_encoder ([`CLIPTextModel`]):
44
- Frozen text-encoder. Stable Diffusion uses the text portion of
45
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
46
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
47
- tokenizer (`CLIPTokenizer`):
48
- Tokenizer of class
49
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
50
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
51
- scheduler ([`SchedulerMixin`]):
52
- A scheduler to be used in combination with `unet` to denoise the encoded image latens. Can be one of
53
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
54
- safety_checker ([`StableDiffusionSafetyChecker`]):
55
- Classification module that estimates whether generated images could be considered offensive or harmful.
56
- Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details.
57
- feature_extractor ([`CLIPFeatureExtractor`]):
58
- Model that extracts features from generated images to be used as inputs for the `safety_checker`.
59
- """
60
- _optional_components = ["safety_checker", "feature_extractor"]
61
-
62
- def __init__(
63
- self,
64
- vae: AutoencoderKL,
65
- text_encoder: CLIPTextModel,
66
- tokenizer: CLIPTokenizer,
67
- unet: UNet2DConditionModel,
68
- scheduler: Union[
69
- DDIMScheduler,
70
- PNDMScheduler,
71
- LMSDiscreteScheduler,
72
- EulerDiscreteScheduler,
73
- EulerAncestralDiscreteScheduler,
74
- DPMSolverMultistepScheduler,
75
- ],
76
- safety_checker: StableDiffusionSafetyChecker,
77
- feature_extractor: CLIPFeatureExtractor,
78
- requires_safety_checker: bool = True,
79
- ):
80
- super().__init__()
81
-
82
- if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
83
- deprecation_message = (
84
- f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
85
- f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
86
- "to update the config accordingly as leaving `steps_offset` might led to incorrect results"
87
- " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
88
- " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
89
- " file"
90
- )
91
- deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
92
- new_config = dict(scheduler.config)
93
- new_config["steps_offset"] = 1
94
- scheduler._internal_dict = FrozenDict(new_config)
95
-
96
- if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True:
97
- deprecation_message = (
98
- f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`."
99
- " `clip_sample` should be set to False in the configuration file. Please make sure to update the"
100
- " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in"
101
- " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very"
102
- " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file"
103
- )
104
- deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False)
105
- new_config = dict(scheduler.config)
106
- new_config["clip_sample"] = False
107
- scheduler._internal_dict = FrozenDict(new_config)
108
-
109
- if safety_checker is None and requires_safety_checker:
110
- logger.warning(
111
- f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
112
- " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
113
- " results in services or applications open to the public. Both the diffusers team and Hugging Face"
114
- " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
115
- " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
116
- " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
117
- )
118
-
119
- if safety_checker is not None and feature_extractor is None:
120
- raise ValueError(
121
- "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
122
- " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
123
- )
124
-
125
- is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse(
126
- version.parse(unet.config._diffusers_version).base_version
127
- ) < version.parse("0.9.0.dev0")
128
- is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
129
- if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64:
130
- deprecation_message = (
131
- "The configuration file of the unet has set the default `sample_size` to smaller than"
132
- " 64 which seems highly unlikely .If you're checkpoint is a fine-tuned version of any of the"
133
- " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-"
134
- " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5"
135
- " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the"
136
- " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`"
137
- " in the config might lead to incorrect results in future versions. If you have downloaded this"
138
- " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for"
139
- " the `unet/config.json` file"
140
- )
141
- deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False)
142
- new_config = dict(unet.config)
143
- new_config["sample_size"] = 64
144
- unet._internal_dict = FrozenDict(new_config)
145
-
146
- self.register_modules(
147
- vae=vae,
148
- text_encoder=text_encoder,
149
- tokenizer=tokenizer,
150
- unet=unet,
151
- scheduler=scheduler,
152
- safety_checker=safety_checker,
153
- feature_extractor=feature_extractor,
154
- )
155
- self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
156
- self.register_to_config(requires_safety_checker=requires_safety_checker)
157
-
158
- def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"):
159
- r"""
160
- Enable sliced attention computation.
161
- When this option is enabled, the attention module will split the input tensor in slices, to compute attention
162
- in several steps. This is useful to save some memory in exchange for a small speed decrease.
163
- Args:
164
- slice_size (`str` or `int`, *optional*, defaults to `"auto"`):
165
- When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If
166
- a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case,
167
- `attention_head_dim` must be a multiple of `slice_size`.
168
- """
169
- if slice_size == "auto":
170
- if isinstance(self.unet.config.attention_head_dim, int):
171
- # half the attention head size is usually a good trade-off between
172
- # speed and memory
173
- slice_size = self.unet.config.attention_head_dim // 2
174
- else:
175
- # if `attention_head_dim` is a list, take the smallest head size
176
- slice_size = min(self.unet.config.attention_head_dim)
177
-
178
- self.unet.set_attention_slice(slice_size)
179
-
180
- def disable_attention_slicing(self):
181
- r"""
182
- Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go
183
- back to computing attention in one step.
184
- """
185
- # set slice_size = `None` to disable `attention slicing`
186
- self.enable_attention_slicing(None)
187
-
188
- @torch.no_grad()
189
- def __call__(
190
- self,
191
- prompt: Optional[Union[str, List[str]]] = None,
192
- height: Optional[int] = None,
193
- width: Optional[int] = None,
194
- num_inference_steps: int = 50,
195
- guidance_scale: float = 7.5,
196
- negative_prompt: Optional[Union[str, List[str]]] = None,
197
- num_images_per_prompt: Optional[int] = 1,
198
- eta: float = 0.0,
199
- generator: Optional[torch.Generator] = None,
200
- latents: Optional[torch.FloatTensor] = None,
201
- output_type: Optional[str] = "pil",
202
- return_dict: bool = True,
203
- callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
204
- callback_steps: Optional[int] = 1,
205
- text_embeddings: Optional[torch.FloatTensor] = None,
206
- **kwargs,
207
- ):
208
- r"""
209
- Function invoked when calling the pipeline for generation.
210
- Args:
211
- prompt (`str` or `List[str]`, *optional*, defaults to `None`):
212
- The prompt or prompts to guide the image generation. If not provided, `text_embeddings` is required.
213
- height (`int`, *optional*, defaults to 512):
214
- The height in pixels of the generated image.
215
- width (`int`, *optional*, defaults to 512):
216
- The width in pixels of the generated image.
217
- num_inference_steps (`int`, *optional*, defaults to 50):
218
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
219
- expense of slower inference.
220
- guidance_scale (`float`, *optional*, defaults to 7.5):
221
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
222
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
223
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
224
- 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
225
- usually at the expense of lower image quality.
226
- negative_prompt (`str` or `List[str]`, *optional*):
227
- The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
228
- if `guidance_scale` is less than `1`).
229
- num_images_per_prompt (`int`, *optional*, defaults to 1):
230
- The number of images to generate per prompt.
231
- eta (`float`, *optional*, defaults to 0.0):
232
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
233
- [`schedulers.DDIMScheduler`], will be ignored for others.
234
- generator (`torch.Generator`, *optional*):
235
- A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation
236
- deterministic.
237
- latents (`torch.FloatTensor`, *optional*):
238
- Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
239
- generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
240
- tensor will ge generated by sampling using the supplied random `generator`.
241
- output_type (`str`, *optional*, defaults to `"pil"`):
242
- The output format of the generate image. Choose between
243
- [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
244
- return_dict (`bool`, *optional*, defaults to `True`):
245
- Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
246
- plain tuple.
247
- callback (`Callable`, *optional*):
248
- A function that will be called every `callback_steps` steps during inference. The function will be
249
- called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
250
- callback_steps (`int`, *optional*, defaults to 1):
251
- The frequency at which the `callback` function will be called. If not specified, the callback will be
252
- called at every step.
253
- text_embeddings (`torch.FloatTensor`, *optional*, defaults to `None`):
254
- Pre-generated text embeddings to be used as inputs for image generation. Can be used in place of
255
- `prompt` to avoid re-computing the embeddings. If not provided, the embeddings will be generated from
256
- the supplied `prompt`.
257
- Returns:
258
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
259
- [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple.
260
- When returning a tuple, the first element is a list with the generated images, and the second element is a
261
- list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work"
262
- (nsfw) content, according to the `safety_checker`.
263
- """
264
- # 0. Default height and width to unet
265
- height = height or self.unet.config.sample_size * self.vae_scale_factor
266
- width = width or self.unet.config.sample_size * self.vae_scale_factor
267
-
268
- if height % 8 != 0 or width % 8 != 0:
269
- raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
270
-
271
- if (callback_steps is None) or (
272
- callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
273
- ):
274
- raise ValueError(
275
- f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
276
- f" {type(callback_steps)}."
277
- )
278
-
279
- if text_embeddings is None:
280
- if isinstance(prompt, str):
281
- batch_size = 1
282
- elif isinstance(prompt, list):
283
- batch_size = len(prompt)
284
- else:
285
- raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
286
-
287
- # get prompt text embeddings
288
- text_inputs = self.tokenizer(
289
- prompt,
290
- padding="max_length",
291
- max_length=self.tokenizer.model_max_length,
292
- return_tensors="pt",
293
- )
294
- text_input_ids = text_inputs.input_ids
295
-
296
- if text_input_ids.shape[-1] > self.tokenizer.model_max_length:
297
- removed_text = self.tokenizer.batch_decode(text_input_ids[:, self.tokenizer.model_max_length :])
298
- print(
299
- "The following part of your input was truncated because CLIP can only handle sequences up to"
300
- f" {self.tokenizer.model_max_length} tokens: {removed_text}"
301
- )
302
- text_input_ids = text_input_ids[:, : self.tokenizer.model_max_length]
303
- text_embeddings = self.text_encoder(text_input_ids.to(self.device))[0]
304
- else:
305
- batch_size = text_embeddings.shape[0]
306
-
307
- # duplicate text embeddings for each generation per prompt, using mps friendly method
308
- bs_embed, seq_len, _ = text_embeddings.shape
309
- text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1)
310
- text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1)
311
-
312
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
313
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
314
- # corresponds to doing no classifier free guidance.
315
- do_classifier_free_guidance = guidance_scale > 1.0
316
- # get unconditional embeddings for classifier free guidance
317
- if do_classifier_free_guidance:
318
- uncond_tokens: List[str]
319
- if negative_prompt is None:
320
- uncond_tokens = [""]
321
- elif text_embeddings is None and type(prompt) is not type(negative_prompt):
322
- raise TypeError(
323
- f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
324
- f" {type(prompt)}."
325
- )
326
- elif isinstance(negative_prompt, str):
327
- uncond_tokens = [negative_prompt]
328
- elif batch_size != len(negative_prompt):
329
- raise ValueError(
330
- f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
331
- f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
332
- " the batch size of `prompt`."
333
- )
334
- else:
335
- uncond_tokens = negative_prompt
336
-
337
- max_length = self.tokenizer.model_max_length
338
- uncond_input = self.tokenizer(
339
- uncond_tokens,
340
- padding="max_length",
341
- max_length=max_length,
342
- truncation=True,
343
- return_tensors="pt",
344
- )
345
- uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0]
346
-
347
- # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
348
- seq_len = uncond_embeddings.shape[1]
349
- uncond_embeddings = uncond_embeddings.repeat(batch_size, num_images_per_prompt, 1)
350
- uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1)
351
-
352
- # For classifier free guidance, we need to do two forward passes.
353
- # Here we concatenate the unconditional and text embeddings into a single batch
354
- # to avoid doing two forward passes
355
- text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
356
-
357
- # get the initial random noise unless the user supplied it
358
-
359
- # Unlike in other pipelines, latents need to be generated in the target device
360
- # for 1-to-1 results reproducibility with the CompVis implementation.
361
- # However this currently doesn't work in `mps`.
362
- latents_shape = (
363
- batch_size * num_images_per_prompt,
364
- self.unet.in_channels,
365
- height // 8,
366
- width // 8,
367
- )
368
- latents_dtype = text_embeddings.dtype
369
- if latents is None:
370
- if self.device.type == "mps":
371
- # randn does not exist on mps
372
- latents = torch.randn(
373
- latents_shape,
374
- generator=generator,
375
- device="cpu",
376
- dtype=latents_dtype,
377
- ).to(self.device)
378
- else:
379
- latents = torch.randn(
380
- latents_shape,
381
- generator=generator,
382
- device=self.device,
383
- dtype=latents_dtype,
384
- )
385
- else:
386
- if latents.shape != latents_shape:
387
- raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}")
388
- latents = latents.to(self.device)
389
-
390
- # set timesteps
391
- self.scheduler.set_timesteps(num_inference_steps)
392
-
393
- # Some schedulers like PNDM have timesteps as arrays
394
- # It's more optimized to move all timesteps to correct device beforehand
395
- timesteps_tensor = self.scheduler.timesteps.to(self.device)
396
-
397
- # scale the initial noise by the standard deviation required by the scheduler
398
- latents = latents * self.scheduler.init_noise_sigma
399
-
400
- # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
401
- # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
402
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
403
- # and should be between [0, 1]
404
- accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
405
- extra_step_kwargs = {}
406
- if accepts_eta:
407
- extra_step_kwargs["eta"] = eta
408
-
409
- for i, t in enumerate(self.progress_bar(timesteps_tensor)):
410
- # expand the latents if we are doing classifier free guidance
411
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
412
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
413
-
414
- # predict the noise residual
415
- noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
416
-
417
- # perform guidance
418
- if do_classifier_free_guidance:
419
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
420
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
421
-
422
- # compute the previous noisy sample x_t -> x_t-1
423
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
424
-
425
- # call the callback, if provided
426
- if callback is not None and i % callback_steps == 0:
427
- callback(i, t, latents)
428
-
429
- latents = 1 / 0.18215 * latents
430
- image = self.vae.decode(latents).sample
431
-
432
- image = (image / 2 + 0.5).clamp(0, 1)
433
-
434
- # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16
435
- image = image.cpu().permute(0, 2, 3, 1).float().numpy()
436
-
437
- if self.safety_checker is not None:
438
- safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(self.device)
439
- image, has_nsfw_concept = self.safety_checker(
440
- images=image,
441
- clip_input=safety_checker_input.pixel_values.to(text_embeddings.dtype),
442
- )
443
- else:
444
- has_nsfw_concept = None
445
-
446
- if output_type == "pil":
447
- image = self.numpy_to_pil(image)
448
-
449
- if not return_dict:
450
- return (image, has_nsfw_concept)
451
-
452
- return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
453
-
454
- def generate_inputs(self, prompt_a, prompt_b, seed_a, seed_b, noise_shape, T, batch_size):
455
- embeds_a = self.embed_text(prompt_a)
456
- embeds_b = self.embed_text(prompt_b)
457
- latents_dtype = embeds_a.dtype
458
- latents_a = self.init_noise(seed_a, noise_shape, latents_dtype)
459
- latents_b = self.init_noise(seed_b, noise_shape, latents_dtype)
460
-
461
- batch_idx = 0
462
- embeds_batch, noise_batch = None, None
463
- for i, t in enumerate(T):
464
- embeds = torch.lerp(embeds_a, embeds_b, t)
465
- noise = slerp(float(t), latents_a, latents_b)
466
-
467
- embeds_batch = embeds if embeds_batch is None else torch.cat([embeds_batch, embeds])
468
- noise_batch = noise if noise_batch is None else torch.cat([noise_batch, noise])
469
- batch_is_ready = embeds_batch.shape[0] == batch_size or i + 1 == T.shape[0]
470
- if not batch_is_ready:
471
- continue
472
- yield batch_idx, embeds_batch, noise_batch
473
- batch_idx += 1
474
- del embeds_batch, noise_batch
475
- torch.cuda.empty_cache()
476
- embeds_batch, noise_batch = None, None
477
-
478
- def make_clip_frames(
479
- self,
480
- prompt_a: str,
481
- prompt_b: str,
482
- seed_a: int,
483
- seed_b: int,
484
- num_interpolation_steps: int = 5,
485
- save_path: Union[str, Path] = "outputs/",
486
- num_inference_steps: int = 50,
487
- guidance_scale: float = 7.5,
488
- eta: float = 0.0,
489
- height: Optional[int] = None,
490
- width: Optional[int] = None,
491
- upsample: bool = False,
492
- batch_size: int = 1,
493
- image_file_ext: str = ".png",
494
- T: np.ndarray = None,
495
- skip: int = 0,
496
- negative_prompt: str = None,
497
- step: Optional[Tuple[int, int]] = None,
498
- ):
499
- # 0. Default height and width to unet
500
- height = height or self.unet.config.sample_size * self.vae_scale_factor
501
- width = width or self.unet.config.sample_size * self.vae_scale_factor
502
-
503
- save_path = Path(save_path)
504
- save_path.mkdir(parents=True, exist_ok=True)
505
-
506
- T = T if T is not None else np.linspace(0.0, 1.0, num_interpolation_steps)
507
- if T.shape[0] != num_interpolation_steps:
508
- raise ValueError(f"Unexpected T shape, got {T.shape}, expected dim 0 to be {num_interpolation_steps}")
509
-
510
- if upsample:
511
- if getattr(self, "upsampler", None) is None:
512
- self.upsampler = RealESRGANModel.from_pretrained("nateraw/real-esrgan")
513
- self.upsampler.to(self.device)
514
-
515
- batch_generator = self.generate_inputs(
516
- prompt_a,
517
- prompt_b,
518
- seed_a,
519
- seed_b,
520
- (1, self.unet.in_channels, height // 8, width // 8),
521
- T[skip:],
522
- batch_size,
523
- )
524
- num_batches = math.ceil(num_interpolation_steps / batch_size)
525
-
526
- log_prefix = "" if step is None else f"[{step[0]}/{step[1]}] "
527
-
528
- frame_index = skip
529
- for batch_idx, embeds_batch, noise_batch in batch_generator:
530
- if batch_size == 1:
531
- msg = f"Generating frame {frame_index}"
532
- else:
533
- msg = f"Generating frames {frame_index}-{frame_index+embeds_batch.shape[0]-1}"
534
- logger.info(f"{log_prefix}[{batch_idx}/{num_batches}] {msg}")
535
- outputs = self(
536
- latents=noise_batch,
537
- text_embeddings=embeds_batch,
538
- height=height,
539
- width=width,
540
- guidance_scale=guidance_scale,
541
- eta=eta,
542
- num_inference_steps=num_inference_steps,
543
- output_type="pil" if not upsample else "numpy",
544
- negative_prompt=negative_prompt,
545
- )["images"]
546
-
547
- for image in outputs:
548
- frame_filepath = save_path / (f"frame%06d{image_file_ext}" % frame_index)
549
- image = image if not upsample else self.upsampler(image)
550
- image.save(frame_filepath)
551
- frame_index += 1
552
-
553
- def walk(
554
- self,
555
- prompts: Optional[List[str]] = None,
556
- seeds: Optional[List[int]] = None,
557
- num_interpolation_steps: Optional[Union[int, List[int]]] = 5, # int or list of int
558
- output_dir: Optional[str] = "./dreams",
559
- name: Optional[str] = None,
560
- image_file_ext: Optional[str] = ".png",
561
- fps: Optional[int] = 30,
562
- num_inference_steps: Optional[int] = 50,
563
- guidance_scale: Optional[float] = 7.5,
564
- eta: Optional[float] = 0.0,
565
- height: Optional[int] = None,
566
- width: Optional[int] = None,
567
- upsample: Optional[bool] = False,
568
- batch_size: Optional[int] = 1,
569
- resume: Optional[bool] = False,
570
- audio_filepath: str = None,
571
- audio_start_sec: Optional[Union[int, float]] = None,
572
- margin: Optional[float] = 1.0,
573
- smooth: Optional[float] = 0.0,
574
- negative_prompt: Optional[str] = None,
575
- make_video: Optional[bool] = True,
576
- ):
577
- """Generate a video from a sequence of prompts and seeds. Optionally, add audio to the
578
- video to interpolate to the intensity of the audio.
579
- Args:
580
- prompts (Optional[List[str]], optional):
581
- list of text prompts. Defaults to None.
582
- seeds (Optional[List[int]], optional):
583
- list of random seeds corresponding to prompts. Defaults to None.
584
- num_interpolation_steps (Union[int, List[int]], *optional*):
585
- How many interpolation steps between each prompt. Defaults to None.
586
- output_dir (Optional[str], optional):
587
- Where to save the video. Defaults to './dreams'.
588
- name (Optional[str], optional):
589
- Name of the subdirectory of output_dir. Defaults to None.
590
- image_file_ext (Optional[str], *optional*, defaults to '.png'):
591
- The extension to use when writing video frames.
592
- fps (Optional[int], *optional*, defaults to 30):
593
- The frames per second in the resulting output videos.
594
- num_inference_steps (Optional[int], *optional*, defaults to 50):
595
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
596
- expense of slower inference.
597
- guidance_scale (Optional[float], *optional*, defaults to 7.5):
598
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
599
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
600
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
601
- 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
602
- usually at the expense of lower image quality.
603
- eta (Optional[float], *optional*, defaults to 0.0):
604
- Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
605
- [`schedulers.DDIMScheduler`], will be ignored for others.
606
- height (Optional[int], *optional*, defaults to None):
607
- height of the images to generate.
608
- width (Optional[int], *optional*, defaults to None):
609
- width of the images to generate.
610
- upsample (Optional[bool], *optional*, defaults to False):
611
- When True, upsamples images with realesrgan.
612
- batch_size (Optional[int], *optional*, defaults to 1):
613
- Number of images to generate at once.
614
- resume (Optional[bool], *optional*, defaults to False):
615
- When True, resumes from the last frame in the output directory based
616
- on available prompt config. Requires you to provide the `name` argument.
617
- audio_filepath (str, *optional*, defaults to None):
618
- Optional path to an audio file to influence the interpolation rate.
619
- audio_start_sec (Optional[Union[int, float]], *optional*, defaults to 0):
620
- Global start time of the provided audio_filepath.
621
- margin (Optional[float], *optional*, defaults to 1.0):
622
- Margin from librosa hpss to use for audio interpolation.
623
- smooth (Optional[float], *optional*, defaults to 0.0):
624
- Smoothness of the audio interpolation. 1.0 means linear interpolation.
625
- negative_prompt (Optional[str], *optional*, defaults to None):
626
- Optional negative prompt to use. Same across all prompts.
627
- make_video (Optional[bool], *optional*, defaults to True):
628
- When True, makes a video from the generated frames. If False, only
629
- generates the frames.
630
- This function will create sub directories for each prompt and seed pair.
631
- For example, if you provide the following prompts and seeds:
632
- ```
633
- prompts = ['a dog', 'a cat', 'a bird']
634
- seeds = [1, 2, 3]
635
- num_interpolation_steps = 5
636
- output_dir = 'output_dir'
637
- name = 'name'
638
- fps = 5
639
- ```
640
- Then the following directories will be created:
641
- ```
642
- output_dir
643
- ├── name
644
- │ ├── name_000000
645
- │ │ ├── frame000000.png
646
- │ │ ├── ...
647
- │ │ ├── frame000004.png
648
- │ │ ├── name_000000.mp4
649
- │ ├── name_000001
650
- │ │ ├── frame000000.png
651
- │ │ ├── ...
652
- │ │ ├── frame000004.png
653
- │ │ ├── name_000001.mp4
654
- │ ├── ...
655
- │ ├── name.mp4
656
- | |── prompt_config.json
657
- ```
658
- Returns:
659
- str: The resulting video filepath. This video includes all sub directories' video clips.
660
- """
661
- # 0. Default height and width to unet
662
- height = height or self.unet.config.sample_size * self.vae_scale_factor
663
- width = width or self.unet.config.sample_size * self.vae_scale_factor
664
-
665
- output_path = Path(output_dir)
666
-
667
- name = name or time.strftime("%Y%m%d-%H%M%S")
668
- save_path_root = output_path / name
669
- save_path_root.mkdir(parents=True, exist_ok=True)
670
-
671
- # Where the final video of all the clips combined will be saved
672
- output_filepath = save_path_root / f"{name}.mp4"
673
-
674
- # If using same number of interpolation steps between, we turn into list
675
- if not resume and isinstance(num_interpolation_steps, int):
676
- num_interpolation_steps = [num_interpolation_steps] * (len(prompts) - 1)
677
-
678
- if not resume:
679
- audio_start_sec = audio_start_sec or 0
680
-
681
- # Save/reload prompt config
682
- prompt_config_path = save_path_root / "prompt_config.json"
683
- if not resume:
684
- prompt_config_path.write_text(
685
- json.dumps(
686
- dict(
687
- prompts=prompts,
688
- seeds=seeds,
689
- num_interpolation_steps=num_interpolation_steps,
690
- fps=fps,
691
- num_inference_steps=num_inference_steps,
692
- guidance_scale=guidance_scale,
693
- eta=eta,
694
- upsample=upsample,
695
- height=height,
696
- width=width,
697
- audio_filepath=audio_filepath,
698
- audio_start_sec=audio_start_sec,
699
- negative_prompt=negative_prompt,
700
- ),
701
- indent=2,
702
- sort_keys=False,
703
- )
704
- )
705
- else:
706
- data = json.load(open(prompt_config_path))
707
- prompts = data["prompts"]
708
- seeds = data["seeds"]
709
- num_interpolation_steps = data["num_interpolation_steps"]
710
- fps = data["fps"]
711
- num_inference_steps = data["num_inference_steps"]
712
- guidance_scale = data["guidance_scale"]
713
- eta = data["eta"]
714
- upsample = data["upsample"]
715
- height = data["height"]
716
- width = data["width"]
717
- audio_filepath = data["audio_filepath"]
718
- audio_start_sec = data["audio_start_sec"]
719
- negative_prompt = data.get("negative_prompt", None)
720
-
721
- for i, (prompt_a, prompt_b, seed_a, seed_b, num_step) in enumerate(
722
- zip(prompts, prompts[1:], seeds, seeds[1:], num_interpolation_steps)
723
- ):
724
- # {name}_000000 / {name}_000001 / ...
725
- save_path = save_path_root / f"{name}_{i:06d}"
726
-
727
- # Where the individual clips will be saved
728
- step_output_filepath = save_path / f"{name}_{i:06d}.mp4"
729
-
730
- # Determine if we need to resume from a previous run
731
- skip = 0
732
- if resume:
733
- if step_output_filepath.exists():
734
- print(f"Skipping {save_path} because frames already exist")
735
- continue
736
-
737
- existing_frames = sorted(save_path.glob(f"*{image_file_ext}"))
738
- if existing_frames:
739
- skip = int(existing_frames[-1].stem[-6:]) + 1
740
- if skip + 1 >= num_step:
741
- print(f"Skipping {save_path} because frames already exist")
742
- continue
743
- print(f"Resuming {save_path.name} from frame {skip}")
744
-
745
- audio_offset = audio_start_sec + sum(num_interpolation_steps[:i]) / fps
746
- audio_duration = num_step / fps
747
-
748
- self.make_clip_frames(
749
- prompt_a,
750
- prompt_b,
751
- seed_a,
752
- seed_b,
753
- num_interpolation_steps=num_step,
754
- save_path=save_path,
755
- num_inference_steps=num_inference_steps,
756
- guidance_scale=guidance_scale,
757
- eta=eta,
758
- height=height,
759
- width=width,
760
- upsample=upsample,
761
- batch_size=batch_size,
762
- T=get_timesteps_arr(
763
- audio_filepath,
764
- offset=audio_offset,
765
- duration=audio_duration,
766
- fps=fps,
767
- margin=margin,
768
- smooth=smooth,
769
- )
770
- if audio_filepath
771
- else None,
772
- skip=skip,
773
- negative_prompt=negative_prompt,
774
- step=(i, len(prompts) - 1),
775
- )
776
- if make_video:
777
- make_video_pyav(
778
- save_path,
779
- audio_filepath=audio_filepath,
780
- fps=fps,
781
- output_filepath=step_output_filepath,
782
- glob_pattern=f"*{image_file_ext}",
783
- audio_offset=audio_offset,
784
- audio_duration=audio_duration,
785
- sr=44100,
786
- )
787
- if make_video:
788
- return make_video_pyav(
789
- save_path_root,
790
- audio_filepath=audio_filepath,
791
- fps=fps,
792
- audio_offset=audio_start_sec,
793
- audio_duration=sum(num_interpolation_steps) / fps,
794
- output_filepath=output_filepath,
795
- glob_pattern=f"**/*{image_file_ext}",
796
- sr=44100,
797
- )
798
-
799
- def embed_text(self, text, negative_prompt=None):
800
- """Helper to embed some text"""
801
- text_input = self.tokenizer(
802
- text,
803
- padding="max_length",
804
- max_length=self.tokenizer.model_max_length,
805
- truncation=True,
806
- return_tensors="pt",
807
- )
808
- with torch.no_grad():
809
- embed = self.text_encoder(text_input.input_ids.to(self.device))[0]
810
- return embed
811
-
812
- def init_noise(self, seed, noise_shape, dtype):
813
- """Helper to initialize noise"""
814
- # randn does not exist on mps, so we create noise on CPU here and move it to the device after initialization
815
- if self.device.type == "mps":
816
- noise = torch.randn(
817
- noise_shape,
818
- device="cpu",
819
- generator=torch.Generator(device="cpu").manual_seed(seed),
820
- ).to(self.device)
821
- else:
822
- noise = torch.randn(
823
- noise_shape,
824
- device=self.device,
825
- generator=torch.Generator(device=self.device).manual_seed(seed),
826
- dtype=dtype,
827
- )
828
- return noise
829
-
830
- @classmethod
831
- def from_pretrained(cls, *args, tiled=False, **kwargs):
832
- """Same as diffusers `from_pretrained` but with tiled option, which makes images tilable"""
833
- if tiled:
834
-
835
- def patch_conv(**patch):
836
- cls = nn.Conv2d
837
- init = cls.__init__
838
-
839
- def __init__(self, *args, **kwargs):
840
- return init(self, *args, **kwargs, **patch)
841
-
842
- cls.__init__ = __init__
843
-
844
- patch_conv(padding_mode="circular")
845
-
846
- pipeline = super().from_pretrained(*args, **kwargs)
847
- pipeline.tiled = tiled
848
- return pipeline
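
For context, the `walk()` and `from_pretrained()` methods removed above are the public entry points of this pipeline. The sketch below shows how they are typically driven; the import path, the `StableDiffusionWalkPipeline` class name, and the checkpoint id are assumptions for illustration, not taken from the deleted source.

```python
# Minimal usage sketch for the walk() API documented in the docstring above.
# NOTE: the module path, class name, and checkpoint id are assumptions --
# adjust them to wherever this pipeline class lives in your copy of the code.
import torch

from stable_diffusion_video.stable_diffusion_pipeline import (  # assumed import path
    StableDiffusionWalkPipeline,
)

pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any Stable Diffusion checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Interpolate between three prompts; the on-disk layout matches the tree
# shown in the walk() docstring (dreams/example/example_000000/frame*.png, ...).
video_path = pipeline.walk(
    prompts=["a dog", "a cat", "a bird"],
    seeds=[1, 2, 3],
    num_interpolation_steps=5,
    output_dir="./dreams",
    name="example",
    fps=5,
)
print(video_path)  # e.g. dreams/example/example.mp4
```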
 
spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cuda.h DELETED
@@ -1,33 +0,0 @@
1
- /*!
2
- **************************************************************************************************
3
- * Deformable DETR
4
- * Copyright (c) 2020 SenseTime. All Rights Reserved.
5
- * Licensed under the Apache License, Version 2.0 [see LICENSE for details]
6
- **************************************************************************************************
7
- * Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0
8
- **************************************************************************************************
9
- */
10
-
11
- #pragma once
12
- #include <torch/extension.h>
13
-
14
- namespace groundingdino {
15
-
16
- at::Tensor ms_deform_attn_cuda_forward(
17
- const at::Tensor &value,
18
- const at::Tensor &spatial_shapes,
19
- const at::Tensor &level_start_index,
20
- const at::Tensor &sampling_loc,
21
- const at::Tensor &attn_weight,
22
- const int im2col_step);
23
-
24
- std::vector<at::Tensor> ms_deform_attn_cuda_backward(
25
- const at::Tensor &value,
26
- const at::Tensor &spatial_shapes,
27
- const at::Tensor &level_start_index,
28
- const at::Tensor &sampling_loc,
29
- const at::Tensor &attn_weight,
30
- const at::Tensor &grad_output,
31
- const int im2col_step);
32
-
33
- } // namespace groundingdino
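
The header above only declares the CUDA kernels; the computation itself is easier to follow in the pure-PyTorch reference used across the Deformable DETR family, which also serves as a CPU fallback and sanity check. The sketch below is that reference as commonly written, not the GroundingDINO source; treat the argument names and shape conventions as assumptions.

```python
# Pure-PyTorch reference for what ms_deform_attn_cuda_forward computes.
# Shapes (assumed, following the Deformable DETR convention):
#   value:                (N, S, M, D)         flattened multi-level features
#   value_spatial_shapes: (L, 2)               (H, W) per feature level
#   sampling_locations:   (N, Lq, M, L, P, 2)  normalized to [0, 1]
#   attention_weights:    (N, Lq, M, L, P)
import torch
import torch.nn.functional as F


def ms_deform_attn_pytorch(value, value_spatial_shapes, sampling_locations, attention_weights):
    N, S, M, D = value.shape
    _, Lq, _, L, P, _ = sampling_locations.shape
    shapes = [(int(H), int(W)) for H, W in value_spatial_shapes]
    # Split the flattened value tensor back into one feature map per level.
    value_list = value.split([H * W for H, W in shapes], dim=1)
    # grid_sample expects coordinates in [-1, 1].
    sampling_grids = 2 * sampling_locations - 1
    sampled_per_level = []
    for lid, (H, W) in enumerate(shapes):
        # (N, H*W, M, D) -> (N*M, D, H, W)
        value_l = value_list[lid].flatten(2).transpose(1, 2).reshape(N * M, D, H, W)
        # (N, Lq, M, P, 2) -> (N*M, Lq, P, 2)
        grid_l = sampling_grids[:, :, :, lid].transpose(1, 2).flatten(0, 1)
        sampled = F.grid_sample(value_l, grid_l, mode="bilinear",
                                padding_mode="zeros", align_corners=False)
        sampled_per_level.append(sampled)  # (N*M, D, Lq, P)
    # Weight every sampled point and sum over levels and points.
    attention_weights = attention_weights.transpose(1, 2).reshape(N * M, 1, Lq, L * P)
    output = (torch.stack(sampled_per_level, dim=-2).flatten(-2) * attention_weights).sum(-1)
    return output.view(N, M * D, Lq).transpose(1, 2).contiguous()  # (N, Lq, M*D)
```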
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/metadata/_json.py DELETED
@@ -1,84 +0,0 @@
1
- # Extracted from https://github.com/pfmoore/pkg_metadata
2
-
3
- from email.header import Header, decode_header, make_header
4
- from email.message import Message
5
- from typing import Any, Dict, List, Union
6
-
7
- METADATA_FIELDS = [
8
- # Name, Multiple-Use
9
- ("Metadata-Version", False),
10
- ("Name", False),
11
- ("Version", False),
12
- ("Dynamic", True),
13
- ("Platform", True),
14
- ("Supported-Platform", True),
15
- ("Summary", False),
16
- ("Description", False),
17
- ("Description-Content-Type", False),
18
- ("Keywords", False),
19
- ("Home-page", False),
20
- ("Download-URL", False),
21
- ("Author", False),
22
- ("Author-email", False),
23
- ("Maintainer", False),
24
- ("Maintainer-email", False),
25
- ("License", False),
26
- ("Classifier", True),
27
- ("Requires-Dist", True),
28
- ("Requires-Python", False),
29
- ("Requires-External", True),
30
- ("Project-URL", True),
31
- ("Provides-Extra", True),
32
- ("Provides-Dist", True),
33
- ("Obsoletes-Dist", True),
34
- ]
35
-
36
-
37
- def json_name(field: str) -> str:
38
- return field.lower().replace("-", "_")
39
-
40
-
41
- def msg_to_json(msg: Message) -> Dict[str, Any]:
42
- """Convert a Message object into a JSON-compatible dictionary."""
43
-
44
- def sanitise_header(h: Union[Header, str]) -> str:
45
- if isinstance(h, Header):
46
- chunks = []
47
- for bytes, encoding in decode_header(h):
48
- if encoding == "unknown-8bit":
49
- try:
50
- # See if UTF-8 works
51
- bytes.decode("utf-8")
52
- encoding = "utf-8"
53
- except UnicodeDecodeError:
54
- # If not, latin1 at least won't fail
55
- encoding = "latin1"
56
- chunks.append((bytes, encoding))
57
- return str(make_header(chunks))
58
- return str(h)
59
-
60
- result = {}
61
- for field, multi in METADATA_FIELDS:
62
- if field not in msg:
63
- continue
64
- key = json_name(field)
65
- if multi:
66
- value: Union[str, List[str]] = [
67
- sanitise_header(v) for v in msg.get_all(field)
68
- ]
69
- else:
70
- value = sanitise_header(msg.get(field))
71
- if key == "keywords":
72
- # Accept both comma-separated and space-separated
73
- # forms, for better compatibility with old data.
74
- if "," in value:
75
- value = [v.strip() for v in value.split(",")]
76
- else:
77
- value = value.split()
78
- result[key] = value
79
-
80
- payload = msg.get_payload()
81
- if payload:
82
- result["description"] = payload
83
-
84
- return result
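
The removed helper turns a PKG-INFO / METADATA style `email.message.Message` into a plain dictionary. A small self-contained illustration of its behaviour (the package metadata below is invented; `msg_to_json` is assumed to be in scope, e.g. imported from this module's original location):

```python
# Illustration of msg_to_json() on an invented METADATA payload.
from email.parser import Parser

RAW_METADATA = """\
Metadata-Version: 2.1
Name: example-package
Version: 1.0.0
Requires-Dist: requests (>=2.0)
Requires-Dist: rich
Keywords: demo,packaging

A short description body goes here.
"""

info = msg_to_json(Parser().parsestr(RAW_METADATA))
assert info["name"] == "example-package"
assert info["requires_dist"] == ["requests (>=2.0)", "rich"]  # multiple-use field -> list
assert info["keywords"] == ["demo", "packaging"]              # comma-separated form is split
assert info["description"].startswith("A short description")  # body taken from the payload
```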
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/_pick.py DELETED
@@ -1,17 +0,0 @@
1
- from typing import Optional
2
-
3
-
4
- def pick_bool(*values: Optional[bool]) -> bool:
5
- """Pick the first non-none bool or return the last value.
6
-
7
- Args:
8
- *values (bool): Any number of boolean or None values.
9
-
10
- Returns:
11
- bool: First non-none boolean.
12
- """
13
- assert values, "1 or more values required"
14
- for value in values:
15
- if value is not None:
16
- return value
17
- return bool(value)
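
The removed helper resolves a chain of optional boolean overrides: the first value that was explicitly set wins, and `None` means "not set, fall through". A quick illustration (assumes `pick_bool` from above is in scope):

```python
# pick_bool picks the first explicitly-set flag; None falls through.
assert pick_bool(None, None, True) is True   # first non-None value wins
assert pick_bool(False, True) is False       # False is explicit, so it is kept
assert pick_bool(None, None) is False        # nothing set -> bool(None) -> False
```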
 
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/command/py36compat.py DELETED
@@ -1,134 +0,0 @@
1
- import os
2
- from glob import glob
3
- from distutils.util import convert_path
4
- from distutils.command import sdist
5
-
6
-
7
- class sdist_add_defaults:
8
- """
9
- Mix-in providing forward-compatibility for functionality as found in
10
- distutils on Python 3.7.
11
-
12
- Do not edit the code in this class except to update functionality
13
- as implemented in distutils. Instead, override in the subclass.
14
- """
15
-
16
- def add_defaults(self):
17
- """Add all the default files to self.filelist:
18
- - README or README.txt
19
- - setup.py
20
- - test/test*.py
21
- - all pure Python modules mentioned in setup script
22
- - all files pointed by package_data (build_py)
23
- - all files defined in data_files.
24
- - all files defined as scripts.
25
- - all C sources listed as part of extensions or C libraries
26
- in the setup script (doesn't catch C headers!)
27
- Warns if (README or README.txt) or setup.py are missing; everything
28
- else is optional.
29
- """
30
- self._add_defaults_standards()
31
- self._add_defaults_optional()
32
- self._add_defaults_python()
33
- self._add_defaults_data_files()
34
- self._add_defaults_ext()
35
- self._add_defaults_c_libs()
36
- self._add_defaults_scripts()
37
-
38
- @staticmethod
39
- def _cs_path_exists(fspath):
40
- """
41
- Case-sensitive path existence check
42
-
43
- >>> sdist_add_defaults._cs_path_exists(__file__)
44
- True
45
- >>> sdist_add_defaults._cs_path_exists(__file__.upper())
46
- False
47
- """
48
- if not os.path.exists(fspath):
49
- return False
50
- # make absolute so we always have a directory
51
- abspath = os.path.abspath(fspath)
52
- directory, filename = os.path.split(abspath)
53
- return filename in os.listdir(directory)
54
-
55
- def _add_defaults_standards(self):
56
- standards = [self.READMES, self.distribution.script_name]
57
- for fn in standards:
58
- if isinstance(fn, tuple):
59
- alts = fn
60
- got_it = False
61
- for fn in alts:
62
- if self._cs_path_exists(fn):
63
- got_it = True
64
- self.filelist.append(fn)
65
- break
66
-
67
- if not got_it:
68
- self.warn("standard file not found: should have one of " +
69
- ', '.join(alts))
70
- else:
71
- if self._cs_path_exists(fn):
72
- self.filelist.append(fn)
73
- else:
74
- self.warn("standard file '%s' not found" % fn)
75
-
76
- def _add_defaults_optional(self):
77
- optional = ['test/test*.py', 'setup.cfg']
78
- for pattern in optional:
79
- files = filter(os.path.isfile, glob(pattern))
80
- self.filelist.extend(files)
81
-
82
- def _add_defaults_python(self):
83
- # build_py is used to get:
84
- # - python modules
85
- # - files defined in package_data
86
- build_py = self.get_finalized_command('build_py')
87
-
88
- # getting python files
89
- if self.distribution.has_pure_modules():
90
- self.filelist.extend(build_py.get_source_files())
91
-
92
- # getting package_data files
93
- # (computed in build_py.data_files by build_py.finalize_options)
94
- for pkg, src_dir, build_dir, filenames in build_py.data_files:
95
- for filename in filenames:
96
- self.filelist.append(os.path.join(src_dir, filename))
97
-
98
- def _add_defaults_data_files(self):
99
- # getting distribution.data_files
100
- if self.distribution.has_data_files():
101
- for item in self.distribution.data_files:
102
- if isinstance(item, str):
103
- # plain file
104
- item = convert_path(item)
105
- if os.path.isfile(item):
106
- self.filelist.append(item)
107
- else:
108
- # a (dirname, filenames) tuple
109
- dirname, filenames = item
110
- for f in filenames:
111
- f = convert_path(f)
112
- if os.path.isfile(f):
113
- self.filelist.append(f)
114
-
115
- def _add_defaults_ext(self):
116
- if self.distribution.has_ext_modules():
117
- build_ext = self.get_finalized_command('build_ext')
118
- self.filelist.extend(build_ext.get_source_files())
119
-
120
- def _add_defaults_c_libs(self):
121
- if self.distribution.has_c_libraries():
122
- build_clib = self.get_finalized_command('build_clib')
123
- self.filelist.extend(build_clib.get_source_files())
124
-
125
- def _add_defaults_scripts(self):
126
- if self.distribution.has_scripts():
127
- build_scripts = self.get_finalized_command('build_scripts')
128
- self.filelist.extend(build_scripts.get_source_files())
129
-
130
-
131
- if hasattr(sdist.sdist, '_add_defaults_standards'):
132
- # disable the functionality already available upstream
133
- class sdist_add_defaults: # noqa
134
- pass
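
The class above is a mix-in rather than a command in its own right: a concrete `sdist` command lists it first in its bases so the `_add_defaults_*` helpers take precedence over (or, via the `hasattr` guard at the bottom, defer to) the distutils implementation. The sketch below is a simplified illustration of that pattern, not the exact setuptools source.

```python
# Simplified sketch of how the sdist_add_defaults mix-in is consumed.
from distutils.command import sdist as orig


class sdist(sdist_add_defaults, orig.sdist):
    """sdist command that picks up the default-file logic defined above."""

    def run(self):
        # add_defaults() populates self.filelist before the archive is built;
        # it comes from the mix-in unless distutils already ships the
        # split-out helpers (see the hasattr() guard above).
        super().run()
```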
 
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ.py DELETED
@@ -1,29 +0,0 @@
1
- from .mask_rcnn_R_50_FPN_100ep_LSJ import (
2
- dataloader,
3
- lr_multiplier,
4
- model,
5
- optimizer,
6
- train,
7
- )
8
- from detectron2.config import LazyCall as L
9
- from detectron2.modeling.backbone import RegNet
10
- from detectron2.modeling.backbone.regnet import SimpleStem, ResBottleneckBlock
11
-
12
- # Config source:
13
- # https://github.com/facebookresearch/detectron2/blob/main/configs/COCO-InstanceSegmentation/mask_rcnn_regnetx_4gf_dds_fpn_1x.py # noqa
14
- model.backbone.bottom_up = L(RegNet)(
15
- stem_class=SimpleStem,
16
- stem_width=32,
17
- block_class=ResBottleneckBlock,
18
- depth=23,
19
- w_a=38.65,
20
- w_0=96,
21
- w_m=2.43,
22
- group_width=40,
23
- norm="SyncBN",
24
- out_features=["s1", "s2", "s3", "s4"],
25
- )
26
- model.pixel_std = [57.375, 57.120, 58.395]
27
-
28
- # RegNets benefit from enabling cudnn benchmark mode
29
- train.cudnn_benchmark = True
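
This file is a detectron2 "lazy" Python config rather than a script: it is meant to be loaded and instantiated through the LazyConfig API. A brief sketch of typical usage (the config path is illustrative and relative to a detectron2 checkout):

```python
# Sketch of how a lazy config like the one above is typically consumed.
from detectron2.config import LazyConfig, instantiate

cfg = LazyConfig.load(
    "configs/new_baselines/mask_rcnn_regnetx_4gf_dds_FPN_100ep_LSJ.py"  # illustrative path
)
model = instantiate(cfg.model)                    # Mask R-CNN with the RegNetX-4GF backbone above
train_loader = instantiate(cfg.dataloader.train)  # LSJ-augmented COCO loader from the base config
```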
 
spaces/BREWDAcademy/Brewd-Diffusion/style.css DELETED
@@ -1,77 +0,0 @@
1
- /* Main background and font styles to fit the steampunk/parchment theme */
2
- body {
3
- background-color: #f5f5dc; /* Parchment color */
4
- font-family: 'IM Fell English SC', serif; /* Steampunk-inspired font */
5
- color: #3a2e22; /* Dark brown text color */
6
- }
7
-
8
- /* Style for header element with class 'pretty' */
9
- .pretty h1 {
10
- text-align: center;
11
- font-family: 'IM Fell English SC', serif; /* Steampunk font */
12
- color: #806c50; /* Muted brown color */
13
- }
14
-
15
- /* Style for button element with ID 'duplicate-button' */
16
- #duplicate-button {
17
- margin: auto;
18
- color: #efe0c9; /* Light parchment color */
19
- background-color: #806c50; /* Leather-like brown */
20
- border-radius: 4px; /* Less roundness for a more vintage look */
21
- cursor: pointer;
22
- padding: 10px 20px;
23
- border: none;
24
- font-size: 1em;
25
- }
26
-
27
- /* Style for the Gradio interface elements to match the steampunk theme */
28
- .gradio_container {
29
- background-color: #f2e5bc; /* Light beige for input areas */
30
- border: 1px solid #9e7053; /* Darker border to stand out on parchment */
31
- }
32
-
33
- /* Style for gallery/result container */
34
- .gr-gallery {
35
- background-color: #fff; /* Clean white for results to stand out */
36
- border: 2px solid #9e7053; /* A darker border for contrast */
37
- }
38
-
39
- /* Style for input text and text areas */
40
- input[type='text'], textarea {
41
- background-color: #f2e5bc; /* Light beige, like old paper */
42
- color: #3a2e22; /* Dark brown text color */
43
- border: 1px solid #9e7053; /* Leather-like border */
44
- }
45
-
46
- /* Style for sliders */
47
- input[type='range'] {
48
- background: #806c50; /* A leather-like slider background */
49
- }
50
-
51
- /* Style for radio buttons and checkboxes */
52
- input[type='radio'], input[type='checkbox'] {
53
- accent-color: #806c50; /* Leather-like accent color */
54
- }
55
-
56
- /* Adjust the style for buttons in the interface */
57
- button {
58
- background-color: #806c50; /* Leather-like brown */
59
- color: #efe0c9; /* Parchment color text */
60
- border: none; /* Remove default border */
61
- }
62
-
63
- /* Style adjustments for the accordion */
64
- .gr-accordion {
65
- background-color: #f2e5bc; /* Light beige */
66
- color: #3a2e22; /* Dark brown text color */
67
- }
68
-
69
- /* Ensure links match the theme as well */
70
- a {
71
- color: #3a2e22; /* Dark brown, similar to the text */
72
- }
73
-
74
- /* Style for the progress bar */
75
- .gr-progress-bar {
76
- background-color: #c0a080; /* A muted brown progress bar */
77
- }
 
spaces/Benson/text-generation/Examples/Coche De Carreras Juego De Configuracin Para Pc Windows 7.md DELETED
@@ -1,85 +0,0 @@
1
-
2
- <h1>Configuración del juego de carreras de coches Descargar para PC Windows 7</h1>
3
- <p>Si eres un fan de la velocidad, la adrenalina y la emoción, es posible que te interese jugar juegos de carreras de coches. Los juegos de carreras de autos son videojuegos que simulan conducir un vehículo en una pista, una carretera o un terreno todoterreno. Pueden ser realistas, árcade o futuristas, dependiendo del estilo y el tema del juego. Los juegos de carreras de coches son populares entre los jugadores de todas las edades y preferencias, ya que ofrecen una variedad de desafíos, modos, vehículos y entornos para elegir. </p>
4
- <p>Una de las ventajas de jugar juegos de carreras de coches es que se puede disfrutar de ellos en diferentes plataformas, incluyendo PC Windows 7. PC Windows 7 es un sistema operativo confiable y compatible que puede ejecutar muchos juegos de carreras de coches sin problemas y de manera eficiente. Jugar juegos de carreras de coches en PC Windows 7 también le da más control sobre la configuración, los gráficos y el rendimiento del juego. También puede utilizar diferentes dispositivos de entrada, como un teclado, un ratón, un joystick o un volante, para mejorar su experiencia de juego. </p>
5
- <h2>coche de carreras juego de configuración para pc windows 7</h2><br /><p><b><b>Download Zip</b> &#10003; <a href="https://bltlly.com/2v6Mln">https://bltlly.com/2v6Mln</a></b></p><br /><br />
6
- <p>Pero ¿cómo encontrar y jugar juegos de carreras de coches en PC Windows 7? Hay muchas fuentes donde se puede descargar juegos de carreras de coches para PC Windows 7, tales como sitios web oficiales, tiendas en línea, o plataformas de terceros. Sin embargo, debes tener cuidado con la calidad, seguridad y legalidad de los archivos de juego que descargas. También debe seguir las instrucciones para instalar y ejecutar el juego en su PC Windows 7.</p>
7
- <p>En este artículo, le presentaremos algunos de los mejores juegos de carreras de coches para PC Windows 7 que puede descargar y jugar. También le proporcionaremos información sobre sus características, pros y contras, y cómo descargarlos e instalarlos en su PC Windows 7. ¡Comencemos! </p>
8
- <h2>BeamNG.drive</h2>
9
- <h3>¿Qué es BeamNG.drive y cuáles son sus características? </h3>
10
-
11
- <p>BeamNG.drive no es solo un juego de carreras de coches, sino también un juego de caja de arena que le permite experimentar con diferentes situaciones y resultados. Puede estrellar sus vehículos contra paredes, árboles, edificios u otros vehículos, y ver cómo se deforman y se rompen. También puedes probar tus habilidades de conducción en varios desafíos, como pruebas de tiempo, cursos de acrobacias, persecuciones policiales o aventuras fuera de la carretera <p>. También puedes jugar con tus amigos online o offline en modo multijugador. </p>
12
- <h3>Cómo descargar e instalar BeamNG.drive en PC Windows 7?</h3>
13
- <p>Para descargar e instalar BeamNG.drive en PC Windows 7, debe seguir estos pasos:</p>
14
- <ol>
15
- <li>Vaya al sitio web oficial de BeamNG.drive y haga clic en el botón "Comprar ahora". Serás redirigido a la tienda de Steam, donde puedes comprar el juego por $24.99. </li>
16
- <li>Después de comprar el juego, necesita descargar e instalar Steam en su PC Windows 7. Steam es una plataforma de distribución digital que le permite administrar sus juegos y acceder a las funciones en línea. </li>
17
- <li>Una vez que haya instalado Steam, inicie sesión con su cuenta. Luego, vaya a su biblioteca y busque BeamNG.drive. Haga clic en el botón "Instalar" y espere a que el juego se descargue e instale en su PC Windows 7.</li>
18
- <li>Después de que la instalación se haya completado, puede iniciar el juego desde Steam o desde el acceso directo de su escritorio. Disfrute! </li>
19
- </ol>
20
- <h3>Pros y contras de BeamNG.drive</h3>
21
- <p>BeamNG.drive es un divertido y realista juego de simulación de vehículos que ofrece muchas posibilidades y libertad. Sin embargo, también tiene algunos inconvenientes que debe tener en cuenta. Estos son algunos de los pros y contras de BeamNG.drive:</p>
22
- <tabla>
23
- <tr><th>Pros</th><th>Contras</th></tr>
24
- <tr><td>- Increíbles gráficos y física que hacen que los vehículos y entornos se vean y se sientan reales. </td><td>- Altos requisitos del sistema que pueden no funcionar bien en PC más viejos o más débiles.</td></tr>
25
-
26
- <tr><td>- Una comunidad creativa y activa que crea y comparte nuevos contenidos y comentarios. </td><td>- Falta de una historia clara u objetivos que puedan hacer que el juego sea aburrido o repetitivo para algunos jugadores. </td></tr>
27
- <tr><td>- Un modo multijugador que te permite jugar con tus amigos online o offline. </td><td>- Algunos errores y fallos que pueden afectar el juego o el rendimiento del juego. </td></tr>
28
- </tabla>
29
- <h2>Necesidad de velocidad</h2>
30
- <h3>¿Cuál es la necesidad de velocidad y cuáles son sus características? </h3>
31
- <p>Need for Speed es una de las franquicias de juegos de carreras de coches más populares y exitosas del mundo. Ha existido desde 1994 y ha lanzado más de 20 títulos en diferentes plataformas. Los juegos de Need for Speed son conocidos por su ritmo rápido, estilo árcade y juego de carreras callejeras. También cuentan con una variedad de coches, pistas, modos, opciones de personalización e historias. </p>
32
- <p></p>
33
- <p>Uno de los mejores juegos de Need for Speed para PC Windows 7 es Need for Speed: Most Wanted (2012). Este juego es un reinicio de la original Need for Speed: Most Wanted (2005) y se desarrolla en una ciudad de mundo abierto llamada Fairhaven. Juegas como un corredor callejero que tiene que competir con otros corredores, evadir a la policía y desafiar a la lista de los más buscados. Puedes conducir cualquier coche que veas en la ciudad, desde coches deportivos exóticos hasta coches deportivos y todoterrenos. También puede actualizar sus coches con piezas de rendimiento, trabajos de pintura, vinilos, matrículas y más. También puedes participar en diferentes eventos, como carreras, persecuciones, carreras de velocidad, emboscadas o hitos. </p>
34
- <h3> ¿Cómo descargar e instalar Necesidad de velocidad en PC Windows 7?</h3>
35
- <p>Para descargar e instalar Need for Speed: Most Wanted (2012) en PC Windows 7, debe seguir estos pasos:</p>
36
- <ol>
37
- <li>Ir al sitio web oficial de Need for Speed: Most Wanted (2012) y haga clic en el botón "Comprar ahora". Serás redirigido a la tienda de Origin, donde puedes comprar el juego por $19.99. </li>
38
-
39
- <li>Una vez que tenga Origin instalado, ejecútelo e inicie sesión con su cuenta. Luego, vaya a su biblioteca y encuentre Need for Speed: Most Wanted (2012). Haga clic en el botón "Descargar" y espere a que el juego se descargue e instale en su PC Windows 7.</li>
40
- <li>Después de que la instalación se haya completado, puede iniciar el juego desde Origin o desde el acceso directo de su escritorio. Disfrute! </li>
41
- </ol>
42
- <h3>Pros y contras de la necesidad de velocidad</h3>
43
- <p>Need for Speed: Most Wanted (2012) es un emocionante y adictivo juego de carreras de coches que ofrece mucha acción y diversión. Sin embargo, también tiene algunos inconvenientes que debes tener en cuenta. Estos son algunos de los pros y contras de Need for Speed: Most Wanted (2012):</p>
44
- <tabla>
45
- <tr><th>Pros</th><th>Contras</th></tr>
46
- <tr><td>- Impresionantes gráficos y efectos de sonido que hacen que la ciudad y los coches se ven y suenan increíble. </td><td>- Altos requisitos del sistema que pueden no funcionar bien en PC más viejos o más débiles.</td></tr>
47
- <tr><td>- Una gran y diversa ciudad de mundo abierto que puedes explorar y descubrir. </td><td>- Una historia repetitiva y superficial que puede no atraer a algunos jugadores. </td></tr>
48
- <tr><td>- Una gran selección de coches, opciones de personalización y eventos para elegir. </td><td>- La falta de una opción de transmisión manual que puede decepcionar a algunos corredores hardcore. </td></tr>
49
- <tr><td>- Un modo multijugador que te permite jugar con tus amigos online o offline. </td><td>- Algunos errores y fallos que pueden afectar el juego o el rendimiento del juego. </td></tr>
50
- </tabla>
51
- <h2>Carreras de la ciudad</h2>
52
- <h3>¿Qué es City Racing y cuáles son sus características? </h3>
53
- <p>City Racing es un juego de carreras de coches gratis que te permite conducir por una gran ciudad y competir con otros corredores. Puede elegir entre diferentes coches, desde sedanes hasta autos deportivos, y personalizarlos con diferentes colores, ruedas, spoilers y más. También puede actualizar sus coches con mejores motores, frenos, neumáticos y suspensión. También puede reparar sus coches cuando se dañan o se ensucian. </p>
54
-
55
- <h3> ¿Cómo descargar e instalar City Racing en PC Windows 7?</h3>
56
- <p>Para descargar e instalar City Racing en PC Windows 7, debe seguir estos pasos:</p>
57
- <ol>
58
- <li>Ir a la página web oficial de City Racing y haga clic en el "Descargar" botón. Serás redirigido a una plataforma de terceros llamada GameTop, donde puedes descargar el juego gratis. </li>
59
- <li>Después de descargar el archivo del juego, haga doble clic en él y siga el asistente de instalación. Es posible que necesite aceptar algunos términos y condiciones y elegir una carpeta de destino para el juego. </li>
60
- <li> Después de la instalación se ha completado, puede iniciar el juego desde el acceso directo del escritorio o desde el menú de inicio. Disfrute! </li>
61
- </ol>
62
- <h3>Pros y contras de City Racing</h3>
63
- <p>City Racing es un juego de carreras de coches divertido y gratuito que ofrece mucha variedad y emoción. Sin embargo, también tiene algunos inconvenientes que debes tener en cuenta. Estos son algunos de los pros y contras de City Racing:</p>
64
- <tabla>
65
- <tr><th>Pros</th><th>Contras</th></tr>
66
- <tr><td>- Gratis para descargar y jugar sin limitaciones o anuncios. </td><td>- Gráficos y efectos de sonido de baja calidad que pueden no parecer o sonar atractivos. </td></tr>
67
- <tr><td>- Una gran y diversa ciudad de mundo abierto que puedes explorar y disfrutar. </td><td>- La falta de un mapa o un sistema GPS que puede dificultar la navegación o encontrar el camino. </td></tr>
68
- <tr><td>- Una amplia gama de coches, opciones de personalización y carreras para elegir. </td><td>- Un sistema de física poco realista y fácil que puede hacer la conducción demasiado simple o aburrido. </td></tr>
69
- <tr><td>- Un modo multijugador que te permite jugar con tus amigos online o offline. </td><td>- Algunos malware o virus que pueden venir con el archivo del juego o la plataforma de terceros. </td></tr>
70
- </tabla>
71
- <h2>Conclusión</h2>
72
-
73
- <p>En este artículo, le hemos presentado algunos de los mejores juegos de carreras de coches para PC Windows 7 que puede descargar y jugar. También le hemos proporcionado alguna información sobre sus características, pros y contras, y cómo descargar e instalar confirmar que desea eliminar el juego de su PC.</li>
74
- <li>Siga las instrucciones y avisos que aparecen en la pantalla para completar el proceso de actualización o desinstalación. </li>
75
- </ol>
76
- <h3> ¿Dónde encontrar más juegos de carreras de coches para PC Windows 7?</h3>
77
- <p>Si está buscando más juegos de carreras de coches para PC Windows 7, puede consultar algunos de estos sitios web que ofrecen una variedad de juegos de forma gratuita o por una tarifa:</p>
78
- <ul>
79
- <li>[GameTop]: Un sitio web que ofrece juegos de carreras de coches gratuitos y legales para PC Windows 7, como City Racing, Moto Racing y Super Bikes.</li>
80
- <li>[Steam]: Un sitio web que ofrece una gran colección de juegos de carreras de coches para PC Windows 7, como BeamNG.drive, Assetto Corsa y Dirt Rally.</li>
81
- <li>[Origin]: Un sitio web que ofrece algunos de los mejores juegos de carreras de coches para PC Windows 7, como Need for Speed, Burnout Paradise y Shift 2 Unleashed.</li>
82
- <li>[GOG]: Un sitio web que ofrece juegos de carreras de coches clásicos y libres de DRM para PC Windows 7, como FlatOut, Carmageddon y Test Drive.</li>
83
- </ul></p> 64aa2da5cf<br />
84
- <br />
85
- <br />
 
spaces/Benson/text-generation/Examples/Descargar Entre Nosotros Gamejolt.md DELETED
@@ -1,58 +0,0 @@
1
- <br />
2
- <h1>Cómo descargar entre nosotros desde GameJolt</h1>
3
- <p>Si está buscando un juego divertido y atractivo para jugar con sus amigos en línea, es posible que haya oído hablar de Among Us. Este es un juego multijugador donde tienes que trabajar junto con otros jugadores para arreglar una nave espacial mientras tratas de averiguar quién de ustedes es un impostor. En este artículo, explicaremos qué es Among Us, qué es GameJolt y cómo descargar e instalar Among Us desde GameJolt. También te daremos algunos consejos y trucos para jugar el juego y responder a algunas preguntas frecuentes. </p>
4
- <h2>descargar entre nosotros gamejolt</h2><br /><p><b><b>Download</b> &mdash;&mdash;&mdash; <a href="https://bltlly.com/2v6KXj">https://bltlly.com/2v6KXj</a></b></p><br /><br />
5
- <h2>¿Qué hay entre nosotros? </h2>
6
- <p>Among Us es un juego que fue lanzado en 2018 por Innersloth, un estudio de juegos estadounidense. El juego fue inspirado por el juego de fiesta Mafia y la película de terror de ciencia ficción The Thing. El juego permite el juego multiplataforma, lo que significa que puedes jugarlo en diferentes dispositivos como Android, iOS, Windows, Nintendo Switch, PlayStation o Xbox.</p>
7
- <h3>Un juego de deducción social en el espacio</h3>
8
- <p>El juego se lleva a cabo en el espacio con temas de configuración donde los jugadores son coloridos, astronautas de dibujos animados sin brazos. Cada jugador toma uno de dos roles: la mayoría son compañeros de equipo, pero un pequeño número son impostores. Los compañeros de equipo trabajan para completar las tareas asignadas en el juego mientras identifican y expulsan a los presuntos Impostores (que parecen idénticos a los Compañeros de Equipo) por medio de deducción social, mientras que los Impostores tienen el objetivo de matar a los Compañeros de Equipo o sabotear un sistema crítico en el mapa. </p>
9
- <p>El juego puede ser jugado por cuatro a quince jugadores, con hasta tres impostores por ronda. Hay cuatro mapas jugables disponibles: una nave espacial llamada "The Skeld", un edificio de oficinas llamado "MIRA HQ", una base planetaria llamada "Polus" o "The Airship", una configuración de la serie de Innersloth Henry Stickmin. </p>
10
- <h3>¿Por qué es tan popular? </h3>
11
-
12
- <p>El juego es simple y al estilo de fiesta, con los jugadores asumiendo el papel de una persona del espacio de dibujos animados a bordo de una nave que necesita algunas reparaciones bastante urgentes. Es un juego sobre el trabajo en equipo, donde trabajan juntos para averiguar quién puede y no se puede confiar dentro del grupo de jugadores. Esto puede ser un beneficio, ya que ayuda a perfeccionar la lógica y las habilidades sociales de los niños, especialmente durante un momento en que los niños pueden estar atrapados en casa y necesitan socialización. </p>
13
- <h2>¿Qué es GameJolt? </h2>
14
- <p>GameJolt es una plataforma para juegos indie, donde los desarrolladores pueden compartir sus creaciones con jugadores de todo el mundo. GameJolt alberga miles de juegos de varios géneros, como acción, aventura, terror, rompecabezas, simulación, estrategia y más. Puedes encontrar juegos gratuitos o de pago, completos o en desarrollo, para un jugador o multijugador. </p>
15
- <h3>Una plataforma para juegos indie</h3>
16
- <p>GameJolt fue fundada en 2004 por David DeCarmine como una forma de mostrar sus propios juegos. Desde entonces, se ha convertido en un sitio impulsado por la comunidad que apoya a los desarrolladores independientes y los jugadores por igual. GameJolt permite a los desarrolladores subir sus juegos, establecer sus propios precios, obtener ingresos de anuncios o donaciones , e interactuar con sus fans. GameJolt también cuenta con un sistema de clasificación, un sistema de trofeos, un sistema de chat, un foro y un blog para cada juego. </p>
17
- <p></p>
18
- <h3>Cómo crear una cuenta y navegar por los juegos</h3>
19
- <p>Para usar GameJolt, primero necesitas crear una cuenta. Puedes registrarte con tu dirección de correo electrónico o tus cuentas de redes sociales, como Facebook, Twitter o Google. Una vez que tenga una cuenta, puede personalizar su perfil, seguir sus juegos y desarrolladores favoritos, unirse a grupos y ganar trofeos y puntos XP. </p>
20
-
21
- <h2>Cómo descargar e instalar Entre nosotros desde GameJolt</h2>
22
- <p>Ahora que sabes lo que entre nosotros y GameJolt son, vamos a ver cómo se puede descargar e instalar entre nosotros desde GameJolt. El proceso es simple y sencillo, pero debe asegurarse de que tiene un dispositivo compatible y suficiente espacio de almacenamiento. Estos son los pasos a seguir:</p>
23
- <h3>Paso 1: Encuentra el juego en GameJolt</h3>
24
- <p>El primer paso es encontrar el juego en GameJolt. Puedes usar este enlace para ir directamente a la página del juego: <a href="">https://gamejolt.com/games/among-us/516139</a>. Alternativamente, puede buscar "Entre nosotros" en el sitio web de GameJolt y buscar el juego con el logotipo oficial y el nombre del desarrollador "Innersloth". </p>
25
- <p>En la página del juego, verá una breve descripción del juego, algunas capturas de pantalla y videos, la calificación y comentarios, y las opciones de descarga. También verás un botón que dice "Seguir" si quieres ser notificado de cualquier actualización o noticia sobre el juego. </p>
26
- <h3>Paso 2: Elija la versión y descargue el archivo</h3>
27
- <p>El siguiente paso es elegir la versión del juego que desea descargar. Hay dos versiones disponibles en GameJolt: una para Windows y otra para Android. La versión para Windows es de 57 MB y la versión para Android es de 70 MB. Asegúrese de que tiene suficiente espacio de almacenamiento en su dispositivo antes de descargar. </p>
28
- <p>Para descargar el archivo, haga clic en el botón verde que dice "Descargar" junto a la versión que desee. Será redirigido a otra página donde verá un temporizador de cuenta atrás. Espere unos segundos hasta que el temporizador llegue a cero y luego haga clic en el botón azul que dice "Descargar ahora". El archivo comenzará a descargarse automáticamente. </p>
29
- <h3>Paso 3: Extraer el archivo y ejecutar el juego</h3>
30
-
31
- <p>Una vez que haya extraído el archivo, verá una carpeta llamada "Entre nosotros". Ábrala y busque el archivo ejecutable que tiene el logotipo del juego. Haga doble clic en él para ejecutar el juego. Verá una ventana que le pide que elija su idioma. Seleccione su idioma preferido y haga clic en "Aceptar". El juego se iniciará y verás el menú principal. </p>
32
- <h2>Consejos y trucos para jugar entre nosotros</h2>
33
- <p>Felicidades! Usted ha descargado e instalado con éxito entre nosotros de GameJolt. Ahora usted está listo para jugar el juego con sus amigos u otros jugadores en línea. Pero antes de entrar en un juego, aquí hay algunos consejos y trucos que te ayudarán a disfrutar más del juego:</p>
34
- <h3>Cómo personalizar el carácter y la configuración</h3>
35
- <p>Antes de unirse o alojar un juego, puede personalizar su personaje y la configuración en el menú principal. Para personalizar tu personaje, haz clic en "Personalizar" en la esquina inferior derecha de la pantalla. Puede cambiar su nombre, color, sombrero, mascota, piel o atuendo haciendo clic en ellos. También puede usar flechas para desplazarse a través de diferentes opciones. </p>
36
- <p>Para personalizar la configuración, haga clic en "Configuración" en la esquina inferior izquierda de la pantalla. Puede ajustar varias opciones como efectos de sonido, volumen de música, calidad de gráficos, resolución, modo de pantalla completa, idioma o tipo de chat haciendo clic en ellos. También puede usar deslizadores o botones para cambiar los valores. Para restablecer los valores predeterminados, haga clic en "Restablecer a Predeterminado" en la parte inferior de la pantalla. </p>
37
- <h3>Cómo unirse o organizar un juego en línea o localmente</h3>
38
- <p>Para unirse o alojar un juego en línea o localmente, haga clic en "Online" o "Local" en el menú principal. Si eliges "Online", puedes unirte a un juego público, crear un juego privado o introducir un código para unirte a un juego privado. Si eliges "Local", puedes alojar un juego o unirte a un juego alojado en la misma red Wi-Fi que tú. </p>
39
-
40
- <h3>Cómo jugar como un compañero de equipo o un impostor</h3>
41
- <p>Una vez que comience el juego, serás asignado como Compañero de Equipo o Impostor. Su papel se mostrará en la pantalla junto con sus compañeros de equipo si usted es un impostor. También verás una lista de tareas que necesitas completar si eres un Crewmate.</p>
42
- <p>Si usted es un compañero de equipo, su objetivo es completar sus tareas y averiguar quiénes son los impostores. Puede moverse por el mapa utilizando el joystick en el lado izquierdo de la pantalla e interactuar con los objetos utilizando el botón en el lado derecho de la pantalla. También puede usar respiraderos, cámaras, tabla de administración, signos vitales o registro de puertas para recopilar información. Puedes reportar un cadáver o llamar a una reunión de emergencia si encuentras algo sospechoso. A continuación, puede votar por quien cree que es un impostor o saltar la votación si no está seguro. </p>
43
- <p>Si eres un impostor, tu objetivo es matar a todos los compañeros de equipo o sabotear un sistema crítico. Puedes moverte por el mapa e interactuar con objetos como un compañero de equipo, pero también tienes algunas habilidades especiales. Puedes usar los respiraderos para viajar rápida y discretamente, matar a los Compañeros de Tripulación cuando están solos o en grupos, y sabotear los sistemas para causar caos o distraer a los Compañeros de Equipo. También puede falsificar tareas, mentir y acusar a otros para evitar sospechas. Puede votar por quién desea eliminar o omitir el voto si desea mezclarse. </p>
44
- <h2>Conclusión y preguntas frecuentes</h2>
45
- <p>En conclusión, Entre nosotros es un juego divertido y atractivo que puedes jugar con tus amigos en línea o localmente. Puedes descargarlo e instalarlo desde GameJolt siguiendo los pasos que hemos explicado en este artículo. También puedes personalizar tu personaje y tu configuración, unirte o organizar un juego y jugar como Crewmate o Impostor. Esperamos que este artículo te haya ayudado a aprender más sobre Among Us y GameJolt y que disfrutes jugando el juego. </p>
46
- <p>Aquí hay algunas preguntas frecuentes que puede tener sobre Among Us y GameJolt:</p>
47
- <h4>Q: ¿Cuánto cuesta Among Us en GameJolt? </h4>
48
-
49
- <h4>Q: ¿Puedo jugar entre nosotros con personas que tienen diferentes dispositivos? </h4>
50
- <p>A: Sí, puedes jugar entre nosotros con personas que tienen diferentes dispositivos, siempre y cuando tengan la misma versión del juego. El juego admite el juego multiplataforma entre dispositivos Android, iOS, Windows, Nintendo Switch, PlayStation y Xbox. </p>
51
- <h4>Q: ¿Cómo puedo actualizar entre nosotros en GameJolt? </h4>
52
- <p>A: Para actualizar entre nosotros en GameJolt, es necesario descargar e instalar la última versión del juego de GameJolt. Puedes comprobar si hay una nueva versión disponible visitando la página del juego en GameJolt y buscando actualizaciones o noticias de los desarrolladores. </p>
53
- <h4>Q: ¿Cómo puedo reportar errores o problemas con Among Us? </h4>
54
- <p>A: Si encuentra algún error o problema con Among Us, puede informar a los desarrolladores utilizando su dirección de correo electrónico oficial: [email protected]. También puede utilizar su servidor de discordia: https://discord.gg/innersloth.</p>
55
- <h4>Q: ¿Cómo puedo encontrar más juegos como Entre nosotros en GameJolt? </h4>
56
- <p>A: Si te gusta Among Us y quieres encontrar más juegos como Among Us en GameJolt, puedes usar las etiquetas o los géneros para filtrar los juegos. Por ejemplo, puedes buscar juegos que tengan etiquetas como "multijugador", "deducción social", "misterio de asesinato", "espacio" o "horror". También puedes buscar juegos que pertenezcan a géneros como "acción", "aventura", "rompecabezas" o "simulación". También puede navegar por los juegos destacados, populares o de tendencia en GameJolt para descubrir juegos nuevos y emocionantes. </p> 64aa2da5cf<br />
57
- <br />
58
- <br />
 
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/langhungarianmodel.py DELETED
The diff for this file is too large to render. See raw diff
 
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/command/build_ext.py DELETED
@@ -1,383 +0,0 @@
1
- import os
2
- import sys
3
- import itertools
4
- from importlib.machinery import EXTENSION_SUFFIXES
5
- from importlib.util import cache_from_source as _compiled_file_name
6
- from typing import Dict, Iterator, List, Tuple
7
-
8
- from distutils.command.build_ext import build_ext as _du_build_ext
9
- from distutils.ccompiler import new_compiler
10
- from distutils.sysconfig import customize_compiler, get_config_var
11
- from distutils import log
12
-
13
- from setuptools.errors import BaseError
14
- from setuptools.extension import Extension, Library
15
-
16
- try:
17
- # Attempt to use Cython for building extensions, if available
18
- from Cython.Distutils.build_ext import build_ext as _build_ext
19
- # Additionally, assert that the compiler module will load
20
- # also. Ref #1229.
21
- __import__('Cython.Compiler.Main')
22
- except ImportError:
23
- _build_ext = _du_build_ext
24
-
25
- # make sure _config_vars is initialized
26
- get_config_var("LDSHARED")
27
- from distutils.sysconfig import _config_vars as _CONFIG_VARS # noqa
28
-
29
-
30
- def _customize_compiler_for_shlib(compiler):
31
- if sys.platform == "darwin":
32
- # building .dylib requires additional compiler flags on OSX; here we
33
- # temporarily substitute the pyconfig.h variables so that distutils'
34
- # 'customize_compiler' uses them before we build the shared libraries.
35
- tmp = _CONFIG_VARS.copy()
36
- try:
37
- # XXX Help! I don't have any idea whether these are right...
38
- _CONFIG_VARS['LDSHARED'] = (
39
- "gcc -Wl,-x -dynamiclib -undefined dynamic_lookup")
40
- _CONFIG_VARS['CCSHARED'] = " -dynamiclib"
41
- _CONFIG_VARS['SO'] = ".dylib"
42
- customize_compiler(compiler)
43
- finally:
44
- _CONFIG_VARS.clear()
45
- _CONFIG_VARS.update(tmp)
46
- else:
47
- customize_compiler(compiler)
48
-
49
-
50
- have_rtld = False
51
- use_stubs = False
52
- libtype = 'shared'
53
-
54
- if sys.platform == "darwin":
55
- use_stubs = True
56
- elif os.name != 'nt':
57
- try:
58
- import dl
59
- use_stubs = have_rtld = hasattr(dl, 'RTLD_NOW')
60
- except ImportError:
61
- pass
62
-
63
-
64
- def if_dl(s):
65
- return s if have_rtld else ''
66
-
67
-
68
- def get_abi3_suffix():
69
- """Return the file extension for an abi3-compliant Extension()"""
70
- for suffix in EXTENSION_SUFFIXES:
71
- if '.abi3' in suffix: # Unix
72
- return suffix
73
- elif suffix == '.pyd': # Windows
74
- return suffix
75
-
76
-
77
- class build_ext(_build_ext):
78
- editable_mode: bool = False
79
- inplace: bool = False
80
-
81
- def run(self):
82
- """Build extensions in build directory, then copy if --inplace"""
83
- old_inplace, self.inplace = self.inplace, 0
84
- _build_ext.run(self)
85
- self.inplace = old_inplace
86
- if old_inplace:
87
- self.copy_extensions_to_source()
88
-
89
- def _get_inplace_equivalent(self, build_py, ext: Extension) -> Tuple[str, str]:
90
- fullname = self.get_ext_fullname(ext.name)
91
- filename = self.get_ext_filename(fullname)
92
- modpath = fullname.split('.')
93
- package = '.'.join(modpath[:-1])
94
- package_dir = build_py.get_package_dir(package)
95
- inplace_file = os.path.join(package_dir, os.path.basename(filename))
96
- regular_file = os.path.join(self.build_lib, filename)
97
- return (inplace_file, regular_file)
98
-
99
- def copy_extensions_to_source(self):
100
- build_py = self.get_finalized_command('build_py')
101
- for ext in self.extensions:
102
- inplace_file, regular_file = self._get_inplace_equivalent(build_py, ext)
103
-
104
- # Always copy, even if source is older than destination, to ensure
105
- # that the right extensions for the current Python/platform are
106
- # used.
107
- if os.path.exists(regular_file) or not ext.optional:
108
- self.copy_file(regular_file, inplace_file, level=self.verbose)
109
-
110
- if ext._needs_stub:
111
- inplace_stub = self._get_equivalent_stub(ext, inplace_file)
112
- self._write_stub_file(inplace_stub, ext, compile=True)
113
- # Always compile stub and remove the original (leave the cache behind)
114
- # (this behaviour was observed in previous iterations of the code)
115
-
116
- def _get_equivalent_stub(self, ext: Extension, output_file: str) -> str:
117
- dir_ = os.path.dirname(output_file)
118
- _, _, name = ext.name.rpartition(".")
119
- return f"{os.path.join(dir_, name)}.py"
120
-
121
- def _get_output_mapping(self) -> Iterator[Tuple[str, str]]:
122
- if not self.inplace:
123
- return
124
-
125
- build_py = self.get_finalized_command('build_py')
126
- opt = self.get_finalized_command('install_lib').optimize or ""
127
-
128
- for ext in self.extensions:
129
- inplace_file, regular_file = self._get_inplace_equivalent(build_py, ext)
130
- yield (regular_file, inplace_file)
131
-
132
- if ext._needs_stub:
133
- # This version of `build_ext` always builds artifacts in another dir,
134
- # when "inplace=True" is given it just copies them back.
135
- # This is done in the `copy_extensions_to_source` function, which
136
- # always compile stub files via `_compile_and_remove_stub`.
137
- # At the end of the process, a `.pyc` stub file is created without the
138
- # corresponding `.py`.
139
-
140
- inplace_stub = self._get_equivalent_stub(ext, inplace_file)
141
- regular_stub = self._get_equivalent_stub(ext, regular_file)
142
- inplace_cache = _compiled_file_name(inplace_stub, optimization=opt)
143
- output_cache = _compiled_file_name(regular_stub, optimization=opt)
144
- yield (output_cache, inplace_cache)
145
-
146
- def get_ext_filename(self, fullname):
147
- so_ext = os.getenv('SETUPTOOLS_EXT_SUFFIX')
148
- if so_ext:
149
- filename = os.path.join(*fullname.split('.')) + so_ext
150
- else:
151
- filename = _build_ext.get_ext_filename(self, fullname)
152
- so_ext = get_config_var('EXT_SUFFIX')
153
-
154
- if fullname in self.ext_map:
155
- ext = self.ext_map[fullname]
156
- use_abi3 = getattr(ext, 'py_limited_api') and get_abi3_suffix()
157
- if use_abi3:
158
- filename = filename[:-len(so_ext)]
159
- so_ext = get_abi3_suffix()
160
- filename = filename + so_ext
161
- if isinstance(ext, Library):
162
- fn, ext = os.path.splitext(filename)
163
- return self.shlib_compiler.library_filename(fn, libtype)
164
- elif use_stubs and ext._links_to_dynamic:
165
- d, fn = os.path.split(filename)
166
- return os.path.join(d, 'dl-' + fn)
167
- return filename
168
-
169
- def initialize_options(self):
170
- _build_ext.initialize_options(self)
171
- self.shlib_compiler = None
172
- self.shlibs = []
173
- self.ext_map = {}
174
- self.editable_mode = False
175
-
176
- def finalize_options(self):
177
- _build_ext.finalize_options(self)
178
- self.extensions = self.extensions or []
179
- self.check_extensions_list(self.extensions)
180
- self.shlibs = [ext for ext in self.extensions
181
- if isinstance(ext, Library)]
182
- if self.shlibs:
183
- self.setup_shlib_compiler()
184
- for ext in self.extensions:
185
- ext._full_name = self.get_ext_fullname(ext.name)
186
- for ext in self.extensions:
187
- fullname = ext._full_name
188
- self.ext_map[fullname] = ext
189
-
190
- # distutils 3.1 will also ask for module names
191
- # XXX what to do with conflicts?
192
- self.ext_map[fullname.split('.')[-1]] = ext
193
-
194
- ltd = self.shlibs and self.links_to_dynamic(ext) or False
195
- ns = ltd and use_stubs and not isinstance(ext, Library)
196
- ext._links_to_dynamic = ltd
197
- ext._needs_stub = ns
198
- filename = ext._file_name = self.get_ext_filename(fullname)
199
- libdir = os.path.dirname(os.path.join(self.build_lib, filename))
200
- if ltd and libdir not in ext.library_dirs:
201
- ext.library_dirs.append(libdir)
202
- if ltd and use_stubs and os.curdir not in ext.runtime_library_dirs:
203
- ext.runtime_library_dirs.append(os.curdir)
204
-
205
- if self.editable_mode:
206
- self.inplace = True
207
-
208
- def setup_shlib_compiler(self):
209
- compiler = self.shlib_compiler = new_compiler(
210
- compiler=self.compiler, dry_run=self.dry_run, force=self.force
211
- )
212
- _customize_compiler_for_shlib(compiler)
213
-
214
- if self.include_dirs is not None:
215
- compiler.set_include_dirs(self.include_dirs)
216
- if self.define is not None:
217
- # 'define' option is a list of (name,value) tuples
218
- for (name, value) in self.define:
219
- compiler.define_macro(name, value)
220
- if self.undef is not None:
221
- for macro in self.undef:
222
- compiler.undefine_macro(macro)
223
- if self.libraries is not None:
224
- compiler.set_libraries(self.libraries)
225
- if self.library_dirs is not None:
226
- compiler.set_library_dirs(self.library_dirs)
227
- if self.rpath is not None:
228
- compiler.set_runtime_library_dirs(self.rpath)
229
- if self.link_objects is not None:
230
- compiler.set_link_objects(self.link_objects)
231
-
232
- # hack so distutils' build_extension() builds a library instead
233
- compiler.link_shared_object = link_shared_object.__get__(compiler)
234
-
235
- def get_export_symbols(self, ext):
236
- if isinstance(ext, Library):
237
- return ext.export_symbols
238
- return _build_ext.get_export_symbols(self, ext)
239
-
240
- def build_extension(self, ext):
241
- ext._convert_pyx_sources_to_lang()
242
- _compiler = self.compiler
243
- try:
244
- if isinstance(ext, Library):
245
- self.compiler = self.shlib_compiler
246
- _build_ext.build_extension(self, ext)
247
- if ext._needs_stub:
248
- build_lib = self.get_finalized_command('build_py').build_lib
249
- self.write_stub(build_lib, ext)
250
- finally:
251
- self.compiler = _compiler
252
-
253
- def links_to_dynamic(self, ext):
254
- """Return true if 'ext' links to a dynamic lib in the same package"""
255
- # XXX this should check to ensure the lib is actually being built
256
- # XXX as dynamic, and not just using a locally-found version or a
257
- # XXX static-compiled version
258
- libnames = dict.fromkeys([lib._full_name for lib in self.shlibs])
259
- pkg = '.'.join(ext._full_name.split('.')[:-1] + [''])
260
- return any(pkg + libname in libnames for libname in ext.libraries)
261
-
262
- def get_outputs(self) -> List[str]:
263
- if self.inplace:
264
- return list(self.get_output_mapping().keys())
265
- return sorted(_build_ext.get_outputs(self) + self.__get_stubs_outputs())
266
-
267
- def get_output_mapping(self) -> Dict[str, str]:
268
- """See :class:`setuptools.commands.build.SubCommand`"""
269
- mapping = self._get_output_mapping()
270
- return dict(sorted(mapping, key=lambda x: x[0]))
271
-
272
- def __get_stubs_outputs(self):
273
- # assemble the base name for each extension that needs a stub
274
- ns_ext_bases = (
275
- os.path.join(self.build_lib, *ext._full_name.split('.'))
276
- for ext in self.extensions
277
- if ext._needs_stub
278
- )
279
- # pair each base with the extension
280
- pairs = itertools.product(ns_ext_bases, self.__get_output_extensions())
281
- return list(base + fnext for base, fnext in pairs)
282
-
283
- def __get_output_extensions(self):
284
- yield '.py'
285
- yield '.pyc'
286
- if self.get_finalized_command('build_py').optimize:
287
- yield '.pyo'
288
-
289
- def write_stub(self, output_dir, ext, compile=False):
290
- stub_file = os.path.join(output_dir, *ext._full_name.split('.')) + '.py'
291
- self._write_stub_file(stub_file, ext, compile)
292
-
293
- def _write_stub_file(self, stub_file: str, ext: Extension, compile=False):
294
- log.info("writing stub loader for %s to %s", ext._full_name, stub_file)
295
- if compile and os.path.exists(stub_file):
296
- raise BaseError(stub_file + " already exists! Please delete.")
297
- if not self.dry_run:
298
- f = open(stub_file, 'w')
299
- f.write(
300
- '\n'.join([
301
- "def __bootstrap__():",
302
- " global __bootstrap__, __file__, __loader__",
303
- " import sys, os, pkg_resources, importlib.util" +
304
- if_dl(", dl"),
305
- " __file__ = pkg_resources.resource_filename"
306
- "(__name__,%r)"
307
- % os.path.basename(ext._file_name),
308
- " del __bootstrap__",
309
- " if '__loader__' in globals():",
310
- " del __loader__",
311
- if_dl(" old_flags = sys.getdlopenflags()"),
312
- " old_dir = os.getcwd()",
313
- " try:",
314
- " os.chdir(os.path.dirname(__file__))",
315
- if_dl(" sys.setdlopenflags(dl.RTLD_NOW)"),
316
- " spec = importlib.util.spec_from_file_location(",
317
- " __name__, __file__)",
318
- " mod = importlib.util.module_from_spec(spec)",
319
- " spec.loader.exec_module(mod)",
320
- " finally:",
321
- if_dl(" sys.setdlopenflags(old_flags)"),
322
- " os.chdir(old_dir)",
323
- "__bootstrap__()",
324
- "" # terminal \n
325
- ])
326
- )
327
- f.close()
328
- if compile:
329
- self._compile_and_remove_stub(stub_file)
330
-
331
- def _compile_and_remove_stub(self, stub_file: str):
332
- from distutils.util import byte_compile
333
-
334
- byte_compile([stub_file], optimize=0,
335
- force=True, dry_run=self.dry_run)
336
- optimize = self.get_finalized_command('install_lib').optimize
337
- if optimize > 0:
338
- byte_compile([stub_file], optimize=optimize,
339
- force=True, dry_run=self.dry_run)
340
- if os.path.exists(stub_file) and not self.dry_run:
341
- os.unlink(stub_file)
342
-
343
-
344
- if use_stubs or os.name == 'nt':
345
- # Build shared libraries
346
- #
347
- def link_shared_object(
348
- self, objects, output_libname, output_dir=None, libraries=None,
349
- library_dirs=None, runtime_library_dirs=None, export_symbols=None,
350
- debug=0, extra_preargs=None, extra_postargs=None, build_temp=None,
351
- target_lang=None):
352
- self.link(
353
- self.SHARED_LIBRARY, objects, output_libname,
354
- output_dir, libraries, library_dirs, runtime_library_dirs,
355
- export_symbols, debug, extra_preargs, extra_postargs,
356
- build_temp, target_lang
357
- )
358
- else:
359
- # Build static libraries everywhere else
360
- libtype = 'static'
361
-
362
- def link_shared_object(
363
- self, objects, output_libname, output_dir=None, libraries=None,
364
- library_dirs=None, runtime_library_dirs=None, export_symbols=None,
365
- debug=0, extra_preargs=None, extra_postargs=None, build_temp=None,
366
- target_lang=None):
367
- # XXX we need to either disallow these attrs on Library instances,
368
- # or warn/abort here if set, or something...
369
- # libraries=None, library_dirs=None, runtime_library_dirs=None,
370
- # export_symbols=None, extra_preargs=None, extra_postargs=None,
371
- # build_temp=None
372
-
373
- assert output_dir is None # distutils build_ext doesn't pass this
374
- output_dir, filename = os.path.split(output_libname)
375
- basename, ext = os.path.splitext(filename)
376
- if self.library_filename("x").startswith('lib'):
377
- # strip 'lib' prefix; this is kludgy if some platform uses
378
- # a different prefix
379
- basename = basename[3:]
380
-
381
- self.create_static_lib(
382
- objects, basename, output_dir, debug, target_lang
383
- )
 
spaces/BigSalmon/AbstractTwst/README.md DELETED
@@ -1,12 +0,0 @@
- ---
- title: AbstractTwst
- emoji: 🐨
- colorFrom: yellow
- colorTo: green
- sdk: streamlit
- sdk_version: 1.21.0
- app_file: app.py
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
spaces/Bingsu/color_textual_inversion/app.py DELETED
@@ -1,128 +0,0 @@
1
- from __future__ import annotations
2
-
3
- import shlex
4
- import subprocess
5
- from pathlib import Path
6
- from tempfile import TemporaryDirectory
7
- from textwrap import dedent
8
-
9
- import numpy as np
10
- import streamlit as st
11
- import torch
12
- from PIL import Image
13
- from transformers import CLIPTokenizer
14
-
15
-
16
- def hex_to_rgb(s: str) -> tuple[int, int, int]:
17
- value = s.lstrip("#")
18
- return (int(value[:2], 16), int(value[2:4], 16), int(value[4:6], 16))
19
-
20
-
21
- st.header("Color Textual Inversion")
22
- with st.expander(label="info"):
23
- with open("info.txt", "r", encoding="utf-8") as f:
24
- st.markdown(f.read())
25
-
26
- duplicate_button = """<a class="duplicate-button" style="display:inline-block" target="_blank" href="https://huggingface.co/spaces/Bingsu/color_textual_inversion?duplicate=true"><img style="margin: 0" src="https://img.shields.io/badge/-Duplicate%20Space-blue?labelColor=white&style=flat&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAAXNSR0IArs4c6QAAAP5JREFUOE+lk7FqAkEURY+ltunEgFXS2sZGIbXfEPdLlnxJyDdYB62sbbUKpLbVNhyYFzbrrA74YJlh9r079973psed0cvUD4A+4HoCjsA85X0Dfn/RBLBgBDxnQPfAEJgBY+A9gALA4tcbamSzS4xq4FOQAJgCDwV2CPKV8tZAJcAjMMkUe1vX+U+SMhfAJEHasQIWmXNN3abzDwHUrgcRGmYcgKe0bxrblHEB4E/pndMazNpSZGcsZdBlYJcEL9Afo75molJyM2FxmPgmgPqlWNLGfwZGG6UiyEvLzHYDmoPkDDiNm9JR9uboiONcBXrpY1qmgs21x1QwyZcpvxt9NS09PlsPAAAAAElFTkSuQmCC&logoWidth=14" alt="Duplicate Space"></a>"""
27
- st.markdown(duplicate_button, unsafe_allow_html=True)
28
-
29
- col1, col2 = st.columns([15, 85])
30
- color = col1.color_picker("Pick a color", "#00f900")
31
- col2.text_input("", color, disabled=True)
32
-
33
- emb_name = st.text_input("Embedding name", color.lstrip("#").upper())
34
- init_token = st.text_input("Initializer token", "init token name")
35
- rgb = hex_to_rgb(color)
36
-
37
- img_array = np.zeros((128, 128, 3), dtype=np.uint8)
38
- for i in range(3):
39
- img_array[..., i] = rgb[i]
40
-
41
- dataset_temp = TemporaryDirectory(prefix="dataset_", dir=".")
42
- dataset_path = Path(dataset_temp.name)
43
- output_temp = TemporaryDirectory(prefix="output_", dir=".")
44
- output_path = Path(output_temp.name)
45
-
46
- img_path = dataset_path / f"{emb_name}.png"
47
- Image.fromarray(img_array).save(img_path)
48
-
49
- with st.sidebar:
50
- model_name = st.text_input("Model name", "Linaqruf/anything-v3.0")
51
- steps = st.slider("Steps", 1, 100, value=1, step=1)
52
- learning_rate = st.text_input("Learning rate", "0.001")
53
- learning_rate = float(learning_rate)
54
-
55
- tokenizer = CLIPTokenizer.from_pretrained(model_name, subfolder="tokenizer")
56
-
57
- # case 1: init_token is not a single token
58
- token = tokenizer.tokenize(init_token)
59
- if len(token) > 1:
60
- st.warning("Initializer token must be a single token")
61
- st.stop()
62
-
63
- # case 2: init_token already exists in the tokenizer
64
- num_added_tokens = tokenizer.add_tokens(emb_name)
65
- if num_added_tokens == 0:
66
- st.warning(f"The tokenizer already contains the token {emb_name}")
67
- st.stop()
68
-
69
- cmd = """
70
- accelerate launch textual_inversion.py \
71
- --pretrained_model_name_or_path={model_name} \
72
- --train_data_dir={dataset_path} \
73
- --learnable_property="style" \
74
- --placeholder_token="{emb_name}" \
75
- --initializer_token="{init}" \
76
- --resolution=128 \
77
- --train_batch_size=1 \
78
- --repeats=1 \
79
- --gradient_accumulation_steps=1 \
80
- --max_train_steps={steps} \
81
- --learning_rate={lr} \
82
- --output_dir={output_path} \
83
- --only_save_embeds
84
- """.strip()
85
-
86
- cmd = dedent(cmd).format(
87
- model_name=model_name,
88
- dataset_path=dataset_path.as_posix(),
89
- emb_name=emb_name,
90
- init=init_token,
91
- steps=steps,
92
- lr=learning_rate,
93
- output_path=output_path.as_posix(),
94
- )
95
- cmd = shlex.split(cmd)
96
-
97
- result_path = output_path / "learned_embeds.bin"
98
- captured = ""
99
-
100
- start_button = st.button("Start")
101
- download_button = st.empty()
102
-
103
- if start_button:
104
- with st.spinner("Training..."):
105
- placeholder = st.empty()
106
- p = subprocess.Popen(
107
- cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, encoding="utf-8"
108
- )
109
-
110
- while line := p.stderr.readline():
111
- captured += line
112
- placeholder.code(captured, language="bash")
113
-
114
- if not result_path.exists():
115
- st.stop()
116
-
117
- # fix unknown file volume bug
118
- trained_emb = torch.load(result_path, map_location="cpu")
119
- for k, v in trained_emb.items():
120
- trained_emb[k] = torch.from_numpy(v.numpy())
121
- torch.save(trained_emb, result_path)
122
-
123
- file = result_path.read_bytes()
124
- download_button.download_button(f"Download {emb_name}.pt", file, f"{emb_name}.pt")
125
- st.download_button(f"Download {emb_name}.pt ", file, f"{emb_name}.pt")
126
-
127
- dataset_temp.cleanup()
128
- output_temp.cleanup()
 
spaces/BridgeEight/internlm-20B-chat-w4-turbomind/app.py DELETED
@@ -1,128 +0,0 @@
1
- import os,random
2
- os.system('sh install_lmdeploy.sh')
3
- import gradio as gr
4
- from lmdeploy.serve.gradio.app import *
5
- os.system('sh download.sh')
6
-
7
- InterFace.async_engine = AsyncEngine(model_path='turbomind-internlm-chat-20b-w4',
8
- instance_num=2,
9
- tp=1)
10
-
11
-
12
- async def reset_local_demo(instruction_txtbox: gr.Textbox,
13
- state_chatbot: gr.State, request: gr.Request):
14
- """reset the session.
15
-
16
- Args:
17
- instruction_txtbox (str): user's prompt
18
- state_chatbot (Sequence): the chatting history
19
- request (gr.Request): the request from a user
20
- """
21
- state_chatbot = []
22
-
23
- return (
24
- state_chatbot,
25
- state_chatbot,
26
- gr.Textbox.update(value=''),
27
- )
28
-
29
-
30
- async def cancel_local_demo(state_chatbot: gr.State, cancel_btn: gr.Button,
31
- reset_btn: gr.Button, request: gr.Request):
32
- """stop the session.
33
-
34
- Args:
35
- instruction_txtbox (str): user's prompt
36
- state_chatbot (Sequence): the chatting history
37
- request (gr.Request): the request from a user
38
- """
39
- return (state_chatbot, disable_btn, disable_btn)
40
-
41
- async def chat_stream_demo(
42
- instruction: str,
43
- state_chatbot: Sequence,
44
- cancel_btn: gr.Button,
45
- reset_btn: gr.Button,
46
- request: gr.Request,
47
- ):
48
- """Chat with AI assistant.
49
-
50
- Args:
51
- instruction (str): user's prompt
52
- state_chatbot (Sequence): the chatting history
53
- request (gr.Request): the request from a user
54
- """
55
- session_id = random.randint(0,100000)
56
- bot_summarized_response = ''
57
- state_chatbot = state_chatbot + [(instruction, None)]
58
- messages = []
59
- for item in state_chatbot:
60
- messages.append(dict(role='user', content=item[0]))
61
- if item[1] is not None:
62
- messages.append(dict(role='assistant', content=item[1]))
63
-
64
- yield (state_chatbot, state_chatbot, disable_btn, disable_btn,
65
- f'{bot_summarized_response}'.strip())
66
-
67
- async for outputs in InterFace.async_engine.generate(
68
- messages,
69
- session_id,
70
- stream_response=True,
71
- sequence_start=True,
72
- sequence_end=True):
73
- response = outputs.response
74
- if outputs.finish_reason == 'length':
75
- gr.Warning('WARNING: exceed session max length.'
76
- ' Please restart the session by reset button.')
77
- if outputs.generate_token_len < 0:
78
- gr.Warning('WARNING: running on the old session.'
79
- ' Please restart the session by reset button.')
80
- if state_chatbot[-1][-1] is None:
81
- state_chatbot[-1] = (state_chatbot[-1][0], response)
82
- else:
83
- state_chatbot[-1] = (state_chatbot[-1][0],
84
- state_chatbot[-1][1] + response
85
- ) # piece by piece
86
- yield (state_chatbot, state_chatbot, disable_btn, disable_btn,
87
- f'{bot_summarized_response}'.strip())
88
-
89
- yield (state_chatbot, state_chatbot, disable_btn, disable_btn,
90
- f'{bot_summarized_response}'.strip())
91
-
92
-
93
- with gr.Blocks(css=CSS, theme=THEME) as demo:
94
- state_chatbot = gr.State([])
95
-
96
- with gr.Column(elem_id='container'):
97
- gr.Markdown('## LMDeploy Playground')
98
-
99
- chatbot = gr.Chatbot(
100
- elem_id='chatbot',
101
- label=InterFace.async_engine.tm_model.model_name)
102
- instruction_txtbox = gr.Textbox(
103
- placeholder='Please input the instruction',
104
- label='Instruction')
105
- with gr.Row():
106
- cancel_btn = gr.Button(value='Cancel', interactive=False, visible=False)
107
- reset_btn = gr.Button(value='Reset', interactive=False, visible=False)
108
-
109
- send_event = instruction_txtbox.submit(
110
- chat_stream_demo,
111
- [instruction_txtbox, state_chatbot, cancel_btn, reset_btn],
112
- [state_chatbot, chatbot, cancel_btn, reset_btn])
113
- instruction_txtbox.submit(
114
- lambda: gr.Textbox.update(value=''),
115
- [],
116
- [instruction_txtbox],
117
- )
118
- cancel_btn.click(cancel_local_demo,
119
- [state_chatbot, cancel_btn, reset_btn],
120
- [state_chatbot, cancel_btn, reset_btn],
121
- cancels=[send_event])
122
-
123
- reset_btn.click(reset_local_demo, [instruction_txtbox, state_chatbot],
124
- [state_chatbot, chatbot, instruction_txtbox],
125
- cancels=[send_event])
126
-
127
- # print(f'server is gonna mount on: http://{server_name}:{server_port}')
128
- demo.queue(concurrency_count=4, max_size=100).launch()
 
spaces/CVPR/LIVE/thrust/thrust/host_vector.h DELETED
@@ -1,514 +0,0 @@
1
- /*
2
- * Copyright 2008-2018 NVIDIA Corporation
3
- *
4
- * Licensed under the Apache License, Version 2.0 (the "License");
5
- * you may not use this file except in compliance with the License.
6
- * You may obtain a copy of the License at
7
- *
8
- * http://www.apache.org/licenses/LICENSE-2.0
9
- *
10
- * Unless required by applicable law or agreed to in writing, software
11
- * distributed under the License is distributed on an "AS IS" BASIS,
12
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- * See the License for the specific language governing permissions and
14
- * limitations under the License.
15
- */
16
-
17
-
18
- /*! \file host_vector.h
19
- * \brief A dynamically-sizable array of elements which reside in the "host" memory space
20
- */
21
-
22
- #pragma once
23
-
24
- #include <thrust/detail/config.h>
25
- #include <thrust/detail/memory_wrapper.h>
26
- #include <thrust/detail/vector_base.h>
27
- #include <vector>
28
- #include <utility>
29
-
30
- namespace thrust
31
- {
32
-
33
- // forward declaration of device_vector
34
- template<typename T, typename Alloc> class device_vector;
35
-
36
- /*! \addtogroup container_classes Container Classes
37
- * \addtogroup host_containers Host Containers
38
- * \ingroup container_classes
39
- * \{
40
- */
41
-
42
- /*! A \p host_vector is a container that supports random access to elements,
43
- * constant time removal of elements at the end, and linear time insertion
44
- * and removal of elements at the beginning or in the middle. The number of
45
- * elements in a \p host_vector may vary dynamically; memory management is
46
- * automatic. The memory associated with a \p host_vector resides in the memory
47
- * space of the host associated with a parallel device.
48
- *
49
- * \see http://www.sgi.com/tech/stl/Vector.html
50
- * \see device_vector
51
- */
52
- template<typename T, typename Alloc = std::allocator<T> >
53
- class host_vector
54
- : public detail::vector_base<T,Alloc>
55
- {
56
- private:
57
- typedef detail::vector_base<T,Alloc> Parent;
58
-
59
- public:
60
- /*! \cond
61
- */
62
- typedef typename Parent::size_type size_type;
63
- typedef typename Parent::value_type value_type;
64
- /*! \endcond
65
- */
66
-
67
- /*! This constructor creates an empty \p host_vector.
68
- */
69
- __host__
70
- host_vector(void)
71
- :Parent() {}
72
-
73
- /*! This constructor creates an empty \p host_vector.
74
- * \param alloc The allocator to use by this host_vector.
75
- */
76
- __host__
77
- host_vector(const Alloc &alloc)
78
- :Parent(alloc) {}
79
-
80
- /*! The destructor erases the elements.
81
- */
82
- // Define an empty destructor to explicitly specify
83
- // its execution space qualifier, as a workaround for nvcc warning
84
- __host__
85
- ~host_vector(void) {}
86
-
87
- /*! This constructor creates a \p host_vector with the given
88
- * size.
89
- * \param n The number of elements to initially create.
90
- */
91
- __host__
92
- explicit host_vector(size_type n)
93
- :Parent(n) {}
94
-
95
- /*! This constructor creates a \p host_vector with the given
96
- * size.
97
- * \param n The number of elements to initially create.
98
- * \param alloc The allocator to use by this host_vector.
99
- */
100
- __host__
101
- explicit host_vector(size_type n, const Alloc &alloc)
102
- :Parent(n,alloc) {}
103
-
104
- /*! This constructor creates a \p host_vector with copies
105
- * of an exemplar element.
106
- * \param n The number of elements to initially create.
107
- * \param value An element to copy.
108
- */
109
- __host__
110
- explicit host_vector(size_type n, const value_type &value)
111
- :Parent(n,value) {}
112
-
113
- /*! This constructor creates a \p host_vector with copies
114
- * of an exemplar element.
115
- * \param n The number of elements to initially create.
116
- * \param value An element to copy.
117
- * \param alloc The allocator to use by this host_vector.
118
- */
119
- __host__
120
- explicit host_vector(size_type n, const value_type &value, const Alloc &alloc)
121
- :Parent(n,value,alloc) {}
122
-
123
- /*! Copy constructor copies from an exemplar \p host_vector.
124
- * \param v The \p host_vector to copy.
125
- */
126
- __host__
127
- host_vector(const host_vector &v)
128
- :Parent(v) {}
129
-
130
- /*! Copy constructor copies from an exemplar \p host_vector.
131
- * \param v The \p host_vector to copy.
132
- * \param alloc The allocator to use by this host_vector.
133
- */
134
- __host__
135
- host_vector(const host_vector &v, const Alloc &alloc)
136
- :Parent(v,alloc) {}
137
-
138
- #if THRUST_CPP_DIALECT >= 2011
139
- /*! Move constructor moves from another host_vector.
140
- * \param v The host_vector to move.
141
- */
142
- __host__
143
- host_vector(host_vector &&v)
144
- :Parent(std::move(v)) {}
145
-
146
- /*! Move constructor moves from another host_vector.
147
- * \param v The host_vector to move.
148
- * \param alloc The allocator to use by this host_vector.
149
- */
150
- __host__
151
- host_vector(host_vector &&v, const Alloc &alloc)
152
- :Parent(std::move(v),alloc) {}
153
- #endif
154
-
155
- /*! Assign operator copies from an exemplar \p host_vector.
156
- * \param v The \p host_vector to copy.
157
- */
158
- __host__
159
- host_vector &operator=(const host_vector &v)
160
- { Parent::operator=(v); return *this; }
161
-
162
- #if THRUST_CPP_DIALECT >= 2011
163
- /*! Move assign operator moves from another host_vector.
164
- * \param v The host_vector to move.
165
- */
166
- __host__
167
- host_vector &operator=(host_vector &&v)
168
- { Parent::operator=(std::move(v)); return *this; }
169
- #endif
170
-
171
- /*! Copy constructor copies from an exemplar \p host_vector with different type.
172
- * \param v The \p host_vector to copy.
173
- */
174
- template<typename OtherT, typename OtherAlloc>
175
- __host__
176
- host_vector(const host_vector<OtherT,OtherAlloc> &v)
177
- :Parent(v) {}
178
-
179
- /*! Assign operator copies from an exemplar \p host_vector with different type.
180
- * \param v The \p host_vector to copy.
181
- */
182
- template<typename OtherT, typename OtherAlloc>
183
- __host__
184
- host_vector &operator=(const host_vector<OtherT,OtherAlloc> &v)
185
- { Parent::operator=(v); return *this; }
186
-
187
- /*! Copy constructor copies from an exemplar <tt>std::vector</tt>.
188
- * \param v The <tt>std::vector</tt> to copy.
189
- */
190
- template<typename OtherT, typename OtherAlloc>
191
- __host__
192
- host_vector(const std::vector<OtherT,OtherAlloc> &v)
193
- :Parent(v) {}
194
-
195
- /*! Assign operator copies from an exemplar <tt>std::vector</tt>.
196
- * \param v The <tt>std::vector</tt> to copy.
197
- */
198
- template<typename OtherT, typename OtherAlloc>
199
- __host__
200
- host_vector &operator=(const std::vector<OtherT,OtherAlloc> &v)
201
- { Parent::operator=(v); return *this;}
202
-
203
- /*! Copy constructor copies from an exemplar \p device_vector with possibly different type.
204
- * \param v The \p device_vector to copy.
205
- */
206
- template<typename OtherT, typename OtherAlloc>
207
- __host__
208
- host_vector(const device_vector<OtherT,OtherAlloc> &v);
209
-
210
- /*! Assign operator copies from an exemplar \p device_vector.
211
- * \param v The \p device_vector to copy.
212
- */
213
- template<typename OtherT, typename OtherAlloc>
214
- __host__
215
- host_vector &operator=(const device_vector<OtherT,OtherAlloc> &v)
216
- { Parent::operator=(v); return *this; }
217
-
218
- /*! This constructor builds a \p host_vector from a range.
219
- * \param first The beginning of the range.
220
- * \param last The end of the range.
221
- */
222
- template<typename InputIterator>
223
- __host__
224
- host_vector(InputIterator first, InputIterator last)
225
- :Parent(first, last) {}
226
-
227
- /*! This constructor builds a \p host_vector from a range.
228
- * \param first The beginning of the range.
229
- * \param last The end of the range.
230
- * \param alloc The allocator to use by this host_vector.
231
- */
232
- template<typename InputIterator>
233
- __host__
234
- host_vector(InputIterator first, InputIterator last, const Alloc &alloc)
235
- :Parent(first, last, alloc) {}
236
-
237
- // declare these members for the purpose of Doxygenating them
238
- // they actually exist in a derived-from class
239
- #if 0
240
- /*! \brief Resizes this vector to the specified number of elements.
241
- * \param new_size Number of elements this vector should contain.
242
- * \param x Data with which new elements should be populated.
243
- * \throw std::length_error If n exceeds max_size().
244
- *
245
- * This method will resize this vector to the specified number of
246
- * elements. If the number is smaller than this vector's current
247
- * size this vector is truncated, otherwise this vector is
248
- * extended and new elements are populated with given data.
249
- */
250
- void resize(size_type new_size, const value_type &x = value_type());
251
-
252
- /*! Returns the number of elements in this vector.
253
- */
254
- size_type size(void) const;
255
-
256
- /*! Returns the size() of the largest possible vector.
257
- * \return The largest possible return value of size().
258
- */
259
- size_type max_size(void) const;
260
-
261
- /*! \brief If n is less than or equal to capacity(), this call has no effect.
262
- * Otherwise, this method is a request for allocation of additional memory. If
263
- * the request is successful, then capacity() is greater than or equal to
264
- * n; otherwise, capacity() is unchanged. In either case, size() is unchanged.
265
- * \throw std::length_error If n exceeds max_size().
266
- */
267
- void reserve(size_type n);
268
-
269
- /*! Returns the number of elements which have been reserved in this
270
- * vector.
271
- */
272
- size_type capacity(void) const;
273
-
274
- /*! This method shrinks the capacity of this vector to exactly
275
- * fit its elements.
276
- */
277
- void shrink_to_fit(void);
278
-
279
- /*! \brief Subscript access to the data contained in this vector_dev.
280
- * \param n The index of the element for which data should be accessed.
281
- * \return Read/write reference to data.
282
- *
283
- * This operator allows for easy, array-style, data access.
284
- * Note that data access with this operator is unchecked and
285
- * out_of_range lookups are not defined.
286
- */
287
- reference operator[](size_type n);
288
-
289
- /*! \brief Subscript read access to the data contained in this vector_dev.
290
- * \param n The index of the element for which data should be accessed.
291
- * \return Read reference to data.
292
- *
293
- * This operator allows for easy, array-style, data access.
294
- * Note that data access with this operator is unchecked and
295
- * out_of_range lookups are not defined.
296
- */
297
- const_reference operator[](size_type n) const;
298
-
299
- /*! This method returns an iterator pointing to the beginning of
300
- * this vector.
301
- * \return mStart
302
- */
303
- iterator begin(void);
304
-
305
- /*! This method returns a const_iterator pointing to the beginning
306
- * of this vector.
307
- * \return mStart
308
- */
309
- const_iterator begin(void) const;
310
-
311
- /*! This method returns a const_iterator pointing to the beginning
312
- * of this vector.
313
- * \return mStart
314
- */
315
- const_iterator cbegin(void) const;
316
-
317
- /*! This method returns a reverse_iterator pointing to the beginning of
318
- * this vector's reversed sequence.
319
- * \return A reverse_iterator pointing to the beginning of this
320
- * vector's reversed sequence.
321
- */
322
- reverse_iterator rbegin(void);
323
-
324
- /*! This method returns a const_reverse_iterator pointing to the beginning of
325
- * this vector's reversed sequence.
326
- * \return A const_reverse_iterator pointing to the beginning of this
327
- * vector's reversed sequence.
328
- */
329
- const_reverse_iterator rbegin(void) const;
330
-
331
- /*! This method returns a const_reverse_iterator pointing to the beginning of
332
- * this vector's reversed sequence.
333
- * \return A const_reverse_iterator pointing to the beginning of this
334
- * vector's reversed sequence.
335
- */
336
- const_reverse_iterator crbegin(void) const;
337
-
338
- /*! This method returns an iterator pointing to one element past the
339
- * last of this vector.
340
- * \return begin() + size().
341
- */
342
- iterator end(void);
343
-
344
- /*! This method returns a const_iterator pointing to one element past the
345
- * last of this vector.
346
- * \return begin() + size().
347
- */
348
- const_iterator end(void) const;
349
-
350
- /*! This method returns a const_iterator pointing to one element past the
351
- * last of this vector.
352
- * \return begin() + size().
353
- */
354
- const_iterator cend(void) const;
355
-
356
- /*! This method returns a reverse_iterator pointing to one element past the
357
- * last of this vector's reversed sequence.
358
- * \return rbegin() + size().
359
- */
360
- reverse_iterator rend(void);
361
-
362
- /*! This method returns a const_reverse_iterator pointing to one element past the
363
- * last of this vector's reversed sequence.
364
- * \return rbegin() + size().
365
- */
366
- const_reverse_iterator rend(void) const;
367
-
368
- /*! This method returns a const_reverse_iterator pointing to one element past the
369
- * last of this vector's reversed sequence.
370
- * \return rbegin() + size().
371
- */
372
- const_reverse_iterator crend(void) const;
373
-
374
- /*! This method returns a const_reference referring to the first element of this
375
- * vector.
376
- * \return The first element of this vector.
377
- */
378
- const_reference front(void) const;
379
-
380
- /*! This method returns a reference pointing to the first element of this
381
- * vector.
382
- * \return The first element of this vector.
383
- */
384
- reference front(void);
385
-
386
- /*! This method returns a const reference pointing to the last element of
387
- * this vector.
388
- * \return The last element of this vector.
389
- */
390
- const_reference back(void) const;
391
-
392
- /*! This method returns a reference referring to the last element of
393
- * this vector_dev.
394
- * \return The last element of this vector.
395
- */
396
- reference back(void);
397
-
398
- /*! This method returns a pointer to this vector's first element.
399
- * \return A pointer to the first element of this vector.
400
- */
401
- pointer data(void);
402
-
403
- /*! This method returns a const_pointer to this vector's first element.
404
- * \return a const_pointer to the first element of this vector.
405
- */
406
- const_pointer data(void) const;
407
-
408
- /*! This method resizes this vector to 0.
409
- */
410
- void clear(void);
411
-
412
- /*! This method returns true iff size() == 0.
413
- * \return true if size() == 0; false, otherwise.
414
- */
415
- bool empty(void) const;
416
-
417
- /*! This method appends the given element to the end of this vector.
418
- * \param x The element to append.
419
- */
420
- void push_back(const value_type &x);
421
-
422
- /*! This method erases the last element of this vector, invalidating
423
- * all iterators and references to it.
424
- */
425
- void pop_back(void);
426
-
427
- /*! This method swaps the contents of this host_vector with another vector.
428
- * \param v The vector with which to swap.
429
- */
430
- void swap(host_vector &v);
431
-
432
- /*! This method removes the element at position pos.
433
- * \param pos The position of the element of interest.
434
- * \return An iterator pointing to the new location of the element that followed the element
435
- * at position pos.
436
- */
437
- iterator erase(iterator pos);
438
-
439
- /*! This method removes the range of elements [first,last) from this vector.
440
- * \param first The beginning of the range of elements to remove.
441
- * \param last The end of the range of elements to remove.
442
- * \return An iterator pointing to the new location of the element that followed the last
443
- * element in the sequence [first,last).
444
- */
445
- iterator erase(iterator first, iterator last);
446
-
447
- /*! This method inserts a single copy of a given exemplar value at the
448
- * specified position in this vector.
449
- * \param position The insertion position.
450
- * \param x The exemplar element to copy & insert.
451
- * \return An iterator pointing to the newly inserted element.
452
- */
453
- iterator insert(iterator position, const T &x);
454
-
455
- /*! This method inserts a copy of an exemplar value to a range at the
456
- * specified position in this vector.
457
- * \param position The insertion position
458
- * \param n The number of insertions to perform.
459
- * \param x The value to replicate and insert.
460
- */
461
- void insert(iterator position, size_type n, const T &x);
462
-
463
- /*! This method inserts a copy of an input range at the specified position
464
- * in this vector.
465
- * \param position The insertion position.
466
- * \param first The beginning of the range to copy.
467
- * \param last The end of the range to copy.
468
- *
469
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator.html>Input Iterator</a>,
470
- * and \p InputIterator's \c value_type is a model of <a href="http://www.sgi.com/tech/stl/Assignable.html">Assignable</a>.
471
- */
472
- template<typename InputIterator>
473
- void insert(iterator position, InputIterator first, InputIterator last);
474
-
475
- /*! This version of \p assign replicates a given exemplar
476
- * \p n times into this vector.
477
- * \param n The number of times to copy \p x.
478
- * \param x The exemplar element to replicate.
479
- */
480
- void assign(size_type n, const T &x);
481
-
482
- /*! This version of \p assign makes this vector a copy of a given input range.
483
- * \param first The beginning of the range to copy.
484
- * \param last The end of the range to copy.
485
- *
486
- * \tparam InputIterator is a model of <a href="http://www.sgi.com/tech/stl/InputIterator">Input Iterator</a>.
487
- */
488
- template<typename InputIterator>
489
- void assign(InputIterator first, InputIterator last);
490
-
491
- /*! This method returns a copy of this vector's allocator.
492
- * \return A copy of the alloctor used by this vector.
493
- */
494
- allocator_type get_allocator(void) const;
495
- #endif // end doxygen-only members
496
- }; // end host_vector
497
-
498
- /*! Exchanges the values of two vectors.
499
- * \p x The first \p host_vector of interest.
500
- * \p y The second \p host_vector of interest.
501
- */
502
- template<typename T, typename Alloc>
503
- void swap(host_vector<T,Alloc> &a, host_vector<T,Alloc> &b)
504
- {
505
- a.swap(b);
506
- } // end swap()
507
-
508
- /*! \}
509
- */
510
-
511
- } // end thrust
512
-
513
- #include <thrust/detail/host_vector.inl>
514
-
 
spaces/CVPR/ml-talking-face/app.py DELETED
@@ -1,202 +0,0 @@
1
- # https://huggingface.co/deepkyu/ml-talking-face
2
- import os
3
- import subprocess
4
-
5
- REST_IP = os.environ['REST_IP']
6
- SERVICE_PORT = int(os.environ['SERVICE_PORT'])
7
- TRANSLATION_APIKEY_URL = os.environ['TRANSLATION_APIKEY_URL']
8
- GOOGLE_APPLICATION_CREDENTIALS = os.environ['GOOGLE_APPLICATION_CREDENTIALS']
9
- subprocess.call(f"wget --no-check-certificate -O {GOOGLE_APPLICATION_CREDENTIALS} {TRANSLATION_APIKEY_URL}", shell=True)
10
-
11
- TOXICITY_THRESHOLD = float(os.getenv('TOXICITY_THRESHOLD', 0.7))
12
-
13
- import gradio as gr
14
- from toxicity_estimator import PerspectiveAPI
15
- from translator import Translator
16
- from client_rest import RestAPIApplication
17
- from pathlib import Path
18
- import argparse
19
- import threading
20
- import yaml
21
-
22
- TITLE = Path("docs/title.txt").read_text()
23
- DESCRIPTION = Path("docs/description.md").read_text()
24
-
25
-
26
- class GradioApplication:
27
- def __init__(self, rest_ip, rest_port, max_seed):
28
- self.lang_list = {
29
- 'ko': 'ko_KR',
30
- 'en': 'en_US',
31
- 'ja': 'ja_JP',
32
- 'zh': 'zh_CN',
33
- 'zh-CN': 'zh_CN'
34
- }
35
- self.background_list = [None,
36
- "background_image/cvpr.png",
37
- "background_image/black.png",
38
- "background_image/river.mp4",
39
- "background_image/sky.mp4"]
40
-
41
- self.perspective_api = PerspectiveAPI()
42
- self.translator = Translator()
43
- self.rest_application = RestAPIApplication(rest_ip, rest_port)
44
- self.output_dir = Path("output_file")
45
-
46
- inputs = prepare_input()
47
- outputs = prepare_output()
48
-
49
- self.iface = gr.Interface(fn=self.infer,
50
- title=TITLE,
51
- description=DESCRIPTION,
52
- inputs=inputs,
53
- outputs=outputs,
54
- allow_flagging='never',
55
- article=Path("docs/article.md").read_text())
56
-
57
- self.max_seed = max_seed
58
- self._file_seed = 0
59
- self.lock = threading.Lock()
60
-
61
-
62
- def _get_file_seed(self):
63
- return f"{self._file_seed % self.max_seed:02d}"
64
-
65
- def _reset_file_seed(self):
66
- self._file_seed = 0
67
-
68
- def _counter_file_seed(self):
69
- with self.lock:
70
- self._file_seed += 1
71
-
72
- def get_lang_code(self, lang):
73
- return self.lang_list[lang]
74
-
75
- def get_background_data(self, background_index):
76
- # get background filename and its extension
77
- data_path = self.background_list[background_index]
78
-
79
- if data_path is not None:
80
- with open(data_path, 'rb') as rf:
81
- background_data = rf.read()
82
- is_video_background = str(data_path).endswith(".mp4")
83
- else:
84
- background_data = None
85
- is_video_background = False
86
-
87
- return background_data, is_video_background
88
-
89
- @staticmethod
90
- def return_format(toxicity_prob, target_text, lang_dest, video_filename, detail=""):
91
- return {'Toxicity': toxicity_prob}, f"Language: {lang_dest}\nText: {target_text}\n-\nDetails: {detail}", str(video_filename)
92
-
93
- def infer(self, text, lang, duration_rate, action, background_index):
94
- self._counter_file_seed()
95
- print(f"File Seed: {self._file_seed}")
96
- toxicity_prob = 0.0
97
- target_text = ""
98
- lang_dest = ""
99
- video_filename = "vacant.mp4"
100
-
101
- # Toxicity estimation
102
- try:
103
- toxicity_prob = self.perspective_api.get_score(text)
104
- except Exception as e: # when Perspective API doesn't work
105
- pass
106
-
107
- if toxicity_prob > TOXICITY_THRESHOLD:
108
- detail = "Sorry, it seems that the input text is too toxic."
109
- return self.return_format(toxicity_prob, target_text, lang_dest, video_filename, detail=f"Error: {detail}")
110
-
111
- # Google Translate API
112
- try:
113
- target_text, lang_dest = self.translator.get_translation(text, lang)
114
- except Exception as e:
115
- target_text = ""
116
- lang_dest = ""
117
- detail = f"Error from language translation: ({e})"
118
- return self.return_format(toxicity_prob, target_text, lang_dest, video_filename, detail=f"Error: {detail}")
119
-
120
- try:
121
- self.translator.length_check(lang_dest, target_text) # assertion check
122
- except AssertionError as e:
123
- return self.return_format(toxicity_prob, target_text, lang_dest, video_filename, detail=f"Error: {str(e)}")
124
-
125
- lang_rpc_code = self.get_lang_code(lang_dest)
126
-
127
- # Video Inference
128
- background_data, is_video_background = self.get_background_data(background_index)
129
-
130
- video_data = self.rest_application.get_video(target_text, lang_rpc_code, duration_rate, action.lower(),
131
- background_data, is_video_background)
132
- print(f"Video data size: {len(video_data)}")
133
-
134
- video_filename = self.output_dir / f"{self._file_seed:02d}.mkv"
135
- with open(video_filename, "wb") as video_file:
136
- video_file.write(video_data)
137
-
138
- return self.return_format(toxicity_prob, target_text, lang_dest, video_filename)
139
-
140
- def run(self, server_port=7860, share=False):
141
- try:
142
- self.iface.launch(height=900,
143
- share=share, server_port=server_port,
144
- enable_queue=True)
145
-
146
- except KeyboardInterrupt:
147
- gr.close_all()
148
-
149
-
150
- def prepare_input():
151
- text_input = gr.Textbox(lines=2,
152
- placeholder="Type your text with English, Chinese, Korean, and Japanese.",
153
- value="Hello, this is demonstration for talking face generation "
154
- "with multilingual text-to-speech.",
155
- label="Text")
156
- lang_input = gr.Radio(['Korean', 'English', 'Japanese', 'Chinese'],
157
- type='value',
158
- value=None,
159
- label="Language")
160
- duration_rate_input = gr.Slider(minimum=0.8,
161
- maximum=1.2,
162
- step=0.01,
163
- value=1.0,
164
- label="Duration (The bigger the value, the slower the speech)")
165
- action_input = gr.Radio(['Default', 'Hand', 'BothHand', 'HandDown', 'Sorry'],
166
- type='value',
167
- value='Default',
168
- label="Select an action ...")
169
- background_input = gr.Radio(['None', 'CVPR', 'Black', 'River', 'Sky'],
170
- type='index',
171
- value='None',
172
- label="Select a background image/video ...")
173
-
174
- return [text_input, lang_input, duration_rate_input,
175
- action_input, background_input]
176
-
177
-
178
- def prepare_output():
179
- toxicity_output = gr.Label(num_top_classes=1, label="Toxicity (from Perspective API)")
180
- translation_result_otuput = gr.Textbox(type="str", label="Translation Result")
181
- video_output = gr.Video(format='mp4')
182
- return [toxicity_output, translation_result_otuput, video_output]
183
-
184
-
185
- def parse_args():
186
- parser = argparse.ArgumentParser(
187
- description='GRADIO DEMO for talking face generation submitted to CVPR2022')
188
- parser.add_argument('-p', '--port', dest='gradio_port', type=int, default=7860, help="Port for gradio")
189
- parser.add_argument('--rest_ip', type=str, default=REST_IP, help="IP for REST API")
190
- parser.add_argument('--rest_port', type=int, default=SERVICE_PORT, help="Port for REST API")
191
- parser.add_argument('--max_seed', type=int, default=20, help="Max seed for saving video")
192
- parser.add_argument('--share', action='store_true', help='get publicly sharable link')
193
- args = parser.parse_args()
194
- return args
195
-
196
-
197
- if __name__ == '__main__':
198
- args = parse_args()
199
-
200
- gradio_application = GradioApplication(args.rest_ip, args.rest_port, args.max_seed)
201
- gradio_application.run(server_port=args.gradio_port, share=args.share)
202
-
 
spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/resources/admin/index.css DELETED
@@ -1,75 +0,0 @@
1
- body {
2
- transform: scale(1);
3
- width: 660px;
4
- }
5
- .container {
6
- background: url("./imgs/bg.png") #000144 left top repeat-y;
7
- background-size: 700px auto;
8
- width: 660px;
9
- }
10
- .head-box {
11
- margin: 0 0 80px 0;
12
- }
13
- .cfg-box {
14
- border-radius: 15px;
15
- margin-top: 20px;
16
- margin-bottom: 20px;
17
- padding: 5px 15px;
18
- overflow: hidden;
19
- background: #f5f5f5;
20
- box-shadow: 0 5px 10px 0 rgba(0, 0, 0, 0.15);
21
- position: relative;
22
- background: rgba(35, 38, 57, 0.8);
23
- }
24
- .cfg-group {
25
- color: #ceb78b;
26
- font-size: 18px;
27
- font-weight: bold;
28
- padding: 10px 20px;
29
- }
30
- .cfg-li {
31
- border-radius: 18px;
32
- min-height: 36px;
33
- position: relative;
34
- overflow: hidden;
35
- margin-bottom: 10px;
36
- background: rgba(203, 196, 190, 0);
37
- }
38
- .cfg-line {
39
- color: #4e5769;
40
- line-height: 36px;
41
- padding-left: 20px;
42
- font-weight: bold;
43
- border-radius: 16px;
44
- box-shadow: 0 0 2px rgba(0, 0, 0, 0.5);
45
- background: url("./imgs/cfg-right.jpg") right top #cbc4be no-repeat;
46
- background-size: auto 36px;
47
- }
48
- .cfg-hint {
49
- font-size: 12px;
50
- font-weight: normal;
51
- margin-top: 3px;
52
- margin-bottom: -3px;
53
- }
54
- .cfg-status {
55
- position: absolute;
56
- top: 0;
57
- right: 0;
58
- height: 36px;
59
- width: 160px;
60
- text-align: center;
61
- line-height: 36px;
62
- font-size: 16px;
63
- color: #495366;
64
- font-weight: bold;
65
- border-radius: 0 16px 16px 0;
66
- }
67
- .cfg-status.status-off {
68
- color: #a95151;
69
- }
70
- .cfg-desc {
71
- font-size: 12px;
72
- color: #cbc4be;
73
- margin: 5px 0 5px 20px;
74
- }
75
- /*# sourceMappingURL=index.css.map */
 
spaces/CikeyQI/meme-api/meme_generator/memes/dianzhongdian/__init__.py DELETED
@@ -1,65 +0,0 @@
1
- from typing import List
2
-
3
- from pil_utils import BuildImage
4
-
5
- from meme_generator import add_meme
6
- from meme_generator.exception import TextOverLength
7
- from meme_generator.utils import run_sync, translate
8
-
9
-
10
- @run_sync
11
- def _dianzhongdian(img: BuildImage, text: str, trans: str):
12
- img = img.convert("L").resize_width(500)
13
- text_img1 = BuildImage.new("RGBA", (500, 60))
14
- text_img2 = BuildImage.new("RGBA", (500, 35))
15
-
16
- try:
17
- text_img1.draw_text(
18
- (20, 0, text_img1.width - 20, text_img1.height),
19
- text,
20
- max_fontsize=50,
21
- min_fontsize=25,
22
- fill="white",
23
- )
24
- except ValueError:
25
- raise TextOverLength(text)
26
-
27
- try:
28
- text_img2.draw_text(
29
- (20, 0, text_img2.width - 20, text_img2.height),
30
- trans,
31
- max_fontsize=25,
32
- min_fontsize=10,
33
- fill="white",
34
- )
35
- except ValueError:
36
- raise TextOverLength(text)
37
-
38
- frame = BuildImage.new("RGBA", (500, img.height + 100), "black")
39
- frame.paste(img, alpha=True)
40
- frame.paste(text_img1, (0, img.height), alpha=True)
41
- frame.paste(text_img2, (0, img.height + 60), alpha=True)
42
- return frame.save_jpg()
43
-
44
-
45
- async def dianzhongdian(images: List[BuildImage], texts: List[str], args):
46
- if len(texts) == 1:
47
- text = texts[0]
48
- trans = await translate(text, lang_to="jp")
49
- else:
50
- text = texts[0]
51
- trans = texts[1]
52
-
53
- return await _dianzhongdian(images[0], text, trans)
54
-
55
-
56
- add_meme(
57
- "dianzhongdian",
58
- dianzhongdian,
59
- min_images=1,
60
- max_images=1,
61
- min_texts=1,
62
- max_texts=2,
63
- default_texts=["救命啊"],
64
- keywords=["入典", "典中典", "黑白草图"],
65
- )
 
spaces/CikeyQI/meme-api/meme_generator/memes/look_flat/__init__.py DELETED
@@ -1,58 +0,0 @@
1
- from typing import List
2
-
3
- from pil_utils import BuildImage
4
- from pydantic import Field
5
-
6
- from meme_generator import MemeArgsModel, MemeArgsParser, MemeArgsType, add_meme
7
- from meme_generator.exception import TextOverLength
8
- from meme_generator.utils import make_jpg_or_gif
9
-
10
- help = "图片“压扁”比例"
11
-
12
- parser = MemeArgsParser()
13
- parser.add_argument("-r", "--ratio", type=int, default=2, help=help)
14
-
15
-
16
- class Model(MemeArgsModel):
17
- ratio: int = Field(2, description=help)
18
-
19
-
20
- def look_flat(images: List[BuildImage], texts: List[str], args: Model):
21
- text = texts[0] if texts else "可恶...被人看扁了"
22
- ratio = args.ratio
23
-
24
- img_w = 500
25
- text_h = 80
26
- text_frame = BuildImage.new("RGBA", (img_w, text_h), "white")
27
- try:
28
- text_frame.draw_text(
29
- (10, 0, img_w - 10, text_h),
30
- text,
31
- max_fontsize=55,
32
- min_fontsize=30,
33
- weight="bold",
34
- )
35
- except ValueError:
36
- raise TextOverLength(text)
37
-
38
- def make(img: BuildImage) -> BuildImage:
39
- img = img.convert("RGBA").resize_width(img_w)
40
- img = img.resize((img_w, img.height // ratio))
41
- img_h = img.height
42
- frame = BuildImage.new("RGBA", (img_w, img_h + text_h), "white")
43
- return frame.paste(img, alpha=True).paste(text_frame, (0, img_h), alpha=True)
44
-
45
- return make_jpg_or_gif(images[0], make)
46
-
47
-
48
- add_meme(
49
- "look_flat",
50
- look_flat,
51
- min_images=1,
52
- max_images=1,
53
- min_texts=0,
54
- max_texts=1,
55
- default_texts=["可恶...被人看扁了"],
56
- args_type=MemeArgsType(parser, Model),
57
- keywords=["看扁"],
58
- )
 
spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/registry.py DELETED
@@ -1,12 +0,0 @@
- # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-
- from maskrcnn_benchmark.utils.registry import Registry
-
- BACKBONES = Registry()
- RPN_HEADS = Registry()
- ROI_BOX_FEATURE_EXTRACTORS = Registry()
- ROI_BOX_PREDICTOR = Registry()
- ROI_KEYPOINT_FEATURE_EXTRACTORS = Registry()
- ROI_KEYPOINT_PREDICTOR = Registry()
- ROI_MASK_FEATURE_EXTRACTORS = Registry()
- ROI_MASK_PREDICTOR = Registry()
 
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/voltLib/ast.py DELETED
@@ -1,448 +0,0 @@
1
- from fontTools.voltLib.error import VoltLibError
2
- from typing import NamedTuple
3
-
4
-
5
- class Pos(NamedTuple):
6
- adv: int
7
- dx: int
8
- dy: int
9
- adv_adjust_by: dict
10
- dx_adjust_by: dict
11
- dy_adjust_by: dict
12
-
13
- def __str__(self):
14
- res = " POS"
15
- for attr in ("adv", "dx", "dy"):
16
- value = getattr(self, attr)
17
- if value is not None:
18
- res += f" {attr.upper()} {value}"
19
- adjust_by = getattr(self, f"{attr}_adjust_by", {})
20
- for size, adjustment in adjust_by.items():
21
- res += f" ADJUST_BY {adjustment} AT {size}"
22
- res += " END_POS"
23
- return res
24
-
25
-
26
- class Element(object):
27
- def __init__(self, location=None):
28
- self.location = location
29
-
30
- def build(self, builder):
31
- pass
32
-
33
- def __str__(self):
34
- raise NotImplementedError
35
-
36
-
37
- class Statement(Element):
38
- pass
39
-
40
-
41
- class Expression(Element):
42
- pass
43
-
44
-
45
- class VoltFile(Statement):
46
- def __init__(self):
47
- Statement.__init__(self, location=None)
48
- self.statements = []
49
-
50
- def build(self, builder):
51
- for s in self.statements:
52
- s.build(builder)
53
-
54
- def __str__(self):
55
- return "\n" + "\n".join(str(s) for s in self.statements) + " END\n"
56
-
57
-
58
- class GlyphDefinition(Statement):
59
- def __init__(self, name, gid, gunicode, gtype, components, location=None):
60
- Statement.__init__(self, location)
61
- self.name = name
62
- self.id = gid
63
- self.unicode = gunicode
64
- self.type = gtype
65
- self.components = components
66
-
67
- def __str__(self):
68
- res = f'DEF_GLYPH "{self.name}" ID {self.id}'
69
- if self.unicode is not None:
70
- if len(self.unicode) > 1:
71
- unicodes = ",".join(f"U+{u:04X}" for u in self.unicode)
72
- res += f' UNICODEVALUES "{unicodes}"'
73
- else:
74
- res += f" UNICODE {self.unicode[0]}"
75
- if self.type is not None:
76
- res += f" TYPE {self.type}"
77
- if self.components is not None:
78
- res += f" COMPONENTS {self.components}"
79
- res += " END_GLYPH"
80
- return res
81
-
82
-
83
- class GroupDefinition(Statement):
84
- def __init__(self, name, enum, location=None):
85
- Statement.__init__(self, location)
86
- self.name = name
87
- self.enum = enum
88
- self.glyphs_ = None
89
-
90
- def glyphSet(self, groups=None):
91
- if groups is not None and self.name in groups:
92
- raise VoltLibError(
93
- 'Group "%s" contains itself.' % (self.name), self.location
94
- )
95
- if self.glyphs_ is None:
96
- if groups is None:
97
- groups = set({self.name})
98
- else:
99
- groups.add(self.name)
100
- self.glyphs_ = self.enum.glyphSet(groups)
101
- return self.glyphs_
102
-
103
- def __str__(self):
104
- enum = self.enum and str(self.enum) or ""
105
- return f'DEF_GROUP "{self.name}"\n{enum}\nEND_GROUP'
106
-
107
-
108
- class GlyphName(Expression):
109
- """A single glyph name, such as cedilla."""
110
-
111
- def __init__(self, glyph, location=None):
112
- Expression.__init__(self, location)
113
- self.glyph = glyph
114
-
115
- def glyphSet(self):
116
- return (self.glyph,)
117
-
118
- def __str__(self):
119
- return f' GLYPH "{self.glyph}"'
120
-
121
-
122
- class Enum(Expression):
123
- """An enum"""
124
-
125
- def __init__(self, enum, location=None):
126
- Expression.__init__(self, location)
127
- self.enum = enum
128
-
129
- def __iter__(self):
130
- for e in self.glyphSet():
131
- yield e
132
-
133
- def glyphSet(self, groups=None):
134
- glyphs = []
135
- for element in self.enum:
136
- if isinstance(element, (GroupName, Enum)):
137
- glyphs.extend(element.glyphSet(groups))
138
- else:
139
- glyphs.extend(element.glyphSet())
140
- return tuple(glyphs)
141
-
142
- def __str__(self):
143
- enum = "".join(str(e) for e in self.enum)
144
- return f" ENUM{enum} END_ENUM"
145
-
146
-
147
- class GroupName(Expression):
148
- """A glyph group"""
149
-
150
- def __init__(self, group, parser, location=None):
151
- Expression.__init__(self, location)
152
- self.group = group
153
- self.parser_ = parser
154
-
155
- def glyphSet(self, groups=None):
156
- group = self.parser_.resolve_group(self.group)
157
- if group is not None:
158
- self.glyphs_ = group.glyphSet(groups)
159
- return self.glyphs_
160
- else:
161
- raise VoltLibError(
162
- 'Group "%s" is used but undefined.' % (self.group), self.location
163
- )
164
-
165
- def __str__(self):
166
- return f' GROUP "{self.group}"'
167
-
168
-
169
- class Range(Expression):
170
- """A glyph range"""
171
-
172
- def __init__(self, start, end, parser, location=None):
173
- Expression.__init__(self, location)
174
- self.start = start
175
- self.end = end
176
- self.parser = parser
177
-
178
- def glyphSet(self):
179
- return tuple(self.parser.glyph_range(self.start, self.end))
180
-
181
- def __str__(self):
182
- return f' RANGE "{self.start}" TO "{self.end}"'
183
-
184
-
185
- class ScriptDefinition(Statement):
186
- def __init__(self, name, tag, langs, location=None):
187
- Statement.__init__(self, location)
188
- self.name = name
189
- self.tag = tag
190
- self.langs = langs
191
-
192
- def __str__(self):
193
- res = "DEF_SCRIPT"
194
- if self.name is not None:
195
- res += f' NAME "{self.name}"'
196
- res += f' TAG "{self.tag}"\n\n'
197
- for lang in self.langs:
198
- res += f"{lang}"
199
- res += "END_SCRIPT"
200
- return res
201
-
202
-
203
- class LangSysDefinition(Statement):
204
- def __init__(self, name, tag, features, location=None):
205
- Statement.__init__(self, location)
206
- self.name = name
207
- self.tag = tag
208
- self.features = features
209
-
210
- def __str__(self):
211
- res = "DEF_LANGSYS"
212
- if self.name is not None:
213
- res += f' NAME "{self.name}"'
214
- res += f' TAG "{self.tag}"\n\n'
215
- for feature in self.features:
216
- res += f"{feature}"
217
- res += "END_LANGSYS\n"
218
- return res
219
-
220
-
221
- class FeatureDefinition(Statement):
222
- def __init__(self, name, tag, lookups, location=None):
223
- Statement.__init__(self, location)
224
- self.name = name
225
- self.tag = tag
226
- self.lookups = lookups
227
-
228
- def __str__(self):
229
- res = f'DEF_FEATURE NAME "{self.name}" TAG "{self.tag}"\n'
230
- res += " " + " ".join(f'LOOKUP "{l}"' for l in self.lookups) + "\n"
231
- res += "END_FEATURE\n"
232
- return res
233
-
234
-
235
- class LookupDefinition(Statement):
236
- def __init__(
237
- self,
238
- name,
239
- process_base,
240
- process_marks,
241
- mark_glyph_set,
242
- direction,
243
- reversal,
244
- comments,
245
- context,
246
- sub,
247
- pos,
248
- location=None,
249
- ):
250
- Statement.__init__(self, location)
251
- self.name = name
252
- self.process_base = process_base
253
- self.process_marks = process_marks
254
- self.mark_glyph_set = mark_glyph_set
255
- self.direction = direction
256
- self.reversal = reversal
257
- self.comments = comments
258
- self.context = context
259
- self.sub = sub
260
- self.pos = pos
261
-
262
- def __str__(self):
263
- res = f'DEF_LOOKUP "{self.name}"'
264
- res += f' {self.process_base and "PROCESS_BASE" or "SKIP_BASE"}'
265
- if self.process_marks:
266
- res += " PROCESS_MARKS "
267
- if self.mark_glyph_set:
268
- res += f'MARK_GLYPH_SET "{self.mark_glyph_set}"'
269
- elif isinstance(self.process_marks, str):
270
- res += f'"{self.process_marks}"'
271
- else:
272
- res += "ALL"
273
- else:
274
- res += " SKIP_MARKS"
275
- if self.direction is not None:
276
- res += f" DIRECTION {self.direction}"
277
- if self.reversal:
278
- res += " REVERSAL"
279
- if self.comments is not None:
280
- comments = self.comments.replace("\n", r"\n")
281
- res += f'\nCOMMENTS "{comments}"'
282
- if self.context:
283
- res += "\n" + "\n".join(str(c) for c in self.context)
284
- else:
285
- res += "\nIN_CONTEXT\nEND_CONTEXT"
286
- if self.sub:
287
- res += f"\n{self.sub}"
288
- if self.pos:
289
- res += f"\n{self.pos}"
290
- return res
291
-
292
-
293
- class SubstitutionDefinition(Statement):
294
- def __init__(self, mapping, location=None):
295
- Statement.__init__(self, location)
296
- self.mapping = mapping
297
-
298
- def __str__(self):
299
- res = "AS_SUBSTITUTION\n"
300
- for src, dst in self.mapping.items():
301
- src = "".join(str(s) for s in src)
302
- dst = "".join(str(d) for d in dst)
303
- res += f"SUB{src}\nWITH{dst}\nEND_SUB\n"
304
- res += "END_SUBSTITUTION"
305
- return res
306
-
307
-
308
- class SubstitutionSingleDefinition(SubstitutionDefinition):
309
- pass
310
-
311
-
312
- class SubstitutionMultipleDefinition(SubstitutionDefinition):
313
- pass
314
-
315
-
316
- class SubstitutionLigatureDefinition(SubstitutionDefinition):
317
- pass
318
-
319
-
320
- class SubstitutionReverseChainingSingleDefinition(SubstitutionDefinition):
321
- pass
322
-
323
-
324
- class PositionAttachDefinition(Statement):
325
- def __init__(self, coverage, coverage_to, location=None):
326
- Statement.__init__(self, location)
327
- self.coverage = coverage
328
- self.coverage_to = coverage_to
329
-
330
- def __str__(self):
331
- coverage = "".join(str(c) for c in self.coverage)
332
- res = f"AS_POSITION\nATTACH{coverage}\nTO"
333
- for coverage, anchor in self.coverage_to:
334
- coverage = "".join(str(c) for c in coverage)
335
- res += f'{coverage} AT ANCHOR "{anchor}"'
336
- res += "\nEND_ATTACH\nEND_POSITION"
337
- return res
338
-
339
-
340
- class PositionAttachCursiveDefinition(Statement):
341
- def __init__(self, coverages_exit, coverages_enter, location=None):
342
- Statement.__init__(self, location)
343
- self.coverages_exit = coverages_exit
344
- self.coverages_enter = coverages_enter
345
-
346
- def __str__(self):
347
- res = "AS_POSITION\nATTACH_CURSIVE"
348
- for coverage in self.coverages_exit:
349
- coverage = "".join(str(c) for c in coverage)
350
- res += f"\nEXIT {coverage}"
351
- for coverage in self.coverages_enter:
352
- coverage = "".join(str(c) for c in coverage)
353
- res += f"\nENTER {coverage}"
354
- res += "\nEND_ATTACH\nEND_POSITION"
355
- return res
356
-
357
-
358
- class PositionAdjustPairDefinition(Statement):
359
- def __init__(self, coverages_1, coverages_2, adjust_pair, location=None):
360
- Statement.__init__(self, location)
361
- self.coverages_1 = coverages_1
362
- self.coverages_2 = coverages_2
363
- self.adjust_pair = adjust_pair
364
-
365
- def __str__(self):
366
- res = "AS_POSITION\nADJUST_PAIR\n"
367
- for coverage in self.coverages_1:
368
- coverage = " ".join(str(c) for c in coverage)
369
- res += f" FIRST {coverage}"
370
- res += "\n"
371
- for coverage in self.coverages_2:
372
- coverage = " ".join(str(c) for c in coverage)
373
- res += f" SECOND {coverage}"
374
- res += "\n"
375
- for (id_1, id_2), (pos_1, pos_2) in self.adjust_pair.items():
376
- res += f" {id_1} {id_2} BY{pos_1}{pos_2}\n"
377
- res += "\nEND_ADJUST\nEND_POSITION"
378
- return res
379
-
380
-
381
- class PositionAdjustSingleDefinition(Statement):
382
- def __init__(self, adjust_single, location=None):
383
- Statement.__init__(self, location)
384
- self.adjust_single = adjust_single
385
-
386
- def __str__(self):
387
- res = "AS_POSITION\nADJUST_SINGLE"
388
- for coverage, pos in self.adjust_single:
389
- coverage = "".join(str(c) for c in coverage)
390
- res += f"{coverage} BY{pos}"
391
- res += "\nEND_ADJUST\nEND_POSITION"
392
- return res
393
-
394
-
395
- class ContextDefinition(Statement):
396
- def __init__(self, ex_or_in, left=None, right=None, location=None):
397
- Statement.__init__(self, location)
398
- self.ex_or_in = ex_or_in
399
- self.left = left if left is not None else []
400
- self.right = right if right is not None else []
401
-
402
- def __str__(self):
403
- res = self.ex_or_in + "\n"
404
- for coverage in self.left:
405
- coverage = "".join(str(c) for c in coverage)
406
- res += f" LEFT{coverage}\n"
407
- for coverage in self.right:
408
- coverage = "".join(str(c) for c in coverage)
409
- res += f" RIGHT{coverage}\n"
410
- res += "END_CONTEXT"
411
- return res
412
-
413
-
414
- class AnchorDefinition(Statement):
415
- def __init__(self, name, gid, glyph_name, component, locked, pos, location=None):
416
- Statement.__init__(self, location)
417
- self.name = name
418
- self.gid = gid
419
- self.glyph_name = glyph_name
420
- self.component = component
421
- self.locked = locked
422
- self.pos = pos
423
-
424
- def __str__(self):
425
- locked = self.locked and " LOCKED" or ""
426
- return (
427
- f'DEF_ANCHOR "{self.name}"'
428
- f" ON {self.gid}"
429
- f" GLYPH {self.glyph_name}"
430
- f" COMPONENT {self.component}"
431
- f"{locked}"
432
- f" AT {self.pos} END_ANCHOR"
433
- )
434
-
435
-
436
- class SettingDefinition(Statement):
437
- def __init__(self, name, value, location=None):
438
- Statement.__init__(self, location)
439
- self.name = name
440
- self.value = value
441
-
442
- def __str__(self):
443
- if self.value is True:
444
- return f"{self.name}"
445
- if isinstance(self.value, (tuple, list)):
446
- value = " ".join(str(v) for v in self.value)
447
- return f"{self.name} {value}"
448
- return f"{self.name} {self.value}"
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Column-2853eb31.css DELETED
@@ -1 +0,0 @@
1
- div.svelte-vt1mxs{display:flex;position:relative;flex-direction:column}div.svelte-vt1mxs>*,div.svelte-vt1mxs>.form>*{width:var(--size-full)}.gap.svelte-vt1mxs{gap:var(--layout-gap)}.hide.svelte-vt1mxs{display:none}.compact.svelte-vt1mxs>*,.compact.svelte-vt1mxs .box{border-radius:0}.compact.svelte-vt1mxs,.panel.svelte-vt1mxs{border:solid var(--panel-border-width) var(--panel-border-color);border-radius:var(--container-radius);background:var(--panel-background-fill);padding:var(--spacing-lg)}
spaces/Daextream/Whisper-Auto-Subtitled-Video-Generator/pages/02_📼_Upload_Video_File.py DELETED
@@ -1,230 +0,0 @@
1
- import whisper
2
- import streamlit as st
3
- from streamlit_lottie import st_lottie
4
- from utils import write_vtt, write_srt
5
- import ffmpeg
6
- import requests
7
- from typing import Iterator
8
- from io import StringIO
9
- import numpy as np
10
- import pathlib
11
- import os
12
-
13
- st.set_page_config(page_title="Auto Subtitled Video Generator", page_icon=":movie_camera:", layout="wide")
14
-
15
- # Define a function that we can use to load lottie files from a link.
16
- @st.cache(allow_output_mutation=True)
17
- def load_lottieurl(url: str):
18
- r = requests.get(url)
19
- if r.status_code != 200:
20
- return None
21
- return r.json()
22
-
23
-
24
- APP_DIR = pathlib.Path(__file__).parent.absolute()
25
-
26
- LOCAL_DIR = APP_DIR / "local"
27
- LOCAL_DIR.mkdir(exist_ok=True)
28
- save_dir = LOCAL_DIR / "output"
29
- save_dir.mkdir(exist_ok=True)
30
-
31
-
32
- loaded_model = whisper.load_model("base")
33
- current_size = "None"
34
-
35
-
36
- col1, col2 = st.columns([1, 3])
37
- with col1:
38
- lottie = load_lottieurl("https://assets1.lottiefiles.com/packages/lf20_HjK9Ol.json")
39
- st_lottie(lottie)
40
-
41
- with col2:
42
- st.write("""
43
- ## Auto Subtitled Video Generator
44
- ##### Upload a video file and get a video with subtitles.
45
- ###### ➠ If you want to transcribe the video in its original language, select the task as "Transcribe"
46
- ###### ➠ If you want to translate the subtitles to English, select the task as "Translate"
47
- ###### I recommend starting with the base model and then experimenting with the larger models; the small and medium models often work well. """)
48
-
49
-
50
- @st.cache(allow_output_mutation=True)
51
- def change_model(current_size, size):
52
- if current_size != size:
53
- loaded_model = whisper.load_model(size)
54
- return loaded_model
55
- else:
56
- raise Exception("Model size is the same as the current size.")
57
-
58
-
59
- @st.cache(allow_output_mutation=True)
60
- def inferecence(loaded_model, uploaded_file, task):
61
- with open(f"{save_dir}/input.mp4", "wb") as f:
62
- f.write(uploaded_file.read())
63
- audio = ffmpeg.input(f"{save_dir}/input.mp4")
64
- audio = ffmpeg.output(audio, f"{save_dir}/output.wav", acodec="pcm_s16le", ac=1, ar="16k")
65
- ffmpeg.run(audio, overwrite_output=True)
66
- if task == "Transcribe":
67
- options = dict(task="transcribe", best_of=5)
68
- results = loaded_model.transcribe(f"{save_dir}/output.wav", **options)
69
- vtt = getSubs(results["segments"], "vtt", 80)
70
- srt = getSubs(results["segments"], "srt", 80)
71
- lang = results["language"]
72
- return results["text"], vtt, srt, lang
73
- elif task == "Translate":
74
- options = dict(task="translate", best_of=5)
75
- results = loaded_model.transcribe(f"{save_dir}/output.wav", **options)
76
- vtt = getSubs(results["segments"], "vtt", 80)
77
- srt = getSubs(results["segments"], "srt", 80)
78
- lang = results["language"]
79
- return results["text"], vtt, srt, lang
80
- else:
81
- raise ValueError("Task not supported")
82
-
83
-
84
- def getSubs(segments: Iterator[dict], format: str, maxLineWidth: int) -> str:
85
- segmentStream = StringIO()
86
-
87
- if format == 'vtt':
88
- write_vtt(segments, file=segmentStream, maxLineWidth=maxLineWidth)
89
- elif format == 'srt':
90
- write_srt(segments, file=segmentStream, maxLineWidth=maxLineWidth)
91
- else:
92
- raise Exception("Unknown format " + format)
93
-
94
- segmentStream.seek(0)
95
- return segmentStream.read()
96
-
97
-
98
- def generate_subtitled_video(video, audio, transcript):
99
- video_file = ffmpeg.input(video)
100
- audio_file = ffmpeg.input(audio)
101
- ffmpeg.concat(video_file.filter("subtitles", transcript), audio_file, v=1, a=1).output("final.mp4").run(quiet=True, overwrite_output=True)
102
- video_with_subs = open("final.mp4", "rb")
103
- return video_with_subs
104
-
105
-
106
- def main():
107
- size = st.selectbox("Select Model Size (The larger the model, the more accurate the transcription will be, but it will take longer)", ["tiny", "base", "small", "medium", "large"], index=1)
108
- loaded_model = change_model(current_size, size)
109
- st.write(f"Model is {'multilingual' if loaded_model.is_multilingual else 'English-only'} "
110
- f"and has {sum(np.prod(p.shape) for p in loaded_model.parameters()):,} parameters.")
111
- input_file = st.file_uploader("File", type=["mp4", "avi", "mov", "mkv"])
112
- # get the name of the input_file
113
- if input_file is not None:
114
- filename = input_file.name[:-4]
115
- else:
116
- filename = None
117
- task = st.selectbox("Select Task", ["Transcribe", "Translate"], index=0)
118
- if task == "Transcribe":
119
- if st.button("Transcribe"):
120
- results = inferecence(loaded_model, input_file, task)
121
- col3, col4 = st.columns(2)
122
- col5, col6, col7, col8 = st.columns(4)
123
- col9, col10 = st.columns(2)
124
- with col3:
125
- st.video(input_file)
126
-
127
- with open("transcript.txt", "w+", encoding='utf8') as f:
128
- f.writelines(results[0])
129
- f.close()
130
- with open(os.path.join(os.getcwd(), "transcript.txt"), "rb") as f:
131
- datatxt = f.read()
132
-
133
- with open("transcript.vtt", "w+",encoding='utf8') as f:
134
- f.writelines(results[1])
135
- f.close()
136
- with open(os.path.join(os.getcwd(), "transcript.vtt"), "rb") as f:
137
- datavtt = f.read()
138
-
139
- with open("transcript.srt", "w+",encoding='utf8') as f:
140
- f.writelines(results[2])
141
- f.close()
142
- with open(os.path.join(os.getcwd(), "transcript.srt"), "rb") as f:
143
- datasrt = f.read()
144
-
145
- with col5:
146
- st.download_button(label="Download Transcript (.txt)",
147
- data=datatxt,
148
- file_name="transcript.txt")
149
- with col6:
150
- st.download_button(label="Download Transcript (.vtt)",
151
- data=datavtt,
152
- file_name="transcript.vtt")
153
- with col7:
154
- st.download_button(label="Download Transcript (.srt)",
155
- data=datasrt,
156
- file_name="transcript.srt")
157
- with col9:
158
- st.success("You can download the transcript in .srt format, edit it (if you need to) and upload it to YouTube to create subtitles for your video.")
159
- with col10:
160
- st.info("Streamlit refreshes after the download button is clicked. The data is cached so you can download the transcript again without having to transcribe the video again.")
161
-
162
- with col4:
163
- with st.spinner("Generating Subtitled Video"):
164
- video_with_subs = generate_subtitled_video(f"{save_dir}/input.mp4", f"{save_dir}/output.wav", "transcript.srt")
165
- st.video(video_with_subs)
166
- st.snow()
167
- with col8:
168
- st.download_button(label="Download Video with Subtitles",
169
- data=video_with_subs,
170
- file_name=f"{filename}_with_subs.mp4")
171
- elif task == "Translate":
172
- if st.button("Translate to English"):
173
- results = inferecence(loaded_model, input_file, task)
174
- col3, col4 = st.columns(2)
175
- col5, col6, col7, col8 = st.columns(4)
176
- col9, col10 = st.columns(2)
177
- with col3:
178
- st.video(input_file)
179
-
180
- with open("transcript.txt", "w+", encoding='utf8') as f:
181
- f.writelines(results[0])
182
- f.close()
183
- with open(os.path.join(os.getcwd(), "transcript.txt"), "rb") as f:
184
- datatxt = f.read()
185
-
186
- with open("transcript.vtt", "w+",encoding='utf8') as f:
187
- f.writelines(results[1])
188
- f.close()
189
- with open(os.path.join(os.getcwd(), "transcript.vtt"), "rb") as f:
190
- datavtt = f.read()
191
-
192
- with open("transcript.srt", "w+",encoding='utf8') as f:
193
- f.writelines(results[2])
194
- f.close()
195
- with open(os.path.join(os.getcwd(), "transcript.srt"), "rb") as f:
196
- datasrt = f.read()
197
-
198
- with col5:
199
- st.download_button(label="Download Transcript (.txt)",
200
- data=datatxt,
201
- file_name="transcript.txt")
202
- with col6:
203
- st.download_button(label="Download Transcript (.vtt)",
204
- data=datavtt,
205
- file_name="transcript.vtt")
206
- with col7:
207
- st.download_button(label="Download Transcript (.srt)",
208
- data=datasrt,
209
- file_name="transcript.srt")
210
- with col9:
211
- st.success("You can download the transcript in .srt format, edit it (if you need to) and upload it to YouTube to create subtitles for your video.")
212
- with col10:
213
- st.info("Streamlit refreshes after the download button is clicked. The data is cached so you can download the transcript again without having to transcribe the video again.")
214
-
215
- with col4:
216
- with st.spinner("Generating Subtitled Video"):
217
- video_with_subs = generate_subtitled_video(f"{save_dir}/input.mp4", f"{save_dir}/output.wav", "transcript.srt")
218
- st.video(video_with_subs)
219
- st.snow()
220
- with col8:
221
- st.download_button(label="Download Video with Subtitles",
222
- data=video_with_subs,
223
- file_name=f"{filename}_with_subs.mp4")
224
- else:
225
- st.error("Please select a task.")
226
-
227
-
228
- if __name__ == "__main__":
229
- main()
230
- st.markdown("###### Made with :heart: by [@BatuhanYılmaz](https://twitter.com/batuhan3326) [![this is an image link](https://i.imgur.com/thJhzOO.png)](https://www.buymeacoffee.com/batuhanylmz)")
spaces/DemoLou/moe-tts/modules.py DELETED
@@ -1,390 +0,0 @@
1
- import copy
2
- import math
3
- import numpy as np
4
- import scipy
5
- import torch
6
- from torch import nn
7
- from torch.nn import functional as F
8
-
9
- from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d
10
- from torch.nn.utils import weight_norm, remove_weight_norm
11
-
12
- import commons
13
- from commons import init_weights, get_padding
14
- from transforms import piecewise_rational_quadratic_transform
15
-
16
-
17
- LRELU_SLOPE = 0.1
18
-
19
-
20
- class LayerNorm(nn.Module):
21
- def __init__(self, channels, eps=1e-5):
22
- super().__init__()
23
- self.channels = channels
24
- self.eps = eps
25
-
26
- self.gamma = nn.Parameter(torch.ones(channels))
27
- self.beta = nn.Parameter(torch.zeros(channels))
28
-
29
- def forward(self, x):
30
- x = x.transpose(1, -1)
31
- x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps)
32
- return x.transpose(1, -1)
33
-
34
-
35
- class ConvReluNorm(nn.Module):
36
- def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout):
37
- super().__init__()
38
- self.in_channels = in_channels
39
- self.hidden_channels = hidden_channels
40
- self.out_channels = out_channels
41
- self.kernel_size = kernel_size
42
- self.n_layers = n_layers
43
- self.p_dropout = p_dropout
44
- assert n_layers > 1, "Number of layers should be larger than 0."
45
-
46
- self.conv_layers = nn.ModuleList()
47
- self.norm_layers = nn.ModuleList()
48
- self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2))
49
- self.norm_layers.append(LayerNorm(hidden_channels))
50
- self.relu_drop = nn.Sequential(
51
- nn.ReLU(),
52
- nn.Dropout(p_dropout))
53
- for _ in range(n_layers-1):
54
- self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2))
55
- self.norm_layers.append(LayerNorm(hidden_channels))
56
- self.proj = nn.Conv1d(hidden_channels, out_channels, 1)
57
- self.proj.weight.data.zero_()
58
- self.proj.bias.data.zero_()
59
-
60
- def forward(self, x, x_mask):
61
- x_org = x
62
- for i in range(self.n_layers):
63
- x = self.conv_layers[i](x * x_mask)
64
- x = self.norm_layers[i](x)
65
- x = self.relu_drop(x)
66
- x = x_org + self.proj(x)
67
- return x * x_mask
68
-
69
-
70
- class DDSConv(nn.Module):
71
- """
72
- Dilated and Depth-Separable Convolution
73
- """
74
- def __init__(self, channels, kernel_size, n_layers, p_dropout=0.):
75
- super().__init__()
76
- self.channels = channels
77
- self.kernel_size = kernel_size
78
- self.n_layers = n_layers
79
- self.p_dropout = p_dropout
80
-
81
- self.drop = nn.Dropout(p_dropout)
82
- self.convs_sep = nn.ModuleList()
83
- self.convs_1x1 = nn.ModuleList()
84
- self.norms_1 = nn.ModuleList()
85
- self.norms_2 = nn.ModuleList()
86
- for i in range(n_layers):
87
- dilation = kernel_size ** i
88
- padding = (kernel_size * dilation - dilation) // 2
89
- self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size,
90
- groups=channels, dilation=dilation, padding=padding
91
- ))
92
- self.convs_1x1.append(nn.Conv1d(channels, channels, 1))
93
- self.norms_1.append(LayerNorm(channels))
94
- self.norms_2.append(LayerNorm(channels))
95
-
96
- def forward(self, x, x_mask, g=None):
97
- if g is not None:
98
- x = x + g
99
- for i in range(self.n_layers):
100
- y = self.convs_sep[i](x * x_mask)
101
- y = self.norms_1[i](y)
102
- y = F.gelu(y)
103
- y = self.convs_1x1[i](y)
104
- y = self.norms_2[i](y)
105
- y = F.gelu(y)
106
- y = self.drop(y)
107
- x = x + y
108
- return x * x_mask
109
-
110
-
111
- class WN(torch.nn.Module):
112
- def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0):
113
- super(WN, self).__init__()
114
- assert(kernel_size % 2 == 1)
115
- self.hidden_channels =hidden_channels
116
- self.kernel_size = kernel_size,
117
- self.dilation_rate = dilation_rate
118
- self.n_layers = n_layers
119
- self.gin_channels = gin_channels
120
- self.p_dropout = p_dropout
121
-
122
- self.in_layers = torch.nn.ModuleList()
123
- self.res_skip_layers = torch.nn.ModuleList()
124
- self.drop = nn.Dropout(p_dropout)
125
-
126
- if gin_channels != 0:
127
- cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1)
128
- self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight')
129
-
130
- for i in range(n_layers):
131
- dilation = dilation_rate ** i
132
- padding = int((kernel_size * dilation - dilation) / 2)
133
- in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size,
134
- dilation=dilation, padding=padding)
135
- in_layer = torch.nn.utils.weight_norm(in_layer, name='weight')
136
- self.in_layers.append(in_layer)
137
-
138
- # last one is not necessary
139
- if i < n_layers - 1:
140
- res_skip_channels = 2 * hidden_channels
141
- else:
142
- res_skip_channels = hidden_channels
143
-
144
- res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1)
145
- res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight')
146
- self.res_skip_layers.append(res_skip_layer)
147
-
148
- def forward(self, x, x_mask, g=None, **kwargs):
149
- output = torch.zeros_like(x)
150
- n_channels_tensor = torch.IntTensor([self.hidden_channels])
151
-
152
- if g is not None:
153
- g = self.cond_layer(g)
154
-
155
- for i in range(self.n_layers):
156
- x_in = self.in_layers[i](x)
157
- if g is not None:
158
- cond_offset = i * 2 * self.hidden_channels
159
- g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:]
160
- else:
161
- g_l = torch.zeros_like(x_in)
162
-
163
- acts = commons.fused_add_tanh_sigmoid_multiply(
164
- x_in,
165
- g_l,
166
- n_channels_tensor)
167
- acts = self.drop(acts)
168
-
169
- res_skip_acts = self.res_skip_layers[i](acts)
170
- if i < self.n_layers - 1:
171
- res_acts = res_skip_acts[:,:self.hidden_channels,:]
172
- x = (x + res_acts) * x_mask
173
- output = output + res_skip_acts[:,self.hidden_channels:,:]
174
- else:
175
- output = output + res_skip_acts
176
- return output * x_mask
177
-
178
- def remove_weight_norm(self):
179
- if self.gin_channels != 0:
180
- torch.nn.utils.remove_weight_norm(self.cond_layer)
181
- for l in self.in_layers:
182
- torch.nn.utils.remove_weight_norm(l)
183
- for l in self.res_skip_layers:
184
- torch.nn.utils.remove_weight_norm(l)
185
-
186
-
187
- class ResBlock1(torch.nn.Module):
188
- def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)):
189
- super(ResBlock1, self).__init__()
190
- self.convs1 = nn.ModuleList([
191
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
192
- padding=get_padding(kernel_size, dilation[0]))),
193
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
194
- padding=get_padding(kernel_size, dilation[1]))),
195
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2],
196
- padding=get_padding(kernel_size, dilation[2])))
197
- ])
198
- self.convs1.apply(init_weights)
199
-
200
- self.convs2 = nn.ModuleList([
201
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
202
- padding=get_padding(kernel_size, 1))),
203
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
204
- padding=get_padding(kernel_size, 1))),
205
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1,
206
- padding=get_padding(kernel_size, 1)))
207
- ])
208
- self.convs2.apply(init_weights)
209
-
210
- def forward(self, x, x_mask=None):
211
- for c1, c2 in zip(self.convs1, self.convs2):
212
- xt = F.leaky_relu(x, LRELU_SLOPE)
213
- if x_mask is not None:
214
- xt = xt * x_mask
215
- xt = c1(xt)
216
- xt = F.leaky_relu(xt, LRELU_SLOPE)
217
- if x_mask is not None:
218
- xt = xt * x_mask
219
- xt = c2(xt)
220
- x = xt + x
221
- if x_mask is not None:
222
- x = x * x_mask
223
- return x
224
-
225
- def remove_weight_norm(self):
226
- for l in self.convs1:
227
- remove_weight_norm(l)
228
- for l in self.convs2:
229
- remove_weight_norm(l)
230
-
231
-
232
- class ResBlock2(torch.nn.Module):
233
- def __init__(self, channels, kernel_size=3, dilation=(1, 3)):
234
- super(ResBlock2, self).__init__()
235
- self.convs = nn.ModuleList([
236
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0],
237
- padding=get_padding(kernel_size, dilation[0]))),
238
- weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1],
239
- padding=get_padding(kernel_size, dilation[1])))
240
- ])
241
- self.convs.apply(init_weights)
242
-
243
- def forward(self, x, x_mask=None):
244
- for c in self.convs:
245
- xt = F.leaky_relu(x, LRELU_SLOPE)
246
- if x_mask is not None:
247
- xt = xt * x_mask
248
- xt = c(xt)
249
- x = xt + x
250
- if x_mask is not None:
251
- x = x * x_mask
252
- return x
253
-
254
- def remove_weight_norm(self):
255
- for l in self.convs:
256
- remove_weight_norm(l)
257
-
258
-
259
- class Log(nn.Module):
260
- def forward(self, x, x_mask, reverse=False, **kwargs):
261
- if not reverse:
262
- y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask
263
- logdet = torch.sum(-y, [1, 2])
264
- return y, logdet
265
- else:
266
- x = torch.exp(x) * x_mask
267
- return x
268
-
269
-
270
- class Flip(nn.Module):
271
- def forward(self, x, *args, reverse=False, **kwargs):
272
- x = torch.flip(x, [1])
273
- if not reverse:
274
- logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device)
275
- return x, logdet
276
- else:
277
- return x
278
-
279
-
280
- class ElementwiseAffine(nn.Module):
281
- def __init__(self, channels):
282
- super().__init__()
283
- self.channels = channels
284
- self.m = nn.Parameter(torch.zeros(channels,1))
285
- self.logs = nn.Parameter(torch.zeros(channels,1))
286
-
287
- def forward(self, x, x_mask, reverse=False, **kwargs):
288
- if not reverse:
289
- y = self.m + torch.exp(self.logs) * x
290
- y = y * x_mask
291
- logdet = torch.sum(self.logs * x_mask, [1,2])
292
- return y, logdet
293
- else:
294
- x = (x - self.m) * torch.exp(-self.logs) * x_mask
295
- return x
296
-
297
-
298
- class ResidualCouplingLayer(nn.Module):
299
- def __init__(self,
300
- channels,
301
- hidden_channels,
302
- kernel_size,
303
- dilation_rate,
304
- n_layers,
305
- p_dropout=0,
306
- gin_channels=0,
307
- mean_only=False):
308
- assert channels % 2 == 0, "channels should be divisible by 2"
309
- super().__init__()
310
- self.channels = channels
311
- self.hidden_channels = hidden_channels
312
- self.kernel_size = kernel_size
313
- self.dilation_rate = dilation_rate
314
- self.n_layers = n_layers
315
- self.half_channels = channels // 2
316
- self.mean_only = mean_only
317
-
318
- self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1)
319
- self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels)
320
- self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1)
321
- self.post.weight.data.zero_()
322
- self.post.bias.data.zero_()
323
-
324
- def forward(self, x, x_mask, g=None, reverse=False):
325
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
326
- h = self.pre(x0) * x_mask
327
- h = self.enc(h, x_mask, g=g)
328
- stats = self.post(h) * x_mask
329
- if not self.mean_only:
330
- m, logs = torch.split(stats, [self.half_channels]*2, 1)
331
- else:
332
- m = stats
333
- logs = torch.zeros_like(m)
334
-
335
- if not reverse:
336
- x1 = m + x1 * torch.exp(logs) * x_mask
337
- x = torch.cat([x0, x1], 1)
338
- logdet = torch.sum(logs, [1,2])
339
- return x, logdet
340
- else:
341
- x1 = (x1 - m) * torch.exp(-logs) * x_mask
342
- x = torch.cat([x0, x1], 1)
343
- return x
344
-
345
-
346
- class ConvFlow(nn.Module):
347
- def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0):
348
- super().__init__()
349
- self.in_channels = in_channels
350
- self.filter_channels = filter_channels
351
- self.kernel_size = kernel_size
352
- self.n_layers = n_layers
353
- self.num_bins = num_bins
354
- self.tail_bound = tail_bound
355
- self.half_channels = in_channels // 2
356
-
357
- self.pre = nn.Conv1d(self.half_channels, filter_channels, 1)
358
- self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.)
359
- self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1)
360
- self.proj.weight.data.zero_()
361
- self.proj.bias.data.zero_()
362
-
363
- def forward(self, x, x_mask, g=None, reverse=False):
364
- x0, x1 = torch.split(x, [self.half_channels]*2, 1)
365
- h = self.pre(x0)
366
- h = self.convs(h, x_mask, g=g)
367
- h = self.proj(h) * x_mask
368
-
369
- b, c, t = x0.shape
370
- h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?]
371
-
372
- unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels)
373
- unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels)
374
- unnormalized_derivatives = h[..., 2 * self.num_bins:]
375
-
376
- x1, logabsdet = piecewise_rational_quadratic_transform(x1,
377
- unnormalized_widths,
378
- unnormalized_heights,
379
- unnormalized_derivatives,
380
- inverse=reverse,
381
- tails='linear',
382
- tail_bound=self.tail_bound
383
- )
384
-
385
- x = torch.cat([x0, x1], 1) * x_mask
386
- logdet = torch.sum(logabsdet * x_mask, [1,2])
387
- if not reverse:
388
- return x, logdet
389
- else:
390
- return x
spaces/Devaholic/fruit-demo/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: Fruit Demo
3
- emoji: 🐨
4
- colorFrom: purple
5
- colorTo: gray
6
- sdk: gradio
7
- sdk_version: 3.0.18
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Dinoking/Guccio-AI-Designer/models/stylegan2/stylegan2-pytorch/convert_weight.py DELETED
@@ -1,283 +0,0 @@
1
- import argparse
2
- import os
3
- import sys
4
- import pickle
5
- import math
6
-
7
- import torch
8
- import numpy as np
9
- from torchvision import utils
10
-
11
- from model import Generator, Discriminator
12
-
13
-
14
- def convert_modconv(vars, source_name, target_name, flip=False):
15
- weight = vars[source_name + '/weight'].value().eval()
16
- mod_weight = vars[source_name + '/mod_weight'].value().eval()
17
- mod_bias = vars[source_name + '/mod_bias'].value().eval()
18
- noise = vars[source_name + '/noise_strength'].value().eval()
19
- bias = vars[source_name + '/bias'].value().eval()
20
-
21
- dic = {
22
- 'conv.weight': np.expand_dims(weight.transpose((3, 2, 0, 1)), 0),
23
- 'conv.modulation.weight': mod_weight.transpose((1, 0)),
24
- 'conv.modulation.bias': mod_bias + 1,
25
- 'noise.weight': np.array([noise]),
26
- 'activate.bias': bias,
27
- }
28
-
29
- dic_torch = {}
30
-
31
- for k, v in dic.items():
32
- dic_torch[target_name + '.' + k] = torch.from_numpy(v)
33
-
34
- if flip:
35
- dic_torch[target_name + '.conv.weight'] = torch.flip(
36
- dic_torch[target_name + '.conv.weight'], [3, 4]
37
- )
38
-
39
- return dic_torch
40
-
41
-
42
- def convert_conv(vars, source_name, target_name, bias=True, start=0):
43
- weight = vars[source_name + '/weight'].value().eval()
44
-
45
- dic = {'weight': weight.transpose((3, 2, 0, 1))}
46
-
47
- if bias:
48
- dic['bias'] = vars[source_name + '/bias'].value().eval()
49
-
50
- dic_torch = {}
51
-
52
- dic_torch[target_name + f'.{start}.weight'] = torch.from_numpy(dic['weight'])
53
-
54
- if bias:
55
- dic_torch[target_name + f'.{start + 1}.bias'] = torch.from_numpy(dic['bias'])
56
-
57
- return dic_torch
58
-
59
-
60
- def convert_torgb(vars, source_name, target_name):
61
- weight = vars[source_name + '/weight'].value().eval()
62
- mod_weight = vars[source_name + '/mod_weight'].value().eval()
63
- mod_bias = vars[source_name + '/mod_bias'].value().eval()
64
- bias = vars[source_name + '/bias'].value().eval()
65
-
66
- dic = {
67
- 'conv.weight': np.expand_dims(weight.transpose((3, 2, 0, 1)), 0),
68
- 'conv.modulation.weight': mod_weight.transpose((1, 0)),
69
- 'conv.modulation.bias': mod_bias + 1,
70
- 'bias': bias.reshape((1, 3, 1, 1)),
71
- }
72
-
73
- dic_torch = {}
74
-
75
- for k, v in dic.items():
76
- dic_torch[target_name + '.' + k] = torch.from_numpy(v)
77
-
78
- return dic_torch
79
-
80
-
81
- def convert_dense(vars, source_name, target_name):
82
- weight = vars[source_name + '/weight'].value().eval()
83
- bias = vars[source_name + '/bias'].value().eval()
84
-
85
- dic = {'weight': weight.transpose((1, 0)), 'bias': bias}
86
-
87
- dic_torch = {}
88
-
89
- for k, v in dic.items():
90
- dic_torch[target_name + '.' + k] = torch.from_numpy(v)
91
-
92
- return dic_torch
93
-
94
-
95
- def update(state_dict, new):
96
- for k, v in new.items():
97
- if k not in state_dict:
98
- raise KeyError(k + ' is not found')
99
-
100
- if v.shape != state_dict[k].shape:
101
- raise ValueError(f'Shape mismatch: {v.shape} vs {state_dict[k].shape}')
102
-
103
- state_dict[k] = v
104
-
105
-
106
- def discriminator_fill_statedict(statedict, vars, size):
107
- log_size = int(math.log(size, 2))
108
-
109
- update(statedict, convert_conv(vars, f'{size}x{size}/FromRGB', 'convs.0'))
110
-
111
- conv_i = 1
112
-
113
- for i in range(log_size - 2, 0, -1):
114
- reso = 4 * 2 ** i
115
- update(
116
- statedict,
117
- convert_conv(vars, f'{reso}x{reso}/Conv0', f'convs.{conv_i}.conv1'),
118
- )
119
- update(
120
- statedict,
121
- convert_conv(
122
- vars, f'{reso}x{reso}/Conv1_down', f'convs.{conv_i}.conv2', start=1
123
- ),
124
- )
125
- update(
126
- statedict,
127
- convert_conv(
128
- vars, f'{reso}x{reso}/Skip', f'convs.{conv_i}.skip', start=1, bias=False
129
- ),
130
- )
131
- conv_i += 1
132
-
133
- update(statedict, convert_conv(vars, f'4x4/Conv', 'final_conv'))
134
- update(statedict, convert_dense(vars, f'4x4/Dense0', 'final_linear.0'))
135
- update(statedict, convert_dense(vars, f'Output', 'final_linear.1'))
136
-
137
- return statedict
138
-
139
-
140
- def fill_statedict(state_dict, vars, size):
141
- log_size = int(math.log(size, 2))
142
-
143
- for i in range(8):
144
- update(state_dict, convert_dense(vars, f'G_mapping/Dense{i}', f'style.{i + 1}'))
145
-
146
- update(
147
- state_dict,
148
- {
149
- 'input.input': torch.from_numpy(
150
- vars['G_synthesis/4x4/Const/const'].value().eval()
151
- )
152
- },
153
- )
154
-
155
- update(state_dict, convert_torgb(vars, 'G_synthesis/4x4/ToRGB', 'to_rgb1'))
156
-
157
- for i in range(log_size - 2):
158
- reso = 4 * 2 ** (i + 1)
159
- update(
160
- state_dict,
161
- convert_torgb(vars, f'G_synthesis/{reso}x{reso}/ToRGB', f'to_rgbs.{i}'),
162
- )
163
-
164
- update(state_dict, convert_modconv(vars, 'G_synthesis/4x4/Conv', 'conv1'))
165
-
166
- conv_i = 0
167
-
168
- for i in range(log_size - 2):
169
- reso = 4 * 2 ** (i + 1)
170
- update(
171
- state_dict,
172
- convert_modconv(
173
- vars,
174
- f'G_synthesis/{reso}x{reso}/Conv0_up',
175
- f'convs.{conv_i}',
176
- flip=True,
177
- ),
178
- )
179
- update(
180
- state_dict,
181
- convert_modconv(
182
- vars, f'G_synthesis/{reso}x{reso}/Conv1', f'convs.{conv_i + 1}'
183
- ),
184
- )
185
- conv_i += 2
186
-
187
- for i in range(0, (log_size - 2) * 2 + 1):
188
- update(
189
- state_dict,
190
- {
191
- f'noises.noise_{i}': torch.from_numpy(
192
- vars[f'G_synthesis/noise{i}'].value().eval()
193
- )
194
- },
195
- )
196
-
197
- return state_dict
198
-
199
-
200
- if __name__ == '__main__':
201
- device = 'cuda' if torch.cuda.is_available() else 'cpu'
202
- print('Using PyTorch device', device)
203
-
204
- parser = argparse.ArgumentParser()
205
- parser.add_argument('--repo', type=str, required=True)
206
- parser.add_argument('--gen', action='store_true')
207
- parser.add_argument('--disc', action='store_true')
208
- parser.add_argument('--channel_multiplier', type=int, default=2)
209
- parser.add_argument('path', metavar='PATH')
210
-
211
- args = parser.parse_args()
212
-
213
- sys.path.append(args.repo)
214
-
215
- import dnnlib
216
- from dnnlib import tflib
217
-
218
- tflib.init_tf()
219
-
220
- with open(args.path, 'rb') as f:
221
- generator, discriminator, g_ema = pickle.load(f)
222
-
223
- size = g_ema.output_shape[2]
224
-
225
- g = Generator(size, 512, 8, channel_multiplier=args.channel_multiplier)
226
- state_dict = g.state_dict()
227
- state_dict = fill_statedict(state_dict, g_ema.vars, size)
228
-
229
- g.load_state_dict(state_dict)
230
-
231
- latent_avg = torch.from_numpy(g_ema.vars['dlatent_avg'].value().eval())
232
-
233
- ckpt = {'g_ema': state_dict, 'latent_avg': latent_avg}
234
-
235
- if args.gen:
236
- g_train = Generator(size, 512, 8, channel_multiplier=args.channel_multiplier)
237
- g_train_state = g_train.state_dict()
238
- g_train_state = fill_statedict(g_train_state, generator.vars, size)
239
- ckpt['g'] = g_train_state
240
-
241
- if args.disc:
242
- disc = Discriminator(size, channel_multiplier=args.channel_multiplier)
243
- d_state = disc.state_dict()
244
- d_state = discriminator_fill_statedict(d_state, discriminator.vars, size)
245
- ckpt['d'] = d_state
246
-
247
- name = os.path.splitext(os.path.basename(args.path))[0]
248
- outpath = os.path.join(os.getcwd(), f'{name}.pt')
249
- print('Saving', outpath)
250
- try:
251
- torch.save(ckpt, outpath, _use_new_zipfile_serialization=False)
252
- except TypeError:
253
- torch.save(ckpt, outpath)
254
-
255
-
256
- print('Generating TF-Torch comparison images')
257
- batch_size = {256: 8, 512: 4, 1024: 2}
258
- n_sample = batch_size.get(size, 4)
259
-
260
- g = g.to(device)
261
-
262
- z = np.random.RandomState(0).randn(n_sample, 512).astype('float32')
263
-
264
- with torch.no_grad():
265
- img_pt, _ = g(
266
- [torch.from_numpy(z).to(device)],
267
- truncation=0.5,
268
- truncation_latent=latent_avg.to(device),
269
- )
270
-
271
- img_tf = g_ema.run(z, None, randomize_noise=False)
272
- img_tf = torch.from_numpy(img_tf).to(device)
273
-
274
- img_diff = ((img_pt + 1) / 2).clamp(0.0, 1.0) - ((img_tf.to(device) + 1) / 2).clamp(
275
- 0.0, 1.0
276
- )
277
-
278
- img_concat = torch.cat((img_tf, img_pt, img_diff), dim=0)
279
- utils.save_image(
280
- img_concat, name + '.png', nrow=n_sample, normalize=True, range=(-1, 1)
281
- )
282
- print('Done')
283
-
spaces/DragGan/DragGan/stylegan_human/docs/Dataset.md DELETED
@@ -1,74 +0,0 @@
1
- # SHHQ Dataset
2
- <img src="../img/preview_samples1.png" width="96%" height="96%">
3
-
4
- ## Overview
5
- SHHQ is a dataset of high-quality full-body human images at a resolution of 1024 × 512.
6
- Since we need to follow a rigorous legal review at our institute, we cannot release all of the data at once.
7
-
8
- For now, SHHQ-1.0 with 40K images is released! More data will be released in later versions.
9
-
10
-
11
- ## Data Sources
12
- Images are collected in two main ways:
13
- 1) From the Internet.
14
- We developed a crawler tool with an official API, mainly downloading images from Flickr, Pixabay and Pexels. You therefore need to comply with all of the following licenses when using the dataset: CC0, [Pixabay License](https://pixabay.com/service/license/), and [Pexels Licenses](https://www.pexels.com/license/).
15
- 2) From the data providers.
16
- We purchased images from databases of individual photographers, modeling agencies and other suppliers.
17
- Images were reviewed by our legal team prior to purchase to ensure permission for use in research.
18
-
19
- ### Note:
20
- The composition of SHHQ-1.0:
21
-
22
- 1) Images obtained from the above sources.
23
- 2) Processed 9991 DeepFashion [[1]](#1) images (retaining only full-body images).
24
- 3) 1940 African images from the InFashAI [[2]](#2) dataset to increase data diversity.
25
-
26
- ## Data License
27
- We are aware of privacy concerns and take license and privacy issues seriously. All released data is provided under the CC0 license and is free for research use. Persons in the dataset are anonymised, with no additional private or sensitive metadata.
28
-
29
- ## Agreement
30
- The SHHQ is available for non-commercial research purposes only.
31
-
32
- You agree not to reproduce, duplicate, copy, sell, trade, resell or exploit any portion of the images and any portion of the derived data for commercial purposes.
33
-
34
- You agree NOT to further copy, publish or distribute any portion of SHHQ to any third party for any purpose, except that copies may be made for internal use at a single site within the same organization.
35
-
36
- Shanghai AI Lab reserves the right to terminate your access to the SHHQ at any time.
37
-
38
- ## Dataset Preview
39
- For those interested in our dataset, we provide a preview version with 100 images randomly sampled from SHHQ-1.0: [SHHQ-1.0_samples](https://drive.google.com/file/d/1tnNFfmFtzRbYL3qEnNXQ_ShaN9YV5tI5/view?usp=sharing).
40
-
41
- In SHHQ-1.0, we provide aligned raw images along with machine-calculated segmentation masks. We plan to release a manually annotated human-parsing version of these 40,000 images later. Please stay tuned.
42
-
43
- > We also provide script [bg_white.py](../bg_white.py) to whiten the background of the raw image using its segmentation mask.
44
-
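Conceptually, the whitening step just paints every pixel that falls outside the person mask white. A minimal sketch of that idea (this is not the actual bg_white.py; the file names and the "non-zero mask pixels mark the person" convention are assumptions) might look like:

```python
import numpy as np
from PIL import Image

def whiten_background(image_path, mask_path, out_path):
    """Set every pixel outside the person mask to white."""
    img = np.array(Image.open(image_path).convert("RGB"))
    mask = np.array(Image.open(mask_path).convert("L")) > 0  # assumed: non-zero = person
    img[~mask] = 255  # assign white to all background pixels (broadcast over RGB channels)
    Image.fromarray(img).save(out_path)

# Hypothetical file names, for illustration only.
whiten_background("raw_image.png", "segmentation_mask.png", "whitened.png")
```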
45
- If you want to access the full SHHQ-1.0, please read the following instructions.
46
-
47
- ## Model trained using SHHQ-1.0
48
-
49
- | Structure | 1024x512 | Metric | Scores | 512x256 | Metric | Scores |
50
- | --------- |:----------:| :----------:| :----------:| :-----: | :-----: | :-----: |
51
- | StyleGAN1 | to be released | - | - | to be released | - | - |
52
- | StyleGAN2 | [SHHQ-1.0_sg2_1024.pkl](https://drive.google.com/file/d/1PuvE72xpc69Zq4y58dohuKbG9dFnnjEX/view?usp=sharing) | fid50k_full | 3.56 | [SHHQ-1.0_sg2_512.pkl](https://drive.google.com/file/d/170t2FRWxR8_TG3_y0nVtDBogLPOClnyf/view?usp=sharing) | fid50k_full | 3.68 |
53
- | StyleGAN3 | to be released | - | - | to be released | - | - |
54
-
55
-
56
- ## Download Instructions
57
- Please download the SHHQ Dataset Release Agreement from [link](./SHHQ_Dataset_Release_Agreement.pdf).
58
- Read it carefully, complete and sign it appropriately.
59
-
60
- Please send the completed form to Jianglin Fu ([email protected]) and Shikai Li ([email protected]), and cc Wayne Wu ([email protected]), using an institutional email address. The email subject should be "SHHQ Dataset Release Agreement". We will verify your request and contact you with the dataset link and password to unzip the image data.
61
-
62
- Note:
63
-
64
- 1. We are currently receiving a large number of applications and need to verify each applicant carefully. Please be patient; we will reply to you as soon as possible.
65
-
66
- 2. The signature in the agreement should be hand-written.
67
-
68
- ## References
69
- <a id="1">[1]</a>
70
- Liu, Ziwei and Luo, Ping and Qiu, Shi and Wang, Xiaogang and Tang, Xiaoou. DeepFashion: Powering Robust Clothes Recognition and Retrieval with Rich Annotations. CVPR (2016)
71
-
72
- <a id="2">[2]</a>
73
- Hacheme, Gilles and Sayouti, Noureini. Neural fashion image captioning: Accounting for data diversity. arXiv preprint arXiv:2106.12154 (2021)
74
-