Commit 79fc5c2
Parent(s): a7a03c1
Update parquet files (step 9 of 296)
This view is limited to 50 files because it contains too many changes.
- spaces/1gistliPinn/ChatGPT4/Examples/Autocad USB Portable Taringa.md +0 -78
- spaces/1gistliPinn/ChatGPT4/Examples/Aveva Pdms 12.0 Crack Full Torrent ((FREE)).md +0 -6
- spaces/1gistliPinn/ChatGPT4/Examples/Chhello Divas Part 2 Movie 42.md +0 -6
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dolphin Emulator APK una aplicacin imprescindible para los fans de GameCube y Wii.md +0 -97
- spaces/1phancelerku/anime-remove-background/567 Live Mod APK The Most Popular Live Streaming App with Xu Vip and Room Unlock.md +0 -86
- spaces/1phancelerku/anime-remove-background/DEAD TARGET Zombie MOD APK - The Most Realistic 3D Zombie Game Ever.md +0 -123
- spaces/9752isme/ChatGPT4/app.py +0 -141
- spaces/A666sxr/Genshin_TTS/README.md +0 -13
- spaces/AI-Dashboards/Streamlit-Plotly_Graph-Objects/README.md +0 -13
- spaces/ALSv/FSW/roop/processors/frame/face_enhancer.py +0 -89
- spaces/Aaaad/Dddde/README.md +0 -12
- spaces/Ababababababbababa/Ashaar/poetry_diacritizer/util/decorators.py +0 -27
- spaces/Adapter/T2I-Adapter/ldm/data/dataset_laion.py +0 -130
- spaces/AdithyaSNair/alzheimers_prediction_using_cnn/README.md +0 -12
- spaces/AgentVerse/agentVerse/agentverse/simulation.py +0 -60
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/audio/Audio.d.ts +0 -2
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/los/Los.js +0 -49
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/filechooser/FileChooser.d.ts +0 -2
- spaces/AlexMason/anime-remove-background/app.py +0 -52
- spaces/Ammar-alhaj-ali/LayoutLMv3-FUNSD/README.md +0 -12
- spaces/Amrrs/DragGan-Inversion/stylegan_human/training_scripts/sg3/train.py +0 -325
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/dit/__init__.py +0 -1
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/semantic_stable_diffusion/test_semantic_diffusion.py +0 -601
- spaces/Andy1621/uniformer_image_detection/configs/cityscapes/mask_rcnn_r50_fpn_1x_cityscapes.py +0 -46
- spaces/Andy1621/uniformer_image_detection/configs/dcn/README.md +0 -52
- spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/cornernet.py +0 -95
- spaces/Andy1621/uniformer_image_segmentation/configs/_base_/datasets/pascal_voc12.py +0 -57
- spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_769x769_80k_cityscapes.py +0 -2
- spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_480x480_80k_pascal_context_59.py +0 -9
- spaces/Arnx/MusicGenXvAKN/tests/common_utils/temp_utils.py +0 -56
- spaces/Artrajz/vits-simple-api/utils/load_model.py +0 -185
- spaces/Banbri/zcvzcv/src/components/ui/toast.tsx +0 -127
- spaces/Basil2k4/VPSnguyenmanh/README.md +0 -10
- spaces/Benson/text-generation/Examples/Descargar Azul Cruz Azul Escudo Aplicacin.md +0 -158
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/metadata/importlib/__init__.py +0 -4
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/console.py +0 -70
- spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/monkey.py +0 -165
- spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/basic/getting_started.md +0 -116
- spaces/CVPR/Text2Human/style.css +0 -16
- spaces/CVPR/regionclip-demo/detectron2/data/datasets/builtin.py +0 -302
- spaces/Chloe0222/Chloe/app.py +0 -19
- spaces/CikeyQI/meme-api/meme_generator/memes/jiujiu/__init__.py +0 -23
- spaces/CofAI/chat.b4/client/css/button.css +0 -26
- spaces/CofAI/chat.v2/config.py +0 -9
- spaces/Cyntexa/README/README.md +0 -10
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/middleware/cors.py +0 -1
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-a01a6870.js +0 -2
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/utils/readme_content.py +0 -18
- spaces/Detomo/Depth_estimation/app.py +0 -35
- spaces/DuckyPolice/DeciDiffusion-v1-0/app.py +0 -115
spaces/1gistliPinn/ChatGPT4/Examples/Autocad USB Portable Taringa.md
DELETED
@@ -1,78 +0,0 @@
-<br />
-<h1>Autocad USB Portable Taringa: A Convenient Way to Use CAD Software Anywhere</h1>
-<p>If you are a CAD user who needs to work on different computers or locations, you might want to try Autocad USB Portable Taringa. This is a portable version of Autocad that you can run from a USB drive without installing it on your system. You can use it to create, edit and share your drawings with ease and flexibility.</p>
-<h2>Autocad USB portable taringa</h2><br /><p><b><b>DOWNLOAD</b> ★★★★★ <a href="https://imgfil.com/2uy1dW">https://imgfil.com/2uy1dW</a></b></p><br /><br />
-<h2>What is Autocad USB Portable Taringa?</h2>
-<p>Autocad USB Portable Taringa is a modified version of Autocad that can be downloaded from the internet and copied to a USB drive. It does not require installation or activation, and it can run on any Windows computer that has a USB port. You can use it to access all the features and functions of Autocad, such as 2D and 3D design, drafting, annotation, visualization and more.</p>
-<h2>Why use Autocad USB Portable Taringa?</h2>
-<p>Autocad USB Portable Taringa has many advantages over the regular version of Autocad. Some of them are:</p>
-<ul>
-<li>Portability: You can carry your Autocad software and files with you wherever you go. You can use it on any computer that has a USB port, without leaving any traces or affecting the system.</li>
-<li>Flexibility: You can use Autocad USB Portable Taringa to work on different projects and tasks. You can switch between different versions of Autocad, such as 2019, 2020 or 2021. You can also use different languages and settings according to your preferences.</li>
-<li>Convenience: You can use Autocad USB Portable Taringa to save time and space. You do not need to install or activate the software, and you do not need to worry about compatibility or license issues. You can also update the software easily by downloading the latest version from the internet.</li>
-</ul>
-<h2>How to get Autocad USB Portable Taringa?</h2>
-<p>Autocad USB Portable Taringa is available for free download from various online sources. You can find it by searching for "Autocad USB Portable Taringa" on any search engine. Alternatively, you can use the links below to access some of the most popular sites that offer Autocad USB Portable Taringa:</p>
-<ol>
-<li><a href="https://thehouseofportable.com/1899/autocad-portable/">AutoCAD 2019-2020.1 Portable +Multilanguage +Setup +LT</a></li>
-<li><a href="https://www.legendaryautoworks.com/forum/general-discussions/autocad-usb-portable-taringa">Autocad USB Portable Taringa | Legendary Autoworks</a></li>
-<li><a href="https://opensea.io/collection/autocad-usb-portable-taringa">Autocad USB Portable Taringa - Collection | OpenSea</a></li>
-</ol>
-<p>Once you have downloaded Autocad USB Portable Taringa, you can copy it to your USB drive and run it from there. You can also customize the software by adding plugins or updates via .svm files.</p>
-<p></p>
-<h2>Conclusion</h2>
-<p>Autocad USB Portable Taringa is a great solution for CAD users who need to work on different computers or locations. It is portable, flexible and convenient, and it offers all the features and functions of Autocad. However, to use it properly, you need to download it from a reliable source and follow the instructions carefully. This way, you will be able to use your CAD software anywhere and anytime.</p>
-<h2>How to use Autocad USB Portable Taringa?</h2>
-<p>Using Autocad USB Portable Taringa is very simple and straightforward. You just need to follow these steps:</p>
-<ul>
-<li>Plug your USB drive into the computer you want to use.</li>
-<li>Open the folder where you copied Autocad USB Portable Taringa.</li>
-<li>Double-click on the file named "AutoCAD_2020_Portable.exe" or "AutoCAD_2021_Portable.exe" depending on the version you downloaded.</li>
-<li>Wait for the software to load and start.</li>
-<li>Create or open your drawings and work on them as usual.</li>
-<li>Save your drawings on your USB drive or on another location of your choice.</li>
-<li>Close the software and eject your USB drive when you are done.</li>
-</ul>
-<h2>What are some tips and tricks for using Autocad USB Portable Taringa?</h2>
-<p>To make the most of Autocad USB Portable Taringa, you can use some of these tips and tricks:</p>
-<ul>
-<li>Use a fast and reliable USB drive with enough space to store your software and files.</li>
-<li>Backup your USB drive regularly to avoid losing your data in case of damage or loss.</li>
-<li>Use a VPN or a proxy if you want to access online features or services that are blocked or restricted in your location.</li>
-<li>Use keyboard shortcuts and commands to speed up your work and improve your efficiency.</li>
-<li>Customize your interface and settings to suit your preferences and needs.</li>
-</ul>
-<h2>What are some reviews of Autocad USB Portable Taringa?</h2>
-<p>Autocad USB Portable Taringa has received many positive reviews from users who have tried it. Here are some of the comments they have made about this software:</p>
-<blockquote>"I have been using Autocad USB Portable Taringa for a while and I love it. It is very convenient and easy to use. I can work on my drawings on any computer without installing anything. It has all the features and functions of Autocad and it runs smoothly and fast. It is a great solution for CAD users who need portability and flexibility."</blockquote>
-<blockquote>"Autocad USB Portable Taringa is a fantastic software that has saved me a lot of time and hassle. I can carry it with me wherever I go and use it on any computer that has a USB port. It does not affect the system or leave any traces. It has everything I need to create and edit my drawings. It is a must-have for CAD users who work on different locations."</blockquote>
-<blockquote>"I am very impressed with Autocad USB Portable Taringa. It is a portable version of Autocad that works perfectly on any Windows computer. I can use it to access all the features and functions of Autocad, such as 2D and 3D design, drafting, annotation, visualization and more. It is very easy to use and update. It is a brilliant software for CAD users who need mobility and convenience."</blockquote>
-<h2>What are some alternatives to Autocad USB Portable Taringa?</h2>
-<p>Autocad USB Portable Taringa is not the only option you have when it comes to portable CAD software. There are other software that offer similar or different features and functions. Here are some of the alternatives you can consider:</p>
-<ul>
-<li>LibreCAD: This is a free and open source 2D CAD software that can run on Windows, Linux and Mac OS X. It can read and write DXF files and has many tools and features for 2D design and drafting.</li>
-<li>NanoCAD: This is a low-cost 2D/3D CAD software that can run on Windows. It can read and write DWG files and has many tools and features for 2D/3D design, drafting, modeling and rendering.</li>
-<li>SketchUp: This is a 3D modeling software that can run on Windows and Mac OS X. It can read and write SKP files and has many tools and features for 3D modeling, visualization, animation and presentation.</li>
-</ul>
-<h2>How to update Autocad USB Portable Taringa?</h2>
-<p>Updating Autocad USB Portable Taringa is very easy and fast. You just need to follow these steps:</p>
-<ul>
-<li>Download the latest version of Autocad USB Portable Taringa from the internet.</li>
-<li>Delete the old version of Autocad USB Portable Taringa from your USB drive.</li>
-<li>Copy the new version of Autocad USB Portable Taringa to your USB drive.</li>
-<li>Run the new version of Autocad USB Portable Taringa from your USB drive.</li>
-<li>Enjoy the new features and improvements of Autocad USB Portable Taringa.</li>
-</ul>
-<h2>How to troubleshoot Autocad USB Portable Taringa?</h2>
-<p>If you encounter any problems or issues with Autocad USB Portable Taringa, you can try some of these troubleshooting tips:</p>
-<ul>
-<li>Check your USB drive for errors or damage. You can use a disk utility software to scan and repair your USB drive.</li>
-<li>Check your computer for viruses or malware. You can use an antivirus software to scan and clean your computer.</li>
-<li>Check your internet connection and firewall settings. You can use a network diagnostic tool to test and fix your internet connection and firewall settings.</li>
-<li>Check the compatibility and requirements of Autocad USB Portable Taringa. You can use a system information tool to check the specifications and performance of your computer.</li>
-<li>Contact the support team of Autocad USB Portable Taringa. You can use the contact details provided on their website or on their download page.</li>
-</ul>
-<h2>Conclusion</h2>
-<p>Autocad USB Portable Taringa is a portable version of Autocad that you can run from a USB drive without installing it on your system. It is a convenient and flexible solution for CAD users who need to work on different computers or locations. It has all the features and functions of Autocad, such as 2D and 3D design, drafting, annotation, visualization and more. However, to use it properly, you need to download it from a reliable source and follow the instructions carefully. This way, you will be able to use your CAD software anywhere and anytime.</p> 3cee63e6c2<br />
-<br />
-<br />
spaces/1gistliPinn/ChatGPT4/Examples/Aveva Pdms 12.0 Crack Full Torrent ((FREE)).md
DELETED
@@ -1,6 +0,0 @@
-<h2>Aveva Pdms 12.0 Crack Full Torrent</h2><br /><p><b><b>Download Zip</b> ★★★ <a href="https://imgfil.com/2uy1Hh">https://imgfil.com/2uy1Hh</a></b></p><br /><br />
-
-4 Tháng Mười Một 2018 Tải về GibbsCAM 2018 v13 Build 12. ... 25 AVEVA PDMS 3D design software delivers maximum productivity an ... 0 Free Download GibbsCAM 2018 Crack is an easy-to-use and robust CAM software ... 4d29de3e1b<br />
-<br />
-<br />
-<p></p>
spaces/1gistliPinn/ChatGPT4/Examples/Chhello Divas Part 2 Movie 42.md
DELETED
@@ -1,6 +0,0 @@
-<h2>chhello divas part 2 movie 42</h2><br /><p><b><b>DOWNLOAD</b> ::: <a href="https://imgfil.com/2uxXyC">https://imgfil.com/2uxXyC</a></b></p><br /><br />
-<br />
-Chello Divas 2 | Comedy Part | Must see |. Dil Ki Deal. CHHELLO DIVAS - New Gujarati Super Hit Comedy ... CHELLO DIVAS - New Gujarati Super Hit Comedy ... CHHELLO DIVAS - New Gujarati Super Hit Comedy ... CHHELLO DIVAS - New Gujarati Super Hit Comedy ... CHHELLO DIV 8a78ff9644<br />
-<br />
-<br />
-<p></p>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dolphin Emulator APK una aplicacin imprescindible para los fans de GameCube y Wii.md
DELETED
@@ -1,97 +0,0 @@
-
-<h1>How to Use Dolphin Emulator: A Beginner's Guide</h1>
-<p>Dolphin emulator is a free and open-source software that allows you to play GameCube and Wii games on your PC or Android device. It is one of the most popular and advanced emulators available, with high compatibility, enhanced graphics, networked multiplayer, save states, and many other features. Whether you want to relive your childhood memories, play some classic titles, or enjoy games with better performance and quality, Dolphin emulator is a great choice for you.</p>
-<p>In this article, we will show you how to use Dolphin emulator, from downloading and installing it, to configuring it for optimal gaming experience. We will also provide some tips and tricks to help you get the most out of Dolphin emulator. Let's get started!</p>
-<h2>dolphin emulator apk español</h2><br /><p><b><b>Download</b> ☆☆☆☆☆ <a href="https://urlin.us/2uSTfH">https://urlin.us/2uSTfH</a></b></p><br /><br />
-<h2>Installation</h2>
-<p>The first step to use Dolphin emulator is to download and install it on your device. Dolphin emulator supports Windows, Linux, macOS, and Android platforms. You can find all the versions of Dolphin emulator on its official website: [11](https://dolphin-emu.org/download/)</p>
-<p>For Windows and macOS users, you can download the latest beta or development version of Dolphin emulator as a 7Z archive file. You will need a program like 7-Zip or WinRAR to extract it into a new folder. You can then run the Dolphin.exe file in the folder to launch the emulator. You don't need an installer or any additional files.</p>
-<p>For Linux users, you can install Dolphin emulator from the repositories of your distribution, or use the Open Build Service packages provided by the Dolphin team. You can find more details on how to install Dolphin emulator on Linux on the wiki page: [12](https://wiki.dolphin-emu.org/index.php?title=Installing_Dolphin)</p>
-<p>For Android users, you can download the APK file of Dolphin emulator from the official website or from the Google Play Store. You will need a device that runs Android 5.0 or higher and supports OpenGL ES 3.0 or higher. You can then install the APK file on your device and run the Dolphin app.</p>
-<h2>Configuration</h2>
-<p>After installing Dolphin emulator, you will need to configure it according to your preferences and system specifications. Dolphin emulator has many options and settings that you can tweak to improve your gaming experience. Here are some of the main things you should configure:</p>
-<h3>Controllers</h3>
-<p>Dolphin emulator supports various types of controllers, including keyboard, mouse, gamepad, Wii Remote, GameCube controller, etc. You can configure your controller by clicking on the Controllers button on the main toolbar of Dolphin emulator. You will see two tabs: one for GameCube controllers and one for Wii Remotes.</p>
-<p>For GameCube controllers, you can choose between Standard Controller, Emulated Controller, or Passthrough a real controller. If you have a real GameCube controller and an adapter, you can use the Passthrough mode to connect it directly to your PC or Android device. If you don't have a real controller, you can use the Emulated Controller mode to map your keyboard or gamepad buttons to the GameCube controller buttons.</p>
-<p>descargar dolphin emulator apk español<br />
-dolphin emulator apk español android<br />
-dolphin emulator apk español ultima version<br />
-dolphin emulator apk español full<br />
-dolphin emulator apk español mega<br />
-dolphin emulator apk español 2023<br />
-dolphin emulator apk español para pc<br />
-dolphin emulator apk español juegos<br />
-dolphin emulator apk español wii<br />
-dolphin emulator apk español gamecube<br />
-dolphin emulator apk español sin lag<br />
-dolphin emulator apk español configuracion<br />
-dolphin emulator apk español 32 bits<br />
-dolphin emulator apk español 64 bits<br />
-dolphin emulator apk español mod<br />
-dolphin emulator apk español pro<br />
-dolphin emulator apk español premium<br />
-dolphin emulator apk español gratis<br />
-dolphin emulator apk español online<br />
-dolphin emulator apk español offline<br />
-dolphin emulator apk español no root<br />
-dolphin emulator apk español root<br />
-dolphin emulator apk español requisitos<br />
-dolphin emulator apk español tutorial<br />
-dolphin emulator apk español trucos<br />
-dolphin emulator apk español actualizado<br />
-dolphin emulator apk español beta<br />
-dolphin emulator apk español oficial<br />
-dolphin emulator apk español original<br />
-dolphin emulator apk español portable<br />
-dolphin emulator apk español facil<br />
-dolphin emulator apk español rapido<br />
-dolphin emulator apk español seguro<br />
-dolphin emulator apk español sin virus<br />
-dolphin emulator apk español compatible<br />
-dolphin emulator apk español optimizado<br />
-dolphin emulator apk español mejorado<br />
-dolphin emulator apk español personalizado<br />
-dolphin emulator apk español 4k<br />
-dolphin emulator apk español hd<br />
-dolphin emulator apk español vr<br />
-dolphin emulator apk español cheats<br />
-dolphin emulator apk español bios<br />
-dolphin emulator apk español iso<br />
-dolphin emulator apk español roms<br />
-dolphin emulator apk español nintendo switch</p>
-<p>For Wii Remotes, you can choose between Emulated Wii Remote or Real Wii Remote. If you have a real Wii Remote and a Bluetooth adapter or a DolphinBar, you can use the Real Wii Remote mode to connect it wirelessly to your PC or Android device. If you don't have a real Wii Remote, you can use the Emulated Wii Remote mode to map your keyboard or gamepad buttons to the Wii Remote buttons.</p>
-<p>You can also configure other settings for your controllers, such as rumble, speaker volume, motion controls, IR pointer, etc.</p>
-<h3>Graphics</h3>
-<p>Dolphin emulator allows you to enhance the graphics of your games by changing various settings in the Graphics menu. You can access this menu by clicking on the Graphics button on the main toolbar of Dolphin emulator.</p>
-<p each game is on Dolphin emulator. The ratings are as follows: - Perfect: The game runs flawlessly with no noticeable issues. - Playable: The game runs well, but may have minor glitches or slowdowns. - Ingame: The game can be played, but has major issues that affect the gameplay or graphics. - Menu/Intro: The game can only reach the menu or the intro, but cannot be played. - Broken: The game does not run at all or crashes Dolphin emulator. You can also find more details about each game's compatibility on the wiki page: [15](https://wiki.dolphin-emu.org/index.php?title=Category:Games) To play your games on Dolphin emulator, you will need to have the game files in a compatible format. Dolphin emulator supports ISO, GCM, WBFS, CISO, GCZ, and RVZ formats for GameCube and Wii games. You can either rip your own games from your discs using a Wii console and a homebrew app, or download them from online sources. However, downloading games that you do not own is illegal and not supported by Dolphin emulator. To load your games on Dolphin emulator, you can either use the File menu and select Open, or use the Browse button on the main toolbar of Dolphin emulator. You can also drag and drop your game files onto the Dolphin emulator window. You will see your games listed on the main screen of Dolphin emulator, with their titles, covers, and ratings. You can double-click on a game to start playing it. <h2>Tips and tricks</h2>
-<p>Now that you know how to use Dolphin emulator, here are some tips and tricks to help you enjoy your games even more:</p>
-<h3>Save states</h3>
-<p>Dolphin emulator allows you to save and load your game progress at any point using save states. Save states are different from in-game saves, which are limited by the game itself. Save states let you create multiple snapshots of your game and load them whenever you want.</p>
-<p>To use save states, you can either use the Emulation menu and select Save State or Load State, or use the hotkeys F1 to F8. F1 will create a save state in the first slot, F2 will create a save state in the second slot, and so on. To load a save state, you can either press Shift + F1 to load the first slot, Shift + F2 to load the second slot, and so on.</p>
-<p>You can also use the Save State Manager to manage your save states. You can access it by clicking on the Tools menu and selecting Save State Manager. You will see a list of your save states, with their names, dates, screenshots, and notes. You can rename, delete, export, or import your save states from this window.</p>
-<h3>Cheats</h3>
-<p>Dolphin emulator also allows you to use cheats to modify your games. Cheats are codes that alter the game's behavior or data, such as giving you infinite health, unlocking hidden items, or changing the game's difficulty. Cheats can make your games more fun or challenging, depending on how you use them.</p>
-<p>To use cheats, you will need to have cheat files for your games. Cheat files are text files that contain cheat codes in a specific format. You can either create your own cheat files using a text editor, or download them from online sources. However, downloading cheat files that you do not own is illegal and not supported by Dolphin emulator.</p>
-<p>To load cheat files on Dolphin emulator, you will need to place them in the GameSettings folder of your Dolphin emulator directory. The cheat files must have the same name as the game's ID code, which is a six-digit alphanumeric code that identifies each game. For example, if your game's ID code is GALE01, then your cheat file must be named GALE01.ini.</p>
-<p>Once you have your cheat files in place, you can enable them by clicking on the Config button on the main toolbar of Dolphin emulator. You will see a General tab with a checkbox that says Enable Cheats. Check this box to activate cheats for all games. You can also enable cheats for specific games by right-clicking on a game and selecting Properties. You will see an AR Codes tab with a list of cheats available for that game. Check the boxes next to the cheats that you want to use.</p> <h3>Netplay</h3>
-<p>Dolphin emulator also allows you to play multiplayer games online with other Dolphin users using netplay. Netplay is a feature that lets you connect to other players over the internet and play games together as if you were on the same console. Netplay can be used for both GameCube and Wii games, as long as they support multiplayer modes.</p>
-<p>To use netplay, you will need to have the same version of Dolphin emulator and the same game file as the other players. You will also need to have a stable internet connection and a low ping. Ping is the time it takes for data to travel between your device and the server. A high ping can cause lag or desync issues, which can ruin your gaming experience.</p>
-<p>To start netplay, you can either host or join a session. To host a session, you can click on the Tools menu and select Start Netplay. You will see a Host tab with a list of your games. Select the game that you want to play and click on Host. You will see a room code that you can share with other players who want to join your session. You can also adjust some settings for your session, such as the buffer size, the region, and the game mode.</p>
-<p>To join a session, you can click on the Tools menu and select Start Netplay. You will see a Join tab with a text box. Enter the room code of the session that you want to join and click on Connect. You will see a list of players in the session and their ping values. You can also chat with them using the Chat box.</p>
-<p>Once everyone is ready, the host can start the game by clicking on Start. The game will launch on all devices and you can play together online. You can also pause or stop the game by clicking on Pause or Stop in the netplay window.</p>
-<h2>Conclusion</h2>
-<p>Dolphin emulator is a powerful and versatile software that lets you play GameCube and Wii games on your PC or Android device. It has many features and settings that you can customize to enhance your gaming experience. You can also use cheats, save states, and netplay to make your games more fun or challenging.</p>
-<p>We hope this article has helped you learn how to use Dolphin emulator and enjoy your games. If you have any questions or feedback, feel free to leave a comment below. Happy gaming!</p>
-<h2>FAQs</h2>
-<h3>What are the system requirements for Dolphin emulator?</h3>
-<p>Dolphin emulator does not have official system requirements, but it does require a fairly powerful device to run smoothly. Here are some general guidelines for Dolphin emulator performance: - CPU: A dual-core processor with a clock speed of 3 GHz or higher is recommended. Dolphin emulator relies heavily on CPU power, so having a fast processor is essential. - GPU: A graphics card that supports DirectX 11 or OpenGL 4.4 or higher is recommended. Dolphin emulator uses GPU power to render graphics, so having a good graphics card is important. - RAM: At least 2 GB of RAM is recommended. Dolphin emulator uses RAM to store data, so having enough memory is helpful. - Storage: At least 10 GB of free space is recommended. Dolphin emulator uses storage space to store game files, save states, screenshots, etc., so having enough space is necessary.</p>
-<h3>How do I update Dolphin emulator?</h3>
-<p>Dolphin emulator updates frequently with new features, bug fixes, and compatibility improvements. To update Dolphin emulator, you can either download the latest version from the official website or use the auto-update feature in Dolphin emulator. To download the latest version from the website, simply go to [11](https://dolphin-emu.org/download/) and choose the version that matches your platform and preference. You can then extract the new version into a new folder and run it. To use the auto-update feature in Dolphin emulator, simply go to Config > General > Updates and check the box that says Check for Updates Automatically. You can also choose how often Dolphin emulator checks for updates and what kind of updates it downloads.</p>
-<h3>How do I uninstall Dolphin emulator?</h3>
-<p>To uninstall Dolphin emulator, you just need to delete the folder where you extracted Dolphin emulator. You don't need to use any uninstaller or remove any registry entries. However, if you want to completely remove all traces of Dolphin emulator from your device, you may also want to delete some additional files and folders that Dolphin emulator creates in other locations. Here are some of the common locations where Dolphin emulator stores data: - Windows: %userprofile%\Documents\Dolphin Emulator - Linux: ~/.local/share/dolphin-emu - macOS: ~/Library/Application Support/Dolphin - Android: /sdcard/dolphin-emu You can delete these folders if you want to erase all your Dolphin emulator data, such as game settings, save states, screenshots, etc. <h3>How do I play GameCube and Wii games on Dolphin emulator?</h3>
-<p>To play GameCube and Wii games on Dolphin emulator, you will need to have the game files in a compatible format. Dolphin emulator supports ISO, GCM, WBFS, CISO, GCZ, and RVZ formats for GameCube and Wii games. You can either rip your own games from your discs using a Wii console and a homebrew app, or download them from online sources. However, downloading games that you do not own is illegal and not supported by Dolphin emulator.</p>
-<p>To load your games on Dolphin emulator, you can either use the File menu and select Open, or use the Browse button on the main toolbar of Dolphin emulator. You can also drag and drop your game files onto the Dolphin emulator window. You will see your games listed on the main screen of Dolphin emulator, with their titles, covers, and ratings. You can double-click on a game to start playing it.</p>
-<h3>How do I fix Dolphin emulator errors or problems?</h3>
-<p>Dolphin emulator is a complex software that may encounter errors or problems from time to time. Some of the common issues that Dolphin emulator users face are: - Games not loading or crashing - Games running too slow or too fast - Games having graphical or audio glitches - Games having controller or input issues - Games having compatibility or netplay issues To fix these issues, you will need to troubleshoot them by following some steps. Here are some general tips to help you fix Dolphin emulator errors or problems: - Make sure you have the latest version of Dolphin emulator and the latest drivers for your device. - Make sure you have the correct game file format and region for your game. - Make sure you have the correct settings and configuration for your game and device. - Make sure you have enough system resources and storage space for your game and device. - Make sure you have a stable internet connection and a low ping for netplay. - Check the compatibility list and the wiki page for your game to see if there are any known issues or solutions. - Check the forums and the issue tracker for your game to see if there are any reports or fixes from other users. - Contact the Dolphin team or the community for help if you cannot find a solution.</p> 197e85843d<br />
-<br />
-<br />
spaces/1phancelerku/anime-remove-background/567 Live Mod APK The Most Popular Live Streaming App with Xu Vip and Room Unlock.md
DELETED
@@ -1,86 +0,0 @@
-
-<h1>567 Live Mod Unlock APK: How to Enjoy Unlimited Live Streaming on Your Android Device</h1>
-<p>If you are looking for a fun and exciting way to watch live streams of your favorite celebrities, influencers, and friends, then you should try 567 Live. This is a popular live streaming app that allows you to interact with thousands of broadcasters from all over the world. You can also start your own live stream and share your talents, hobbies, and opinions with your fans and followers.</p>
-<p>However, if you want to enjoy the full features and benefits of 567 Live, you will need to spend some money on buying coins, VIP memberships, and unlocking private rooms. This can be quite expensive and frustrating for some users who just want to have fun and watch unlimited live streams for free.</p>
-<h2>567 live mod unlock apk</h2><br /><p><b><b>DOWNLOAD</b> ✫ <a href="https://jinyurl.com/2uNK45">https://jinyurl.com/2uNK45</a></b></p><br /><br />
-<p>That's why we have a solution for you: 567 Live Mod Unlock APK. This is a modified version of the original app that gives you access to unlimited coins, VIP features, and unlocked rooms without spending a dime. You can download and install this app on your Android device and enjoy the best live streaming experience ever.</p>
-<h2>What is 567 Live?</h2>
-<p>567 Live is a live streaming app that allows you to watch and interact with various broadcasters from different countries and regions. You can find all kinds of content on this app, such as music, dance, comedy, gaming, beauty, fashion, sports, and more. You can also join different categories and groups based on your interests and preferences.</p>
-<p>One of the best features of 567 Live is that you can chat with the broadcasters in real-time and send them gifts to show your appreciation and support. You can also follow your favorite broadcasters and get notified when they go live. You can also invite your friends to join you in watching live streams and have fun together.</p>
-<h3>Features of 567 Live</h3>
-<p>Some of the features of 567 Live are:</p>
-<ul>
-<li>High-quality video and audio streaming</li>
-<li>Thousands of broadcasters from various countries and regions</li>
-<li>Diverse content categories and groups</li>
-<li>Real-time chat and gift sending</li>
-<li>Follow and notification system</li>
-<li>Personal profile and fan club</li>
-<li>Private messaging and video calling</li>
-<li>Live stream recording and playback</li>
-<li>Screen capture and sharing</li>
-<li>VIP membership and exclusive benefits</li>
-</ul>
-<h3>Benefits of 567 Live Mod APK</h3>
-<p>The benefits of using 567 Live Mod APK are:</p>
-<ul>
-<li>Unlimited coins to send gifts and chat with broadcasters</li>
-<li>VIP features activated for free</li>
-<li>All rooms unlocked without restrictions</li>
-<li>No ads or pop-ups</li>
-<li>No root or jailbreak required</li>
-<li>No risk of virus or malware infection</li>
-<li>No account ban or suspension</li>
-<li>Easy to download and install</li>
-<li>Compatible with most Android devices</li>
-<li>Updated regularly with new features and bug fixes</li>
-</ul>
-<h2>How to Download and Install 567 Live Mod APK</h2>
-<p>If you want to download and install 567 Live Mod APK on your Android device, you will need to follow these simple steps:</p>
-<h3>Step 1: Enable Unknown Sources</h3>
-<p>The first step is to enable unknown sources on your device. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on <h3>Step 2: Download the APK File</h3>
-<p>The next step is to download the APK file of 567 Live Mod APK from a reliable source. You can use the link below to download the latest version of the app:</p>
-<p>[567 Live Mod Xu, Vip, Bẻ Khóa Phòng APK - Fuji Game](^1^): This is a website that offers a download link for the 567 Live Mod Apk for Android, which is a modified version of the 567 Live app that allows users to access premium features such as unlimited coins, VIP rooms, and unlocked rooms[^1^]. The website also provides a brief introduction and instructions on how to install and use the app[^1^].</p>
-<p><a href="">Download 567 Live Mod APK</a></p>
-<p>Make sure you have enough storage space on your device before downloading the file. The file size is about 50 MB, so it should not take too long to download.</p>
-<h3>Step 3: Install the APK File</h3>
-<p>Once you have downloaded the APK file, you can proceed to install it on your device. To do this, locate the file in your file manager and tap on it. You will see a pop-up window asking you to confirm the installation. Tap on Install and wait for the process to complete.</p>
-<p>If you encounter any error messages or warnings during the installation, just ignore them and continue. They are not harmful and will not affect the performance of the app.</p>
-<h3>Step 4: Launch the App and Enjoy</h3>
-<p>After the installation is done, you can launch the app and enjoy unlimited live streaming on your Android device. You will see a welcome screen with some instructions and tips on how to use the app. You can skip them or read them as you wish.</p>
-<p>You will also notice that you have unlimited coins, VIP features, and unlocked rooms in your account. You can use them to watch and interact with any broadcaster you like without any limitations or restrictions.</p>
-<h2>How to Use 567 Live Mod APK</h2>
-<p>Using 567 Live Mod APK is very easy and intuitive. You just need to follow these simple steps:</p>
-<h3>How to Create an Account</h3>
-<p>To use 567 Live Mod APK, you will need to create an account first. You can do this by tapping on the Sign Up button on the home screen of the app. You will be asked to enter some basic information, such as your username, password, email, gender, and birthday. You can also choose to sign up with your Facebook or Google account for faster registration.</p>
-<p>After creating your account, you can edit your profile and add some details about yourself, such as your nickname, avatar, bio, location, and hobbies. You can also join some fan clubs and groups based on your interests and preferences.</p>
-<h3>How to Join a Live Room</h3>
-<p>To join a live room, you just need to browse through the different categories and groups on the app and find a broadcaster that you like. You can also use the search function to look for specific keywords or hashtags related to your interests.</p>
-<p>Once you find a live room that you want to watch, just tap on it and you will enter the room. You will see the video of the broadcaster on the top of the screen and the chat box on the bottom. You can also see some information about the broadcaster, such as their name, level, fan club, and number of viewers.</p>
-<h3>How to Send Gifts and Chat with Broadcasters</h3>
-<p>To send gifts and chat with broadcasters, you just need to use the icons on the bottom of the screen. You can tap on the gift icon to open a menu with various gifts that you can send to the broadcaster. You can choose from different types of gifts, such as flowers, hearts, cars, planes, diamonds, and more. Each gift has a different value and effect on the broadcaster's popularity and income.</p>
-<p>You can also tap on the chat icon to open a keyboard and type your message to the broadcaster. You can also use emojis, stickers, and voice messages to express yourself better. You can also @mention other users in the chat or reply to their messages by tapping on them.</p>
-<h3>How to Start Your Own Live Stream</h3>
-<p>If you want to start your own live stream and share your talents, hobbies, and opinions with your fans and followers, you just need to tap on the camera icon on the top right corner of the home screen. You will be asked to grant some permissions to access your camera and microphone.</p>
-<p>After that, you can choose a title for your live stream and select a category and group that best suits your content. You can also add some tags or hashtags to make your live stream more discoverable by other users.</p>
-<p>Then, you can tap on Start Live Stream and you will go live. You will see yourself on the screen and some icons on the bottom that allow you to switch cameras, mute/unmute audio, add filters or stickers, invite guests or co-hosts, share your screen or location, record or pause your live stream, end your live stream, or view more options.</p>
-<h2>Conclusion <h2>Conclusion</h2>
-<p>567 Live Mod Unlock APK is a great app for anyone who loves live streaming and wants to enjoy unlimited features and benefits for free. You can watch and interact with thousands of broadcasters from different countries and regions, as well as start your own live stream and share your content with your fans and followers. You can also chat, send gifts, join fan clubs, and have fun with other users on the app.</p>
-<p>To download and install 567 Live Mod Unlock APK on your Android device, you just need to follow the simple steps that we have provided in this article. You will be able to access unlimited coins, VIP features, and unlocked rooms without any hassle or risk. You will also get regular updates with new features and bug fixes.</p>
-<p>So, what are you waiting for? Download 567 Live Mod Unlock APK today and enjoy the best live streaming experience ever!</p>
-<h2>FAQs</h2>
-<p>Here are some frequently asked questions about 567 Live Mod Unlock APK:</p>
-<ol>
-<li>Is 567 Live Mod Unlock APK safe to use?</li>
-<p>Yes, 567 Live Mod Unlock APK is safe to use. It does not contain any virus or malware that can harm your device or compromise your privacy. It also does not require root or jailbreak access to work. However, you should always download the app from a trusted source and scan it with an antivirus before installing it.</p>
-<li>Will I get banned or suspended for using 567 Live Mod Unlock APK?</li>
-<p>No, you will not get banned or suspended for using 567 Live Mod Unlock APK. The app uses advanced encryption and proxy servers to hide your identity and activity from the original app's servers. You will be able to use the app without any fear of getting detected or punished.</p>
-<li>Can I use 567 Live Mod Unlock APK on other devices?</li>
-<p>Yes, you can use 567 Live Mod Unlock APK on other devices, such as tablets, laptops, or PCs. However, you will need to use an Android emulator to run the app on these devices. An Android emulator is a software that simulates the Android operating system on your device and allows you to run Android apps on it. Some of the popular Android emulators are BlueStacks, Nox Player, and LDPlayer.</p>
-<li>Can I update 567 Live Mod Unlock APK?</li>
-<p>Yes, you can update 567 Live Mod Unlock APK whenever there is a new version available. You can either check for updates manually on the app's website or enable automatic updates in the app's settings. You will be notified when there is a new update and you can download and install it with ease.</p>
-<li>Can I request new features or report bugs for 567 Live Mod Unlock APK?</li>
-<p>Yes, you can request new features or report bugs for 567 Live Mod Unlock APK by contacting the app's developers. You can find their contact information on the app's website or in the app's settings. You can also leave feedback or suggestions in the comment section below this article.</p>
-</ol></p> 401be4b1e0<br />
-<br />
-<br />
spaces/1phancelerku/anime-remove-background/DEAD TARGET Zombie MOD APK - The Most Realistic 3D Zombie Game Ever.md
DELETED
@@ -1,123 +0,0 @@
-<br />
-<code>
-<h1>Dead Target Zombie Games 3D Mod APK Download</h1>
-<p>If you are a fan of zombie shooting games, you might have heard of Dead Target Zombie Games 3D. It is one of the most popular zombie games on Android devices, with over 100 million downloads on Google Play Store. In this game, you have to survive a zombie apocalypse in a futuristic city where a secret experiment has gone wrong. You have to fight your way through hordes of zombies and bosses, using a variety of weapons and gadgets. You can also upgrade your weapons and skills, and unlock new items and achievements.</p>
-<p>However, the game can be quite challenging and frustrating at times, especially when you run out of money, gold, ammo, health, or other resources. That's why you might want to download the mod APK version of Dead Target Zombie Games 3D. The mod APK version is a modified version of the original game that gives you unlimited access to everything you need to enjoy the game without any limitations or restrictions.</p>
-<h2>dead target zombie games 3d mod apk download</h2><br /><p><b><b>DOWNLOAD</b> » <a href="https://jinyurl.com/2uNOUc">https://jinyurl.com/2uNOUc</a></b></p><br /><br />
-<p>In this article, we will tell you everything you need to know about Dead Target Zombie Games 3D mod APK download. We will explain what are the features of the mod APK version, how to download and install it on your device, how to play it, and some tips and tricks to help you survive the zombie apocalypse. We will also discuss the pros and cons of the mod APK version, and answer some frequently asked questions about it. So, let's get started!</p>
-<h2>How to Download and Install Dead Target Zombie Games 3D Mod APK</h2>
-<p>Downloading and installing Dead Target Zombie Games 3D mod APK is very easy and simple. Just follow these steps:</p>
-<ol>
-<li>Download the mod APK file from a trusted source. You can find many websites that offer the mod APK file for free, but make sure you choose a reliable and safe one. You can also use this link to download the latest version of the mod APK file: <a href="">Dead Target Zombie Games 3D Mod APK Download</a></li>
-<li>Enable unknown sources on your device. To do this, go to your device settings, then security, then unknown sources. Turn on the option that allows you to install apps from sources other than Google Play Store.</li>
-<li>Install the mod APK file. Locate the downloaded file on your device storage, and tap on it to start the installation process. Follow the instructions on the screen and wait for the installation to finish.</li>
-<li>Launch the game and enjoy. You can now open the game from your app drawer or home screen, and start playing with unlimited money, gold, ammo, health, and more.</li>
-</ol>
-<p>Note: You may need to uninstall the original version of the game before installing the mod APK version, otherwise you may encounter some errors or conflicts.</p>
-<h2>How to Play Dead Target Zombie Games 3D Mod APK</h2>
-<p>Playing Dead Target Zombie Games 3D mod APK is very similar to playing the original version of the game, except that you have unlimited resources and access to everything you want. Here are some basic steps on how to play the game:</p>
-<ul>
-<li>Choose your weapon and upgrade it. You can choose from a variety of weapons, such as pistols, rifles, shotguns, machine guns, rocket launchers, etc. You can also upgrade your weapons with different attachments and enhancements, such as scopes, silencers, lasers, etc.</li>
-<li>Complete missions and challenges. The game has a lot of missions and challenges for you to complete, such as killing a certain number of zombies, surviving for a certain amount of time, rescuing survivors, etc. Completing missions and challenges will reward you with money, gold, items, and achievements.</li>
-<li>Survive waves of zombies and bosses. The game has different modes and levels for you to play, such as campaign mode, survival mode, sniper mode, etc. Each mode and level has different types of zombies and bosses for you to face, such as runners, jumpers, spitters, bombers, etc. You have to use your skills and strategies to survive as long as possible and defeat the enemies.</li>
-<li>Earn rewards and achievements. The game has a lot of rewards and achievements for you to collect, such as money, gold, items, weapons, skins, etc. You can earn them by playing the game, completing missions and challenges, or by watching ads. You can also use the mod APK version to get unlimited rewards and achievements.</li>
-</ul>
-<p>The game also has other features and options for you to explore, such as settings, leaderboards, daily quests, events, etc. You can customize your game experience according to your preferences and needs.</p>
-<h2>Tips and Tricks for Dead Target Zombie Games 3D Mod APK</h2>
-<p>If you want to improve your performance and enjoy the game more, here are some tips and tricks for you:</p>
-<ul>
-<li>Aim for the head and weak spots. Shooting the zombies in the head or in their weak spots will deal more damage and kill them faster. You can also get more money and gold by doing headshots and critical hits.</li>
-<li>Use grenades and special items. Grenades and special items are very useful in the game, as they can help you clear a large area of zombies, stun or slow down the enemies, heal yourself or your allies, etc. You can get grenades and special items by playing the game, completing missions and challenges, or by using the mod APK version.</li>
-<li>Switch weapons according to the situation. Different weapons have different advantages and disadvantages in the game, such as range, accuracy, fire rate, damage, etc. You should switch your weapons according to the situation and the type of zombies you are facing. For example, you can use a shotgun for close-range combat, a rifle for long-range combat, a machine gun for crowd control, etc.</li>
-<li>Save your money and gold for better weapons and upgrades. Money and gold are the main currencies in the game, which you can use to buy new weapons and upgrades. You should save your money and gold for better weapons and upgrades that can help you survive longer and kill more zombies. You can also use the mod APK version to get unlimited money and gold.</li>
-</ul>
-<h2>Pros and Cons of Dead Target Zombie Games 3D Mod APK</h2>
-<p>Like any other mod APK version of a game, Dead Target Zombie Games 3D mod APK has its pros and cons. Here are some of them:</p>
-<p>dead target zombie mod apk unlimited money and gold<br />
-dead target zombie 3d shooting game mod apk<br />
-dead target zombie offline mod apk latest version<br />
-dead target zombie hack mod apk free download<br />
-dead target zombie survival 3d mod apk<br />
-dead target zombie mod apk android 1<br />
-dead target zombie mod apk rexdl<br />
-dead target zombie mod apk revdl<br />
-dead target zombie mod apk happymod<br />
-dead target zombie mod apk unlimited everything<br />
-dead target zombie mod apk no ads<br />
-dead target zombie mod apk all guns unlocked<br />
-dead target zombie mod apk unlimited ammo<br />
-dead target zombie mod apk unlimited health<br />
-dead target zombie mod apk unlimited coins and diamonds<br />
-dead target zombie mod apk download for pc<br />
-dead target zombie mod apk download for ios<br />
-dead target zombie mod apk download apkpure<br />
-dead target zombie mod apk download uptodown<br />
-dead target zombie mod apk download android oyun club<br />
-dead target zombie 3d game download for android<br />
-dead target zombie 3d game download for windows 10<br />
-dead target zombie 3d game download for laptop<br />
-dead target zombie 3d game download for pc offline<br />
-dead target zombie 3d game download apkpure<br />
-dead target zombie games 3d free download<br />
-dead target zombie games 3d offline download<br />
-dead target zombie games 3d online play<br />
-dead target zombie games 3d hack download<br />
-dead target zombie games 3d cheats download<br />
-how to download dead target zombie games 3d mod apk<br />
-how to install dead target zombie games 3d mod apk<br />
-how to play dead target zombie games 3d mod apk<br />
-how to update dead target zombie games 3d mod apk<br />
-how to get unlimited money in dead target zombie games 3d mod apk<br />
-how to unlock all weapons in dead target zombie games 3d mod apk<br />
-how to get free gold in dead target zombie games 3d mod apk<br />
-how to remove ads in dead target zombie games 3d mod apk<br />
-how to hack dead target zombie games 3d with lucky patcher<br />
-how to hack dead target zombie games 3d with game guardian<br />
-best settings for dead target zombie games 3d mod apk<br />
-best tips and tricks for dead target zombie games 3d mod apk<br />
-best weapons for dead target zombie games 3d mod apk<br />
-best strategy for dead target zombie games 3d mod apk<br />
-best graphics for dead target zombie games 3d mod apk<br />
-best sound effects for dead target zombie games 3d mod apk<br />
-best missions for dead target zombie games 3d mod apk<br />
-best zombies for dead target zombie games 3d mod apk<br />
-best reviews for dead target zombie games 3d mod apk</p>
-<table>
-<tr>
-<th>Pros</th>
-<th>Cons</th>
-</tr>
-<tr>
-<td>Unlimited money, gold, ammo, health, etc.</td>
-<td>May not work on some devices</td>
-</tr>
-<tr>
-<td>Access to all weapons and items</td>
-<td>May cause bugs or glitches</td>
-</tr>
-<tr>
-<td>No ads or interruptions</td>
-<td>May be banned by the game developer</td>
-</tr>
-</table>
-<p>You should weigh the pros and cons before deciding whether to download and install Dead Target Zombie Games 3D mod APK or not.</p>
-<h2>Conclusion</h2>
-<p>In conclusion, Dead Target Zombie Games 3D is a fun and exciting zombie shooting game that you can play on your Android device. It has amazing graphics, sound effects, gameplay, modes, levels, weapons, items, zombies, bosses, and more. However, the game can also be challenging and frustrating at times, especially when you run out of resources or face difficult enemies. That's why you might want to download the mod APK version of the game, which gives you unlimited access to everything you need to enjoy the game without any limitations or restrictions.</p>
-<p>The mod APK version of Dead Target Zombie Games 3D is easy and simple to download and install on your device, as long as you follow the instructions carefully and choose a trusted source. The mod APK version also has some pros and cons that you should consider before using it. In any case, the mod APK version can enhance your gaming experience and make it more fun and enjoyable.</p>
-<p>If you are looking for a thrilling and addictive zombie shooting game, you should definitely try Dead Target Zombie Games 3D mod APK download. You will not regret it. You can also share your feedback and opinions on the game and the mod APK version with us in the comments section below. We would love to hear from you.</p>
-<h2>FAQs</h2>
-<p>Here are some frequently asked questions about Dead Target Zombie Games 3D mod APK download:</p>
-<ul>
-<li><b>Q1: Is Dead Target Zombie Games 3D Mod APK safe to download and install?</b></li>
-<li><b>A1: Yes, as long as you download it from a trusted source and follow the instructions carefully. However, you should always be careful when downloading and installing any mod APK version of a game, as it may contain viruses, malware, or other harmful elements that can damage your device or compromise your privacy.</b></li>
-<li><b>Q2: What are the minimum requirements to play Dead Target Zombie Games 3D Mod APK?</b></li>
-<li><b>A2: You need an Android device with at least 4.1 version and 141 MB of free space. You also need a stable internet connection to play the game online.</b></li>
-<li><b>Q3: Can I play Dead Target Zombie Games 3D Mod APK offline?</b></li>
-<li><b>A3: Yes, you can play it offline without any internet connection. However, some features and options may not be available or updated when you play offline.</b></li>
-<li><b>Q4: How can I get more money and gold in Dead Target Zombie Games 3D Mod APK?</b></li>
-<li><b>A4: You can get unlimited money and gold by downloading the mod APK version. You can also earn them by completing missions and challenges, or by watching ads.</b></li>
-<li><b>Q5: How can I contact the game developer if I have any issues or suggestions?</b></li>
|
120 |
-
<li><b>A5: You can contact them via email at [email protected] or via their Facebook page at https://www.facebook.com/deadtarget.</b></li>
|
121 |
-
</ul></p> 197e85843d<br />
|
122 |
-
<br />
|
123 |
-
<br />
spaces/9752isme/ChatGPT4/app.py
DELETED
@@ -1,141 +0,0 @@
-import gradio as gr
-import os
-import json
-import requests
-
-#Streaming endpoint
-API_URL = "https://api.openai.com/v1/chat/completions" #os.getenv("API_URL") + "/generate_stream"
-
-#Testing with my Open AI Key
-OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
-
-def predict(inputs, top_p, temperature, chat_counter, chatbot=[], history=[]):
-
-    payload = {
-        "model": "gpt-4",
-        "messages": [{"role": "user", "content": f"{inputs}"}],
-        "temperature": 1.0,
-        "top_p": 1.0,
-        "n": 1,
-        "stream": True,
-        "presence_penalty": 0,
-        "frequency_penalty": 0,
-    }
-
-    headers = {
-        "Content-Type": "application/json",
-        "Authorization": f"Bearer {OPENAI_API_KEY}"
-    }
-
-    print(f"chat_counter - {chat_counter}")
-    if chat_counter != 0:
-        messages = []
-        for data in chatbot:
-            temp1 = {}
-            temp1["role"] = "user"
-            temp1["content"] = data[0]
-            temp2 = {}
-            temp2["role"] = "assistant"
-            temp2["content"] = data[1]
-            messages.append(temp1)
-            messages.append(temp2)
-        temp3 = {}
-        temp3["role"] = "user"
-        temp3["content"] = inputs
-        messages.append(temp3)
-        #messages
-        payload = {
-            "model": "gpt-4",
-            "messages": messages,  #[{"role": "user", "content": f"{inputs}"}],
-            "temperature": temperature,  #1.0,
-            "top_p": top_p,  #1.0,
-            "n": 1,
-            "stream": True,
-            "presence_penalty": 0,
-            "frequency_penalty": 0,
-        }
-
-    chat_counter += 1
-
-    history.append(inputs)
-    print(f"payload is - {payload}")
-    # make a POST request to the API endpoint using the requests.post method, passing in stream=True
-    response = requests.post(API_URL, headers=headers, json=payload, stream=True)
-    print(f"response code - {response}")
-    token_counter = 0
-    partial_words = ""
-
-    counter = 0
-    for chunk in response.iter_lines():
-        #Skipping first chunk
-        if counter == 0:
-            counter += 1
-            continue
-        #counter+=1
-        # check whether each line is non-empty
-        if chunk.decode():
-            chunk = chunk.decode()
-            # decode each line as response data is in bytes
-            if len(chunk) > 12 and "content" in json.loads(chunk[6:])['choices'][0]['delta']:
-                #if len(json.loads(chunk.decode()[6:])['choices'][0]["delta"]) == 0:
-                #    break
-                partial_words = partial_words + json.loads(chunk[6:])['choices'][0]["delta"]["content"]
-                if token_counter == 0:
-                    history.append(" " + partial_words)
-                else:
-                    history[-1] = partial_words
-                chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2)]  # convert to tuples of list
-                token_counter += 1
-                yield chat, history, chat_counter, response  # resembles {chatbot: chat, state: history}
-
-
-def reset_textbox():
-    return gr.update(value='')
-
-title = """<h1 align="center">🔥GPT4 with ChatCompletions API +🚀Gradio-Streaming</h1>"""
-description = """Language models can be conditioned to act like dialogue agents through a conversational prompt that typically takes the form:
-```
-User: <utterance>
-Assistant: <utterance>
-User: <utterance>
-Assistant: <utterance>
-...
-```
-In this app, you can explore the outputs of a gpt-4 LLM.
-"""
-
-theme = gr.themes.Default(primary_hue="green")
-
-with gr.Blocks(css = """#col_container { margin-left: auto; margin-right: auto;}
-                #chatbot {height: 520px; overflow: auto;}""",
-               theme=theme) as demo:
-    gr.HTML(title)
-    gr.HTML("""<h3 align="center">🔥This Huggingface Gradio Demo provides you full access to GPT4 API (4096 token limit). 🎉🥳🎉You don't need any OPENAI API key🙌</h1>""")
-    gr.HTML('''<center><a href="https://huggingface.co/spaces/ysharma/ChatGPT4?duplicate=true"><img src="https://bit.ly/3gLdBN6" alt="Duplicate Space"></a>Duplicate the Space and run securely with your OpenAI API Key</center>''')
-    with gr.Column(elem_id = "col_container"):
-        #GPT4 API Key is provided by Huggingface
-        #openai_api_key = gr.Textbox(type='password', label="Enter only your GPT4 OpenAI API key here")
-        chatbot = gr.Chatbot(elem_id='chatbot') #c
-        inputs = gr.Textbox(placeholder= "Hi there!", label= "Type an input and press Enter") #t
-        state = gr.State([]) #s
-        with gr.Row():
-            with gr.Column(scale=7):
-                b1 = gr.Button().style(full_width=True)
-            with gr.Column(scale=3):
-                server_status_code = gr.Textbox(label="Status code from OpenAI server", )
-
-    #inputs, top_p, temperature, top_k, repetition_penalty
-    with gr.Accordion("Parameters", open=False):
-        top_p = gr.Slider( minimum=-0, maximum=1.0, value=1.0, step=0.05, interactive=True, label="Top-p (nucleus sampling)",)
-        temperature = gr.Slider( minimum=-0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature",)
-        #top_k = gr.Slider( minimum=1, maximum=50, value=4, step=1, interactive=True, label="Top-k",)
-        #repetition_penalty = gr.Slider( minimum=0.1, maximum=3.0, value=1.03, step=0.01, interactive=True, label="Repetition Penalty", )
-        chat_counter = gr.Number(value=0, visible=False, precision=0)
-
-    inputs.submit( predict, [inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter, server_status_code],) #openai_api_key
-    b1.click( predict, [inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter, server_status_code],) #openai_api_key
-    b1.click(reset_textbox, [], [inputs])
-    inputs.submit(reset_textbox, [], [inputs])
-
-#gr.Markdown(description)
-demo.queue(max_size=20, concurrency_count=10).launch(debug=True)
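The hand-rolled streaming loop in the deleted app above parses OpenAI's server-sent events manually: it strips the 6-character `data: ` prefix from each line and accumulates `choices[0].delta.content`. A minimal, self-contained sketch of that same parsing logic follows; the sample byte lines are hypothetical stand-ins for what the endpoint emits, and only the prefix and delta layout are taken from the code above.

```python
import json

# Hypothetical sample of the byte lines a chat-completions stream emits;
# each data line starts with the 6-character prefix "data: ".
sample_lines = [
    b'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    b'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    b'data: {"choices": [{"delta": {"content": "lo"}}]}',
    b'data: [DONE]',
]

partial_words = ""
for raw in sample_lines:
    line = raw.decode()
    if not line.startswith("data: ") or line[6:] == "[DONE]":
        continue  # skip non-data lines and the end-of-stream sentinel
    delta = json.loads(line[6:])["choices"][0]["delta"]
    if "content" in delta:  # role-only deltas carry no text
        partial_words += delta["content"]

print(partial_words)  # -> "Hello"
```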
spaces/A666sxr/Genshin_TTS/README.md
DELETED
@@ -1,13 +0,0 @@
----
-title: Genshin TTS
-emoji: 🔥
-colorFrom: pink
-colorTo: red
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
-duplicated_from: Cybercat/Genshin_TTS
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AI-Dashboards/Streamlit-Plotly_Graph-Objects/README.md
DELETED
@@ -1,13 +0,0 @@
----
-title: Streamlit-Plotly Graph-Objects
-emoji: 📚
-colorFrom: gray
-colorTo: red
-sdk: streamlit
-sdk_version: 1.19.0
-app_file: app.py
-pinned: false
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/ALSv/FSW/roop/processors/frame/face_enhancer.py
DELETED
@@ -1,89 +0,0 @@
-from typing import Any, List, Callable
-import cv2
-import threading
-import gfpgan
-
-import roop.globals
-import roop.processors.frame.core
-from roop.core import update_status
-from roop.face_analyser import get_one_face
-from roop.typing import Frame, Face
-from roop.utilities import conditional_download, resolve_relative_path, is_image, is_video
-import torch
-
-FACE_ENHANCER = None
-THREAD_SEMAPHORE = threading.Semaphore()
-THREAD_LOCK = threading.Lock()
-NAME = 'ROOP.FACE-ENHANCER'
-frame_name = 'face_enhancer'
-
-if torch.cuda.is_available():
-    device = 'cuda'
-else:
-    device = 'cpu'
-
-
-def get_face_enhancer() -> Any:
-    global FACE_ENHANCER
-
-    with THREAD_LOCK:
-        if FACE_ENHANCER is None:
-            model_path = resolve_relative_path('../models/GFPGANv1.4.pth')
-            # todo: set models path https://github.com/TencentARC/GFPGAN/issues/399
-            FACE_ENHANCER = gfpgan.GFPGANer(model_path=model_path, upscale=1, device=device)  # type: ignore[attr-defined]
-    return FACE_ENHANCER
-
-
-def pre_check() -> bool:
-    download_directory_path = resolve_relative_path('../models')
-    # conditional_download(download_directory_path, ['https://huggingface.co/henryruhs/roop/resolve/main/GFPGANv1.4.pth'])
-    conditional_download(download_directory_path, ['https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth'])
-    return True
-
-
-def pre_start() -> bool:
-    if not is_image(roop.globals.target_path) and not is_video(roop.globals.target_path):
-        update_status('Select an image or video for target path.', NAME)
-        return False
-    return True
-
-
-def post_process() -> None:
-    global FACE_ENHANCER
-
-    FACE_ENHANCER = None
-
-
-def enhance_face(temp_frame: Frame) -> Frame:
-    with THREAD_SEMAPHORE:
-        _, _, temp_frame = get_face_enhancer().enhance(
-            temp_frame,
-            paste_back=True
-        )
-    return temp_frame
-
-
-def process_frame(source_face: Face, temp_frame: Frame) -> Frame:
-    target_face = get_one_face(temp_frame)
-    if target_face:
-        temp_frame = enhance_face(temp_frame)
-    return temp_frame
-
-
-def process_frames(source_path: str, temp_frame_paths: List[str], update: Callable[[], None]) -> None:
-    for temp_frame_path in temp_frame_paths:
-        temp_frame = cv2.imread(temp_frame_path)
-        result = process_frame(None, temp_frame)
-        cv2.imwrite(temp_frame_path, result)
-        if update:
-            update()
-
-
-def process_image(source_path: str, target_path: str, output_path: str) -> None:
-    target_frame = cv2.imread(target_path)
-    result = process_frame(None, target_frame)
-    cv2.imwrite(output_path, result)
-
-
-def process_video(source_path: str, temp_frame_paths: List[str]) -> None:
-    roop.processors.frame.core.process_video(None, temp_frame_paths, process_frames)
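The `get_face_enhancer` function above uses a lock-guarded lazy singleton so that concurrent worker threads construct the expensive GFPGAN model only once. A minimal sketch of that pattern, independent of roop and GFPGAN (the `factory` callable stands in for the model constructor):

```python
import threading

_INSTANCE = None
_LOCK = threading.Lock()

def get_instance(factory):
    """Create the (expensive) object exactly once, even when many worker
    threads call this concurrently; the lock serializes the first use."""
    global _INSTANCE
    with _LOCK:
        if _INSTANCE is None:
            _INSTANCE = factory()
    return _INSTANCE

# Toy usage: every caller receives the same object.
obj = get_instance(dict)
assert obj is get_instance(dict)
```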
spaces/Aaaad/Dddde/README.md
DELETED
@@ -1,12 +0,0 @@
----
-title: Dddde
-emoji: 👀
-colorFrom: blue
-colorTo: gray
-sdk: gradio
-sdk_version: 3.12.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Ababababababbababa/Ashaar/poetry_diacritizer/util/decorators.py
DELETED
@@ -1,27 +0,0 @@
-import traceback
-from time import time
-
-
-def ignore_exception(f):
-    def apply_func(*args, **kwargs):
-        try:
-            result = f(*args, **kwargs)
-            return result
-        except Exception:
-            if False:
-                print(f"Catched exception in {f}:")
-                traceback.print_exc()
-            return None
-
-    return apply_func
-
-
-def time_it(f):
-    def apply_func(*args, **kwargs):
-        t_start = time()
-        result = f(*args, **kwargs)
-        t_end = time()
-        dur = round(t_end - t_start, ndigits=2)
-        return result, dur
-
-    return apply_func
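For reference, a small usage sketch of the two decorators above. The import line is hypothetical and assumes the module is importable as `decorators`; note the gotcha that `time_it` changes the wrapped function's return value into a `(result, duration)` tuple:

```python
# Hypothetical import; assumes the module above is on the path as decorators.py
from decorators import ignore_exception, time_it
import time

@time_it
def slow_add(a, b):
    time.sleep(0.1)
    return a + b

result, dur = slow_add(1, 2)  # time_it wraps the return value in (result, duration)
print(result, dur)            # e.g. 3 0.1

@ignore_exception
def risky():
    raise ValueError("boom")

print(risky())  # None -- the exception is silently swallowed
```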
spaces/Adapter/T2I-Adapter/ldm/data/dataset_laion.py
DELETED
@@ -1,130 +0,0 @@
-# -*- coding: utf-8 -*-
-
-import numpy as np
-import os
-import pytorch_lightning as pl
-import torch
-import webdataset as wds
-from torchvision.transforms import transforms
-
-from ldm.util import instantiate_from_config
-
-
-def dict_collation_fn(samples, combine_tensors=True, combine_scalars=True):
-    """Take a list of samples (as dictionary) and create a batch, preserving the keys.
-    If `tensors` is True, `ndarray` objects are combined into
-    tensor batches.
-    :param dict samples: list of samples
-    :param bool tensors: whether to turn lists of ndarrays into a single ndarray
-    :returns: single sample consisting of a batch
-    :rtype: dict
-    """
-    keys = set.intersection(*[set(sample.keys()) for sample in samples])
-    batched = {key: [] for key in keys}
-
-    for s in samples:
-        [batched[key].append(s[key]) for key in batched]
-
-    result = {}
-    for key in batched:
-        if isinstance(batched[key][0], (int, float)):
-            if combine_scalars:
-                result[key] = np.array(list(batched[key]))
-        elif isinstance(batched[key][0], torch.Tensor):
-            if combine_tensors:
-                result[key] = torch.stack(list(batched[key]))
-        elif isinstance(batched[key][0], np.ndarray):
-            if combine_tensors:
-                result[key] = np.array(list(batched[key]))
-        else:
-            result[key] = list(batched[key])
-    return result
-
-
-class WebDataModuleFromConfig(pl.LightningDataModule):
-
-    def __init__(self,
-                 tar_base,
-                 batch_size,
-                 train=None,
-                 validation=None,
-                 test=None,
-                 num_workers=4,
-                 multinode=True,
-                 min_size=None,
-                 max_pwatermark=1.0,
-                 **kwargs):
-        super().__init__()
-        print(f'Setting tar base to {tar_base}')
-        self.tar_base = tar_base
-        self.batch_size = batch_size
-        self.num_workers = num_workers
-        self.train = train
-        self.validation = validation
-        self.test = test
-        self.multinode = multinode
-        self.min_size = min_size  # filter out very small images
-        self.max_pwatermark = max_pwatermark  # filter out watermarked images
-
-    def make_loader(self, dataset_config):
-        image_transforms = [instantiate_from_config(tt) for tt in dataset_config.image_transforms]
-        image_transforms = transforms.Compose(image_transforms)
-
-        process = instantiate_from_config(dataset_config['process'])
-
-        shuffle = dataset_config.get('shuffle', 0)
-        shardshuffle = shuffle > 0
-
-        nodesplitter = wds.shardlists.split_by_node if self.multinode else wds.shardlists.single_node_only
-
-        tars = os.path.join(self.tar_base, dataset_config.shards)
-
-        dset = wds.WebDataset(
-            tars, nodesplitter=nodesplitter, shardshuffle=shardshuffle,
-            handler=wds.warn_and_continue).repeat().shuffle(shuffle)
-        print(f'Loading webdataset with {len(dset.pipeline[0].urls)} shards.')
-
-        dset = (
-            dset.select(self.filter_keys).decode('pil',
-                handler=wds.warn_and_continue).select(self.filter_size).map_dict(
-                jpg=image_transforms, handler=wds.warn_and_continue).map(process))
-        dset = (dset.batched(self.batch_size, partial=False, collation_fn=dict_collation_fn))
-
-        loader = wds.WebLoader(dset, batch_size=None, shuffle=False, num_workers=self.num_workers)
-
-        return loader
-
-    def filter_size(self, x):
-        if self.min_size is None:
-            return True
-        try:
-            return x['json']['original_width'] >= self.min_size and x['json']['original_height'] >= self.min_size and x[
-                'json']['pwatermark'] <= self.max_pwatermark
-        except Exception:
-            return False
-
-    def filter_keys(self, x):
-        try:
-            return ("jpg" in x) and ("txt" in x)
-        except Exception:
-            return False
-
-    def train_dataloader(self):
-        return self.make_loader(self.train)
-
-    def val_dataloader(self):
-        return None
-
-    def test_dataloader(self):
-        return None
-
-
-if __name__ == '__main__':
-    from omegaconf import OmegaConf
-    config = OmegaConf.load("configs/stable-diffusion/train_canny_sd_v1.yaml")
-    datamod = WebDataModuleFromConfig(**config["data"]["params"])
-    dataloader = datamod.train_dataloader()
-
-    for batch in dataloader:
-        print(batch.keys())
-        print(batch['jpg'].shape)
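The `dict_collation_fn` above batches a list of per-sample dicts key by key, keeping only keys shared by every sample. A self-contained toy version of the same idea, without webdataset, shows what the resulting batch looks like:

```python
import numpy as np
import torch

def collate_dicts(samples):
    # Same idea as dict_collation_fn above: intersect the keys of all
    # samples, then stack tensors / arrays per key, keep lists otherwise.
    keys = set.intersection(*[set(s.keys()) for s in samples])
    out = {}
    for k in keys:
        vals = [s[k] for s in samples]
        if isinstance(vals[0], torch.Tensor):
            out[k] = torch.stack(vals)
        elif isinstance(vals[0], np.ndarray):
            out[k] = np.array(vals)
        else:
            out[k] = vals
    return out

batch = collate_dicts([
    {"jpg": torch.zeros(3, 64, 64), "txt": "a cat"},
    {"jpg": torch.ones(3, 64, 64), "txt": "a dog"},
])
print(batch["jpg"].shape)  # torch.Size([2, 3, 64, 64])
print(batch["txt"])        # ['a cat', 'a dog']
```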
spaces/AdithyaSNair/alzheimers_prediction_using_cnn/README.md
DELETED
@@ -1,12 +0,0 @@
----
-title: Alzheimers Prediction Using Cnn
-emoji: 💻
-colorFrom: green
-colorTo: gray
-sdk: gradio
-sdk_version: 3.36.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AgentVerse/agentVerse/agentverse/simulation.py
DELETED
@@ -1,60 +0,0 @@
-import asyncio
-import logging
-from typing import List
-
-# from agentverse.agents import Agent
-from agentverse.agents.simulation_agent.conversation import BaseAgent
-from agentverse.environments import BaseEnvironment
-from agentverse.initialization import load_agent, load_environment, prepare_task_config
-
-openai_logger = logging.getLogger("openai")
-openai_logger.setLevel(logging.WARNING)
-
-
-class Simulation:
-    def __init__(self, agents: List[BaseAgent], environment: BaseEnvironment):
-        self.agents = agents
-        self.environment = environment
-
-    @classmethod
-    def from_task(cls, task: str, tasks_dir: str):
-        """Build an AgentVerse from a task name.
-        The task name should correspond to a directory in `tasks` directory.
-        Then this method will load the configuration from the yaml file in that directory.
-        """
-        # Prepare the config of the task
-        task_config = prepare_task_config(task, tasks_dir)
-
-        # Build the agents
-        agents = []
-        for agent_configs in task_config["agents"]:
-            agent = load_agent(agent_configs)
-            agents.append(agent)
-
-        # Build the environment
-        env_config = task_config["environment"]
-        env_config["agents"] = agents
-        environment = load_environment(env_config)
-
-        return cls(agents, environment)
-
-    def run(self):
-        """Run the environment from scratch until it is done."""
-        self.environment.reset()
-        while not self.environment.is_done():
-            asyncio.run(self.environment.step())
-        self.environment.report_metrics()
-
-    def reset(self):
-        self.environment.reset()
-        for agent in self.agents:
-            agent.reset()
-
-    def next(self, *args, **kwargs):
-        """Run the environment for one step and return the return message."""
-        return_message = asyncio.run(self.environment.step(*args, **kwargs))
-        return return_message
-
-    def update_state(self, *args, **kwargs):
-        """Run the environment for one step and return the return message."""
-        self.environment.update_state(*args, **kwargs)
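The `Simulation` class above drives any environment through a reset / step-until-done loop, with one `asyncio.run` per step. A sketch with a stub environment (a hypothetical stand-in, not part of agentverse) illustrates the contract it relies on:

```python
import asyncio

class StubEnvironment:
    """Minimal stand-in for a BaseEnvironment, just to show the
    reset / is_done / async step contract Simulation.run depends on."""
    def __init__(self, max_turns=3):
        self.max_turns = max_turns
        self.turn = 0

    def reset(self):
        self.turn = 0

    def is_done(self):
        return self.turn >= self.max_turns

    async def step(self):
        self.turn += 1
        return f"turn {self.turn}"

env = StubEnvironment()
env.reset()
while not env.is_done():
    print(asyncio.run(env.step()))  # one asyncio.run per step, as in Simulation.run
```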
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/audio/Audio.d.ts
DELETED
@@ -1,2 +0,0 @@
-import Base from '../base/Base';
-export default class Audio extends Base { }
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/los/Los.js
DELETED
@@ -1,49 +0,0 @@
-import Base from '../base/Base.js';
-import { Line } from '../utils/Geoms.js'
-
-const Linear = Phaser.Math.Linear;
-
-class Los extends Base {
-    constructor(scene, config) {
-        super(scene, config);
-        this.type = 'rexSpinnerLos';
-    }
-
-    buildShapes() {
-        for (var i = 0; i < 12; i++) {
-            this.addShape(new Line());
-        }
-    }
-
-    updateShapes() {
-        var centerX = this.centerX;
-        var centerY = this.centerY;
-        var isSizeChanged = this.isSizeChanged;
-
-        var radius = this.radius;
-        var startRadius = radius / 2;
-        var lineWidth = Math.ceil(radius / 20);
-        var shapes = this.getShapes();
-        for (var i = 0, cnt = shapes.length; i < cnt; i++) {
-            var line = shapes[i];
-            var t = i / cnt;
-            var angle = Math.PI * 2 * t;
-            var alpha = Linear(0.25, 1, (1 - this.value + t) % 1);
-            line.lineStyle(lineWidth, this.color, alpha);
-
-            if (isSizeChanged) {
-                line
-                    .setP0(
-                        centerX + Math.cos(angle) * startRadius,
-                        centerY + Math.sin(angle) * startRadius
-                    )
-                    .setP1(
-                        centerX + Math.cos(angle) * radius,
-                        centerY + Math.sin(angle) * radius
-                    )
-            }
-        }
-    }
-}
-
-export default Los;
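`updateShapes` above lays 12 spokes on a circle, each running from radius/2 to the full radius, with an alpha ramp that rotates around the circle as `value` advances. The same geometry, sketched in Python for clarity (Phaser's `Linear(a, b, t)` is just `a + (b - a) * t`):

```python
import math

def spoke_endpoints(center_x, center_y, radius, value, count=12):
    """Endpoints and fade for each spoke of a 'Los'-style spinner:
    spokes run from radius/2 to radius, and alpha sweeps 0.25..1
    around the circle as `value` (0..1) advances."""
    spokes = []
    for i in range(count):
        t = i / count
        angle = 2 * math.pi * t
        alpha = 0.25 + (1 - 0.25) * ((1 - value + t) % 1)  # linear interpolation
        p0 = (center_x + math.cos(angle) * radius / 2,
              center_y + math.sin(angle) * radius / 2)
        p1 = (center_x + math.cos(angle) * radius,
              center_y + math.sin(angle) * radius)
        spokes.append((p0, p1, alpha))
    return spokes

for p0, p1, alpha in spoke_endpoints(0, 0, 10, value=0.5, count=4):
    print(p0, p1, round(alpha, 2))
```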
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/filechooser/FileChooser.d.ts
DELETED
@@ -1,2 +0,0 @@
-import { OpenFileChooser, FileChooser } from '../../../plugins/filechooser';
-export { OpenFileChooser, FileChooser };
spaces/AlexMason/anime-remove-background/app.py
DELETED
@@ -1,52 +0,0 @@
-import gradio as gr
-import huggingface_hub
-import onnxruntime as rt
-import numpy as np
-import cv2
-
-
-def get_mask(img, s=1024):
-    img = (img / 255).astype(np.float32)
-    h, w = h0, w0 = img.shape[:-1]
-    h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s)
-    ph, pw = s - h, s - w
-    img_input = np.zeros([s, s, 3], dtype=np.float32)
-    img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h))
-    img_input = np.transpose(img_input, (2, 0, 1))
-    img_input = img_input[np.newaxis, :]
-    mask = rmbg_model.run(None, {'img': img_input})[0][0]
-    mask = np.transpose(mask, (1, 2, 0))
-    mask = mask[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w]
-    mask = cv2.resize(mask, (w0, h0))[:, :, np.newaxis]
-    return mask
-
-
-def rmbg_fn(img):
-    mask = get_mask(img)
-    img = (mask * img + 255 * (1 - mask)).astype(np.uint8)
-    mask = (mask * 255).astype(np.uint8)
-    img = np.concatenate([img, mask], axis=2, dtype=np.uint8)
-    mask = mask.repeat(3, axis=2)
-    return mask, img
-
-
-if __name__ == "__main__":
-    providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
-    model_path = huggingface_hub.hf_hub_download("skytnt/anime-seg", "isnetis.onnx")
-    rmbg_model = rt.InferenceSession(model_path, providers=providers)
-    app = gr.Blocks()
-    with app:
-        gr.Markdown("# Anime Remove Background\n\n"
-                    "\n\n"
-                    "demo for [https://github.com/SkyTNT/anime-segmentation/](https://github.com/SkyTNT/anime-segmentation/)")
-        with gr.Row():
-            with gr.Column():
-                input_img = gr.Image(label="input image")
-                examples_data = [[f"examples/{x:02d}.jpg"] for x in range(1, 4)]
-                examples = gr.Dataset(components=[input_img], samples=examples_data)
-                run_btn = gr.Button(variant="primary")
-            output_mask = gr.Image(label="mask")
-            output_img = gr.Image(label="result", image_mode="RGBA")
-        examples.click(lambda x: x[0], [examples], [input_img])
-        run_btn.click(rmbg_fn, [input_img], [output_mask, output_img])
-    app.launch()
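`get_mask` above letterboxes the input into a square `s x s` canvas, runs the ONNX segmentation model, then crops the padding back out before resizing to the original shape. The padding arithmetic is the fiddly part; a small self-contained sketch of just that step:

```python
import numpy as np

def letterbox_params(h0, w0, s=1024):
    """Reproduce the resize-and-pad arithmetic of get_mask above:
    scale the long side to s, then center the result in an s x s canvas."""
    h, w = (s, int(s * w0 / h0)) if h0 > w0 else (int(s * h0 / w0), s)
    ph, pw = s - h, s - w  # total vertical / horizontal padding
    return h, w, ph, pw

h, w, ph, pw = letterbox_params(720, 1280)
print(h, w, ph, pw)  # 576 1024 448 0 -> image occupies rows 224..800

# The mask is later cropped back out with the same offsets:
canvas = np.zeros((1024, 1024))
inner = canvas[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w]
print(inner.shape)  # (576, 1024)
```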
spaces/Ammar-alhaj-ali/LayoutLMv3-FUNSD/README.md
DELETED
@@ -1,12 +0,0 @@
----
-title: LayoutLMv3 Fine Tuning FUNSD
-emoji: 📉
-colorFrom: pink
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Amrrs/DragGan-Inversion/stylegan_human/training_scripts/sg3/train.py
DELETED
@@ -1,325 +0,0 @@
-# Copyright (c) SenseTime Research. All rights reserved.
-
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Train a GAN using the techniques described in the paper
-"Alias-Free Generative Adversarial Networks"."""
-
-import os
-import click
-import re
-import json
-import tempfile
-import torch
-
-import dnnlib
-from training import training_loop
-from metrics import metric_main
-from torch_utils import training_stats
-from torch_utils import custom_ops
-import ast
-# ----------------------------------------------------------------------------
-
-
-def subprocess_fn(rank, c, temp_dir):
-    dnnlib.util.Logger(file_name=os.path.join(
-        c.run_dir, 'log.txt'), file_mode='a', should_flush=True)
-
-    # Init torch.distributed.
-    if c.num_gpus > 1:
-        init_file = os.path.abspath(os.path.join(
-            temp_dir, '.torch_distributed_init'))
-        if os.name == 'nt':
-            init_method = 'file:///' + init_file.replace('\\', '/')
-            torch.distributed.init_process_group(
-                backend='gloo', init_method=init_method, rank=rank, world_size=c.num_gpus)
-        else:
-            init_method = f'file://{init_file}'
-            torch.distributed.init_process_group(
-                backend='nccl', init_method=init_method, rank=rank, world_size=c.num_gpus)
-
-    # Init torch_utils.
-    sync_device = torch.device('cuda', rank) if c.num_gpus > 1 else None
-    training_stats.init_multiprocessing(rank=rank, sync_device=sync_device)
-    if rank != 0:
-        custom_ops.verbosity = 'none'
-
-    # Execute training loop.
-    training_loop.training_loop(rank=rank, **c)
-
-# ----------------------------------------------------------------------------
-
-
-def launch_training(c, desc, outdir, dry_run):
-    dnnlib.util.Logger(should_flush=True)
-
-    # Pick output directory.
-    prev_run_dirs = []
-    if os.path.isdir(outdir):
-        prev_run_dirs = [x for x in os.listdir(
-            outdir) if os.path.isdir(os.path.join(outdir, x))]
-    prev_run_ids = [re.match(r'^\d+', x) for x in prev_run_dirs]
-    prev_run_ids = [int(x.group()) for x in prev_run_ids if x is not None]
-    cur_run_id = max(prev_run_ids, default=-1) + 1
-    c.run_dir = os.path.join(outdir, f'{cur_run_id:05d}-{desc}')
-    assert not os.path.exists(c.run_dir)
-
-    # Print options.
-    print()
-    print('Training options:')
-    print(json.dumps(c, indent=2))
-    print()
-    print(f'Output directory: {c.run_dir}')
-    print(f'Number of GPUs: {c.num_gpus}')
-    print(f'Batch size: {c.batch_size} images')
-    print(f'Training duration: {c.total_kimg} kimg')
-    print(f'Dataset path: {c.training_set_kwargs.path}')
-    print(f'Dataset size: {c.training_set_kwargs.max_size} images')
-    print(f'Dataset resolution: {c.training_set_kwargs.resolution}')
-    print(f'Dataset labels: {c.training_set_kwargs.use_labels}')
-    print(f'Dataset x-flips: {c.training_set_kwargs.xflip}')
-    print()
-
-    # Dry run?
-    if dry_run:
-        print('Dry run; exiting.')
-        return
-
-    # Create output directory.
-    print('Creating output directory...')
-    os.makedirs(c.run_dir)
-    with open(os.path.join(c.run_dir, 'training_options.json'), 'wt') as f:
-        json.dump(c, f, indent=2)
-
-    # Launch processes.
-    print('Launching processes...')
-    torch.multiprocessing.set_start_method('spawn')
-    with tempfile.TemporaryDirectory() as temp_dir:
-        if c.num_gpus == 1:
-            subprocess_fn(rank=0, c=c, temp_dir=temp_dir)
-        else:
-            torch.multiprocessing.spawn(
-                fn=subprocess_fn, args=(c, temp_dir), nprocs=c.num_gpus)

-# ----------------------------------------------------------------------------
-
-
-def init_dataset_kwargs(data, square=False):
-    # dataset
-
-    try:
-        dataset_kwargs = dnnlib.EasyDict(class_name='training.dataset.ImageFolderDataset',
-                                         path=data, use_labels=True, max_size=None, xflip=False, square=square)
-        # Subclass of training.dataset.Dataset.
-        dataset_obj = dnnlib.util.construct_class_by_name(**dataset_kwargs)
-        # Be explicit about resolution.
-        dataset_kwargs.resolution = dataset_obj.resolution
-        # Be explicit about labels.
-        dataset_kwargs.use_labels = dataset_obj.has_labels
-        # Be explicit about dataset size.
-        dataset_kwargs.max_size = len(dataset_obj)
-        return dataset_kwargs, dataset_obj.name
-    except IOError as err:
-        raise click.ClickException(f'--data: {err}')
-
-    print("out of dataset")
-# ----------------------------------------------------------------------------
-
-
-def parse_comma_separated_list(s):
-    if isinstance(s, list):
-        return s
-    if s is None or s.lower() == 'none' or s == '':
-        return []
-    return s.split(',')
-
-# ----------------------------------------------------------------------------
-
-
-@click.command()
-# Required.
-@click.option('--outdir', help='Where to save the results', metavar='DIR', required=True)
-@click.option('--cfg', help='Base configuration', type=click.Choice(['stylegan3-t', 'stylegan3-r', 'stylegan2']), required=True)
-@click.option('--data', help='Training data', metavar='PATH', required=True)
-@click.option('--gpus', help='Number of GPUs to use', metavar='INT', type=click.IntRange(min=1), required=True)
-@click.option('--batch', help='Total batch size', metavar='INT', type=click.IntRange(min=1), required=True)
-@click.option('--gamma', help='R1 regularization weight', metavar='FLOAT', type=click.FloatRange(min=0), required=True)
-@click.option('--square', help='True for square, False for rectangle', type=bool, metavar='BOOL', default=False)
-# Optional features.
-@click.option('--cond', help='Train conditional model', metavar='BOOL', type=bool, default=False, show_default=True)
-@click.option('--mirror', help='Enable dataset x-flips', metavar='BOOL', type=bool, default=False, show_default=True)
-@click.option('--aug', help='Augmentation mode', type=click.Choice(['noaug', 'ada', 'fixed']), default='ada', show_default=True)
-@click.option('--resume', help='Resume from given network pickle', metavar='[PATH|URL]', type=str)
-@click.option('--freezed', help='Freeze first layers of D', metavar='INT', type=click.IntRange(min=0), default=0, show_default=True)
-# Misc hyperparameters.
-@click.option('--p', help='Probability for --aug=fixed', metavar='FLOAT', type=click.FloatRange(min=0, max=1), default=0.2, show_default=True)
-@click.option('--target', help='Target value for --aug=ada', metavar='FLOAT', type=click.FloatRange(min=0, max=1), default=0.6, show_default=True)
-@click.option('--batch-gpu', help='Limit batch size per GPU', metavar='INT', type=click.IntRange(min=1))
-@click.option('--cbase', help='Capacity multiplier', metavar='INT', type=click.IntRange(min=1), default=32768, show_default=True)
-@click.option('--cmax', help='Max. feature maps', metavar='INT', type=click.IntRange(min=1), default=512, show_default=True)
-@click.option('--glr', help='G learning rate [default: varies]', metavar='FLOAT', type=click.FloatRange(min=0))
-@click.option('--dlr', help='D learning rate', metavar='FLOAT', type=click.FloatRange(min=0), default=0.002, show_default=True)
-@click.option('--map-depth', help='Mapping network depth [default: varies]', metavar='INT', type=click.IntRange(min=1))
-@click.option('--mbstd-group', help='Minibatch std group size', metavar='INT', type=click.IntRange(min=1), default=4, show_default=True)
-# Misc settings.
-@click.option('--desc', help='String to include in result dir name', metavar='STR', type=str)
-@click.option('--metrics', help='Quality metrics', metavar='[NAME|A,B,C|none]', type=parse_comma_separated_list, default='fid50k_full', show_default=True)
-@click.option('--kimg', help='Total training duration', metavar='KIMG', type=click.IntRange(min=1), default=25000, show_default=True)
-@click.option('--tick', help='How often to print progress', metavar='KIMG', type=click.IntRange(min=1), default=4, show_default=True)
-@click.option('--snap', help='How often to save snapshots', metavar='TICKS', type=click.IntRange(min=1), default=50, show_default=True)
-@click.option('--seed', help='Random seed', metavar='INT', type=click.IntRange(min=0), default=0, show_default=True)
-@click.option('--fp32', help='Disable mixed-precision', metavar='BOOL', type=bool, default=False, show_default=True)
-@click.option('--nobench', help='Disable cuDNN benchmarking', metavar='BOOL', type=bool, default=False, show_default=True)
-@click.option('--workers', help='DataLoader worker processes', metavar='INT', type=click.IntRange(min=1), default=3, show_default=True)
-@click.option('-n', '--dry-run', help='Print training options and exit', is_flag=True)
-def main(**kwargs):
-    """Train a GAN using the techniques described in the paper
-    "Alias-Free Generative Adversarial Networks".
-
-    Examples:
-
-    \b
-    # Train StyleGAN3-T for AFHQv2 using 8 GPUs.
-    python train.py --outdir=~/training-runs --cfg=stylegan3-t --data=~/datasets/afhqv2-512x512.zip \\
-        --gpus=8 --batch=32 --gamma=8.2 --mirror=1
-
-    \b
-    # Fine-tune StyleGAN3-R for MetFaces-U using 1 GPU, starting from the pre-trained FFHQ-U pickle.
-    python train.py --outdir=~/training-runs --cfg=stylegan3-r --data=~/datasets/metfacesu-1024x1024.zip \\
-        --gpus=8 --batch=32 --gamma=6.6 --mirror=1 --kimg=5000 --snap=5 \\
-        --resume=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-ffhqu-1024x1024.pkl
-
-    \b
-    # Train StyleGAN2 for FFHQ at 1024x1024 resolution using 8 GPUs.
-    python train.py --outdir=~/training-runs --cfg=stylegan2 --data=~/datasets/ffhq-1024x1024.zip \\
-        --gpus=8 --batch=32 --gamma=10 --mirror=1 --aug=noaug
-    """
-
-    # Initialize config.
-    opts = dnnlib.EasyDict(kwargs)  # Command line arguments.
-    c = dnnlib.EasyDict()  # Main config dict.
-    print('---- square: ', opts.square)
-    c.G_kwargs = dnnlib.EasyDict(
-        class_name=None, z_dim=512, w_dim=512, mapping_kwargs=dnnlib.EasyDict(), square=opts.square)
-    c.D_kwargs = dnnlib.EasyDict(class_name='training.networks_stylegan2.Discriminator', block_kwargs=dnnlib.EasyDict(
-    ), mapping_kwargs=dnnlib.EasyDict(), epilogue_kwargs=dnnlib.EasyDict(), square=opts.square)
-    c.G_opt_kwargs = dnnlib.EasyDict(
-        class_name='torch.optim.Adam', betas=[0, 0.99], eps=1e-8)
-    c.D_opt_kwargs = dnnlib.EasyDict(
-        class_name='torch.optim.Adam', betas=[0, 0.99], eps=1e-8)
-    c.loss_kwargs = dnnlib.EasyDict(class_name='training.loss.StyleGAN2Loss')
-    c.data_loader_kwargs = dnnlib.EasyDict(pin_memory=True, prefetch_factor=2)
-
-    # Training set.
-    c.training_set_kwargs, dataset_name = init_dataset_kwargs(
-        data=opts.data, square=opts.square)
-    if opts.cond and not c.training_set_kwargs.use_labels:
-        raise click.ClickException(
-            '--cond=True requires labels specified in dataset.json')
-    c.training_set_kwargs.use_labels = opts.cond
-    c.training_set_kwargs.xflip = opts.mirror
-
-    # Hyperparameters & settings.
-    c.num_gpus = opts.gpus
-    c.batch_size = opts.batch
-    c.batch_gpu = opts.batch_gpu or opts.batch // opts.gpus
-    c.G_kwargs.channel_base = c.D_kwargs.channel_base = opts.cbase
-    c.G_kwargs.channel_max = c.D_kwargs.channel_max = opts.cmax
-    c.G_kwargs.mapping_kwargs.num_layers = (
-        8 if opts.cfg == 'stylegan2' else 2) if opts.map_depth is None else opts.map_depth
-    c.D_kwargs.block_kwargs.freeze_layers = opts.freezed
-    c.D_kwargs.epilogue_kwargs.mbstd_group_size = opts.mbstd_group
-    c.loss_kwargs.r1_gamma = opts.gamma
-    c.G_opt_kwargs.lr = (
-        0.002 if opts.cfg == 'stylegan2' else 0.0025) if opts.glr is None else opts.glr
-    c.D_opt_kwargs.lr = opts.dlr
-    c.metrics = opts.metrics
-    c.total_kimg = opts.kimg
-    c.kimg_per_tick = opts.tick
-    c.image_snapshot_ticks = c.network_snapshot_ticks = opts.snap
-    c.random_seed = c.training_set_kwargs.random_seed = opts.seed
-    c.data_loader_kwargs.num_workers = opts.workers
-
-    # Sanity checks.
-    if c.batch_size % c.num_gpus != 0:
-        raise click.ClickException('--batch must be a multiple of --gpus')
-    if c.batch_size % (c.num_gpus * c.batch_gpu) != 0:
-        raise click.ClickException(
-            '--batch must be a multiple of --gpus times --batch-gpu')
-    if c.batch_gpu < c.D_kwargs.epilogue_kwargs.mbstd_group_size:
-        raise click.ClickException(
-            '--batch-gpu cannot be smaller than --mbstd')
-    if any(not metric_main.is_valid_metric(metric) for metric in c.metrics):
-        raise click.ClickException('\n'.join(
-            ['--metrics can only contain the following values:'] + metric_main.list_valid_metrics()))
-
-    # Base configuration.
-    c.ema_kimg = c.batch_size * 10 / 32
-    if opts.cfg == 'stylegan2':
-        c.G_kwargs.class_name = 'training.networks_stylegan2.Generator'
-        # Enable style mixing regularization.
-        c.loss_kwargs.style_mixing_prob = 0.9
-        c.loss_kwargs.pl_weight = 2  # Enable path length regularization.
-        c.G_reg_interval = 4  # Enable lazy regularization for G.
-        # Speed up training by using regular convolutions instead of grouped convolutions.
-        c.G_kwargs.fused_modconv_default = 'inference_only'
-        # Speed up path length regularization by skipping gradient computation wrt. conv2d weights.
-        c.loss_kwargs.pl_no_weight_grad = True
-    else:
-        c.G_kwargs.class_name = 'training.networks_stylegan3.Generator'
-        c.G_kwargs.magnitude_ema_beta = 0.5 ** (c.batch_size / (20 * 1e3))
-        if opts.cfg == 'stylegan3-r':
-            c.G_kwargs.conv_kernel = 1  # Use 1x1 convolutions.
-            c.G_kwargs.channel_base *= 2  # Double the number of feature maps.
-            c.G_kwargs.channel_max *= 2
-            # Use radially symmetric downsampling filters.
-            c.G_kwargs.use_radial_filters = True
-            # Blur the images seen by the discriminator.
-            c.loss_kwargs.blur_init_sigma = 10
-            # Fade out the blur during the first N kimg.
-            c.loss_kwargs.blur_fade_kimg = c.batch_size * 200 / 32
-
-    # Augmentation.
-    if opts.aug != 'noaug':
-        c.augment_kwargs = dnnlib.EasyDict(class_name='training.augment.AugmentPipe', xflip=1, rotate90=1, xint=1,
-                                           scale=1, rotate=1, aniso=1, xfrac=1, brightness=1, contrast=1, lumaflip=1, hue=1, saturation=1)
-        if opts.aug == 'ada':
-            c.ada_target = opts.target
-        if opts.aug == 'fixed':
-            c.augment_p = opts.p
-
-    # Resume.
-    if opts.resume is not None:
-        c.resume_pkl = opts.resume
-        c.ada_kimg = 100  # Make ADA react faster at the beginning.
-        c.ema_rampup = None  # Disable EMA rampup.
-        c.loss_kwargs.blur_init_sigma = 0  # Disable blur rampup.
-
-    # Performance-related toggles.
-    if opts.fp32:
-        c.G_kwargs.num_fp16_res = c.D_kwargs.num_fp16_res = 0
-        c.G_kwargs.conv_clamp = c.D_kwargs.conv_clamp = None
-    if opts.nobench:
-        c.cudnn_benchmark = False
-
-    # Description string.
-    desc = f'{opts.cfg:s}-{dataset_name:s}-gpus{c.num_gpus:d}-batch{c.batch_size:d}-gamma{c.loss_kwargs.r1_gamma:g}'
-    if opts.desc is not None:
-        desc += f'-{opts.desc}'
-
-    # Launch.
-    launch_training(c=c, desc=desc, outdir=opts.outdir, dry_run=opts.dry_run)
-
-# ----------------------------------------------------------------------------
-
-
-if __name__ == "__main__":
-    main()  # pylint: disable=no-value-for-parameter
-
-# ----------------------------------------------------------------------------
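`launch_training` above picks the next run directory by scanning existing directory names for a leading integer and incrementing the maximum. A self-contained sketch of just that numbering logic:

```python
import re

def next_run_dir_id(existing):
    """Mirror launch_training's numbering: match a leading integer in each
    previous run-dir name and pick the next one (00000 when none exist)."""
    ids = [re.match(r'^\d+', name) for name in existing]
    ids = [int(m.group()) for m in ids if m is not None]
    return max(ids, default=-1) + 1

runs = ['00000-stylegan2-ffhq', '00001-stylegan3-t-afhq', 'notes.txt']
print(f'{next_run_dir_id(runs):05d}')  # 00002 -- non-numeric names are ignored
```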
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/dit/__init__.py
DELETED
@@ -1 +0,0 @@
-from .pipeline_dit import DiTPipeline
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/semantic_stable_diffusion/test_semantic_diffusion.py
DELETED
@@ -1,601 +0,0 @@
-# coding=utf-8
-# Copyright 2023 HuggingFace Inc.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import gc
-import random
-import tempfile
-import unittest
-
-import numpy as np
-import torch
-from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer
-
-from diffusers import AutoencoderKL, DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler, UNet2DConditionModel
-from diffusers.pipelines.semantic_stable_diffusion import SemanticStableDiffusionPipeline as StableDiffusionPipeline
-from diffusers.utils import floats_tensor, nightly, torch_device
-from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
-
-
-enable_full_determinism()
-
-
-class SafeDiffusionPipelineFastTests(unittest.TestCase):
-    def tearDown(self):
-        # clean up the VRAM after each test
-        super().tearDown()
-        gc.collect()
-        torch.cuda.empty_cache()
-
-    @property
-    def dummy_image(self):
-        batch_size = 1
-        num_channels = 3
-        sizes = (32, 32)
-
-        image = floats_tensor((batch_size, num_channels) + sizes, rng=random.Random(0)).to(torch_device)
-        return image
-
-    @property
-    def dummy_cond_unet(self):
-        torch.manual_seed(0)
-        model = UNet2DConditionModel(
-            block_out_channels=(32, 64),
-            layers_per_block=2,
-            sample_size=32,
-            in_channels=4,
-            out_channels=4,
-            down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
-            up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
-            cross_attention_dim=32,
-        )
-        return model
-
-    @property
-    def dummy_vae(self):
-        torch.manual_seed(0)
-        model = AutoencoderKL(
-            block_out_channels=[32, 64],
-            in_channels=3,
-            out_channels=3,
-            down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D"],
-            up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D"],
-            latent_channels=4,
-        )
-        return model
-
-    @property
-    def dummy_text_encoder(self):
-        torch.manual_seed(0)
-        config = CLIPTextConfig(
-            bos_token_id=0,
-            eos_token_id=2,
-            hidden_size=32,
-            intermediate_size=37,
-            layer_norm_eps=1e-05,
-            num_attention_heads=4,
-            num_hidden_layers=5,
-            pad_token_id=1,
-            vocab_size=1000,
-        )
-        return CLIPTextModel(config)
-
-    @property
-    def dummy_extractor(self):
-        def extract(*args, **kwargs):
-            class Out:
-                def __init__(self):
-                    self.pixel_values = torch.ones([0])
-
-                def to(self, device):
-                    self.pixel_values.to(device)
-                    return self
-
-            return Out()
-
-        return extract
-
-    def test_semantic_diffusion_ddim(self):
-        device = "cpu"  # ensure determinism for the device-dependent torch.Generator
-        unet = self.dummy_cond_unet
-        scheduler = DDIMScheduler(
-            beta_start=0.00085,
-            beta_end=0.012,
-            beta_schedule="scaled_linear",
-            clip_sample=False,
-            set_alpha_to_one=False,
-        )
-
-        vae = self.dummy_vae
-        bert = self.dummy_text_encoder
-        tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
-        # make sure here that pndm scheduler skips prk
-        sd_pipe = StableDiffusionPipeline(
-            unet=unet,
-            scheduler=scheduler,
-            vae=vae,
-            text_encoder=bert,
-            tokenizer=tokenizer,
-            safety_checker=None,
-            feature_extractor=self.dummy_extractor,
-        )
-        sd_pipe = sd_pipe.to(device)
-        sd_pipe.set_progress_bar_config(disable=None)
-
-        prompt = "A painting of a squirrel eating a burger"
-
-        generator = torch.Generator(device=device).manual_seed(0)
-        output = sd_pipe([prompt], generator=generator, guidance_scale=6.0, num_inference_steps=2, output_type="np")
-        image = output.images
-
-        generator = torch.Generator(device=device).manual_seed(0)
-        image_from_tuple = sd_pipe(
-            [prompt],
-            generator=generator,
-            guidance_scale=6.0,
-            num_inference_steps=2,
-            output_type="np",
-            return_dict=False,
-        )[0]
-
-        image_slice = image[0, -3:, -3:, -1]
-        image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1]
-
-        assert image.shape == (1, 64, 64, 3)
-        expected_slice = np.array([0.5753, 0.6114, 0.5001, 0.5034, 0.5470, 0.4729, 0.4971, 0.4867, 0.4867])
-
-        assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-        assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2
-
-    def test_semantic_diffusion_pndm(self):
-        device = "cpu"  # ensure determinism for the device-dependent torch.Generator
-        unet = self.dummy_cond_unet
-        scheduler = PNDMScheduler(skip_prk_steps=True)
-        vae = self.dummy_vae
-        bert = self.dummy_text_encoder
-        tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
-        # make sure here that pndm scheduler skips prk
-        sd_pipe = StableDiffusionPipeline(
-            unet=unet,
-            scheduler=scheduler,
-            vae=vae,
-            text_encoder=bert,
-            tokenizer=tokenizer,
-            safety_checker=None,
-            feature_extractor=self.dummy_extractor,
-        )
-        sd_pipe = sd_pipe.to(device)
-        sd_pipe.set_progress_bar_config(disable=None)
-
-        prompt = "A painting of a squirrel eating a burger"
-        generator = torch.Generator(device=device).manual_seed(0)
-        output = sd_pipe([prompt], generator=generator, guidance_scale=6.0, num_inference_steps=2, output_type="np")
-
-        image = output.images
-
-        generator = torch.Generator(device=device).manual_seed(0)
-        image_from_tuple = sd_pipe(
-            [prompt],
-            generator=generator,
-            guidance_scale=6.0,
-            num_inference_steps=2,
-            output_type="np",
-            return_dict=False,
-        )[0]
-
-        image_slice = image[0, -3:, -3:, -1]
-        image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1]
-
-        assert image.shape == (1, 64, 64, 3)
-        expected_slice = np.array([0.5122, 0.5712, 0.4825, 0.5053, 0.5646, 0.4769, 0.5179, 0.4894, 0.4994])
-
-        assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-        assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2
-
-    def test_semantic_diffusion_no_safety_checker(self):
-        pipe = StableDiffusionPipeline.from_pretrained(
-            "hf-internal-testing/tiny-stable-diffusion-lms-pipe", safety_checker=None
-        )
-        assert isinstance(pipe, StableDiffusionPipeline)
-        assert isinstance(pipe.scheduler, LMSDiscreteScheduler)
-        assert pipe.safety_checker is None
-
-        image = pipe("example prompt", num_inference_steps=2).images[0]
-        assert image is not None
-
-        # check that there's no error when saving a pipeline with one of the models being None
-        with tempfile.TemporaryDirectory() as tmpdirname:
-            pipe.save_pretrained(tmpdirname)
-            pipe = StableDiffusionPipeline.from_pretrained(tmpdirname)
-
-        # sanity check that the pipeline still works
-        assert pipe.safety_checker is None
-        image = pipe("example prompt", num_inference_steps=2).images[0]
-        assert image is not None
-
-    @unittest.skipIf(torch_device != "cuda", "This test requires a GPU")
-    def test_semantic_diffusion_fp16(self):
-        """Test that stable diffusion works with fp16"""
-        unet = self.dummy_cond_unet
-        scheduler = PNDMScheduler(skip_prk_steps=True)
-        vae = self.dummy_vae
-        bert = self.dummy_text_encoder
-        tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
-        # put models in fp16
-        unet = unet.half()
-        vae = vae.half()
-        bert = bert.half()
-
-        # make sure here that pndm scheduler skips prk
-        sd_pipe = StableDiffusionPipeline(
-            unet=unet,
-            scheduler=scheduler,
-            vae=vae,
-            text_encoder=bert,
-            tokenizer=tokenizer,
-            safety_checker=None,
-            feature_extractor=self.dummy_extractor,
-        )
-        sd_pipe = sd_pipe.to(torch_device)
-        sd_pipe.set_progress_bar_config(disable=None)
-
-        prompt = "A painting of a squirrel eating a burger"
-        image = sd_pipe([prompt], num_inference_steps=2, output_type="np").images
-
-        assert image.shape == (1, 64, 64, 3)
-
-
-@nightly
-@require_torch_gpu
-class SemanticDiffusionPipelineIntegrationTests(unittest.TestCase):
|
265 |
-
def tearDown(self):
|
266 |
-
# clean up the VRAM after each test
|
267 |
-
super().tearDown()
|
268 |
-
gc.collect()
|
269 |
-
torch.cuda.empty_cache()
|
270 |
-
|
271 |
-
def test_positive_guidance(self):
|
272 |
-
torch_device = "cuda"
|
273 |
-
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
|
274 |
-
pipe = pipe.to(torch_device)
|
275 |
-
pipe.set_progress_bar_config(disable=None)
|
276 |
-
|
277 |
-
prompt = "a photo of a cat"
|
278 |
-
edit = {
|
279 |
-
"editing_prompt": ["sunglasses"],
|
280 |
-
"reverse_editing_direction": [False],
|
281 |
-
"edit_warmup_steps": 10,
|
282 |
-
"edit_guidance_scale": 6,
|
283 |
-
"edit_threshold": 0.95,
|
284 |
-
"edit_momentum_scale": 0.5,
|
285 |
-
"edit_mom_beta": 0.6,
|
286 |
-
}
|
287 |
-
|
288 |
-
seed = 3
|
289 |
-
guidance_scale = 7
|
290 |
-
|
291 |
-
# no sega enabled
|
292 |
-
generator = torch.Generator(torch_device)
|
293 |
-
generator.manual_seed(seed)
|
294 |
-
output = pipe(
|
295 |
-
[prompt],
|
296 |
-
generator=generator,
|
297 |
-
guidance_scale=guidance_scale,
|
298 |
-
num_inference_steps=50,
|
299 |
-
output_type="np",
|
300 |
-
width=512,
|
301 |
-
height=512,
|
302 |
-
)
|
303 |
-
|
304 |
-
image = output.images
|
305 |
-
image_slice = image[0, -3:, -3:, -1]
|
306 |
-
expected_slice = [
|
307 |
-
0.34673113,
|
308 |
-
0.38492733,
|
309 |
-
0.37597352,
|
310 |
-
0.34086335,
|
311 |
-
0.35650748,
|
312 |
-
0.35579205,
|
313 |
-
0.3384763,
|
314 |
-
0.34340236,
|
315 |
-
0.3573271,
|
316 |
-
]
|
317 |
-
|
318 |
-
assert image.shape == (1, 512, 512, 3)
|
319 |
-
|
320 |
-
assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
|
321 |
-
|
322 |
-
# with sega enabled
|
323 |
-
# generator = torch.manual_seed(seed)
|
324 |
-
generator.manual_seed(seed)
|
325 |
-
output = pipe(
|
326 |
-
[prompt],
|
327 |
-
generator=generator,
|
328 |
-
guidance_scale=guidance_scale,
|
329 |
-
num_inference_steps=50,
|
330 |
-
output_type="np",
|
331 |
-
width=512,
|
332 |
-
height=512,
|
333 |
-
**edit,
|
334 |
-
)
|
335 |
-
|
336 |
-
image = output.images
|
337 |
-
image_slice = image[0, -3:, -3:, -1]
|
338 |
-
expected_slice = [
|
339 |
-
0.41887826,
|
340 |
-
0.37728766,
|
341 |
-
0.30138272,
|
342 |
-
0.41416335,
|
343 |
-
0.41664985,
|
344 |
-
0.36283392,
|
345 |
-
0.36191246,
|
346 |
-
0.43364465,
|
347 |
-
0.43001732,
|
348 |
-
]
|
349 |
-
|
350 |
-
assert image.shape == (1, 512, 512, 3)
|
351 |
-
|
352 |
-
assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
|
353 |
-
|
354 |
-
def test_negative_guidance(self):
|
355 |
-
torch_device = "cuda"
|
356 |
-
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
|
357 |
-
pipe = pipe.to(torch_device)
|
358 |
-
pipe.set_progress_bar_config(disable=None)
|
359 |
-
|
360 |
-
prompt = "an image of a crowded boulevard, realistic, 4k"
|
361 |
-
edit = {
|
362 |
-
"editing_prompt": "crowd, crowded, people",
|
363 |
-
"reverse_editing_direction": True,
|
364 |
-
"edit_warmup_steps": 10,
|
365 |
-
"edit_guidance_scale": 8.3,
|
366 |
-
"edit_threshold": 0.9,
|
367 |
-
"edit_momentum_scale": 0.5,
|
368 |
-
"edit_mom_beta": 0.6,
|
369 |
-
}
|
370 |
-
|
371 |
-
seed = 9
|
372 |
-
guidance_scale = 7
|
373 |
-
|
374 |
-
# no sega enabled
|
375 |
-
generator = torch.Generator(torch_device)
|
376 |
-
generator.manual_seed(seed)
|
377 |
-
output = pipe(
|
378 |
-
[prompt],
|
379 |
-
generator=generator,
|
380 |
-
guidance_scale=guidance_scale,
|
381 |
-
num_inference_steps=50,
|
382 |
-
output_type="np",
|
383 |
-
width=512,
|
384 |
-
height=512,
|
385 |
-
)
|
386 |
-
|
387 |
-
image = output.images
|
388 |
-
image_slice = image[0, -3:, -3:, -1]
|
389 |
-
expected_slice = [
|
390 |
-
0.43497998,
|
391 |
-
0.91814065,
|
392 |
-
0.7540739,
|
393 |
-
0.55580205,
|
394 |
-
0.8467265,
|
395 |
-
0.5389691,
|
396 |
-
0.62574506,
|
397 |
-
0.58897763,
|
398 |
-
0.50926757,
|
399 |
-
]
|
400 |
-
|
401 |
-
assert image.shape == (1, 512, 512, 3)
|
402 |
-
|
403 |
-
assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
|
404 |
-
|
405 |
-
# with sega enabled
|
406 |
-
# generator = torch.manual_seed(seed)
|
407 |
-
generator.manual_seed(seed)
|
408 |
-
output = pipe(
|
409 |
-
[prompt],
|
410 |
-
generator=generator,
|
411 |
-
guidance_scale=guidance_scale,
|
412 |
-
num_inference_steps=50,
|
413 |
-
output_type="np",
|
414 |
-
width=512,
|
415 |
-
height=512,
|
416 |
-
**edit,
|
417 |
-
)
|
418 |
-
|
419 |
-
image = output.images
|
420 |
-
image_slice = image[0, -3:, -3:, -1]
|
421 |
-
expected_slice = [
|
422 |
-
0.3089719,
|
423 |
-
0.30500144,
|
424 |
-
0.29016042,
|
425 |
-
0.30630964,
|
426 |
-
0.325687,
|
427 |
-
0.29419225,
|
428 |
-
0.2908091,
|
429 |
-
0.28723598,
|
430 |
-
0.27696294,
|
431 |
-
]
|
432 |
-
|
433 |
-
assert image.shape == (1, 512, 512, 3)
|
434 |
-
|
435 |
-
assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
|
436 |
-
|
437 |
-
def test_multi_cond_guidance(self):
|
438 |
-
torch_device = "cuda"
|
439 |
-
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
|
440 |
-
pipe = pipe.to(torch_device)
|
441 |
-
pipe.set_progress_bar_config(disable=None)
|
442 |
-
|
443 |
-
prompt = "a castle next to a river"
|
444 |
-
edit = {
|
445 |
-
"editing_prompt": ["boat on a river, boat", "monet, impression, sunrise"],
|
446 |
-
"reverse_editing_direction": False,
|
447 |
-
"edit_warmup_steps": [15, 18],
|
448 |
-
"edit_guidance_scale": 6,
|
449 |
-
"edit_threshold": [0.9, 0.8],
|
450 |
-
"edit_momentum_scale": 0.5,
|
451 |
-
"edit_mom_beta": 0.6,
|
452 |
-
}
|
453 |
-
|
454 |
-
seed = 48
|
455 |
-
guidance_scale = 7
|
456 |
-
|
457 |
-
# no sega enabled
|
458 |
-
generator = torch.Generator(torch_device)
|
459 |
-
generator.manual_seed(seed)
|
460 |
-
output = pipe(
|
461 |
-
[prompt],
|
462 |
-
generator=generator,
|
463 |
-
guidance_scale=guidance_scale,
|
464 |
-
num_inference_steps=50,
|
465 |
-
output_type="np",
|
466 |
-
width=512,
|
467 |
-
height=512,
|
468 |
-
)
|
469 |
-
|
470 |
-
image = output.images
|
471 |
-
image_slice = image[0, -3:, -3:, -1]
|
472 |
-
expected_slice = [
|
473 |
-
0.75163555,
|
474 |
-
0.76037145,
|
475 |
-
0.61785,
|
476 |
-
0.9189673,
|
477 |
-
0.8627701,
|
478 |
-
0.85189694,
|
479 |
-
0.8512813,
|
480 |
-
0.87012076,
|
481 |
-
0.8312857,
|
482 |
-
]
|
483 |
-
|
484 |
-
assert image.shape == (1, 512, 512, 3)
|
485 |
-
|
486 |
-
assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
|
487 |
-
|
488 |
-
# with sega enabled
|
489 |
-
# generator = torch.manual_seed(seed)
|
490 |
-
generator.manual_seed(seed)
|
491 |
-
output = pipe(
|
492 |
-
[prompt],
|
493 |
-
generator=generator,
|
494 |
-
guidance_scale=guidance_scale,
|
495 |
-
num_inference_steps=50,
|
496 |
-
output_type="np",
|
497 |
-
width=512,
|
498 |
-
height=512,
|
499 |
-
**edit,
|
500 |
-
)
|
501 |
-
|
502 |
-
image = output.images
|
503 |
-
image_slice = image[0, -3:, -3:, -1]
|
504 |
-
expected_slice = [
|
505 |
-
0.73553365,
|
506 |
-
0.7537271,
|
507 |
-
0.74341905,
|
508 |
-
0.66480356,
|
509 |
-
0.6472925,
|
510 |
-
0.63039416,
|
511 |
-
0.64812905,
|
512 |
-
0.6749717,
|
513 |
-
0.6517102,
|
514 |
-
]
|
515 |
-
|
516 |
-
assert image.shape == (1, 512, 512, 3)
|
517 |
-
|
518 |
-
assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
|
519 |
-
|
520 |
-
def test_guidance_fp16(self):
|
521 |
-
torch_device = "cuda"
|
522 |
-
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
|
523 |
-
pipe = pipe.to(torch_device)
|
524 |
-
pipe.set_progress_bar_config(disable=None)
|
525 |
-
|
526 |
-
prompt = "a photo of a cat"
|
527 |
-
edit = {
|
528 |
-
"editing_prompt": ["sunglasses"],
|
529 |
-
"reverse_editing_direction": [False],
|
530 |
-
"edit_warmup_steps": 10,
|
531 |
-
"edit_guidance_scale": 6,
|
532 |
-
"edit_threshold": 0.95,
|
533 |
-
"edit_momentum_scale": 0.5,
|
534 |
-
"edit_mom_beta": 0.6,
|
535 |
-
}
|
536 |
-
|
537 |
-
seed = 3
|
538 |
-
guidance_scale = 7
|
539 |
-
|
540 |
-
# no sega enabled
|
541 |
-
generator = torch.Generator(torch_device)
|
542 |
-
generator.manual_seed(seed)
|
543 |
-
output = pipe(
|
544 |
-
[prompt],
|
545 |
-
generator=generator,
|
546 |
-
guidance_scale=guidance_scale,
|
547 |
-
num_inference_steps=50,
|
548 |
-
output_type="np",
|
549 |
-
width=512,
|
550 |
-
height=512,
|
551 |
-
)
|
552 |
-
|
553 |
-
image = output.images
|
554 |
-
image_slice = image[0, -3:, -3:, -1]
|
555 |
-
expected_slice = [
|
556 |
-
0.34887695,
|
557 |
-
0.3876953,
|
558 |
-
0.375,
|
559 |
-
0.34423828,
|
560 |
-
0.3581543,
|
561 |
-
0.35717773,
|
562 |
-
0.3383789,
|
563 |
-
0.34570312,
|
564 |
-
0.359375,
|
565 |
-
]
|
566 |
-
|
567 |
-
assert image.shape == (1, 512, 512, 3)
|
568 |
-
|
569 |
-
assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
|
570 |
-
|
571 |
-
# with sega enabled
|
572 |
-
# generator = torch.manual_seed(seed)
|
573 |
-
generator.manual_seed(seed)
|
574 |
-
output = pipe(
|
575 |
-
[prompt],
|
576 |
-
generator=generator,
|
577 |
-
guidance_scale=guidance_scale,
|
578 |
-
num_inference_steps=50,
|
579 |
-
output_type="np",
|
580 |
-
width=512,
|
581 |
-
height=512,
|
582 |
-
**edit,
|
583 |
-
)
|
584 |
-
|
585 |
-
image = output.images
|
586 |
-
image_slice = image[0, -3:, -3:, -1]
|
587 |
-
expected_slice = [
|
588 |
-
0.42285156,
|
589 |
-
0.36914062,
|
590 |
-
0.29077148,
|
591 |
-
0.42041016,
|
592 |
-
0.41918945,
|
593 |
-
0.35498047,
|
594 |
-
0.3618164,
|
595 |
-
0.4423828,
|
596 |
-
0.43115234,
|
597 |
-
]
|
598 |
-
|
599 |
-
assert image.shape == (1, 512, 512, 3)
|
600 |
-
|
601 |
-
assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
spaces/Andy1621/uniformer_image_detection/configs/cityscapes/mask_rcnn_r50_fpn_1x_cityscapes.py
DELETED
@@ -1,46 +0,0 @@
_base_ = [
    '../_base_/models/mask_rcnn_r50_fpn.py',
    '../_base_/datasets/cityscapes_instance.py', '../_base_/default_runtime.py'
]
model = dict(
    pretrained=None,
    roi_head=dict(
        bbox_head=dict(
            type='Shared2FCBBoxHead',
            in_channels=256,
            fc_out_channels=1024,
            roi_feat_size=7,
            num_classes=8,
            bbox_coder=dict(
                type='DeltaXYWHBBoxCoder',
                target_means=[0., 0., 0., 0.],
                target_stds=[0.1, 0.1, 0.2, 0.2]),
            reg_class_agnostic=False,
            loss_cls=dict(
                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
            loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)),
        mask_head=dict(
            type='FCNMaskHead',
            num_convs=4,
            in_channels=256,
            conv_out_channels=256,
            num_classes=8,
            loss_mask=dict(
                type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))))
# optimizer
# lr is set for a batch size of 8
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
# learning policy
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=0.001,
    # [7] yields higher performance than [6]
    step=[7])
runner = dict(
    type='EpochBasedRunner', max_epochs=8)  # actual epoch = 8 * 8 = 64
log_config = dict(interval=100)
# For better, more stable performance initialize from COCO
load_from = 'https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_fpn_1x_coco/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth'  # noqa

spaces/Andy1621/uniformer_image_detection/configs/dcn/README.md
DELETED
@@ -1,52 +0,0 @@
# Deformable Convolutional Networks

## Introduction

[ALGORITHM]

```none
@inproceedings{dai2017deformable,
  title={Deformable Convolutional Networks},
  author={Dai, Jifeng and Qi, Haozhi and Xiong, Yuwen and Li, Yi and Zhang, Guodong and Hu, Han and Wei, Yichen},
  booktitle={Proceedings of the IEEE international conference on computer vision},
  year={2017}
}
```

[ALGORITHM]

```none
@article{zhu2018deformable,
  title={Deformable ConvNets v2: More Deformable, Better Results},
  author={Zhu, Xizhou and Hu, Han and Lin, Stephen and Dai, Jifeng},
  journal={arXiv preprint arXiv:1811.11168},
  year={2018}
}
```

## Results and Models

| Backbone | Model | Style | Conv | Pool | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download |
|:----------------:|:------------:|:-------:|:-------------:|:------:|:-------:|:--------:|:--------------:|:------:|:-------:|:------:|:--------:|
| R-50-FPN | Faster | pytorch | dconv(c3-c5) | - | 1x | 4.0 | 17.8 | 41.3 | | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/dcn/faster_rcnn_r50_fpn_dconv_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/dcn/faster_rcnn_r50_fpn_dconv_c3-c5_1x_coco/faster_rcnn_r50_fpn_dconv_c3-c5_1x_coco_20200130-d68aed1e.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/dcn/faster_rcnn_r50_fpn_dconv_c3-c5_1x_coco/faster_rcnn_r50_fpn_dconv_c3-c5_1x_coco_20200130_212941.log.json) |
| R-50-FPN | Faster | pytorch | mdconv(c3-c5) | - | 1x | 4.1 | 17.6 | 41.4 | | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/dcn/faster_rcnn_r50_fpn_mdconv_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/dcn/faster_rcnn_r50_fpn_mdconv_c3-c5_1x_coco/faster_rcnn_r50_fpn_mdconv_c3-c5_1x_coco_20200130-d099253b.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/dcn/faster_rcnn_r50_fpn_mdconv_c3-c5_1x_coco/faster_rcnn_r50_fpn_mdconv_c3-c5_1x_coco_20200130_222144.log.json) |
| *R-50-FPN (dg=4) | Faster | pytorch | mdconv(c3-c5) | - | 1x | 4.2 | 17.4 | 41.5 | | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/dcn/faster_rcnn_r50_fpn_mdconv_c3-c5_group4_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/dcn/faster_rcnn_r50_fpn_mdconv_c3-c5_group4_1x_coco/faster_rcnn_r50_fpn_mdconv_c3-c5_group4_1x_coco_20200130-01262257.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/dcn/faster_rcnn_r50_fpn_mdconv_c3-c5_group4_1x_coco/faster_rcnn_r50_fpn_mdconv_c3-c5_group4_1x_coco_20200130_222058.log.json) |
| R-50-FPN | Faster | pytorch | - | dpool | 1x | 5.0 | 17.2 | 38.9 | | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/dcn/faster_rcnn_r50_fpn_dpool_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/dcn/faster_rcnn_r50_fpn_dpool_1x_coco/faster_rcnn_r50_fpn_dpool_1x_coco_20200307-90d3c01d.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/dcn/faster_rcnn_r50_fpn_dpool_1x_coco/faster_rcnn_r50_fpn_dpool_1x_coco_20200307_203250.log.json) |
| R-50-FPN | Faster | pytorch | - | mdpool | 1x | 5.8 | 16.6 | 38.7 | | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/dcn/faster_rcnn_r50_fpn_mdpool_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/dcn/faster_rcnn_r50_fpn_mdpool_1x_coco/faster_rcnn_r50_fpn_mdpool_1x_coco_20200307-c0df27ff.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/dcn/faster_rcnn_r50_fpn_mdpool_1x_coco/faster_rcnn_r50_fpn_mdpool_1x_coco_20200307_203304.log.json) |
| R-101-FPN | Faster | pytorch | dconv(c3-c5) | - | 1x | 6.0 | 12.5 | 42.7 | | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/dcn/faster_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/dcn/faster_rcnn_r101_fpn_dconv_c3-c5_1x_coco/faster_rcnn_r101_fpn_dconv_c3-c5_1x_coco_20200203-1377f13d.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/dcn/faster_rcnn_r101_fpn_dconv_c3-c5_1x_coco/faster_rcnn_r101_fpn_dconv_c3-c5_1x_coco_20200203_230019.log.json) |
| X-101-32x4d-FPN | Faster | pytorch | dconv(c3-c5) | - | 1x | 7.3 | 10.0 | 44.5 | | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/dcn/faster_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/dcn/faster_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco/faster_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco_20200203-4f85c69c.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/dcn/faster_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco/faster_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco_20200203_001325.log.json) |
| R-50-FPN | Mask | pytorch | dconv(c3-c5) | - | 1x | 4.5 | 15.4 | 41.8 | 37.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/dcn/mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/dcn/mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco/mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco_20200203-4d9ad43b.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/dcn/mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco/mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco_20200203_061339.log.json) |
| R-50-FPN | Mask | pytorch | mdconv(c3-c5) | - | 1x | 4.5 | 15.1 | 41.5 | 37.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/dcn/mask_rcnn_r50_fpn_mdconv_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/dcn/mask_rcnn_r50_fpn_mdconv_c3-c5_1x_coco/mask_rcnn_r50_fpn_mdconv_c3-c5_1x_coco_20200203-ad97591f.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/dcn/mask_rcnn_r50_fpn_mdconv_c3-c5_1x_coco/mask_rcnn_r50_fpn_mdconv_c3-c5_1x_coco_20200203_063443.log.json) |
| R-101-FPN | Mask | pytorch | dconv(c3-c5) | - | 1x | 6.5 | 11.7 | 43.5 | 38.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/dcn/mask_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/dcn/mask_rcnn_r101_fpn_dconv_c3-c5_1x_coco/mask_rcnn_r101_fpn_dconv_c3-c5_1x_coco_20200216-a71f5bce.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/dcn/mask_rcnn_r101_fpn_dconv_c3-c5_1x_coco/mask_rcnn_r101_fpn_dconv_c3-c5_1x_coco_20200216_191601.log.json) |
| R-50-FPN | Cascade | pytorch | dconv(c3-c5) | - | 1x | 4.5 | 14.6 | 43.8 | | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/dcn/cascade_rcnn_r50_fpn_dconv_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/dcn/cascade_rcnn_r50_fpn_dconv_c3-c5_1x_coco/cascade_rcnn_r50_fpn_dconv_c3-c5_1x_coco_20200130-2f1fca44.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/dcn/cascade_rcnn_r50_fpn_dconv_c3-c5_1x_coco/cascade_rcnn_r50_fpn_dconv_c3-c5_1x_coco_20200130_220843.log.json) |
| R-101-FPN | Cascade | pytorch | dconv(c3-c5) | - | 1x | 6.4 | 11.0 | 45.0 | | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/dcn/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/dcn/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco_20200203-3b2f0594.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/dcn/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco_20200203_224829.log.json) |
| R-50-FPN | Cascade Mask | pytorch | dconv(c3-c5) | - | 1x | 6.0 | 10.0 | 44.4 | 38.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/dcn/cascade_mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/dcn/cascade_mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco/cascade_mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco_20200202-42e767a2.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/dcn/cascade_mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco/cascade_mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco_20200202_010309.log.json) |
| R-101-FPN | Cascade Mask | pytorch | dconv(c3-c5) | - | 1x | 8.0 | 8.6 | 45.8 | 39.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/dcn/cascade_mask_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/dcn/cascade_mask_rcnn_r101_fpn_dconv_c3-c5_1x_coco/cascade_mask_rcnn_r101_fpn_dconv_c3-c5_1x_coco_20200204-df0c5f10.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/dcn/cascade_mask_rcnn_r101_fpn_dconv_c3-c5_1x_coco/cascade_mask_rcnn_r101_fpn_dconv_c3-c5_1x_coco_20200204_134006.log.json) |
| X-101-32x4d-FPN | Cascade Mask | pytorch | dconv(c3-c5) | - | 1x | 9.2 | | 47.3 | 41.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/dcn/cascade_mask_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/dcn/cascade_mask_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco-e75f90c8.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/dcn/cascade_mask_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_dconv_c3-c5_1x_coco-20200606_183737.log.json) |

**Notes:**

- `dconv` and `mdconv` denote (modulated) deformable convolution, and `c3-c5` means adding dconv in ResNet stages 3 to 5. `dpool` and `mdpool` denote (modulated) deformable RoI pooling.
- The dcn ops are modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch, which should be more memory-efficient and slightly faster.
- (*) For R-50-FPN (dg=4), dg is short for deformable_group. This model is trained and tested on an Amazon EC2 p3dn.24xlarge instance.
- **Memory and train/inference time figures are outdated.**

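For context on the `dconv(c3-c5)` rows in the table above: in mmdetection these variants are thin config overrides on the corresponding baselines. A minimal sketch of such an override, assuming mmdetection v2's standard `dcn`/`stage_with_dcn` backbone options (the base path is illustrative, not taken from this diff):

```python
# Sketch of a dconv(c3-c5) override config (mmdetection v2 style, assumed).
_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'  # illustrative base config
model = dict(
    backbone=dict(
        # Swap plain convs for deformable convs in ResNet stages c3-c5.
        dcn=dict(type='DCN', deform_groups=1, fallback_on_stride=False),
        stage_with_dcn=(False, True, True, True)))
```
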
spaces/Andy1621/uniformer_image_detection/mmdet/models/detectors/cornernet.py
DELETED
@@ -1,95 +0,0 @@
import torch

from mmdet.core import bbox2result, bbox_mapping_back
from ..builder import DETECTORS
from .single_stage import SingleStageDetector


@DETECTORS.register_module()
class CornerNet(SingleStageDetector):
    """CornerNet.

    This detector is the implementation of the paper `CornerNet: Detecting
    Objects as Paired Keypoints <https://arxiv.org/abs/1808.01244>`_ .
    """

    def __init__(self,
                 backbone,
                 neck,
                 bbox_head,
                 train_cfg=None,
                 test_cfg=None,
                 pretrained=None):
        super(CornerNet, self).__init__(backbone, neck, bbox_head, train_cfg,
                                        test_cfg, pretrained)

    def merge_aug_results(self, aug_results, img_metas):
        """Merge augmented detection bboxes and scores.

        Args:
            aug_results (list[list[Tensor]]): Det_bboxes and det_labels of each
                image.
            img_metas (list[list[dict]]): Meta information of each image, e.g.,
                image size, scaling factor, etc.

        Returns:
            tuple: (bboxes, labels)
        """
        recovered_bboxes, aug_labels = [], []
        for bboxes_labels, img_info in zip(aug_results, img_metas):
            img_shape = img_info[0]['img_shape']  # using shape before padding
            scale_factor = img_info[0]['scale_factor']
            flip = img_info[0]['flip']
            bboxes, labels = bboxes_labels
            bboxes, scores = bboxes[:, :4], bboxes[:, -1:]
            bboxes = bbox_mapping_back(bboxes, img_shape, scale_factor, flip)
            recovered_bboxes.append(torch.cat([bboxes, scores], dim=-1))
            aug_labels.append(labels)

        bboxes = torch.cat(recovered_bboxes, dim=0)
        labels = torch.cat(aug_labels)

        if bboxes.shape[0] > 0:
            out_bboxes, out_labels = self.bbox_head._bboxes_nms(
                bboxes, labels, self.bbox_head.test_cfg)
        else:
            out_bboxes, out_labels = bboxes, labels

        return out_bboxes, out_labels

    def aug_test(self, imgs, img_metas, rescale=False):
        """Augment testing of CornerNet.

        Args:
            imgs (list[Tensor]): Augmented images.
            img_metas (list[list[dict]]): Meta information of each image, e.g.,
                image size, scaling factor, etc.
            rescale (bool): If True, return boxes in original image space.
                Default: False.

        Note:
            ``imgs`` must include flipped image pairs.

        Returns:
            list[list[np.ndarray]]: BBox results of each image and classes.
                The outer list corresponds to each image. The inner list
                corresponds to each class.
        """
        img_inds = list(range(len(imgs)))

        assert img_metas[0][0]['flip'] + img_metas[1][0]['flip'], (
            'aug test must have flipped image pair')
        aug_results = []
        for ind, flip_ind in zip(img_inds[0::2], img_inds[1::2]):
            img_pair = torch.cat([imgs[ind], imgs[flip_ind]])
            x = self.extract_feat(img_pair)
            outs = self.bbox_head(x)
            bbox_list = self.bbox_head.get_bboxes(
                *outs, [img_metas[ind], img_metas[flip_ind]], False, False)
            aug_results.append(bbox_list[0])
            aug_results.append(bbox_list[1])

        bboxes, labels = self.merge_aug_results(aug_results, img_metas)
        bbox_results = bbox2result(bboxes, labels, self.bbox_head.num_classes)

        return [bbox_results]

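The pairing logic in `aug_test` above is easy to misread: the inputs must arrive as interleaved (original, flipped) pairs, which `zip(img_inds[0::2], img_inds[1::2])` then walks two at a time. A standalone sketch of just that index pairing:

```python
# How aug_test pairs augmented inputs: even indices are originals,
# odd indices are their flipped counterparts.
img_inds = list(range(4))  # e.g. [img0, img0_flip, img1, img1_flip]
pairs = list(zip(img_inds[0::2], img_inds[1::2]))
print(pairs)  # [(0, 1), (2, 3)] -> one (original, flipped) pair per image
```
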
spaces/Andy1621/uniformer_image_segmentation/configs/_base_/datasets/pascal_voc12.py
DELETED
@@ -1,57 +0,0 @@
# dataset settings
dataset_type = 'PascalVOCDataset'
data_root = 'data/VOCdevkit/VOC2012'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (512, 512)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)),
    dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PhotoMetricDistortion'),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_semantic_seg']),
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(2048, 512),
        # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75],
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
data = dict(
    samples_per_gpu=4,
    workers_per_gpu=4,
    train=dict(
        type=dataset_type,
        data_root=data_root,
        img_dir='JPEGImages',
        ann_dir='SegmentationClass',
        split='ImageSets/Segmentation/train.txt',
        pipeline=train_pipeline),
    val=dict(
        type=dataset_type,
        data_root=data_root,
        img_dir='JPEGImages',
        ann_dir='SegmentationClass',
        split='ImageSets/Segmentation/val.txt',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        data_root=data_root,
        img_dir='JPEGImages',
        ann_dir='SegmentationClass',
        split='ImageSets/Segmentation/val.txt',
        pipeline=test_pipeline))

spaces/Andy1621/uniformer_image_segmentation/configs/ann/ann_r101-d8_769x769_80k_cityscapes.py
DELETED
@@ -1,2 +0,0 @@
_base_ = './ann_r50-d8_769x769_80k_cityscapes.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))

spaces/Andy1621/uniformer_image_segmentation/configs/hrnet/fcn_hr18s_480x480_80k_pascal_context_59.py
DELETED
@@ -1,9 +0,0 @@
_base_ = './fcn_hr18_480x480_80k_pascal_context_59.py'
model = dict(
    pretrained='open-mmlab://msra/hrnetv2_w18_small',
    backbone=dict(
        extra=dict(
            stage1=dict(num_blocks=(2, )),
            stage2=dict(num_blocks=(2, 2)),
            stage3=dict(num_modules=3, num_blocks=(2, 2, 2)),
            stage4=dict(num_modules=2, num_blocks=(2, 2, 2, 2)))))

spaces/Arnx/MusicGenXvAKN/tests/common_utils/temp_utils.py
DELETED
@@ -1,56 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

import os
import tempfile


class TempDirMixin:
    """Mixin to provide easy access to temp dir.
    """

    temp_dir_ = None

    @classmethod
    def get_base_temp_dir(cls):
        # If AUDIOCRAFT_TEST_DIR is set, use it instead of a temporary directory.
        # This is handy for debugging.
        key = "AUDIOCRAFT_TEST_DIR"
        if key in os.environ:
            return os.environ[key]
        if cls.temp_dir_ is None:
            cls.temp_dir_ = tempfile.TemporaryDirectory()
        return cls.temp_dir_.name

    @classmethod
    def tearDownClass(cls):
        if cls.temp_dir_ is not None:
            try:
                cls.temp_dir_.cleanup()
                cls.temp_dir_ = None
            except PermissionError:
                # On Windows there is a known issue with `shutil.rmtree`,
                # which fails intermittently.
                # https://github.com/python/cpython/issues/74168
                # Following the above thread, we ignore it.
                pass
        super().tearDownClass()

    @property
    def id(self):
        return self.__class__.__name__

    def get_temp_path(self, *paths):
        temp_dir = os.path.join(self.get_base_temp_dir(), self.id)
        path = os.path.join(temp_dir, *paths)
        os.makedirs(os.path.dirname(path), exist_ok=True)
        return path

    def get_temp_dir(self, *paths):
        temp_dir = os.path.join(self.get_base_temp_dir(), self.id)
        path = os.path.join(temp_dir, *paths)
        os.makedirs(path, exist_ok=True)
        return path

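For reference, a minimal sketch of how this mixin is typically combined with `unittest.TestCase` (the test class and file names here are hypothetical, not from the repo):

```python
import os
import unittest

class ExampleTest(TempDirMixin, unittest.TestCase):  # hypothetical test class
    def test_write(self):
        # get_temp_path nests files under <base_temp_dir>/ExampleTest/ via the `id` property
        path = self.get_temp_path("out", "result.txt")
        with open(path, "w") as f:
            f.write("ok")
        self.assertTrue(os.path.exists(path))
```
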
spaces/Artrajz/vits-simple-api/utils/load_model.py
DELETED
@@ -1,185 +0,0 @@
import os
import json
import logging
import config
import numpy as np

import utils
from utils.data_utils import check_is_none, HParams
from vits import VITS
from voice import TTS
from config import DEVICE as device
from utils.lang_dict import lang_dict
from contants import ModelType


def recognition_model_type(hps: HParams) -> str:
    # model_config = json.load(model_config_json)
    symbols = getattr(hps, "symbols", None)
    # symbols = model_config.get("symbols", None)
    emotion_embedding = getattr(hps.data, "emotion_embedding", False)

    if "use_spk_conditioned_encoder" in hps.model:
        model_type = ModelType.BERT_VITS2
        return model_type

    if symbols is not None:
        if not emotion_embedding:
            model_type = ModelType.VITS
        else:
            model_type = ModelType.W2V2_VITS
    else:
        model_type = ModelType.HUBERT_VITS

    return model_type


def load_npy(emotion_reference_npy):
    if isinstance(emotion_reference_npy, list):
        # check that every entry in emotion_reference_npy ends with .npy
        for i in emotion_reference_npy:
            model_extension = os.path.splitext(i)[1]
            if model_extension != ".npy":
                raise ValueError(f"Unsupported model type: {model_extension}")

        # merge npy files
        emotion_reference = np.empty((0, 1024))
        for i in emotion_reference_npy:
            tmp = np.load(i).reshape(-1, 1024)
            emotion_reference = np.append(emotion_reference, tmp, axis=0)

    elif os.path.isdir(emotion_reference_npy):
        emotion_reference = np.empty((0, 1024))
        for root, dirs, files in os.walk(emotion_reference_npy):
            for file_name in files:
                # skip files that do not end with .npy
                model_extension = os.path.splitext(file_name)[1]
                if model_extension != ".npy":
                    continue
                file_path = os.path.join(root, file_name)

                # merge npy files
                tmp = np.load(file_path).reshape(-1, 1024)
                emotion_reference = np.append(emotion_reference, tmp, axis=0)

    elif os.path.isfile(emotion_reference_npy):
        # check that emotion_reference_npy ends with .npy
        model_extension = os.path.splitext(emotion_reference_npy)[1]
        if model_extension != ".npy":
            raise ValueError(f"Unsupported model type: {model_extension}")

        emotion_reference = np.load(emotion_reference_npy)
    logging.info(f"Loaded emotional dimension npy range:{len(emotion_reference)}")
    return emotion_reference


def parse_models(model_list):
    categorized_models = {
        ModelType.VITS: [],
        ModelType.HUBERT_VITS: [],
        ModelType.W2V2_VITS: [],
        ModelType.BERT_VITS2: []
    }

    for model_info in model_list:
        config_path = model_info[1]
        hps = utils.get_hparams_from_file(config_path)
        model_info.append(hps)
        model_type = recognition_model_type(hps)
        # with open(config_path, 'r', encoding='utf-8') as model_config:
        #     model_type = recognition_model_type(model_config)
        if model_type in categorized_models:
            categorized_models[model_type].append(model_info)

    return categorized_models


def merge_models(model_list, model_class, model_type, additional_arg=None):
    id_mapping_objs = []
    speakers = []
    new_id = 0

    for obj_id, (model_path, config_path, hps) in enumerate(model_list):
        obj_args = {
            "model": model_path,
            "config": hps,
            "model_type": model_type,
            "device": device
        }

        if model_type == ModelType.BERT_VITS2:
            from bert_vits2.utils import process_legacy_versions
            legacy_versions = process_legacy_versions(hps)
            key = f"{model_type.value}_v{legacy_versions}" if legacy_versions else model_type.value
        else:
            key = getattr(hps.data, "text_cleaners", ["none"])[0]

        if additional_arg:
            obj_args.update(additional_arg)

        obj = model_class(**obj_args)

        lang = lang_dict.get(key, ["unknown"])

        for real_id, name in enumerate(obj.get_speakers()):
            id_mapping_objs.append([real_id, obj, obj_id])
            speakers.append({"id": new_id, "name": name, "lang": lang})
            new_id += 1

    return id_mapping_objs, speakers


def load_model(model_list) -> TTS:
    categorized_models = parse_models(model_list)

    # Handle VITS
    vits_objs, vits_speakers = merge_models(categorized_models[ModelType.VITS], VITS, ModelType.VITS)

    # Handle HUBERT-VITS
    hubert_vits_objs, hubert_vits_speakers = [], []
    if len(categorized_models[ModelType.HUBERT_VITS]) != 0:
        if getattr(config, "HUBERT_SOFT_MODEL", None) is None or check_is_none(config.HUBERT_SOFT_MODEL):
            raise ValueError("Please configure HUBERT_SOFT_MODEL path in config.py")
        try:
            from vits.hubert_model import hubert_soft
            hubert = hubert_soft(config.HUBERT_SOFT_MODEL)
        except Exception as e:
            raise ValueError(f"Load HUBERT_SOFT_MODEL failed {e}")

        hubert_vits_objs, hubert_vits_speakers = merge_models(categorized_models[ModelType.HUBERT_VITS], VITS, ModelType.HUBERT_VITS,
                                                              additional_arg={"additional_model": hubert})

    # Handle W2V2-VITS
    w2v2_vits_objs, w2v2_vits_speakers = [], []
    w2v2_emotion_count = 0
    if len(categorized_models[ModelType.W2V2_VITS]) != 0:
        if getattr(config, "DIMENSIONAL_EMOTION_NPY", None) is None or check_is_none(
                config.DIMENSIONAL_EMOTION_NPY):
            raise ValueError("Please configure DIMENSIONAL_EMOTION_NPY path in config.py")
        try:
            emotion_reference = load_npy(config.DIMENSIONAL_EMOTION_NPY)
        except Exception as e:
            emotion_reference = None
            raise ValueError(f"Load DIMENSIONAL_EMOTION_NPY failed {e}")

        w2v2_vits_objs, w2v2_vits_speakers = merge_models(categorized_models[ModelType.W2V2_VITS], VITS, ModelType.W2V2_VITS,
                                                          additional_arg={"additional_model": emotion_reference})
        w2v2_emotion_count = len(emotion_reference) if emotion_reference is not None else 0

    # Handle BERT-VITS2
    bert_vits2_objs, bert_vits2_speakers = [], []
    if len(categorized_models[ModelType.BERT_VITS2]) != 0:
        from bert_vits2 import Bert_VITS2
        bert_vits2_objs, bert_vits2_speakers = merge_models(categorized_models[ModelType.BERT_VITS2], Bert_VITS2, ModelType.BERT_VITS2)

    voice_obj = {ModelType.VITS: vits_objs,
                 ModelType.HUBERT_VITS: hubert_vits_objs,
                 ModelType.W2V2_VITS: w2v2_vits_objs,
                 ModelType.BERT_VITS2: bert_vits2_objs}
    voice_speakers = {ModelType.VITS.value: vits_speakers,
                      ModelType.HUBERT_VITS.value: hubert_vits_speakers,
                      ModelType.W2V2_VITS.value: w2v2_vits_speakers,
                      ModelType.BERT_VITS2.value: bert_vits2_speakers}

    tts = TTS(voice_obj, voice_speakers, device=device, w2v2_emotion_count=w2v2_emotion_count)
    return tts

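For reference, a minimal sketch of how `load_model` is driven (the paths are hypothetical; each entry is a `[model_path, config_path]` list, to which `parse_models` appends the parsed `HParams` as a third element):

```python
# Hypothetical model list; entries must be lists, since parse_models appends to them.
model_list = [
    ["data/models/example/G_latest.pth", "data/models/example/config.json"],
]
tts = load_model(model_list)  # returns a TTS wrapping every recognized model type
```
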
spaces/Banbri/zcvzcv/src/components/ui/toast.tsx
DELETED
@@ -1,127 +0,0 @@
import * as React from "react"
import * as ToastPrimitives from "@radix-ui/react-toast"
import { cva, type VariantProps } from "class-variance-authority"
import { X } from "lucide-react"

import { cn } from "@/lib/utils"

const ToastProvider = ToastPrimitives.Provider

const ToastViewport = React.forwardRef<
  React.ElementRef<typeof ToastPrimitives.Viewport>,
  React.ComponentPropsWithoutRef<typeof ToastPrimitives.Viewport>
>(({ className, ...props }, ref) => (
  <ToastPrimitives.Viewport
    ref={ref}
    className={cn(
      "fixed top-0 z-[100] flex max-h-screen w-full flex-col-reverse p-4 sm:bottom-0 sm:right-0 sm:top-auto sm:flex-col md:max-w-[420px]",
      className
    )}
    {...props}
  />
))
ToastViewport.displayName = ToastPrimitives.Viewport.displayName

const toastVariants = cva(
  "group pointer-events-auto relative flex w-full items-center justify-between space-x-4 overflow-hidden rounded-md border border-stone-200 p-6 pr-8 shadow-lg transition-all data-[swipe=cancel]:translate-x-0 data-[swipe=end]:translate-x-[var(--radix-toast-swipe-end-x)] data-[swipe=move]:translate-x-[var(--radix-toast-swipe-move-x)] data-[swipe=move]:transition-none data-[state=open]:animate-in data-[state=closed]:animate-out data-[swipe=end]:animate-out data-[state=closed]:fade-out-80 data-[state=closed]:slide-out-to-right-full data-[state=open]:slide-in-from-top-full data-[state=open]:sm:slide-in-from-bottom-full dark:border-stone-800",
  {
    variants: {
      variant: {
        default: "border bg-white text-stone-950 dark:bg-stone-950 dark:text-stone-50",
        destructive:
          "destructive group border-red-500 bg-red-500 text-stone-50 dark:border-red-900 dark:bg-red-900 dark:text-stone-50",
      },
    },
    defaultVariants: {
      variant: "default",
    },
  }
)

const Toast = React.forwardRef<
  React.ElementRef<typeof ToastPrimitives.Root>,
  React.ComponentPropsWithoutRef<typeof ToastPrimitives.Root> &
    VariantProps<typeof toastVariants>
>(({ className, variant, ...props }, ref) => {
  return (
    <ToastPrimitives.Root
      ref={ref}
      className={cn(toastVariants({ variant }), className)}
      {...props}
    />
  )
})
Toast.displayName = ToastPrimitives.Root.displayName

const ToastAction = React.forwardRef<
  React.ElementRef<typeof ToastPrimitives.Action>,
  React.ComponentPropsWithoutRef<typeof ToastPrimitives.Action>
>(({ className, ...props }, ref) => (
  <ToastPrimitives.Action
    ref={ref}
    className={cn(
      "inline-flex h-8 shrink-0 items-center justify-center rounded-md border border-stone-200 bg-transparent px-3 text-sm font-medium ring-offset-white transition-colors hover:bg-stone-100 focus:outline-none focus:ring-2 focus:ring-stone-950 focus:ring-offset-2 disabled:pointer-events-none disabled:opacity-50 group-[.destructive]:border-stone-100/40 group-[.destructive]:hover:border-red-500/30 group-[.destructive]:hover:bg-red-500 group-[.destructive]:hover:text-stone-50 group-[.destructive]:focus:ring-red-500 dark:border-stone-800 dark:ring-offset-stone-950 dark:hover:bg-stone-800 dark:focus:ring-stone-300 dark:group-[.destructive]:border-stone-800/40 dark:group-[.destructive]:hover:border-red-900/30 dark:group-[.destructive]:hover:bg-red-900 dark:group-[.destructive]:hover:text-stone-50 dark:group-[.destructive]:focus:ring-red-900",
      className
    )}
    {...props}
  />
))
ToastAction.displayName = ToastPrimitives.Action.displayName

const ToastClose = React.forwardRef<
  React.ElementRef<typeof ToastPrimitives.Close>,
  React.ComponentPropsWithoutRef<typeof ToastPrimitives.Close>
>(({ className, ...props }, ref) => (
  <ToastPrimitives.Close
    ref={ref}
    className={cn(
      "absolute right-2 top-2 rounded-md p-1 text-stone-950/50 opacity-0 transition-opacity hover:text-stone-950 focus:opacity-100 focus:outline-none focus:ring-2 group-hover:opacity-100 group-[.destructive]:text-red-300 group-[.destructive]:hover:text-red-50 group-[.destructive]:focus:ring-red-400 group-[.destructive]:focus:ring-offset-red-600 dark:text-stone-50/50 dark:hover:text-stone-50",
      className
    )}
    toast-close=""
    {...props}
  >
    <X className="h-4 w-4" />
  </ToastPrimitives.Close>
))
ToastClose.displayName = ToastPrimitives.Close.displayName

const ToastTitle = React.forwardRef<
  React.ElementRef<typeof ToastPrimitives.Title>,
  React.ComponentPropsWithoutRef<typeof ToastPrimitives.Title>
>(({ className, ...props }, ref) => (
  <ToastPrimitives.Title
    ref={ref}
    className={cn("text-sm font-semibold", className)}
    {...props}
  />
))
ToastTitle.displayName = ToastPrimitives.Title.displayName

const ToastDescription = React.forwardRef<
  React.ElementRef<typeof ToastPrimitives.Description>,
  React.ComponentPropsWithoutRef<typeof ToastPrimitives.Description>
>(({ className, ...props }, ref) => (
  <ToastPrimitives.Description
    ref={ref}
    className={cn("text-sm opacity-90", className)}
    {...props}
  />
))
ToastDescription.displayName = ToastPrimitives.Description.displayName

type ToastProps = React.ComponentPropsWithoutRef<typeof Toast>

type ToastActionElement = React.ReactElement<typeof ToastAction>

export {
  type ToastProps,
  type ToastActionElement,
  ToastProvider,
  ToastViewport,
  Toast,
  ToastTitle,
  ToastDescription,
  ToastClose,
  ToastAction,
}

spaces/Basil2k4/VPSnguyenmanh/README.md
DELETED
@@ -1,10 +0,0 @@
---
title: testv2
emoji: 🦊
sdk: docker
colorFrom: indigo
colorTo: pink
app_port: 6901
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

spaces/Benson/text-generation/Examples/Descargar Azul Cruz Azul Escudo Aplicacin.md
DELETED
@@ -1,158 +0,0 @@

<h1>How to Download and Use the Blue Cross Blue Shield App</h1>
<p>If you are looking for a convenient way to manage your health insurance plan, you may want to consider downloading and using the Blue Cross Blue Shield (BCBS) app. The BCBS app is a mobile application that lets you access your benefits, claims, incentives, and more anytime, anywhere. In this article, we will explain what BCBS is, what the app can do for you, how to download and use it, and what other users think of it.</p>
<h2>What is Blue Cross Blue Shield?</h2>
<h3>A brief introduction to the company and its services</h3>
<p>Blue Cross Blue Shield is one of the largest and best-known health insurance companies in the United States. It is made up of 36 independent companies that operate in different regions and states. BCBS offers a variety of health insurance products for individuals, families, employers, and Medicare beneficiaries. BCBS covers more than 100 million people nationwide and has a network of more than 1.7 million doctors, hospitals, and other providers.</p>
<h2>download blue cross blue shield app</h2><br /><p><b><b>Download File</b> 🗹 <a href="https://bltlly.com/2v6KpI">https://bltlly.com/2v6KpI</a></b></p><br /><br />
<h3>The benefits of being a BCBS member</h3>
<p>As a BCBS member, you can enjoy many benefits that can help you save money, improve your health, and access quality care. Some of these benefits include:</p>
<ul>
<li>Flexible, affordable plans that fit your needs and budget</li>
<li>Access to medical assistance services, doctors, and hospitals in most countries around the world</li>
<li>Discounts on health and wellness programs, such as fitness, nutrition, weight management, smoking cessation, etc.</li>
<li>Online tools and resources that help you manage your plan, track your claims, find providers, estimate costs, etc.</li>
<li>Customer service representatives who are available 24/7 to answer your questions and help you with any issues</li>
</ul>
<h2>What is the BCBS app?</h2>
<h3>An overview of the app's features and functions</h3>

<ul>
<li>A digital ID card that you can email, print, or save on your phone</li>
<li>Access to your claims activity and details</li>
<li>Doctor information and visit history</li>
<li>Medication information and refill dates</li>
<li>A tool to find doctors, dentists, hospitals, pharmacies, urgent care centers, etc. near you</li>
<li>A secure, HIPAA-compliant platform for video visits with doctors using Well Connection for medical and mental health care*</li>
</ul>
<p>*Available only on select plans.</p>
<h3>The advantages of using the app to manage your health plan</h3>
<p>Using the app can make your life easier when it comes to managing your health plan. Some of the advantages of using the app are:</p>
<ul>
<li>You can save time by avoiding long phone calls</li>
<li>You can access your information anytime, anywhere, even when you are offline</li>
<li>You can reduce paper clutter by having a digital ID card and electronic claims</li>
<li>You can stay on top of your health by receiving reminders, alerts, and tips</li>
<li>You can get personalized support and guidance from your BCBS health advocate</li>
<li>You can improve your well-being by using the app's wellness features and incentives</li>
</ul>
<h2>How to download the BCBS app</h2>
<h3>The steps to download the app for different devices and platforms</h3>
<p>Downloading the app is quick and easy. You can follow these steps to download the app for your device and platform:</p>
<ol>
<li>Go to the App Store (for iOS devices) or Google Play (for Android devices) on your phone or tablet</li>
<li>Search for "BCBS" or "Blue Cross Blue Shield" in the search bar</li>
<li>Select the app with the blue cross and shield logo and tap "Install" or "Get"</li>
<li>Wait for the app to download and install on your device</li>
<li>Open the app and accept the terms and conditions</li>
</ol>
<h3>The app's requirements and compatibility</h3>

<h2>How to use the BCBS app</h2>
<h3>How to register and log in to the app</h3>
<p>To use the app, you need to register and log in with your BCBS account credentials. If you already have an online account, you can use the same username and password to log in to the app. If you do not have an online account, you can create one by following these steps:</p>
<ol>
<li>Tap "Register" on the app's home screen</li>
<li>Enter your personal information, such as your name, date of birth, email address, etc.</li>
<li>Enter your BCBS member ID number, which you can find on your ID card or enrollment confirmation letter</li>
<li>Create a username and password that you will use to access your account</li>
<li>Choose a security question and answer that you will use to reset your password if you forget it</li>
<li>Tap "Submit" and verify your email address by clicking the link sent to you</li>
</ol>
<h3>How to access your benefits, claims, and incentives</h3>
<p>Once you log in to the app, you can access your benefits, claims, and incentives by tapping the icons in the bottom menu bar. You can also swipe left or right on the home screen to see different cards showing your information. Here are some of the things you can do with these features:</p>
<ul>
<li>Benefits: You can view your plan details, such as your deductible, coinsurance, copays, out-of-pocket maximum, etc. You can also see which services are or are not covered by your plan.</li>
<li>Claims: You can view your claims history and status, such as the date of service, provider name, billed amount, and paid amount. You can also view your explanation of benefits (EOB) statements and file any missing or incorrect claims.</li>
-
<li>Incentivos: Puede ver su programa de incentivos de bienestar, tales como puntos ganados, recompensas disponibles, actividades completadas, etc. También puede realizar un seguimiento de su progreso y canjear sus recompensas. </li>
|
68 |
-
</ul>
|
69 |
-
<h3>Cómo encontrar médicos, hospitales y farmacias</h3>
|
70 |
-
|
71 |
-
<ul>
|
72 |
-
<li>Buscar por nombre, especialidad, condición, procedimiento, ubicación, etc.</li>
|
73 |
-
<li>Filtrar por distancia, calificaciones, disponibilidad, idiomas hablados, etc.</li>
|
74 |
-
<li>Comparar proveedores por medidas de calidad, estimaciones de costos, comentarios, etc.</li>
|
75 |
-
<li>Obtener direcciones e información de contacto para proveedores</li>
|
76 |
-
<li>Añadir proveedores a su lista de favoritos para un fácil acceso</li>
|
77 |
-
<li>Compartir proveedores con otros a través de correo electrónico o mensaje de texto</li>
|
78 |
-
</ul>
|
79 |
-
<h3>Cómo usar la aplicación para servicios de telesalud y bienestar</h3>
|
80 |
-
<p>La aplicación también ofrece servicios de telesalud y bienestar que puede utilizar para mejorar su salud y bienestar. Puede acceder a estos servicios pulsando en los iconos de la barra de menú inferior. También puede deslizar hacia la izquierda o hacia la derecha en la pantalla de inicio para ver diferentes tarjetas que muestran sus opciones. Estas son algunas de las cosas que puedes hacer con estos servicios:</p>
|
81 |
-
<ul>
|
82 |
-
<li>Telesalud: Usted puede tener visitas en video al médico usando Well Connection para atención médica y mental. Puede elegir entre una variedad de proveedores y especialidades, como atención primaria, dermatología, psiquiatría, asesoramiento, etc. También puede programar citas, pagar copagos y obtener recetas a través de la aplicación. </li>
|
83 |
-
<li>Bienestar: Puede utilizar las características de bienestar de la aplicación para realizar un seguimiento de su estado físico, nutrición, sueño, estrés y otros aspectos de su salud. También puede conectar su aplicación a otros dispositivos y aplicaciones, como Fitbit, Apple Health, Google Fit, etc. También puede unirse a los desafíos, ganar insignias y obtener consejos y consejos de expertos. </li>
|
84 |
-
</ul>
|
85 |
-
<h2>¿Cuáles son las opiniones y calificaciones de la aplicación BCBS? </h2>
|
86 |
-
<h3>Un resumen de los comentarios y testimonios de usuarios y expertos</h3>
|
87 |
-
<p>La aplicación BCBS ha recibido comentarios y testimonios positivos de usuarios y expertos. La aplicación tiene una calificación promedio de 4.5 de 5 estrellas en la App Store y 4.3 de 5 estrellas en Google Play. Estos son algunos de los comentarios de usuarios y expertos:</p>
|
88 |
-
<tabla>
|
89 |
-
<tr>
|
90 |
-
<th>Usuario</th>
|
91 |
-
<th>Comentario</th>
|
92 |
-
</tr>
|
93 |
-
<tr>
|
94 |
-
<td>Jane D.</td>
|
95 |
-
|
96 |
-
</tr>
|
97 |
-
<tr>
|
98 |
-
<td>Marca S.</td>
|
99 |
-
<td>"Esta aplicación es muy útil e informativo. Me muestra mis beneficios, reclamaciones, incentivos y más de una manera clara y concisa. También me ayuda a encontrar proveedores cerca de mí y compararlos por calidad y costo."</td>
|
100 |
-
</tr>
|
101 |
-
<tr>
|
102 |
-
<td>Lisa K.</td>
|
103 |
-
<td>"Esta aplicación es ideal para el bienestar y el fitness. Se sincroniza con mi Fitbit y rastrea mis pasos, calorías, frecuencia cardíaca, etc. También me da recompensas por completar actividades y desafíos. Me motiva a mantenerme saludable." </td>
|
104 |
-
</tr>
|
105 |
-
<tr>
|
106 |
-
<th>Experto</th>
|
107 |
-
<th>Comentario</th>
|
108 |
-
</tr>
|
109 |
-
<tr>
|
110 |
-
<td>AppAdvice</td>
|
111 |
-
<td>"La aplicación BCBS es imprescindible para cualquier persona que tenga un plan de seguro médico Blue Cross Blue Shield. Ofrece muchas características y funciones que hacen que la administración de su plan de salud sea fácil y conveniente."</td>
|
112 |
-
</tr>
|
113 |
-
<tr>
|
114 |
-
<td>AppGrooves</td>
|
115 |
-
<td>"La aplicación BCBS es una aplicación completa y fácil de usar que le permite acceder a su plan de seguro de salud sobre la marcha. Tiene un diseño elegante y una interfaz sencilla que facilita la navegación."</td>
|
116 |
-
</tr>
|
117 |
-
<tr>
|
118 |
-
<td>AppPicker</td>
|
119 |
-
<td>"La aplicación BCBS es una aplicación potente y versátil que ofrece una gama de servicios de telesalud y bienestar que pueden mejorar su salud y bienestar. Tiene una plataforma segura y compatible con HIPAA que garantiza su privacidad."</td>
|
120 |
-
</tr>
|
121 |
-
</tabla>
|
122 |
-
<h3>Los pros y los contras de la aplicación</h3>
|
123 |
-
<p>Como cualquier otra aplicación, la aplicación BCBS tiene sus pros y sus contras. Aquí están algunos de ellos:</p>
|
124 |
-
<h4>Pros:</h4>
|
125 |
-
<ul>
|
126 |
-
<li>Es gratis para descargar y usar</li>
|
127 |
-
<li>Es compatible con la mayoría de dispositivos y plataformas</li>
|
128 |
-
<li> Tiene un montón de características y funciones que hacen que la gestión de su plan de salud fácil y conveniente</li>
|
129 |
-
<li> Ofrece servicios de telesalud y bienestar que pueden mejorar su salud y bienestar</li>
|
130 |
-
<li>Tiene críticas y valoraciones positivas de usuarios y expertos</li>
|
131 |
-
<h4>Contras:</h4>
|
132 |
-
<ul>
|
133 |
-
<li>Requiere una conexión a Internet para funcionar correctamente</li>
|
134 |
-
|
135 |
-
<li>Puede no estar disponible o no ser compatible con algunos planes o regiones</li>
|
136 |
-
<li> Es posible que no tenga todas las características o funciones que necesita o desea</li>
|
137 |
-
<li>Puede tener algunas limitaciones o restricciones que afectan su usabilidad o funcionalidad</li>
|
138 |
-
</ul>
|
139 |
-
<h2>Conclusión</h2>
|
140 |
-
<h3>Resumen de los puntos principales y llamada a la acción</h3>
|
141 |
-
<p>En conclusión, la aplicación BCBS es una aplicación móvil que le permite acceder a su plan de seguro de salud sobre la marcha. La aplicación tiene muchas características y funciones que pueden ayudarlo a ahorrar dinero, mejorar su salud y acceder a atención de calidad. La aplicación también ofrece servicios de telesalud y bienestar que pueden mejorar su bienestar. La aplicación tiene en su mayoría comentarios positivos y testimonios de usuarios y expertos, y tiene una alta calificación en la App Store y Google Play. La aplicación es fácil de descargar y usar, y es compatible con la mayoría de los dispositivos y plataformas. Si usted es un miembro de BCBS, definitivamente debe darle una oportunidad a la aplicación y ver cómo puede hacer su vida más fácil y saludable. Para descargar la aplicación, vaya a la App Store o Google Play y busque "BCBS" o "Blue Cross Blue Shield". También puede visitar el sitio web del BCBS para obtener más información y apoyo. </p>
|
142 |
-
<h2>Preguntas frecuentes</h2>
|
143 |
-
<h3>Cinco preguntas y respuestas comunes sobre la aplicación BCBS</h3>
|
144 |
-
<p>Aquí están algunas de las preguntas y respuestas más frecuentes sobre la aplicación BCBS:</p>
|
145 |
-
<ol>
|
146 |
-
<li>Q: ¿La aplicación BCBS es segura y privada? <br>
|
147 |
-
R: Sí, la aplicación BCBS es segura y privada. La aplicación utiliza cifrado, autenticación y otras medidas de seguridad para proteger su información personal y de salud. La aplicación también cumple con la Ley de Portabilidad y Responsabilidad de Seguros de Salud (HIPAA), que establece los estándares de privacidad y seguridad de la información de salud. </li>
|
148 |
-
<li>Q: ¿Cuánto cuesta la aplicación BCBS? <br>
|
149 |
-
|
150 |
-
<li>Q: ¿Puedo usar la aplicación BCBS fuera de los Estados Unidos? <br>
|
151 |
-
R: Sí, puede usar la aplicación BCBS fuera de los Estados Unidos. La aplicación puede ayudarlo a encontrar proveedores, acceder a sus beneficios y ponerse en contacto con el servicio al cliente en la mayoría de los países del mundo. Sin embargo, algunas características o funciones pueden no estar disponibles o pueden funcionar de manera diferente en algunas regiones o países. </li>
|
152 |
-
<li>Q: ¿Puedo usar la aplicación BCBS para múltiples planes o cuentas? <br>
|
153 |
-
R: Sí, puede usar la aplicación BCBS para múltiples planes o cuentas. Puede cambiar entre diferentes planes o cuentas tocando el icono del menú en la esquina superior izquierda de la aplicación y seleccionando "Cambiar de cuenta". También puede agregar o eliminar cuentas pulsando en "Administrar cuentas". </li>
|
154 |
-
<li>Q: ¿Cómo puedo obtener ayuda o soporte para la aplicación BCBS? <br>
|
155 |
-
R: Puede obtener ayuda o soporte para la aplicación BCBS tocando el icono del menú en la esquina superior izquierda de la aplicación y seleccionando "Ayuda y soporte". También puede llamar al servicio de atención al cliente al 1-888-630-BLUE (2583) o visitar el sitio web del BCBS para obtener más información y recursos. </li>
|
156 |
-
</ol></p> 64aa2da5cf<br />
|
157 |
-
<br />
|
158 |
-
<br />
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/metadata/importlib/__init__.py
DELETED
@@ -1,4 +0,0 @@
-from ._dists import Distribution
-from ._envs import Environment
-
-__all__ = ["Distribution", "Environment"]
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/pygments/console.py
DELETED
@@ -1,70 +0,0 @@
-"""
-    pygments.console
-    ~~~~~~~~~~~~~~~~
-
-    Format colored console output.
-
-    :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
-    :license: BSD, see LICENSE for details.
-"""
-
-esc = "\x1b["
-
-codes = {}
-codes[""] = ""
-codes["reset"] = esc + "39;49;00m"
-
-codes["bold"] = esc + "01m"
-codes["faint"] = esc + "02m"
-codes["standout"] = esc + "03m"
-codes["underline"] = esc + "04m"
-codes["blink"] = esc + "05m"
-codes["overline"] = esc + "06m"
-
-dark_colors = ["black", "red", "green", "yellow", "blue",
-               "magenta", "cyan", "gray"]
-light_colors = ["brightblack", "brightred", "brightgreen", "brightyellow", "brightblue",
-                "brightmagenta", "brightcyan", "white"]
-
-x = 30
-for d, l in zip(dark_colors, light_colors):
-    codes[d] = esc + "%im" % x
-    codes[l] = esc + "%im" % (60 + x)
-    x += 1
-
-del d, l, x
-
-codes["white"] = codes["bold"]
-
-
-def reset_color():
-    return codes["reset"]
-
-
-def colorize(color_key, text):
-    return codes[color_key] + text + codes["reset"]
-
-
-def ansiformat(attr, text):
-    """
-    Format ``text`` with a color and/or some attributes::
-
-        color       normal color
-        *color*     bold color
-        _color_     underlined color
-        +color+     blinking color
-    """
-    result = []
-    if attr[:1] == attr[-1:] == '+':
-        result.append(codes['blink'])
-        attr = attr[1:-1]
-    if attr[:1] == attr[-1:] == '*':
-        result.append(codes['bold'])
-        attr = attr[1:-1]
-    if attr[:1] == attr[-1:] == '_':
-        result.append(codes['underline'])
-        attr = attr[1:-1]
-    result.append(codes[attr])
-    result.append(text)
-    result.append(codes['reset'])
-    return ''.join(result)
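
The module above builds a lookup table of ANSI escape codes and wraps text in them; `ansiformat` additionally peels `+`/`*`/`_` markers off the attribute string before looking up the base color. A minimal usage sketch of the vendored module as it existed before this deletion (rendered colors depend on the terminal):

```python
# Sketch only: exercises the vendored module shown above, as it existed
# before this commit removed it.
from pip._vendor.pygments.console import ansiformat, colorize

# colorize() wraps text in a single color code plus a reset.
print(colorize("red", "error: something failed"))

# ansiformat() stacks attributes: *...* adds bold, _..._ adds underline.
print(ansiformat("*blue*", "bold blue text"))
print(ansiformat("_green_", "underlined green text"))
```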
spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/monkey.py
DELETED
@@ -1,165 +0,0 @@
-"""
-Monkey patching of distutils.
-"""
-
-import sys
-import distutils.filelist
-import platform
-import types
-import functools
-from importlib import import_module
-import inspect
-
-import setuptools
-
-__all__ = []
-"""
-Everything is private. Contact the project team
-if you think you need this functionality.
-"""
-
-
-def _get_mro(cls):
-    """
-    Returns the bases classes for cls sorted by the MRO.
-
-    Works around an issue on Jython where inspect.getmro will not return all
-    base classes if multiple classes share the same name. Instead, this
-    function will return a tuple containing the class itself, and the contents
-    of cls.__bases__. See https://github.com/pypa/setuptools/issues/1024.
-    """
-    if platform.python_implementation() == "Jython":
-        return (cls,) + cls.__bases__
-    return inspect.getmro(cls)
-
-
-def get_unpatched(item):
-    lookup = (
-        get_unpatched_class if isinstance(item, type) else
-        get_unpatched_function if isinstance(item, types.FunctionType) else
-        lambda item: None
-    )
-    return lookup(item)
-
-
-def get_unpatched_class(cls):
-    """Protect against re-patching the distutils if reloaded
-
-    Also ensures that no other distutils extension monkeypatched the distutils
-    first.
-    """
-    external_bases = (
-        cls
-        for cls in _get_mro(cls)
-        if not cls.__module__.startswith('setuptools')
-    )
-    base = next(external_bases)
-    if not base.__module__.startswith('distutils'):
-        msg = "distutils has already been patched by %r" % cls
-        raise AssertionError(msg)
-    return base
-
-
-def patch_all():
-    # we can't patch distutils.cmd, alas
-    distutils.core.Command = setuptools.Command
-
-    has_issue_12885 = sys.version_info <= (3, 5, 3)
-
-    if has_issue_12885:
-        # fix findall bug in distutils (http://bugs.python.org/issue12885)
-        distutils.filelist.findall = setuptools.findall
-
-    needs_warehouse = (
-        (3, 4) < sys.version_info < (3, 4, 6)
-        or
-        (3, 5) < sys.version_info <= (3, 5, 3)
-    )
-
-    if needs_warehouse:
-        warehouse = 'https://upload.pypi.org/legacy/'
-        distutils.config.PyPIRCCommand.DEFAULT_REPOSITORY = warehouse
-
-    _patch_distribution_metadata()
-
-    # Install Distribution throughout the distutils
-    for module in distutils.dist, distutils.core, distutils.cmd:
-        module.Distribution = setuptools.dist.Distribution
-
-    # Install the patched Extension
-    distutils.core.Extension = setuptools.extension.Extension
-    distutils.extension.Extension = setuptools.extension.Extension
-    if 'distutils.command.build_ext' in sys.modules:
-        sys.modules['distutils.command.build_ext'].Extension = (
-            setuptools.extension.Extension
-        )
-
-    patch_for_msvc_specialized_compiler()
-
-
-def _patch_distribution_metadata():
-    """Patch write_pkg_file and read_pkg_file for higher metadata standards"""
-    for attr in ('write_pkg_file', 'read_pkg_file', 'get_metadata_version'):
-        new_val = getattr(setuptools.dist, attr)
-        setattr(distutils.dist.DistributionMetadata, attr, new_val)
-
-
-def patch_func(replacement, target_mod, func_name):
-    """
-    Patch func_name in target_mod with replacement
-
-    Important - original must be resolved by name to avoid
-    patching an already patched function.
-    """
-    original = getattr(target_mod, func_name)
-
-    # set the 'unpatched' attribute on the replacement to
-    # point to the original.
-    vars(replacement).setdefault('unpatched', original)
-
-    # replace the function in the original module
-    setattr(target_mod, func_name, replacement)
-
-
-def get_unpatched_function(candidate):
-    return getattr(candidate, 'unpatched')
-
-
-def patch_for_msvc_specialized_compiler():
-    """
-    Patch functions in distutils to use standalone Microsoft Visual C++
-    compilers.
-    """
-    # import late to avoid circular imports on Python < 3.5
-    msvc = import_module('setuptools.msvc')
-
-    if platform.system() != 'Windows':
-        # Compilers only available on Microsoft Windows
-        return
-
-    def patch_params(mod_name, func_name):
-        """
-        Prepare the parameters for patch_func to patch indicated function.
-        """
-        repl_prefix = 'msvc14_'
-        repl_name = repl_prefix + func_name.lstrip('_')
-        repl = getattr(msvc, repl_name)
-        mod = import_module(mod_name)
-        if not hasattr(mod, func_name):
-            raise ImportError(func_name)
-        return repl, mod, func_name
-
-    # Python 3.5+
-    msvc14 = functools.partial(patch_params, 'distutils._msvccompiler')
-
-    try:
-        # Patch distutils._msvccompiler._get_vc_env
-        patch_func(*msvc14('_get_vc_env'))
-    except ImportError:
-        pass
-
-    try:
-        # Patch distutils._msvccompiler.gen_lib_options for Numpy
-        patch_func(*msvc14('gen_lib_options'))
-    except ImportError:
-        pass
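
`patch_func` above stashes the original callable on the replacement's `unpatched` attribute, which is exactly what `get_unpatched_function` reads back. A minimal round-trip sketch on a throwaway module (the `target` module and both functions are hypothetical stand-ins, not part of setuptools):

```python
import types

from setuptools import monkey  # the module shown above, prior to its removal

# Hypothetical stand-in module with one function to patch.
target = types.ModuleType("target")
target.greet = lambda: "original"

def replacement():
    return "patched"

# patch_func() records the original on `replacement` before swapping it in.
monkey.patch_func(replacement, target, "greet")
assert target.greet() == "patched"

# get_unpatched_function() recovers the recorded original.
original = monkey.get_unpatched_function(target.greet)
assert original() == "original"
```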
spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_source/basic/getting_started.md
DELETED
@@ -1,116 +0,0 @@
-# Getting Started
-
-This page provides basic tutorials about the usage of OpenVQA.
-For installation instructions, please see [Installation](install).
-
-## Training
-
-The following script will start training a `mcan_small` model on the `VQA-v2` dataset:
-
-```bash
-$ python3 run.py --RUN='train' --MODEL='mcan_small' --DATASET='vqa'
-```
-
-- ```--RUN={'train','val','test'}``` to set the mode to be executed.
-
-- ```--MODEL=str``` to assign the model to be executed.
-
-- ```--DATASET={'vqa','gqa','clevr'}``` to choose the dataset to be executed.
-
-All checkpoint files will be saved to:
-
-```
-ckpts/ckpt_<VERSION>/epoch<EPOCH_NUMBER>.pkl
-```
-
-and the training log file will be placed at:
-
-```
-results/log/log_run_<VERSION>.txt
-```
-
-To add:
-
-- ```--VERSION=str```, e.g., ```--VERSION='v1'``` to assign a name for this model.
-
-- ```--GPU=str```, e.g., ```--GPU='2'``` to train the model on a specified GPU device.
-
-- ```--SEED=int```, e.g., ```--SEED=123``` to use a fixed seed to initialize the model, which yields exactly the same model on every run. Leaving it unset results in a random seed.
-
-- ```--NW=int```, e.g., ```--NW=8``` to accelerate I/O speed.
-
-- ```--SPLIT=str``` to set the training sets as you want. Setting ```--SPLIT='train'``` will trigger the evaluation script to compute the validation score after every epoch automatically.
-
-- ```--RESUME=True``` to start training from saved checkpoint parameters. In this case, you should assign the checkpoint version ```--CKPT_V=str``` and the resumed epoch number ```CKPT_E=int```.
-
-- ```--MAX_EPOCH=int``` to stop training at a specified epoch number.
-
-If you want to resume training from an existing checkpoint, you can use the following script:
-
-```bash
-$ python3 run.py --RUN='train' --MODEL='mcan_small' --DATASET='vqa' --CKPT_V=str --CKPT_E=int
-```
-
-where the args `CKPT_V` and `CKPT_E` must be specified, corresponding to the version and epoch number of the loaded model.
-
-
-#### Multi-GPU Training and Gradient Accumulation
-
-We recommend using a GPU with at least 8 GB of memory, but if you don't have such a device, we provide two solutions to deal with it:
-
-- _Multi-GPU Training_:
-
-If you want to accelerate training or train the model on a device with limited GPU memory, you can use more than one GPU:
-
-Add ```--GPU='0, 1, 2, 3...'```
-
-The batch size on each GPU will be adjusted to `BATCH_SIZE`/#GPUs automatically.
-
-- _Gradient Accumulation_:
-
-If you only have one GPU with less than 8 GB of memory, an alternative strategy is to use gradient accumulation during training:
-
-Add ```--ACCU=n```
-
-This makes the optimizer accumulate gradients for `n` small batches and update the model weights once. Note that `BATCH_SIZE` must be divisible by ```n``` for this mode to run correctly (a generic sketch of this strategy appears at the end of this page).
-
-
-## Validation and Testing
-
-**Warning**: The args ```--MODEL``` and `--DATASET` should be set to the same values as those in the training stage.
-
-
-### Validation on Local Machine
-
-Offline evaluation on a local machine only supports evaluation on the *val* split. If you want to evaluate the *test* split, please see [Testing on Online Server](#testing-on-online-server).
-
-There are two ways to start:
-
-(Recommended)
-
-```bash
-$ python3 run.py --RUN='val' --MODEL=str --DATASET='{vqa,gqa,clevr}' --CKPT_V=str --CKPT_E=int
-```
-
-or use the absolute path instead:
-
-```bash
-$ python3 run.py --RUN='val' --MODEL=str --DATASET='{vqa,gqa,clevr}' --CKPT_PATH=str
-```
-
-- For VQA-v2, the results on *val* split
-
-### Testing on Online Server
-
-All the evaluations on the test split of the VQA-v2, GQA, and CLEVR benchmarks can be run with
-
-```bash
-$ python3 run.py --RUN='test' --MODEL=str --DATASET='{vqa,gqa,clevr}' --CKPT_V=str --CKPT_E=int
-```
-
-Result files are saved at: ```results/result_test/result_run_<CKPT_V>_<CKPT_E>.json```
-
-- For VQA-v2, the result file is uploaded to the [VQA challenge website](https://evalai.cloudcv.org/web/challenges/challenge-page/163/overview) to evaluate the scores on the *test-dev* or *test-std* split.
-
-- For GQA, the result file is uploaded to the [GQA Challenge website](<https://evalai.cloudcv.org/web/challenges/challenge-page/225/overview>) to evaluate the scores on the *test* or *test-dev* split.
-- For CLEVR, the result file can be evaluated by sending an email to the author [Justin Johnson](<https://cs.stanford.edu/people/jcjohns/>) with this file attached; he will reply with the scores via email.
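
The `--ACCU=n` option described above is standard gradient accumulation. The following is a generic PyTorch-style sketch of that strategy, not openvqa's actual training loop; the tiny model, loss, optimizer, and loader are stand-ins so the snippet runs on its own:

```python
import torch
import torch.nn as nn

# Stand-in objects so the sketch is runnable; openvqa's real ones differ.
model = nn.Linear(8, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loader = [(torch.randn(4, 8), torch.randint(0, 2, (4,))) for _ in range(8)]

accu = 4  # corresponds to --ACCU=4; BATCH_SIZE must be divisible by accu

optimizer.zero_grad()
for step, (inputs, labels) in enumerate(loader):
    loss = criterion(model(inputs), labels)
    (loss / accu).backward()   # scale so the accumulated sum matches one big batch
    if (step + 1) % accu == 0:
        optimizer.step()       # one weight update per `accu` small batches
        optimizer.zero_grad()
```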
spaces/CVPR/Text2Human/style.css
DELETED
@@ -1,16 +0,0 @@
-h1 {
-    text-align: center;
-}
-#input-image {
-    max-height: 300px;
-}
-#label-image {
-    height: 300px;
-}
-#result-image {
-    height: 300px;
-}
-img#visitor-badge {
-    display: block;
-    margin: auto;
-}
spaces/CVPR/regionclip-demo/detectron2/data/datasets/builtin.py
DELETED
@@ -1,302 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright (c) Facebook, Inc. and its affiliates.
-
-
-"""
-This file registers pre-defined datasets at hard-coded paths, and their metadata.
-
-We hard-code metadata for common datasets. This will enable:
-1. Consistency check when loading the datasets
-2. Use models on these standard datasets directly and run demos,
-   without having to download the dataset annotations
-
-We hard-code some paths to the dataset that's assumed to
-exist in "./datasets/".
-
-Users SHOULD NOT use this file to create new dataset / metadata for new dataset.
-To add new dataset, refer to the tutorial "docs/DATASETS.md".
-"""
-
-import os
-
-from detectron2.data import DatasetCatalog, MetadataCatalog
-
-from .builtin_meta import ADE20K_SEM_SEG_CATEGORIES, _get_builtin_metadata
-from .cityscapes import load_cityscapes_instances, load_cityscapes_semantic
-from .cityscapes_panoptic import register_all_cityscapes_panoptic
-from .coco import load_sem_seg, register_coco_instances
-from .coco_panoptic import register_coco_panoptic, register_coco_panoptic_separated
-from .lvis import get_lvis_instances_meta, register_lvis_instances
-from .pascal_voc import register_pascal_voc
-
-# ==== Predefined datasets and splits for COCO ==========
-
-_PREDEFINED_SPLITS_COCO = {}
-_PREDEFINED_SPLITS_COCO["coco"] = {
-    "coco_2014_train": ("coco/train2014", "coco/annotations/instances_train2014.json"),
-    "coco_2014_val": ("coco/val2014", "coco/annotations/instances_val2014.json"),
-    "coco_2014_minival": ("coco/val2014", "coco/annotations/instances_minival2014.json"),
-    "coco_2014_minival_100": ("coco/val2014", "coco/annotations/instances_minival2014_100.json"),
-    "coco_2014_valminusminival": (
-        "coco/val2014",
-        "coco/annotations/instances_valminusminival2014.json",
-    ),
-    "coco_2017_train": ("coco/train2017", "coco/annotations/instances_train2017.json"),
-    "coco_2017_val": ("coco/val2017", "coco/annotations/instances_val2017.json"),
-    "coco_2017_test": ("coco/test2017", "coco/annotations/image_info_test2017.json"),
-    "coco_2017_test-dev": ("coco/test2017", "coco/annotations/image_info_test-dev2017.json"),
-    "coco_2017_val_100": ("coco/val2017", "coco/annotations/instances_val2017_100.json"),
-}
-_PREDEFINED_SPLITS_COCO["coco_ovd"] = {
-    "coco_2017_ovd_all_train": ("coco/train2017", "coco/annotations/ovd_ins_train2017_all.json"),
-    "coco_2017_ovd_b_train": ("coco/train2017", "coco/annotations/ovd_ins_train2017_b.json"),
-    "coco_2017_ovd_t_train": ("coco/train2017", "coco/annotations/ovd_ins_train2017_t.json"),
-    "coco_2017_ovd_all_test": ("coco/val2017", "coco/annotations/ovd_ins_val2017_all.json"),
-    "coco_2017_ovd_b_test": ("coco/val2017", "coco/annotations/ovd_ins_val2017_b.json"),
-    "coco_2017_ovd_t_test": ("coco/val2017", "coco/annotations/ovd_ins_val2017_t.json"),
-}
-
-# zeroshot inference of grounding evaluation
-_PREDEFINED_SPLITS_FLICKR30K = {}
-_PREDEFINED_SPLITS_FLICKR30K["yfcc100m"] = {
-    "flickr30k_train": ('flickr30k_images', "flickr30k_anno", "split/train.txt"),
-    "flickr30k_val": ('flickr30k_images', "flickr30k_anno", "split/val.txt"),
-    "flickr30k_test": ('flickr30k_images', "flickr30k_anno", "split/test.txt"),
-}
-
-_PREDEFINED_SPLITS_COCO["coco_person"] = {
-    "keypoints_coco_2014_train": (
-        "coco/train2014",
-        "coco/annotations/person_keypoints_train2014.json",
-    ),
-    "keypoints_coco_2014_val": ("coco/val2014", "coco/annotations/person_keypoints_val2014.json"),
-    "keypoints_coco_2014_minival": (
-        "coco/val2014",
-        "coco/annotations/person_keypoints_minival2014.json",
-    ),
-    "keypoints_coco_2014_valminusminival": (
-        "coco/val2014",
-        "coco/annotations/person_keypoints_valminusminival2014.json",
-    ),
-    "keypoints_coco_2014_minival_100": (
-        "coco/val2014",
-        "coco/annotations/person_keypoints_minival2014_100.json",
-    ),
-    "keypoints_coco_2017_train": (
-        "coco/train2017",
-        "coco/annotations/person_keypoints_train2017.json",
-    ),
-    "keypoints_coco_2017_val": ("coco/val2017", "coco/annotations/person_keypoints_val2017.json"),
-    "keypoints_coco_2017_val_100": (
-        "coco/val2017",
-        "coco/annotations/person_keypoints_val2017_100.json",
-    ),
-}
-
-
-_PREDEFINED_SPLITS_COCO_PANOPTIC = {
-    "coco_2017_train_panoptic": (
-        # This is the original panoptic annotation directory
-        "coco/panoptic_train2017",
-        "coco/annotations/panoptic_train2017.json",
-        # This directory contains semantic annotations that are
-        # converted from panoptic annotations.
-        # It is used by PanopticFPN.
-        # You can use the script at detectron2/datasets/prepare_panoptic_fpn.py
-        # to create these directories.
-        "coco/panoptic_stuff_train2017",
-    ),
-    "coco_2017_val_panoptic": (
-        "coco/panoptic_val2017",
-        "coco/annotations/panoptic_val2017.json",
-        "coco/panoptic_stuff_val2017",
-    ),
-    "coco_2017_val_100_panoptic": (
-        "coco/panoptic_val2017_100",
-        "coco/annotations/panoptic_val2017_100.json",
-        "coco/panoptic_stuff_val2017_100",
-    ),
-}
-
-
-def register_all_coco(root):
-    for dataset_name, splits_per_dataset in _PREDEFINED_SPLITS_COCO.items():
-        if dataset_name == 'coco_ovd': # for zero-shot split
-            for key, (image_root, json_file) in splits_per_dataset.items():
-                # Assume pre-defined datasets live in `./datasets`.
-                register_coco_instances(
-                    key,
-                    {}, # empty metadata, it will be overwritten in load_coco_json() function
-                    os.path.join(root, json_file) if "://" not in json_file else json_file,
-                    os.path.join(root, image_root),
-                )
-        else: # default splits
-            for key, (image_root, json_file) in splits_per_dataset.items():
-                # Assume pre-defined datasets live in `./datasets`.
-                register_coco_instances(
-                    key,
-                    _get_builtin_metadata(dataset_name),
-                    os.path.join(root, json_file) if "://" not in json_file else json_file,
-                    os.path.join(root, image_root),
-                )
-
-    for (
-        prefix,
-        (panoptic_root, panoptic_json, semantic_root),
-    ) in _PREDEFINED_SPLITS_COCO_PANOPTIC.items():
-        prefix_instances = prefix[: -len("_panoptic")]
-        instances_meta = MetadataCatalog.get(prefix_instances)
-        image_root, instances_json = instances_meta.image_root, instances_meta.json_file
-        # The "separated" version of COCO panoptic segmentation dataset,
-        # e.g. used by Panoptic FPN
-        register_coco_panoptic_separated(
-            prefix,
-            _get_builtin_metadata("coco_panoptic_separated"),
-            image_root,
-            os.path.join(root, panoptic_root),
-            os.path.join(root, panoptic_json),
-            os.path.join(root, semantic_root),
-            instances_json,
-        )
-        # The "standard" version of COCO panoptic segmentation dataset,
-        # e.g. used by Panoptic-DeepLab
-        register_coco_panoptic(
-            prefix,
-            _get_builtin_metadata("coco_panoptic_standard"),
-            image_root,
-            os.path.join(root, panoptic_root),
-            os.path.join(root, panoptic_json),
-            instances_json,
-        )
-
-# ==== Predefined datasets and splits for LVIS ==========
-
-def register_all_flickr30k():
-    MetadataCatalog.get('yfcc100m').set(evaluator_type="flickr30k")
-
-
-# ==== Predefined datasets and splits for LVIS ==========
-
-
-_PREDEFINED_SPLITS_LVIS = {
-    "lvis_v1": {
-        "lvis_v1_train": ("coco/", "lvis/lvis_v1_train.json"),
-        "lvis_v1_val": ("coco/", "lvis/lvis_v1_val.json"),
-        "lvis_v1_test_dev": ("coco/", "lvis/lvis_v1_image_info_test_dev.json"),
-        "lvis_v1_test_challenge": ("coco/", "lvis/lvis_v1_image_info_test_challenge.json"),
-    },
-    "lvis_v1_zeroshot": {
-        "lvis_v1_train_zeroshot": ("coco/", "lvis/lvis_v1_train.json"),
-        "lvis_v1_val_zeroshot": ("coco/", "lvis/lvis_v1_val.json"),
-        "lvis_v1_test_dev_zeroshot": ("coco/", "lvis/lvis_v1_image_info_test_dev.json"),
-        "lvis_v1_test_challenge_zeroshot": ("coco/", "lvis/lvis_v1_image_info_test_challenge.json"),
-    },
-    "lvis_v0.5": {
-        "lvis_v0.5_train": ("coco/", "lvis/lvis_v0.5_train.json"),
-        "lvis_v0.5_val": ("coco/", "lvis/lvis_v0.5_val.json"),
-        "lvis_v0.5_val_rand_100": ("coco/", "lvis/lvis_v0.5_val_rand_100.json"),
-        "lvis_v0.5_test": ("coco/", "lvis/lvis_v0.5_image_info_test.json"),
-    },
-    "lvis_v0.5_cocofied": {
-        "lvis_v0.5_train_cocofied": ("coco/", "lvis/lvis_v0.5_train_cocofied.json"),
-        "lvis_v0.5_val_cocofied": ("coco/", "lvis/lvis_v0.5_val_cocofied.json"),
-    },
-}
-
-
-def register_all_lvis(root):
-    for dataset_name, splits_per_dataset in _PREDEFINED_SPLITS_LVIS.items():
-        for key, (image_root, json_file) in splits_per_dataset.items():
-            register_lvis_instances(
-                key,
-                get_lvis_instances_meta(dataset_name),
-                os.path.join(root, json_file) if "://" not in json_file else json_file,
-                os.path.join(root, image_root),
-            )
-
-
-# ==== Predefined splits for raw cityscapes images ===========
-_RAW_CITYSCAPES_SPLITS = {
-    "cityscapes_fine_{task}_train": ("cityscapes/leftImg8bit/train/", "cityscapes/gtFine/train/"),
-    "cityscapes_fine_{task}_val": ("cityscapes/leftImg8bit/val/", "cityscapes/gtFine/val/"),
-    "cityscapes_fine_{task}_test": ("cityscapes/leftImg8bit/test/", "cityscapes/gtFine/test/"),
-}
-
-
-def register_all_cityscapes(root):
-    for key, (image_dir, gt_dir) in _RAW_CITYSCAPES_SPLITS.items():
-        meta = _get_builtin_metadata("cityscapes")
-        image_dir = os.path.join(root, image_dir)
-        gt_dir = os.path.join(root, gt_dir)
-
-        inst_key = key.format(task="instance_seg")
-        DatasetCatalog.register(
-            inst_key,
-            lambda x=image_dir, y=gt_dir: load_cityscapes_instances(
-                x, y, from_json=True, to_polygons=True
-            ),
-        )
-        MetadataCatalog.get(inst_key).set(
-            image_dir=image_dir, gt_dir=gt_dir, evaluator_type="cityscapes_instance", **meta
-        )
-
-        sem_key = key.format(task="sem_seg")
-        DatasetCatalog.register(
-            sem_key, lambda x=image_dir, y=gt_dir: load_cityscapes_semantic(x, y)
-        )
-        MetadataCatalog.get(sem_key).set(
-            image_dir=image_dir,
-            gt_dir=gt_dir,
-            evaluator_type="cityscapes_sem_seg",
-            ignore_label=255,
-            **meta,
-        )
-
-
-# ==== Predefined splits for PASCAL VOC ===========
-def register_all_pascal_voc(root):
-    SPLITS = [
-        ("voc_2007_trainval", "VOC2007", "trainval"),
-        ("voc_2007_train", "VOC2007", "train"),
-        ("voc_2007_val", "VOC2007", "val"),
-        ("voc_2007_test", "VOC2007", "test"),
-        ("voc_2012_trainval", "VOC2012", "trainval"),
-        ("voc_2012_train", "VOC2012", "train"),
-        ("voc_2012_val", "VOC2012", "val"),
-    ]
-    for name, dirname, split in SPLITS:
-        year = 2007 if "2007" in name else 2012
-        register_pascal_voc(name, os.path.join(root, dirname), split, year)
-        MetadataCatalog.get(name).evaluator_type = "pascal_voc"
-
-
-def register_all_ade20k(root):
-    root = os.path.join(root, "ADEChallengeData2016")
-    for name, dirname in [("train", "training"), ("val", "validation")]:
-        image_dir = os.path.join(root, "images", dirname)
-        gt_dir = os.path.join(root, "annotations_detectron2", dirname)
-        name = f"ade20k_sem_seg_{name}"
-        DatasetCatalog.register(
-            name, lambda x=image_dir, y=gt_dir: load_sem_seg(y, x, gt_ext="png", image_ext="jpg")
-        )
-        MetadataCatalog.get(name).set(
-            stuff_classes=ADE20K_SEM_SEG_CATEGORIES[:],
-            image_root=image_dir,
-            sem_seg_root=gt_dir,
-            evaluator_type="sem_seg",
-            ignore_label=255,
-        )
-
-
-# True for open source;
-# Internally at fb, we register them elsewhere
-if __name__.endswith(".builtin"):
-    # Assume pre-defined datasets live in `./datasets`.
-    _root = os.getenv("DETECTRON2_DATASETS", "datasets")
-    register_all_coco(_root)
-    register_all_lvis(_root)
-    register_all_cityscapes(_root)
-    register_all_cityscapes_panoptic(_root)
-    register_all_pascal_voc(_root)
-    register_all_ade20k(_root)
-    register_all_flickr30k()
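
The docstring above points users away from editing this file; new datasets go through the same two catalog calls the `register_all_*` helpers use. A hedged sketch of that pattern (the dataset name, path, and loader below are hypothetical, not part of detectron2):

```python
import os

from detectron2.data import DatasetCatalog, MetadataCatalog


def load_my_dataset():
    # Hypothetical loader: returns a list of dicts in detectron2's standard
    # dataset format (file_name, image_id, height, width, annotations, ...).
    return [
        {
            "file_name": os.path.join("datasets", "my_data", "0001.jpg"),
            "image_id": 0,
            "height": 480,
            "width": 640,
            "annotations": [],
        }
    ]


# Register the loader under a name, then attach metadata to that same name,
# mirroring how the register_all_*() helpers above wire up built-in datasets.
DatasetCatalog.register("my_dataset_train", load_my_dataset)
MetadataCatalog.get("my_dataset_train").set(thing_classes=["widget"])
```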
spaces/Chloe0222/Chloe/app.py
DELETED
@@ -1,19 +0,0 @@
-#libraries
-import gradio as gr
-from gradio.mix import Parallel
-
-title="My First Text Generator"
-description="Input text."
-
-#variables, functions and parameters
-model1 = gr.Interface.load("huggingface/gpt2")
-model2 = gr.Interface.load("huggingface/gpt2-medium")
-model3 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-1.3B")
-
-#functions, parameters and variables
-gr.Parallel(model1, model2, model3,title=title,description=description).launch()
-
-
-
-
-
spaces/CikeyQI/meme-api/meme_generator/memes/jiujiu/__init__.py
DELETED
@@ -1,23 +0,0 @@
-from pathlib import Path
-from typing import List
-
-from PIL.Image import Image as IMG
-from pil_utils import BuildImage
-
-from meme_generator import add_meme
-from meme_generator.utils import save_gif
-
-img_dir = Path(__file__).parent / "images"
-
-
-def jiujiu(images: List[BuildImage], texts, args):
-    img = images[0].convert("RGBA").resize((75, 51), keep_ratio=True)
-    frames: List[IMG] = []
-    for i in range(8):
-        frame = BuildImage.open(img_dir / f"{i}.png")
-        frame.paste(img, below=True)
-        frames.append(frame.image)
-    return save_gif(frames, 0.06)
-
-
-add_meme("jiujiu", jiujiu, min_images=1, max_images=1, keywords=["啾啾"])
spaces/CofAI/chat.b4/client/css/button.css
DELETED
@@ -1,26 +0,0 @@
-.button {
-    display: flex;
-    padding: 8px 12px;
-    align-items: center;
-    justify-content: center;
-    border: 1px solid var(--conversations);
-    border-radius: var(--border-radius-1);
-    width: 100%;
-    background: transparent;
-    cursor: pointer;
-}
-
-.button span {
-    color: var(--colour-3);
-    font-size: 0.875rem;
-}
-
-.button i::before {
-    margin-right: 8px;
-}
-
-@media screen and (max-width: 990px) {
-    .button span {
-        font-size: 0.75rem;
-    }
-}
spaces/CofAI/chat.v2/config.py
DELETED
@@ -1,9 +0,0 @@
-from dotenv import load_dotenv
-import os
-
-load_dotenv(dotenv_path=".env")  # Load environment variables from .env file
-
-# DATABASE_URL = os.getenv("DATABASE_URL")
-# OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
-# OCR_API_KEY = os.getenv("OCR_API_KEY")
-NGROK_AUTH_TOKEN = os.getenv("NGROK_AUTH_TOKEN")
spaces/Cyntexa/README/README.md
DELETED
@@ -1,10 +0,0 @@
----
-title: README
-emoji: 🔥
-colorFrom: blue
-colorTo: indigo
-sdk: static
-pinned: false
----
-
-Edit this `README.md` markdown file to author your organization card 🔥
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fastapi/middleware/cors.py
DELETED
@@ -1 +0,0 @@
-from starlette.middleware.cors import CORSMiddleware as CORSMiddleware  # noqa
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-a01a6870.js
DELETED
@@ -1,2 +0,0 @@
import{S as M,e as z,s as E,G as A,N as b,k as T,O as y,K as u,p as v,o as B,M as h,z as R,v as N,A as k,x as I,V as j,P as C,R as q,J as D,U as O,T as K,am as _e,h as P,m as se,u as oe,y as ae,f as V,q as ge,r as he,E as me}from"./index-3370be2a.js";import"./Button-89624748.js";import{B as G}from"./BlockTitle-bcf8c05e.js";import"./Info-5611e10f.js";const w=i=>{var e=null;return i<0?e=[52,152,219]:e=[231,76,60],be(de(Math.abs(i),[255,255,255],e))},de=(i,e,t)=>{i>1&&(i=1),i=Math.sqrt(i);var n=[0,0,0],s;for(s=0;s<3;s++)n[s]=Math.round(e[s]*(1-i)+t[s]*i);return n},be=i=>"rgb("+i[0]+", "+i[1]+", "+i[2]+")",x=(i,e,t,n,s)=>{var o=n/s,c=e/t,l=0,r=0,f=i?o>c:o<c;return f?(l=e,r=l/o):(r=t,l=r*o),{width:l,height:r,x:(e-l)/2,y:(t-r)/2}};function H(i,e,t){const n=i.slice();return n[2]=e[t],n}function ve(i){let e;return{c(){e=C(i[1])},m(t,n){v(t,e,n)},p(t,n){n&2&&q(e,t[1])},d(t){t&&k(e)}}}function L(i){let e,t=i[2][0]+"",n,s,o;return{c(){e=b("div"),n=C(t),s=y(),u(e,"class","item svelte-x6nxfm"),u(e,"style",o="background-color: "+w(i[2][1]))},m(c,l){v(c,e,l),h(e,n),h(e,s)},p(c,l){l&1&&t!==(t=c[2][0]+"")&&q(n,t),l&1&&o!==(o="background-color: "+w(c[2][1]))&&u(e,"style",o)},d(c){c&&k(e)}}}function ke(i){let e,t,n,s,o;t=new G({props:{$$slots:{default:[ve]},$$scope:{ctx:i}}});let c=A(i[0]),l=[];for(let r=0;r<c.length;r+=1)l[r]=L(H(i,c,r));return{c(){e=b("div"),T(t.$$.fragment),n=y(),s=b("div");for(let r=0;r<l.length;r+=1)l[r].c();u(s,"class","range svelte-x6nxfm"),u(e,"class","input-number svelte-x6nxfm")},m(r,f){v(r,e,f),B(t,e,null),h(e,n),h(e,s);for(let a=0;a<l.length;a+=1)l[a]&&l[a].m(s,null);o=!0},p(r,[f]){const a={};if(f&34&&(a.$$scope={dirty:f,ctx:r}),t.$set(a),f&1){c=A(r[0]);let _;for(_=0;_<c.length;_+=1){const g=H(r,c,_);l[_]?l[_].p(g,f):(l[_]=L(g),l[_].c(),l[_].m(s,null))}for(;_<l.length;_+=1)l[_].d(1);l.length=c.length}},i(r){o||(R(t.$$.fragment,r),o=!0)},o(r){N(t.$$.fragment,r),o=!1},d(r){r&&k(e),I(t),j(l,r)}}}function pe(i,e,t){let{interpretation:n}=e,{label:s=""}=e;return i.$$set=o=>{"interpretation"in o&&t(0,n=o.interpretation),"label"in o&&t(1,s=o.label)},[n,s]}class we extends M{constructor(e){super(),z(this,e,pe,ke,E,{interpretation:0,label:1})}}function Q(i,e,t){const n=i.slice();return n[3]=e[t],n[5]=t,n}function ye(i){let e;return{c(){e=C(i[2])},m(t,n){v(t,e,n)},p(t,n){n&4&&q(e,t[2])},d(t){t&&k(e)}}}function W(i){let e,t=i[3]+"",n,s,o;return{c(){e=b("li"),n=C(t),s=y(),u(e,"class","dropdown-item svelte-1cqwepf"),u(e,"style",o="background-color: "+w(i[0][i[5]]))},m(c,l){v(c,e,l),h(e,n),h(e,s)},p(c,l){l&2&&t!==(t=c[3]+"")&&q(n,t),l&1&&o!==(o="background-color: "+w(c[0][c[5]]))&&u(e,"style",o)},d(c){c&&k(e)}}}function Se(i){let e,t,n,s,o;t=new G({props:{$$slots:{default:[ye]},$$scope:{ctx:i}}});let c=A(i[1]),l=[];for(let r=0;r<c.length;r+=1)l[r]=W(Q(i,c,r));return{c(){e=b("div"),T(t.$$.fragment),n=y(),s=b("ul");for(let r=0;r<l.length;r+=1)l[r].c();u(s,"class","dropdown-menu svelte-1cqwepf")},m(r,f){v(r,e,f),B(t,e,null),h(e,n),h(e,s);for(let a=0;a<l.length;a+=1)l[a]&&l[a].m(s,null);o=!0},p(r,[f]){const a={};if(f&68&&(a.$$scope={dirty:f,ctx:r}),t.$set(a),f&3){c=A(r[1]);let _;for(_=0;_<c.length;_+=1){const g=Q(r,c,_);l[_]?l[_].p(g,f):(l[_]=W(g),l[_].c(),l[_].m(s,null))}for(;_<l.length;_+=1)l[_].d(1);l.length=c.length}},i(r){o||(R(t.$$.fragment,r),o=!0)},o(r){N(t.$$.fragment,r),o=!1},d(r){r&&k(e),I(t),j(l,r)}}}function Ce(i,e,t){let{interpretation:n}=e,{choices:s}=e,{label:o=""}=e;return i.$$set=c=>{"interpretation"in c&&t(0,n=c.interpretation),"choices"in c&&t(1,s=c.choices),"label"in 
c&&t(2,o=c.label)},[n,s,o]}class qe extends M{constructor(e){super(),z(this,e,Ce,Se,E,{interpretation:0,choices:1,label:2})}}function Re(i){let e;return{c(){e=C(i[0])},m(t,n){v(t,e,n)},p(t,n){n&1&&q(e,t[0])},d(t){t&&k(e)}}}function Ae(i){let e,t,n,s,o,c,l,r,f,a,_,g,m;return t=new G({props:{$$slots:{default:[Re]},$$scope:{ctx:i}}}),{c(){e=b("div"),T(t.$$.fragment),n=y(),s=b("button"),o=b("div"),l=y(),r=b("div"),f=D("svg"),a=D("line"),_=D("line"),u(o,"class","checkbox svelte-1nw19ca"),u(o,"style",c="background-color: "+w(i[2][0])),u(a,"x1","-7.5"),u(a,"y1","0"),u(a,"x2","-2.5"),u(a,"y2","5"),u(a,"stroke","black"),u(a,"stroke-width","4"),u(a,"stroke-linecap","round"),u(_,"x1","-2.5"),u(_,"y1","5"),u(_,"x2","7.5"),u(_,"y2","-7.5"),u(_,"stroke","black"),u(_,"stroke-width","4"),u(_,"stroke-linecap","round"),u(f,"viewBox","-10 -10 20 20"),u(f,"class","svelte-1nw19ca"),u(r,"class","checkbox svelte-1nw19ca"),u(r,"style",g="background-color: "+w(i[2][1])),u(s,"class","checkbox-item svelte-1nw19ca"),O(s,"selected",i[1]),u(e,"class","input-checkbox svelte-1nw19ca")},m(d,p){v(d,e,p),B(t,e,null),h(e,n),h(e,s),h(s,o),h(s,l),h(s,r),h(r,f),h(f,a),h(f,_),m=!0},p(d,[p]){const S={};p&9&&(S.$$scope={dirty:p,ctx:d}),t.$set(S),(!m||p&4&&c!==(c="background-color: "+w(d[2][0])))&&u(o,"style",c),(!m||p&4&&g!==(g="background-color: "+w(d[2][1])))&&u(r,"style",g),(!m||p&2)&&O(s,"selected",d[1])},i(d){m||(R(t.$$.fragment,d),m=!0)},o(d){N(t.$$.fragment,d),m=!1},d(d){d&&k(e),I(t)}}}function Ne(i,e,t){let{label:n=""}=e,{original:s}=e,{interpretation:o}=e;return i.$$set=c=>{"label"in c&&t(0,n=c.label),"original"in c&&t(1,s=c.original),"interpretation"in c&&t(2,o=c.interpretation)},[n,s,o]}class Te extends M{constructor(e){super(),z(this,e,Ne,Ae,E,{label:0,original:1,interpretation:2})}}function X(i,e,t){const n=i.slice();return n[4]=e[t],n[6]=t,n}function Be(i){let e;return{c(){e=C(i[3])},m(t,n){v(t,e,n)},p(t,n){n&8&&q(e,t[3])},d(t){t&&k(e)}}}function Y(i){let e,t,n,s,o,c,l,r,f,a,_=i[4]+"",g,m;return{c(){e=b("button"),t=b("div"),s=y(),o=b("div"),c=D("svg"),l=D("line"),r=D("line"),a=y(),g=C(_),m=y(),u(t,"class","checkbox svelte-1cbhr6k"),u(t,"style",n="background-color: "+w(i[1][i[6]][0])),u(l,"x1","-7.5"),u(l,"y1","0"),u(l,"x2","-2.5"),u(l,"y2","5"),u(l,"stroke","black"),u(l,"stroke-width","4"),u(l,"stroke-linecap","round"),u(r,"x1","-2.5"),u(r,"y1","5"),u(r,"x2","7.5"),u(r,"y2","-7.5"),u(r,"stroke","black"),u(r,"stroke-width","4"),u(r,"stroke-linecap","round"),u(c,"viewBox","-10 -10 20 20"),u(c,"class","svelte-1cbhr6k"),u(o,"class","checkbox svelte-1cbhr6k"),u(o,"style",f="background-color: "+w(i[1][i[6]][1])),u(e,"class","checkbox-item svelte-1cbhr6k"),O(e,"selected",i[0].includes(i[4]))},m(d,p){v(d,e,p),h(e,t),h(e,s),h(e,o),h(o,c),h(c,l),h(c,r),h(e,a),h(e,g),h(e,m)},p(d,p){p&2&&n!==(n="background-color: "+w(d[1][d[6]][0]))&&u(t,"style",n),p&2&&f!==(f="background-color: "+w(d[1][d[6]][1]))&&u(o,"style",f),p&4&&_!==(_=d[4]+"")&&q(g,_),p&5&&O(e,"selected",d[0].includes(d[4]))},d(d){d&&k(e)}}}function Ie(i){let e,t,n,s;t=new G({props:{$$slots:{default:[Be]},$$scope:{ctx:i}}});let o=A(i[2]),c=[];for(let l=0;l<o.length;l+=1)c[l]=Y(X(i,o,l));return{c(){e=b("div"),T(t.$$.fragment),n=y();for(let l=0;l<c.length;l+=1)c[l].c();u(e,"class","input-checkbox-group svelte-1cbhr6k")},m(l,r){v(l,e,r),B(t,e,null),h(e,n);for(let f=0;f<c.length;f+=1)c[f]&&c[f].m(e,null);s=!0},p(l,[r]){const f={};if(r&136&&(f.$$scope={dirty:r,ctx:l}),t.$set(f),r&7){o=A(l[2]);let a;for(a=0;a<o.length;a+=1){const 
_=X(l,o,a);c[a]?c[a].p(_,r):(c[a]=Y(_),c[a].c(),c[a].m(e,null))}for(;a<c.length;a+=1)c[a].d(1);c.length=o.length}},i(l){s||(R(t.$$.fragment,l),s=!0)},o(l){N(t.$$.fragment,l),s=!1},d(l){l&&k(e),I(t),j(c,l)}}}function Me(i,e,t){let{original:n}=e,{interpretation:s}=e,{choices:o}=e,{label:c=""}=e;return i.$$set=l=>{"original"in l&&t(0,n=l.original),"interpretation"in l&&t(1,s=l.interpretation),"choices"in l&&t(2,o=l.choices),"label"in l&&t(3,c=l.label)},[n,s,o,c]}class ze extends M{constructor(e){super(),z(this,e,Me,Ie,E,{original:0,interpretation:1,choices:2,label:3})}}function Z(i,e,t){const n=i.slice();return n[6]=e[t],n}function Ee(i){let e;return{c(){e=C(i[5])},m(t,n){v(t,e,n)},p(t,n){n&32&&q(e,t[5])},d(t){t&&k(e)}}}function $(i){let e,t;return{c(){e=b("div"),u(e,"style",t="background-color: "+w(i[6])),u(e,"class","svelte-1sxprr7")},m(n,s){v(n,e,s)},p(n,s){s&2&&t!==(t="background-color: "+w(n[6]))&&u(e,"style",t)},d(n){n&&k(e)}}}function Ge(i){let e,t,n,s,o,c,l,r,f,a;t=new G({props:{$$slots:{default:[Ee]},$$scope:{ctx:i}}});let _=A(i[1]),g=[];for(let m=0;m<_.length;m+=1)g[m]=$(Z(i,_,m));return{c(){e=b("div"),T(t.$$.fragment),n=y(),s=b("input"),o=y(),c=b("div");for(let m=0;m<g.length;m+=1)g[m].c();l=y(),r=b("div"),f=C(i[0]),u(s,"type","range"),s.disabled=!0,u(s,"min",i[2]),u(s,"max",i[3]),u(s,"step",i[4]),u(s,"class","svelte-1sxprr7"),u(c,"class","range svelte-1sxprr7"),u(r,"class","original svelte-1sxprr7"),u(e,"class","input-slider svelte-1sxprr7")},m(m,d){v(m,e,d),B(t,e,null),h(e,n),h(e,s),h(e,o),h(e,c);for(let p=0;p<g.length;p+=1)g[p]&&g[p].m(c,null);h(e,l),h(e,r),h(r,f),a=!0},p(m,[d]){const p={};if(d&544&&(p.$$scope={dirty:d,ctx:m}),t.$set(p),(!a||d&4)&&u(s,"min",m[2]),(!a||d&8)&&u(s,"max",m[3]),(!a||d&16)&&u(s,"step",m[4]),d&2){_=A(m[1]);let S;for(S=0;S<_.length;S+=1){const U=Z(m,_,S);g[S]?g[S].p(U,d):(g[S]=$(U),g[S].c(),g[S].m(c,null))}for(;S<g.length;S+=1)g[S].d(1);g.length=_.length}(!a||d&1)&&q(f,m[0])},i(m){a||(R(t.$$.fragment,m),a=!0)},o(m){N(t.$$.fragment,m),a=!1},d(m){m&&k(e),I(t),j(g,m)}}}function je(i,e,t){let{original:n}=e,{interpretation:s}=e,{minimum:o}=e,{maximum:c}=e,{step:l}=e,{label:r=""}=e;return i.$$set=f=>{"original"in f&&t(0,n=f.original),"interpretation"in f&&t(1,s=f.interpretation),"minimum"in f&&t(2,o=f.minimum),"maximum"in f&&t(3,c=f.maximum),"step"in f&&t(4,l=f.step),"label"in f&&t(5,r=f.label)},[n,s,o,c,l,r]}class De extends M{constructor(e){super(),z(this,e,je,Ge,E,{original:0,interpretation:1,minimum:2,maximum:3,step:4,label:5})}}function ee(i,e,t){const n=i.slice();return n[4]=e[t],n[6]=t,n}function Oe(i){let e;return{c(){e=C(i[3])},m(t,n){v(t,e,n)},p(t,n){n&8&&q(e,t[3])},d(t){t&&k(e)}}}function te(i){let e,t,n,s,o=i[4]+"",c,l;return{c(){e=b("button"),t=b("div"),s=y(),c=C(o),l=y(),u(t,"class","radio-circle svelte-1nekfre"),u(t,"style",n="background-color: "+w(i[1][i[6]])),u(e,"class","radio-item svelte-1nekfre"),O(e,"selected",i[0]===i[4])},m(r,f){v(r,e,f),h(e,t),h(e,s),h(e,c),h(e,l)},p(r,f){f&2&&n!==(n="background-color: "+w(r[1][r[6]]))&&u(t,"style",n),f&4&&o!==(o=r[4]+"")&&q(c,o),f&5&&O(e,"selected",r[0]===r[4])},d(r){r&&k(e)}}}function Ue(i){let e,t,n,s;t=new G({props:{$$slots:{default:[Oe]},$$scope:{ctx:i}}});let o=A(i[2]),c=[];for(let l=0;l<o.length;l+=1)c[l]=te(ee(i,o,l));return{c(){e=b("div"),T(t.$$.fragment),n=y();for(let l=0;l<c.length;l+=1)c[l].c();u(e,"class","input-radio svelte-1nekfre")},m(l,r){v(l,e,r),B(t,e,null),h(e,n);for(let f=0;f<c.length;f+=1)c[f]&&c[f].m(e,null);s=!0},p(l,[r]){const 
f={};if(r&136&&(f.$$scope={dirty:r,ctx:l}),t.$set(f),r&7){o=A(l[2]);let a;for(a=0;a<o.length;a+=1){const _=ee(l,o,a);c[a]?c[a].p(_,r):(c[a]=te(_),c[a].c(),c[a].m(e,null))}for(;a<c.length;a+=1)c[a].d(1);c.length=o.length}},i(l){s||(R(t.$$.fragment,l),s=!0)},o(l){N(t.$$.fragment,l),s=!1},d(l){l&&k(e),I(t),j(c,l)}}}function Fe(i,e,t){let{original:n}=e,{interpretation:s}=e,{choices:o}=e,{label:c=""}=e;return i.$$set=l=>{"original"in l&&t(0,n=l.original),"interpretation"in l&&t(1,s=l.interpretation),"choices"in l&&t(2,o=l.choices),"label"in l&&t(3,c=l.label)},[n,s,o,c]}class Je extends M{constructor(e){super(),z(this,e,Fe,Ue,E,{original:0,interpretation:1,choices:2,label:3})}}function Ke(i){let e;return{c(){e=C(i[1])},m(t,n){v(t,e,n)},p(t,n){n&2&&q(e,t[1])},d(t){t&&k(e)}}}function Pe(i){let e,t,n,s,o,c,l,r,f,a;return t=new G({props:{$$slots:{default:[Ke]},$$scope:{ctx:i}}}),{c(){e=b("div"),T(t.$$.fragment),n=y(),s=b("div"),o=b("div"),c=b("canvas"),l=y(),r=b("img"),u(o,"class","interpretation svelte-h0dntu"),K(r.src,f=i[0])||u(r,"src",f),u(r,"class","svelte-h0dntu"),u(s,"class","image-preview svelte-h0dntu"),u(e,"class","input-image")},m(_,g){v(_,e,g),B(t,e,null),h(e,n),h(e,s),h(s,o),h(o,c),i[6](c),h(s,l),h(s,r),i[7](r),a=!0},p(_,[g]){const m={};g&514&&(m.$$scope={dirty:g,ctx:_}),t.$set(m),(!a||g&1&&!K(r.src,f=_[0]))&&u(r,"src",f)},i(_){a||(R(t.$$.fragment,_),a=!0)},o(_){N(t.$$.fragment,_),a=!1},d(_){_&&k(e),I(t),i[6](null),i[7](null)}}}function Ve(i,e,t){let{original:n}=e,{interpretation:s}=e,{shape:o}=e,{label:c=""}=e,l,r;const f=(g,m,d,p)=>{var S=d/g[0].length,U=p/g.length,F=0;g.forEach(function(fe){var J=0;fe.forEach(function(ue){m.fillStyle=w(ue),m.fillRect(J*S,F*U,S,U),J++}),F++})};_e(()=>{let g=x(!0,r.width,r.height,r.naturalWidth,r.naturalHeight);o&&(g=x(!0,g.width,g.height,o[0],o[1]));let m=g.width,d=g.height;l.setAttribute("height",`${d}`),l.setAttribute("width",`${m}`),f(s,l.getContext("2d"),m,d)});function a(g){P[g?"unshift":"push"](()=>{l=g,t(2,l)})}function _(g){P[g?"unshift":"push"](()=>{r=g,t(3,r)})}return i.$$set=g=>{"original"in g&&t(0,n=g.original),"interpretation"in g&&t(4,s=g.interpretation),"shape"in g&&t(5,o=g.shape),"label"in g&&t(1,c=g.label)},[n,c,l,r,s,o,a,_]}class xe extends M{constructor(e){super(),z(this,e,Ve,Pe,E,{original:0,interpretation:4,shape:5,label:1})}}function le(i,e,t){const n=i.slice();return n[2]=e[t],n}function He(i){let e;return{c(){e=C(i[1])},m(t,n){v(t,e,n)},p(t,n){n&2&&q(e,t[1])},d(t){t&&k(e)}}}function ne(i){let e,t;return{c(){e=b("div"),u(e,"class","item svelte-13lmfcp"),u(e,"style",t="background-color: "+w(i[2]))},m(n,s){v(n,e,s)},p(n,s){s&1&&t!==(t="background-color: "+w(n[2]))&&u(e,"style",t)},d(n){n&&k(e)}}}function Le(i){let e,t,n,s,o;t=new G({props:{$$slots:{default:[He]},$$scope:{ctx:i}}});let c=A(i[0]),l=[];for(let r=0;r<c.length;r+=1)l[r]=ne(le(i,c,r));return{c(){e=b("div"),T(t.$$.fragment),n=y(),s=b("div");for(let r=0;r<l.length;r+=1)l[r].c();u(s,"class","range svelte-13lmfcp")},m(r,f){v(r,e,f),B(t,e,null),h(e,n),h(e,s);for(let a=0;a<l.length;a+=1)l[a]&&l[a].m(s,null);o=!0},p(r,[f]){const a={};if(f&34&&(a.$$scope={dirty:f,ctx:r}),t.$set(a),f&1){c=A(r[0]);let _;for(_=0;_<c.length;_+=1){const g=le(r,c,_);l[_]?l[_].p(g,f):(l[_]=ne(g),l[_].c(),l[_].m(s,null))}for(;_<l.length;_+=1)l[_].d(1);l.length=c.length}},i(r){o||(R(t.$$.fragment,r),o=!0)},o(r){N(t.$$.fragment,r),o=!1},d(r){r&&k(e),I(t),j(l,r)}}}function Qe(i,e,t){let{interpretation:n}=e,{label:s=""}=e;return i.$$set=o=>{"interpretation"in o&&t(0,n=o.interpretation),"label"in 
o&&t(1,s=o.label)},[n,s]}class We extends M{constructor(e){super(),z(this,e,Qe,Le,E,{interpretation:0,label:1})}}function ie(i,e,t){const n=i.slice();return n[2]=e[t][0],n[3]=e[t][1],n}function Xe(i){let e;return{c(){e=C(i[0])},m(t,n){v(t,e,n)},p(t,n){n&1&&q(e,t[0])},d(t){t&&k(e)}}}function re(i){let e,t=i[2]+"",n,s,o;return{c(){e=b("span"),n=C(t),s=y(),u(e,"class","text-span svelte-15c0u2m"),u(e,"style",o="background-color: "+w(i[3]))},m(c,l){v(c,e,l),h(e,n),h(e,s)},p(c,l){l&2&&t!==(t=c[2]+"")&&q(n,t),l&2&&o!==(o="background-color: "+w(c[3]))&&u(e,"style",o)},d(c){c&&k(e)}}}function Ye(i){let e,t,n,s;t=new G({props:{$$slots:{default:[Xe]},$$scope:{ctx:i}}});let o=A(i[1]),c=[];for(let l=0;l<o.length;l+=1)c[l]=re(ie(i,o,l));return{c(){e=b("div"),T(t.$$.fragment),n=y();for(let l=0;l<c.length;l+=1)c[l].c();u(e,"class","input-text svelte-15c0u2m")},m(l,r){v(l,e,r),B(t,e,null),h(e,n);for(let f=0;f<c.length;f+=1)c[f]&&c[f].m(e,null);s=!0},p(l,[r]){const f={};if(r&65&&(f.$$scope={dirty:r,ctx:l}),t.$set(f),r&2){o=A(l[1]);let a;for(a=0;a<o.length;a+=1){const _=ie(l,o,a);c[a]?c[a].p(_,r):(c[a]=re(_),c[a].c(),c[a].m(e,null))}for(;a<c.length;a+=1)c[a].d(1);c.length=o.length}},i(l){s||(R(t.$$.fragment,l),s=!0)},o(l){N(t.$$.fragment,l),s=!1},d(l){l&&k(e),I(t),j(c,l)}}}function Ze(i,e,t){let{label:n=""}=e,{interpretation:s}=e;return i.$$set=o=>{"label"in o&&t(0,n=o.label),"interpretation"in o&&t(1,s=o.interpretation)},[n,s]}class $e extends M{constructor(e){super(),z(this,e,Ze,Ye,E,{label:0,interpretation:1})}}const et={audio:We,dropdown:qe,checkbox:Te,checkboxgroup:ze,number:we,slider:De,radio:Je,image:xe,textbox:$e};function ce(i){let e,t,n;const s=[i[0],{original:i[1].original},{interpretation:i[1].interpretation}];var o=i[2];function c(l){let r={};for(let f=0;f<s.length;f+=1)r=me(r,s[f]);return{props:r}}return o&&(e=V(o,c())),{c(){e&&T(e.$$.fragment),t=se()},m(l,r){e&&B(e,l,r),v(l,t,r),n=!0},p(l,r){const f=r&3?ge(s,[r&1&&he(l[0]),r&2&&{original:l[1].original},r&2&&{interpretation:l[1].interpretation}]):{};if(r&4&&o!==(o=l[2])){if(e){oe();const a=e;N(a.$$.fragment,1,0,()=>{I(a,1)}),ae()}o?(e=V(o,c()),T(e.$$.fragment),R(e.$$.fragment,1),B(e,t.parentNode,t)):e=null}else o&&e.$set(f)},i(l){n||(e&&R(e.$$.fragment,l),n=!0)},o(l){e&&N(e.$$.fragment,l),n=!1},d(l){l&&k(t),e&&I(e,l)}}}function tt(i){let e,t,n=i[1]&&ce(i);return{c(){n&&n.c(),e=se()},m(s,o){n&&n.m(s,o),v(s,e,o),t=!0},p(s,[o]){s[1]?n?(n.p(s,o),o&2&&R(n,1)):(n=ce(s),n.c(),R(n,1),n.m(e.parentNode,e)):n&&(oe(),N(n,1,1,()=>{n=null}),ae())},i(s){t||(R(n),t=!0)},o(s){N(n),t=!1},d(s){s&&k(e),n&&n.d(s)}}}function lt(i,e,t){let n,{component:s}=e,{component_props:o}=e,{value:c}=e;return i.$$set=l=>{"component"in l&&t(3,s=l.component),"component_props"in l&&t(0,o=l.component_props),"value"in l&&t(1,c=l.value)},i.$$.update=()=>{i.$$.dirty&8&&t(2,n=et[s])},[o,c,n,s]}class nt extends M{constructor(e){super(),z(this,e,lt,tt,E,{component:3,component_props:0,value:1})}}const ot=nt,at=["dynamic"];export{ot as Component,at as modes};
-//# sourceMappingURL=index-a01a6870.js.map
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/utils/readme_content.py
DELETED
@@ -1,18 +0,0 @@
-README_CONTENT = """
----
-tags: [gradio-theme]
-title: {theme_name}
-colorFrom: orange
-colorTo: purple
-sdk: gradio
-sdk_version: {gradio_version}
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-# {theme_name}
-## Description
-{description}
-## Contributions
-Thanks to [@{author}](https://huggingface.co/{author}) for adding this gradio theme!
-"""
spaces/Detomo/Depth_estimation/app.py
DELETED
@@ -1,35 +0,0 @@
-from layers import BilinearUpSampling2D
-from tensorflow.keras.models import load_model
-from utils import load_images, predict
-import matplotlib.pyplot as plt
-import numpy as np
-import gradio as gr
-
-custom_objects = {'BilinearUpSampling2D': BilinearUpSampling2D, 'depth_loss_function': None}
-print('Loading model...')
-model = load_model("model/model.h5", custom_objects=custom_objects, compile=False)
-print('Successfully loaded model...')
-examples = ['examples/377_image.png', 'examples/470_image.png', 'examples/499_image.png',
-            'examples/626_image.png', 'examples/358_image.png']
-
-
-def infer(image):
-    inputs = load_images([image])
-    outputs = predict(model, inputs)
-    plasma = plt.get_cmap('plasma')
-    rescaled = outputs[0][:, :, 0]
-    rescaled = rescaled - np.min(rescaled)
-    rescaled = rescaled / np.max(rescaled)
-    image_out = plasma(rescaled)[:, :, :3]
-    return image_out
-
-
-iface = gr.Interface(
-    fn=infer,
-    title="Monocular Depth Estimation",
-    description = "Unet architecture with Densenet201 backbone for estimating the depth of image 📏",
-    inputs=[gr.inputs.Image(label="image", type="numpy", shape=(640, 480))],
-    outputs="image",
-    cache_examples=True,
-    article = "Author: <a href=\"https://huggingface.co/vumichien\">Vu Minh Chien</a>.",
-    examples=examples).launch(debug=True)
spaces/DuckyPolice/DeciDiffusion-v1-0/app.py
DELETED
@@ -1,115 +0,0 @@
-import gradio as gr
-import torch
-from PIL.ImageDraw import Draw
-from diffusers import StableDiffusionPipeline
-from PIL import Image, ImageOps
-
-
-# Load pipeline once
-model_id = 'Deci/DeciDiffusion-v1-0'
-device = "cuda" if torch.cuda.is_available() else "cpu"
-pipe = StableDiffusionPipeline.from_pretrained(model_id, custom_pipeline=model_id, torch_dtype=torch.float32)
-pipe.unet = pipe.unet.from_pretrained(model_id, subfolder='flexible_unet', torch_dtype=torch.float32)
-pipe = pipe.to(device)
-
-
-def read_content(file_path: str) -> str:
-    """read the content of target file
-    """
-    with open(file_path, 'r', encoding='utf-8') as f:
-        content = f.read()
-
-    return content
-
-
-def predict(_prompt: str, _steps: int = 30, _seed: int = 42, _guidance_scale: float = 7.5, _negative_prompt: str = ""):
-    _negative_prompt = [_negative_prompt] if _negative_prompt else None
-
-    output = pipe(prompt=[_prompt],
-                  negative_prompt=_negative_prompt,
-                  num_inference_steps=int(_steps),
-                  guidance_scale=_guidance_scale,
-                  generator=torch.Generator(device).manual_seed(_seed),
-                  )
-    output_image = output.images[0]
-
-    # Add border beneath the image with Deci logo + prompt
-    if len(_prompt) > 52:
-        _prompt = _prompt[:52] + "..."
-
-    original_image_height = output_image.size[1]
-    output_image = ImageOps.expand(output_image, border=(0, 0, 0, 64), fill='white')
-    deci_logo = Image.open('./deci_logo_white.png')
-    output_image.paste(deci_logo, (0, original_image_height))
-    Draw(output_image).text((deci_logo.size[0], original_image_height + 26), _prompt, (127, 127, 127))
-    return output_image
-
-
-css = '''
-.gradio-container {
-    max-width: 1100px !important;
-    background-image: url(https://huggingface.co/spaces/Deci/Deci-DeciDiffusionClean/resolve/main/background-image.png);
-    background-size: cover;
-    background-position: center center;
-    background-repeat: no-repeat;
-}
-
-.footer {margin-bottom: 45px;margin-top: 35px !important;text-align: center;border-bottom: 1px solid #e5e5e5}
-.footer>p {font-size: .8rem; display: inline-block; padding: 0 10px;transform: translateY(10px);background: white}
-.dark .footer {border-color: #303030}
-.dark .footer>p {background: #0b0f19}
-.acknowledgments h4{margin: 1.25em 0 .25em 0;font-weight: bold;font-size: 115%}
-@keyframes spin {
-    from {
-        transform: rotate(0deg);
-    }
-    to {
-        transform: rotate(360deg);
-    }
-}
-'''
-
-demo = gr.Blocks(css=css, elem_id="total-container")
-with demo:
-    gr.HTML(read_content("header.html"))
-    with gr.Row():
-        with gr.Column():
-            with gr.Row(mobile_collapse=False, equal_height=True):
-                prompt = gr.Textbox(placeholder="Your prompt", show_label=False, elem_id="prompt", autofocus=True, lines=3, )
-
-            with gr.Accordion(label="Advanced Settings", open=False):
-                with gr.Row(mobile_collapse=False, equal_height=True):
-                    steps = gr.Slider(value=30, minimum=15, maximum=50, step=1, label="steps", interactive=True)
-                    seed = gr.Slider(value=42, minimum=1, maximum=100, step=1, label="seed", interactive=True)
-                    guidance_scale = gr.Slider(value=7.5, minimum=1, maximum=15, step=0.1, label='guidance_scale', interactive=True)
-
-                with gr.Row(mobile_collapse=False, equal_height=True):
-                    negative_prompt = gr.Textbox(label="negative_prompt", placeholder="Your negative prompt",
-                                                 info="what you don't want to see in the image", lines=3)
-            with gr.Row():
-                btn = gr.Button(value="Generate!", elem_id="run_button")
-
-        with gr.Column():
-            image_out = gr.Image(label="Output", elem_id="output-img", height=400)
-
-    btn.click(fn=predict,
-              inputs=[prompt, steps, seed, guidance_scale, negative_prompt],
-              outputs=[image_out],
-              api_name='run')
-
-    gr.HTML(
-        """
-        <div class="footer">
-            <p>Model by <a href="https://deci.ai" style="text-decoration: underline;" target="_blank">Deci.ai</a> - Gradio Demo by 🤗 Hugging Face
-            </p>
-        </div>
-        <div class="acknowledgments">
-            <p><h4>LICENSE</h4>
-            The model is licensed with a <a href="https://huggingface.co/Deci/DeciDiffusion-v1-0/blob/main/LICENSE-WEIGHTS.md" style="text-decoration: underline;" target="_blank">CreativeML Open RAIL-M</a> license. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in this license. The license forbids you from sharing any content that violates any laws, produce any harm to a person, disseminate any personal information that would be meant for harm, spread misinformation and target vulnerable groups. For the full list of restrictions please <a href="https://huggingface.co/Deci/DeciDiffusion-v1-0/blob/main/LICENSE-WEIGHTS.md" target="_blank" style="text-decoration: underline;" target="_blank">read the license</a></p>
-            <p><h4>Biases and content acknowledgment</h4>
-            Despite how impressive being able to turn text into image is, beware to the fact that this model may output content that reinforces or exacerbates societal biases, as well as realistic faces, pornography and violence. The model was trained on the <a href="https://laion.ai/blog/laion-5b/" style="text-decoration: underline;" target="_blank">LAION-5B dataset</a>, which scraped non-curated image-text-pairs from the internet (the exception being the removal of illegal content) and is meant for research purposes. You can read more in the <a href="https://huggingface.co/Deci/DeciDiffusion-v1-0" style="text-decoration: underline;" target="_blank">model card</a></p>
-        </div>
-        """
-    )
-
-demo.queue(max_size=50).launch()