Commit e52c4b3 · Parent(s): f9510ca
Update parquet files (step 119 of 249)
This view is limited to 50 files because it contains too many changes.
- spaces/1acneusushi/gradio-2dmoleculeeditor/Miui-Theme-Editor-For-Mac-HOT.md +0 -96
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe InDesign 2.0 Free Download Create and Publish Professional Layouts.md +0 -184
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Corel Draw X3 Free Download Full Version Filehippo 15 Create Stunning Logos Illustrations and More.md +0 -175
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/El cuerpo habla joe navarro pdf 114 Aprende a interpretar las seales no verbales de los dems.md +0 -152
- spaces/1gistliPinn/ChatGPT4/Examples/Brsobstetricsandgynecologypdffree11 UPDATED.md +0 -6
- spaces/1gistliPinn/ChatGPT4/Examples/Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download [WORK].md +0 -24
- spaces/1gistliPinn/ChatGPT4/Examples/Download Ed Sheeran Plus Album Zip Mega.md +0 -10
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cmo ver y descargar pelculas y series en Cuevana 3 para PC y Android.md +0 -135
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download AR Emoji Stickers and Customize Them with Your Favorite Accessories and Backgrounds.md +0 -147
- spaces/1phancelerku/anime-remove-background/Bricks King APK The Best Brick Breaker Game for Android.md +0 -130
- spaces/1toTree/lora_test/ppdiffusers/models/cross_attention.py +0 -435
- spaces/34we12er/newbing/Dockerfile +0 -34
- spaces/4Taps/SadTalker/src/facerender/modules/keypoint_detector.py +0 -179
- spaces/52Hz/SRMNet_thesis/app.py +0 -72
- spaces/A666sxr/Genshin_TTS/losses.py +0 -71
- spaces/AB-TW/team-ai/agents/tools/smart_domain/api_layer_code_tool.py +0 -96
- spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Getting Started 6bc871dcdd4a4554b5b22c0c40740841.md +0 -66
- spaces/AIConsultant/MusicGen/tests/models/test_musicgen.py +0 -58
- spaces/AIWaves/Debate/src/agents/Environment/base_environment.py +0 -167
- spaces/Aadarsh4all/ChatWithBear/app.py +0 -63
- spaces/Abdllh/poetry/app.py +0 -53
- spaces/Abdulkader/HumanMotionsDetector/app.py +0 -24
- spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/generated/client/nodes/2.js +0 -1
- spaces/Adapter/CoAdapter/ldm/modules/ema.py +0 -80
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/GetExpandedChildHeight.js +0 -22
- spaces/Aishwini/myfirstaigen/app.py +0 -34
- spaces/AkitoP/umamusume_bert_vits2/commons.py +0 -160
- spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/latex/attention/background.tex +0 -58
- spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/dataset/ray_utils.py +0 -289
- spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco.py +0 -6
- spaces/Andy1621/uniformer_image_detection/mmdet/datasets/samplers/group_sampler.py +0 -148
- spaces/AnimalEquality/chatbot/_proc/_docs/ingredient_vision.html +0 -802
- spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/drop.py +0 -65
- spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/knn.py +0 -77
- spaces/ArtGAN/Diffusion-API/diffusion_webui/__init__.py +0 -17
- spaces/Ash58947/Jan/Dockerfile +0 -21
- spaces/Aspik101/Polish_Llama2/app.py +0 -63
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/packaging/__init__.py +0 -25
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/config/_validate_pyproject/extra_validations.py +0 -36
- spaces/Awiny/Image2Paragraph/models/image_text_transformation.py +0 -71
- spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/layers_537238KB.py +0 -126
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_spinners.py +0 -482
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/spinner.py +0 -137
- spaces/Boadiwaa/Recipes/openai/api_resources/__init__.py +0 -13
- spaces/CVPR/LIVE/thrust/thrust/detail/complex/ctanhf.h +0 -124
- spaces/CVPR/LIVE/thrust/thrust/detail/config.h +0 -24
- spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/partition.h +0 -87
- spaces/CVPR/MonoScene/monoscene/.ipynb_checkpoints/unet3d_nyu-checkpoint.py +0 -90
- spaces/CVPR/Text2Human/Text2Human/README.md +0 -255
- spaces/CVPR/lama-example/bin/paper_runfiles/generate_test_celeba-hq.sh +0 -17
spaces/1acneusushi/gradio-2dmoleculeeditor/Miui-Theme-Editor-For-Mac-HOT.md
DELETED
@@ -1,96 +0,0 @@
## Miui Theme Editor For Mac

**DOWNLOAD ✓✓✓ [https://www.google.com/url?q=https%3A%2F%2Furlca.com%2F2txKKy&sa=D&sntz=1&usg=AOvVaw3tEa034JOViv49zza8lXsX](https://www.google.com/url?q=https%3A%2F%2Furlca.com%2F2txKKy&sa=D&sntz=1&usg=AOvVaw3tEa034JOViv49zza8lXsX)**

# How to Use Miui Theme Editor For Mac to Create Awesome Themes

Miui Theme Editor is a powerful tool that allows you to customize and create your own themes for Miui devices. You can change the icons, wallpapers, fonts, sounds, animations, lock screen, status bar, and more. With Miui Theme Editor, you can unleash your creativity and make your phone stand out from the crowd.

But how do you use Miui Theme Editor For Mac? In this article, we will show you how to download, install, and use Miui Theme Editor For Mac step by step. Follow along and you will be able to create your own themes in no time.

## Download Miui Theme Editor For Mac

The first thing you need to do is download Miui Theme Editor For Mac from the official website. Click here[^1^] to go to the download page and choose the version for macOS. The file is about 300 MB and is downloaded as a zip archive.

## Install Miui Theme Editor For Mac

After downloading the zip file, extract it to a folder on your Mac. You can use any unzip tool or simply double-click the file to open it. You will see a folder named something like 21.8.16\_1630000000 (the name varies with the version of the editor). Inside this folder, you will find another folder named MIUINewThemeEditor and a file named MIUINewThemeEditor.jar.

To install Miui Theme Editor For Mac, you need to run the MIUINewThemeEditor.jar file. Before you do, make sure a Java runtime is installed on your Mac. If you don't have one, click here[^2^] to download and install it first.

Due to the security settings of macOS, you may not be able to run the MIUINewThemeEditor.jar file by double-clicking it. In that case, right-click it and choose Open from the menu. A pop-up window will ask whether you want to open it; click Open and the Miui Theme Editor interface will appear.
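If Finder still refuses to launch the jar, a scripted route works too. Here is a minimal Python sketch, assuming `java` is on your PATH and using the jar name from the extracted folder above:

```python
import subprocess

# Launch the theme editor; run this from the folder that
# contains MIUINewThemeEditor.jar (requires a Java runtime).
subprocess.run(["java", "-jar", "MIUINewThemeEditor.jar"], check=True)
```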
## Use Miui Theme Editor For Mac

Now that you have installed Miui Theme Editor For Mac, you can start creating your own themes. Here are some basic steps to follow:

- Connect your phone to your Mac via USB cable and enable USB debugging on your phone. To do that, go to Settings > My device > All specs and tap MIUI version several times until you see the message "You are now a developer". Then go back to Settings > Additional settings > Developer options and turn on USB debugging (to verify the connection, see the sketch after this list).
- On the Miui Theme Editor interface, click File > New Project and enter a name for your theme. You can also choose a base theme from the list or import an existing theme from your phone or computer.
- On the left panel, you will see the modules you can customize for your theme, such as Icons, Wallpapers, Fonts, Sounds, etc. Click any module and its options appear on the right panel, where you can change the colors, images, sizes, positions, animations, and other properties of each element to your preference.
- When you are done editing a module, click Save at the bottom right corner of the screen. You can also preview your theme on your phone by clicking Apply at the top right corner of the screen.
- When you are satisfied with your theme, click File > Export Project and choose a location to save your theme as an .mtz file. You can also upload your theme to the Miui Theme Store by clicking File > Upload Project and following the instructions.
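Once USB debugging is on, it is worth confirming that your Mac can actually see the phone before opening the editor. A small Python sketch, assuming the Android platform-tools (`adb`) are installed and on your PATH:

```python
import subprocess

# List devices visible over ADB; the phone should appear as "device".
# If it shows "unauthorized", accept the debugging prompt on the phone.
result = subprocess.run(["adb", "devices"], capture_output=True, text=True)
print(result.stdout)
```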
## Conclusion

Miui Theme Editor For Mac is a great tool for anyone who wants to create their own themes for Miui devices. It is easy to use and offers a lot of options to customize every aspect of your theme. With Miui Theme Editor For Mac, you can make your phone look unique and stylish.

If you have any questions or

1b8d091108
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe InDesign 2.0 Free Download Create and Publish Professional Layouts.md
DELETED
@@ -1,184 +0,0 @@
<br />
<h1>Adobe InDesign 2.0 Free Download: A Guide for Beginners</h1>
<p>If you are looking for a powerful and versatile desktop publishing tool, you might have heard of Adobe InDesign. But what is it exactly, and how can you get it for free? In this article, we will answer these questions and more, as we guide you through the process of downloading, installing, and using Adobe InDesign 2.0 for free.</p>
<h2>adobe indesign 2.0 free download</h2><br /><p><b><b>Download</b> ••• <a href="https://byltly.com/2uKwZm">https://byltly.com/2uKwZm</a></b></p><br /><br />
<h2>What is Adobe InDesign?</h2>
<p>Adobe InDesign is a software application that allows you to create and publish professional-looking documents for print and digital media. You can use it to design books, magazines, flyers, posters, brochures, newsletters, catalogs, eBooks, interactive PDFs, and more.</p>
<h3>A brief history of InDesign</h3>
<p>Adobe InDesign was first released in 1999 as a successor to Adobe PageMaker, which was a popular desktop publishing program at the time. Adobe wanted to create a more modern and competitive tool that could rival QuarkXPress, another leading desktop publishing program on the market.</p>
<p>InDesign was initially met with skepticism and resistance from PageMaker and QuarkXPress users, who were reluctant to switch to a new and unfamiliar platform. However, over time, InDesign gained popularity and recognition for its innovative features and capabilities, such as transparency effects, long-document support, typography control, XML integration, and cross-media publishing.</p>
<p>Since its debut, Adobe has released several versions of InDesign, each with new improvements and enhancements. The latest version is Adobe InDesign CC 2021, which is part of the Adobe Creative Cloud suite of applications.</p>
<h3>The main features of InDesign</h3>
<p>Adobe InDesign has many features that make it a powerful and versatile desktop publishing tool. Some of the main features are:</p>
<ul>
<li><b>Layout and design:</b> You can create custom layouts with flexible frames, grids, guides, rulers, and alignment tools. You can also use preset templates or import layouts from other applications.</li>
<li><b>Typography:</b> You can access thousands of fonts from Adobe Fonts or use your own fonts. You can also adjust the font size, style, color, spacing, kerning, tracking, hyphenation, and more.</li>
<li><b>Graphics:</b> You can import graphics from various formats, such as JPEG, PNG, GIF, TIFF, EPS, PDF, PSD, AI, and more. You can also edit graphics with tools like cropping, scaling, rotating, flipping, skewing, transparency effects, drop shadows, feathering, clipping paths, and more.</li>
<li><b>Color:</b> You can apply colors to text and graphics using swatches or color pickers. You can also create custom color schemes or use predefined color libraries.</li>
<li><b>Tables:</b> You can create tables with rows and columns to display data or information in a structured way. You can also format tables with borders, fills, strokes, cell styles, and more.</li>
<li><b>Interactivity:</b> You can add interactivity to your documents by inserting hyperlinks, buttons, animations, video, audio, and more.</li>
<li><b>Exporting:</b> You can export your documents to various formats, such as PDF, EPUB, HTML, and more.</li>
</ul>
<h2>Why download Adobe InDesign 2.0?</h2>
<p>You might be wondering why you would want to download an old version of Adobe InDesign such as version 2.0 when there are newer and better versions available. Well, there are some reasons why you might prefer to use InDesign 2.0 over the latest version.</p>
<p>How to get adobe indesign 2.0 for free<br />
Adobe indesign 2.0 crack download<br />
Adobe indesign 2.0 serial number generator<br />
Adobe indesign 2.0 full version download<br />
Adobe indesign 2.0 portable download<br />
Adobe indesign 2.0 mac free download<br />
Adobe indesign 2.0 windows 10 free download<br />
Adobe indesign 2.0 tutorial pdf free download<br />
Adobe indesign 2.0 templates free download<br />
Adobe indesign 2.0 trial version free download<br />
Adobe indesign 2.0 software free download<br />
Adobe indesign 2.0 license key free download<br />
Adobe indesign 2.0 setup free download<br />
Adobe indesign 2.0 offline installer free download<br />
Adobe indesign 2.0 iso file free download<br />
Adobe indesign 2.0 rar file free download<br />
Adobe indesign 2.0 zip file free download<br />
Adobe indesign 2.0 activation code free download<br />
Adobe indesign 2.0 patch file free download<br />
Adobe indesign 2.0 keygen free download<br />
Adobe indesign 2.0 torrent download free<br />
Adobe indesign 2.0 direct link free download<br />
Adobe indesign 2.0 mega link free download<br />
Adobe indesign 2.0 google drive link free download<br />
Adobe indesign 2.0 dropbox link free download<br />
Adobe indesign 2.0 mediafire link free download<br />
Adobe indesign 2.0 zippyshare link free download<br />
Adobe indesign 2.0 alternative free download<br />
Adobe indesign 2.0 compatible software free download<br />
Adobe indesign 2.0 plugins free download<br />
Adobe indesign 2.0 fonts free download<br />
Adobe indesign 2.0 brushes free download<br />
Adobe indesign 2.0 presets free download<br />
Adobe indesign 2.0 actions free download<br />
Adobe indesign 2.0 scripts free download<br />
Adobe indesign 2.0 extensions free download<br />
Adobe indesign 2.0 tips and tricks free download<br />
Adobe indesign 2.0 user guide free download<br />
Adobe indesign 2.0 manual free download<br />
Adobe indesign 2.0 help file free download<br />
Adobe indesign 2.0 cheat sheet free download<br />
Adobe indesign 2.0 keyboard shortcuts free download<br />
Adobe indesign 2.0 video tutorials free download<br />
Adobe indesign 2.0 online courses free download<br />
Adobe indesign 2.0 ebooks free download<br />
Adobe indesign 2.0 magazines free download<br />
Adobe indesign 2.0 brochures free download<br />
Adobe indesign 2.0 flyers free download<br />
Adobe indesign 2.0 posters free download<br />
Adobe indesign 2.0 newsletters free download</p>
<h3>The benefits of using InDesign 2.0</h3>
<p>Some of the benefits of using InDesign 2.0 are:</p>
<ul>
<li><b>Nostalgia:</b> If you are an old-school user who started with PageMaker or QuarkXPress, you might feel nostalgic about using InDesign 2.0, which was one of the first versions of InDesign and introduced many new features and improvements.</li>
<li><b>Simplicity:</b> If you are a beginner who wants to learn the basics of desktop publishing without being overwhelmed by too many options and tools, you might find InDesign 2.0 easier to use than the latest version, which has more advanced and complex features.</li>
<li><b>Compatibility:</b> If you have an older computer or operating system that cannot run the latest version of InDesign, you might be able to run InDesign 2.0 without any problems.</li>
<li><b>Affordability:</b> If you want to use InDesign without paying a monthly subscription fee for the Creative Cloud membership, you might be able to get InDesign 2.0 for free or at a low cost from some sources.</li>
</ul>
<h3>The drawbacks of using InDesign 2.0</h3>
<p>Of course, using InDesign 2.0 also has some drawbacks that you should be aware of before downloading it. Some of the drawbacks are:</p>
<ul>
<li><b>Lack of support:</b> Since InDesign 2.0 is an outdated version that was released in 2002, it is no longer supported by Adobe or any other official source. This means that you will not receive any updates, bug fixes, security patches, or customer service for this version.</li>
<li><b>Lack of features:</b> Since InDesign 2.0 was released before many new technologies and standards emerged in the desktop publishing industry, it lacks many features that are available in the latest version of InDesign. For example, InDesign 2.0 does not support Unicode characters, OpenType fonts, PDF/X standards, EPUB export, and more.</li>
<li><b>Lack of compatibility:</b> Since InDesign 2.0 was designed for older systems and formats, it might not be compatible with newer ones. For example, InDesign 2.0 might not work well with Windows 10 or macOS Catalina, or it might not open files created with newer versions of InDesign or other applications.</li>
<li><b>Lack of legality:</b> Since InDesign 2.0 is proprietary software that belongs to Adobe, it is not legal to download or use it without a valid license or permission from Adobe. This means that if you download or use InDesign 2.0 from an unauthorized source, you might be violating the terms of service or infringing the intellectual property rights of Adobe.</li>
</ul>
<h2>How to download Adobe InDesign 2.0 for free?</h2>
<p>If you still want to download Adobe InDesign 2.0 for free despite its drawbacks, there are two ways you can try: the official way and the alternative way.</p>
<h3>The official way to get a free trial of InDesign</h3>
<p>The official way to get a free trial of InDesign is to visit the <a href="https://www.adobe.com/products/indesign/free-trial-download.html">Adobe website</a> and follow these steps:</p>
<ol>
<li><p>Click the Start Free Trial button.</p></li>
<li><p>Sign in or set up your Adobe ID and download your free trial.</p></li>
<li><p>After your 7-day free trial ends, your Creative Cloud membership will continue unless canceled before the trial ends.</p></li>
</ol>
<p>This way you can try the latest version of InDesign for free for seven days. However, if you want to continue using it after the trial period ends, you will have to pay a monthly subscription fee for the Creative Cloud membership.</p>
<h3>The alternative way to get a free version of InDesign 2.0</h3>
<p>The alternative way to get a free version of InDesign 2.0 is to visit websites that offer old software downloads, such as WinWorld or Internet Archive. These websites provide access to archived versions of software that are no longer supported or available from their original sources. However, they are not authorized by Adobe or any other software company, so you should use them at your own risk and discretion.</p>
<h4>WinWorld: Adobe InDesign 2.0</h4>
<p>WinWorld is a website that provides access to old software downloads for various operating systems and platforms. You can find Adobe InDesign 2.0 on WinWorld by following these steps:</p>
<ol>
<li><p>Visit the <a href="https://winworldpc.com/product/adobe-indesign/20">WinWorld website</a> and search for Adobe InDesign 2.0.</p></li>
<li><p>Select the language and architecture of your choice and click the Download button.</p></li>
<li><p>You will get an ISO file that contains the installation files for InDesign 2.0.</p></li>
<li><p>You will need CD burning software or virtual drive software to mount the ISO file and run the setup.exe file.</p></li>
</ol>
<p>This way you can download and install InDesign 2.0 for free on your computer. However, you will need a serial number to activate the software after installation. You can find some serial numbers on the WinWorld website or other online sources, but they may not work or be valid.</p>
<h4>Internet Archive: Adobe InDesign 2.0</h4>
<p>Internet Archive is a website that provides access to archived versions of websites, books, videos, music, and software. You can find Adobe InDesign 2.0 on Internet Archive by following these steps:</p>
<ol>
<li><p>Visit the <a href="https://archive.org/details/eu_Adobe-Indesign-2.0">Internet Archive website</a> and search for Adobe InDesign 2.0.</p></li>
<li><p>Select the file that matches your language and platform and click the Download button.</p></li>
<li><p>You will get a ZIP file that contains the installation files for InDesign 2.0.</p></li>
<li><p>You will need a ZIP extraction tool to unzip the file and run the setup.exe file.</p></li>
</ol>
<p>This way you can download and install InDesign 2.0 for free on your computer. However, you will need a serial number to activate the software after installation. You can find some serial numbers on the Internet Archive website or other online sources, but they may not work or be valid.</p>
<h2>How to install and use Adobe InDesign 2.0?</h2>
<p>If you have downloaded Adobe InDesign 2.0 either the official or the alternative way, you will need to install and use it on your computer. Here are some tips on how to do that:</p>
<h3>The system requirements for InDesign 2.0</h3>
<p>Before installing InDesign 2.0, check that your system meets the minimum requirements for running the software. According to Adobe, these are the system requirements for InDesign 2.0:</p>
<table>
<tr>
<th>Operating system</th>
<th>Processor</th>
<th>RAM</th>
<th>Hard disk space</th>
<th>Monitor resolution</th>
<th>CD-ROM drive</th>
</tr>
<tr>
<td>Windows XP/2000/NT/ME/98/95</td>
<td>Pentium II or higher</td>
<td>64 MB (128 MB recommended)</td>
<td>125 MB (175 MB recommended)</td>
<td>800 x 600 (1024 x 768 recommended)</td>
<td>Required</td>
</tr>
<tr>
<td>Mac OS X/9/8/7</td>
<td>G3 or higher</td>
<td>64 MB (128 MB recommended)</td>
<td>125 MB (175 MB recommended)</td>
<td>800 x 600 (1024 x 768 recommended)</td>
<td>Required</td>
</tr>
</table>
<p>If your system does not meet these requirements, you might experience problems installing or running InDesign 2.0. You might also need to update your drivers, software, or hardware to ensure compatibility with InDesign 2.0.</p>
<h3>The installation process for InDesign 2.0</h3>
<p>After downloading InDesign 2.0 from either the official or the alternative way, you will need to install it on your computer. The installation process may vary depending on the source and format of the download, but here are some general steps you can follow:</p>
<ol>
<li><p>Locate the installation file on your computer. It may be an ISO file, a ZIP file, or an EXE file.</p></li>
<li><p>If the file is an ISO file, you will need CD burning software or virtual drive software to mount the ISO file and run the setup.exe file.</p></li>
<li><p>If the file is a ZIP file, you will need a ZIP extraction tool to unzip the file and run the setup.exe file (see the sketch after this list).</p></li>
<li><p>If the file is an EXE file, you can simply double-click it to run it.</p></li>
<li><p>Follow the onscreen instructions to complete the installation. You may need to agree to the terms and conditions, choose a destination folder, and enter a serial number.</p></li>
<li><p>After the installation is finished, you can launch InDesign 2.0 from your desktop or start menu.</p></li>
</ol>
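For the ZIP route in the list above, extraction can also be scripted. A minimal Python sketch, where the archive name <code>Adobe-Indesign-2.0.zip</code> is only illustrative (the real name depends on the download source):

```python
import zipfile

# Unpack the downloaded archive into a working folder,
# then run setup.exe from inside that folder.
with zipfile.ZipFile("Adobe-Indesign-2.0.zip") as archive:
    archive.extractall("indesign20_setup")
```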
<h3>The basic steps to create a document with InDesign 2.0</h3>
<p>Once you have installed and launched InDesign 2.0 on your computer, you can start creating your own documents with it. Here are some basic steps you can follow:</p>
<ol>
<li><p>Create a new document by choosing File > New > Document or pressing Ctrl+N (Windows) or Command+N (Mac).</p></li>
<li><p>Choose a preset or custom document size, orientation, margins, columns, and other options in the New Document dialog box and click OK.</p></li>
<li><p>Add text to your document by choosing File > Place or pressing Ctrl+D (Windows) or Command+D (Mac) and selecting a text file from your computer. You can also type text directly in InDesign by using the Type tool.</p></li>
<li><p>Add graphics to your document by choosing File > Place or pressing Ctrl+D (Windows) or Command+D (Mac) and selecting a graphic file from your computer. You can also draw graphics directly in InDesign by using the Pen tool or other drawing tools.</p></li>
<li><p>Format your text and graphics by using the Character palette, the Paragraph palette, the Swatches palette, the Stroke palette, and other palettes that you can access from the Window menu.</p></li>
<li><p>Create tables by choosing Table > Insert Table or pressing Ctrl+Alt+T (Windows) or Command+Option+T (Mac) and specifying the number of rows and columns in the Insert Table dialog box. You can also convert text to tables by choosing Table > Convert Text To Table.</p></li>
<li><p>Add interactivity to your document by choosing Object > Interactive > New Hyperlink or pressing Ctrl+Alt+H (Windows) or Command+Option+H (Mac) and specifying the link destination and appearance in the New Hyperlink dialog box. You can also add buttons, animations, video, audio, and more by using the Interactive palette.</p></li>
<li><p>Export your document by choosing File > Export or pressing Ctrl+E (Windows) or Command+E (Mac) and selecting a format from the Format menu in the Export dialog box. You can export your document as PDF, HTML, XML, EPS, JPEG, TIFF, and more.</p></li>
</ol>
<p>These are just some of the basic steps to create a document with InDesign 2.0. You can learn more about InDesign's features and functions by reading the User Guide or watching tutorials online.</p>
<h2>Conclusion</h2>
<p>In this article, we have learned what Adobe InDesign is, why you might want to download InDesign 2.0 for free, how to download InDesign 2.0 for free from different sources, how to install and use InDesign 2.0 on your computer, and how to create a document with InDesign 2.0. We hope this article has been helpful and informative for you.</p>
<p>If you have any questions or feedback about this article, please feel free to leave a comment below. We would love to hear from you!</p>
<h2>FAQs</h2>
<p>Here are some frequently asked questions about Adobe InDesign 2.0:</p>
<ol>
<li><b>Is Adobe InDesign 2.0 free?</b> Adobe InDesign 2.0 is not free, as it is proprietary software that belongs to Adobe. However, you might be able to get it for free or at a low cost from unofficial sources such as WinWorld or Internet Archive. These sources are not authorized by Adobe or any other software company, so you should use them at your own risk and discretion.</li>
<li><b>Is Adobe InDesign 2.0 safe?</b> Adobe InDesign 2.0 is safe as long as you download it from a reliable source such as Adobe's website or a trusted CD-ROM. If you download it from an unauthorized source such as WinWorld or Internet Archive, you might encounter risks such as viruses, malware, or legal issues. You should always scan any files you download with antivirus software before opening them.</li>
<li><b>Is Adobe InDesign 2.0 compatible with Windows 10?</b> Adobe InDesign 2.0 is not officially compatible with Windows 10, as it was designed for older systems such as Windows XP/2000/NT/ME/98/95. Some users have reported that they were able to run InDesign 2.0 on Windows 10 with some tweaks and adjustments, but this is not guaranteed and may cause errors or crashes.</li>
<li><b>Is Adobe InDesign 2.0 compatible with macOS Catalina?</b> Adobe InDesign 2.0 is not compatible with macOS Catalina, as it is a 32-bit application and Catalina only supports 64-bit applications. You will not be able to run InDesign 2.0 on Catalina unless you use a virtual machine or a dual-boot system.</li>
<li><b>Is Adobe InDesign 2.0 compatible with newer versions of InDesign?</b> Adobe InDesign 2.0 is partially compatible with newer versions of InDesign, as it can open and save files in the INX format, an interchange format that preserves most document features. However, features that are not supported by InDesign 2.0 may be lost or altered when opening or saving files in the INX format.</li>
</ol>
</p> 0a6ba089eb<br />
<br />
<br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Corel Draw X3 Free Download Full Version Filehippo 15 Create Stunning Logos Illustrations and More.md
DELETED
@@ -1,175 +0,0 @@
<h1>Xforce Keygen Adobe Premiere Pro CC Torrentinstmank: What Is It and How to Use It?</h1>
<p>If you are looking for a way to get Adobe Premiere Pro CC, one of the most popular and powerful video editing programs, for free, you might have come across the term "xforce keygen adobe premiere pro cc torrentinstmank". But what does it mean and how can you use it? In this article, we will explain everything you need to know about this method of obtaining Adobe Premiere Pro CC without paying a dime.</p>
<h2>xforce keygen adobe premiere pro cc torrentinstmank</h2><br /><p><b><b>Download File</b> ››››› <a href="https://byltly.com/2uKvBI">https://byltly.com/2uKvBI</a></b></p><br /><br />
<h2>Introduction</h2>
<p>Before we dive into the details of xforce keygen adobe premiere pro cc torrentinstmank, let's first understand what each of these words means.</p>
<h3>What is Adobe Premiere Pro CC?</h3>
<p>Adobe Premiere Pro CC is a professional video editing application that is part of the Adobe Creative Cloud suite. It allows you to create stunning videos for various purposes, such as film, TV, web, social media, etc. You can edit your footage in any format, from 8K to virtual reality, and apply various effects, transitions, titles, graphics, audio, and more. You can also collaborate with other editors and share your projects across different devices and platforms.</p>
<h3>What is xforce keygen?</h3>
<p>Xforce keygen is a tool that can generate serial numbers and activation codes for various software products, including Adobe products. It was created by a group of hackers called X-Force, who are known for cracking many software programs. By using xforce keygen, you can bypass the official activation process of Adobe Premiere Pro CC and use it for free.</p>
<p>xforce keygen adobe premiere pro cc 2019 crack<br />
xforce keygen adobe premiere pro cc 2020 download<br />
xforce keygen adobe premiere pro cc 2017 activation<br />
xforce keygen adobe premiere pro cc 2018 serial number<br />
xforce keygen adobe premiere pro cc 2015.3 patch<br />
xforce keygen adobe premiere pro cc 2015.2 update<br />
xforce keygen adobe premiere pro cc master collection<br />
xforce keygen adobe premiere pro cc rutracker<br />
xforce keygen adobe premiere pro cc getpcsofts<br />
xforce keygen adobe premiere pro cc reddit<br />
xforce keygen adobe premiere pro cc universal patcher<br />
xforce keygen adobe premiere pro cc amtlib.dll<br />
xforce keygen adobe premiere pro cc painteR<br />
xforce keygen adobe premiere pro cc thepiratebay<br />
xforce keygen adobe premiere pro cc fixthephoto<br />
xforce keygen adobe premiere pro cc free trial<br />
xforce keygen adobe premiere pro cc full version<br />
xforce keygen adobe premiere pro cc offline installer<br />
xforce keygen adobe premiere pro cc mac os<br />
xforce keygen adobe premiere pro cc windows 10<br />
xforce keygen adobe premiere pro cc 64 bit<br />
xforce keygen adobe premiere pro cc 32 bit<br />
xforce keygen adobe premiere pro cc video editing software<br />
xforce keygen adobe premiere pro cc professional tools<br />
xforce keygen adobe premiere pro cc creative cloud<br />
xforce keygen adobe premiere pro cc license key generator<br />
xforce keygen adobe premiere pro cc product activation code<br />
xforce keygen adobe premiere pro cc registration code<br />
xforce keygen adobe premiere pro cc crack download link<br />
xforce keygen adobe premiere pro cc torrent download magnet link<br />
xforce keygen adobe premiere pro cc how to install guide<br />
xforce keygen adobe premiere pro cc how to use tutorial<br />
xforce keygen adobe premiere pro cc features and benefits<br />
xforce keygen adobe premiere pro cc reviews and ratings<br />
xforce keygen adobe premiere pro cc alternatives and competitors<br />
xforce keygen adobe premiere pro cc support and help<br />
xforce keygen adobe premiere pro cc latest version update<br />
xforce keygen adobe premiere pro cc new features and improvements<br />
xforce keygen adobe premiere pro cc bugs and issues fix<br />
xforce keygen adobe premiere pro cc tips and tricks<br />
xforce keygen adobe premiere pro cc best practices and recommendations<br />
xforce keygen adobe premiere pro cc FAQs and answers<br />
xforce keygen adobe premiere pro cc forums and communities<br />
xforce keygen adobe premiere pro cc blogs and articles<br />
xforce keygen adobe premiere pro cc videos and tutorials<br />
xforce keygen adobe premiere pro cc courses and classes<br />
xforce keygen adobe premiere pro cc ebooks and books<br />
xforce keygen adobe premiere pro cc podcasts and webinars<br />
xforce keygen adobe premiere pro cc case studies and testimonials</p>
<h3>What is torrentinstmank?</h3>
<p>Torrentinstmank is a suffix that is added to some torrent files that contain cracked software. It is not a real word, but rather a combination of "torrent", "install", and "maniac". It implies that the torrent file contains everything you need to install and run the software without any problems.</p>
<h2>How to download and install xforce keygen adobe premiere pro cc torrentinstmank?</h2>
<p>Now that you know what xforce keygen adobe premiere pro cc torrentinstmank is, let's see how you can download and install it on your computer. Here are the steps you need to follow:</p>
<h3>Step 1: Download the torrent file from a reliable source</h3>
<p>The first thing you need to do is find a trustworthy website that offers the torrent file for xforce keygen adobe premiere pro cc torrentinstmank. You can use any search engine to look for it, but be careful of fake or malicious links that might harm your computer. You can also check the comments and ratings of other users to see whether the torrent file is safe and working.</p>
<p>Once you find a good link, click on it and download the torrent file to your computer. The file size should be around 1.5 GB.</p>
<h3>Step 2: Open the torrent file with a torrent client</h3>
<p>Next, open the torrent file with a torrent client, a program that downloads files from other users who are sharing them. You can use any torrent client you like, such as uTorrent, BitTorrent, qBittorrent, etc.</p>
<p>After you install a torrent client on your computer, double-click the torrent file you downloaded in step 1. The torrent client will open and start downloading the files from other peers. The download speed will depend on your internet connection and the number of seeders (users who have finished downloading) and leechers (users who are still downloading).</p>
<p>Wait until the download is finished. You should see a folder named "xforce keygen adobe premiere pro cc torrentinstmank" in your download location.</p>
<h3>Step 3: Extract the files from the downloaded folder</h3>
<p>The folder you downloaded contains compressed files that need to be extracted before you can use them. To extract them, you need a program that can handle ZIP or RAR files, such as WinRAR, 7-Zip, etc.</p>
<p>Right-click the folder and select "Extract here" or "Extract to xforce keygen adobe premiere pro cc torrentinstmank". Another folder with the same name should appear in your location.</p>
<h3>Step 4: Run the xforce keygen as administrator</h3>
<p>The next step is to run the xforce keygen as administrator. This allows it to generate serial numbers and activation codes for Adobe Premiere Pro CC.</p>
<p>Navigate to the extracted folder and look for a file named "xf-adobecc2015.exe". Right-click it and select "Run as administrator". You should see a window like this:</p>
<img src="https://i.imgur.com/9yZQX0M.png" alt="xforce keygen window">
<p>Select "Adobe Premiere Pro CC" from the drop-down menu and click "Generate". Some codes should appear in the fields below.</p>
<h3>Step 5: Generate a serial number and activation code for Adobe Premiere Pro CC</h3>
<p>The next step is to generate a serial number and activation code for Adobe Premiere Pro CC using the codes from step 4.</p>
<p>Copy the serial number from the xforce keygen window and paste it somewhere safe. You will need it later.</p>
<p>Click "Patch" in the xforce keygen window. You should see a message like this:</p>
<img src="https://i.imgur.com/6qjQnJf.png" alt="xforce patch message">
<p>Click "OK" and navigate to this location on your computer:</p>
<pre><code>C:\Program Files\Adobe\Adobe Premiere Pro CC </code></pre>
<p>Select the file named "amtlib.dll" and click "Open". You should see another message like this:</p>
<img src="https://i.imgur.com/9wzgYnO.png" alt="xforce patch success message">
<p>Click "OK". This means you have successfully patched Adobe Premiere Pro CC.</p>
<p>Copy the activation code from the xforce keygen window and paste it somewhere safe. You will need it later.</p>
<h3>Step 6: Install Adobe Premiere Pro CC with the generated codes</h3>
<p>The final step is to install Adobe Premiere Pro CC with the generated codes from step 5.</p>
<p>Navigate to the extracted folder again and look for a file named "Set-up.exe". Double-click it to start installing Adobe Premiere Pro CC.</p>
<p>You should see a window like this:</p>
<img src="https://i.imgur.com/7wvZa8f.png" alt="adobe installation window">
<p>Select your language preference and click "Continue".</p>
<p>You should see another window like this:</p>
<h2>How to use Adobe Premiere Pro CC for video editing?</h2>
<p>After you have installed Adobe Premiere Pro CC with the generated codes, you can start using it for video editing. Here are some basic steps you can follow:</p>
<h3>Step 1: Launch Adobe Premiere Pro CC and create a new project</h3>
<p>To launch Adobe Premiere Pro CC, go to the Start menu (Windows) or the Applications folder (macOS) and click the Adobe Premiere Pro CC icon. You should see a splash screen like this:</p>
<img src="https://i.imgur.com/4yXZy0k.png" alt="adobe premiere pro cc splash screen">
<p>Click "New Project" to create a new project. You should see a window like this:</p>
<img src="https://i.imgur.com/8QzW7oG.png" alt="adobe premiere pro cc new project window">
<p>Give your project a name and choose a location to save it. You can also adjust other settings, such as the video rendering and playback engine, the video display format, the audio display format, the capture format, etc.</p>
<p>Click "OK" to create your project.</p>
<h3>Step 2: Import your media files into the project panel</h3>
<p>To import your media files into the project panel, you can use any of these methods:</p>
<ul>
<li>Go to File > Import and browse for the files you want to import.</li>
<li>Drag and drop the files from your computer or an external drive into the project panel.</li>
<li>Use the Media Browser panel to navigate and import files from various sources, such as cameras, hard drives, network locations, etc.</li>
</ul>
<p>You should see your imported files appear in the project panel like this:</p>
<img src="https://i.imgur.com/9aZJl6E.png" alt="adobe premiere pro cc project panel">
<p>You can organize your files into bins (folders) by right-clicking an empty area in the project panel and choosing New Bin. You can also rename, delete, duplicate, or reveal your files in Explorer (Windows) or Finder (macOS) by right-clicking them and choosing the appropriate option.</p>
<h3>Step 3: Drag and drop your clips onto the timeline</h3>
<p>To create a sequence (a series of clips that play one after another) from your imported files, you can use any of these methods:</p>
<ul>
<li>Drag and drop one or more clips from the project panel onto the timeline panel. This creates a new sequence with settings that match your clips.</li>
<li>Right-click one or more clips in the project panel and choose New Sequence From Clip. This also creates a new sequence with settings that match your clips.</li>
<li>Go to File > New > Sequence and choose a preset or custom sequence setting. This creates a new empty sequence with settings that you specify. Then drag and drop your clips onto the timeline.</li>
</ul>
<p>You should see your sequence appear in the timeline panel like this:</p>
<img src="https://i.imgur.com/5LxqY0n.png" alt="adobe premiere pro cc timeline panel">
<p>You can adjust the size and position of your clips on the timeline by using various tools, such as the Selection tool, the Ripple Edit tool, the Rolling Edit tool, the Rate Stretch tool, etc. You can also trim, split, cut, copy, paste, delete, or move your clips by using keyboard shortcuts or right-click menus.</p>
<h3>Step 4: Edit your clips using various tools and effects</h3>
<p>To edit your clips using various tools and effects, you can use any of these panels:</p>
<ul>
<li>The Source Monitor panel allows you to preview your clips before adding them to the timeline. You can also set in and out points for your clips and perform insert or overwrite edits.</li>
<li>The Program Monitor panel allows you to preview your sequence as you edit it. You can also perform various actions, such as play, pause, stop, go to the previous or next edit point, zoom in or out, etc.</li>
<li>The Effect Controls panel allows you to adjust various parameters of your clips, such as position, scale, rotation, opacity, volume, etc. You can also add keyframes to animate these parameters over time.</li>
<li>The Effects panel allows you to browse and apply various effects to your clips, such as transitions, video effects, audio effects, etc. You can also search for effects by name or category.</li>
<li>The Lumetri Color panel allows you to perform color correction and grading on your clips. You can use various tools, such as basic correction, creative looks, curves, color wheels, vignette, etc.</li>
<li>The Essential Sound panel allows you to improve the sound quality of your clips. You can assign audio types (dialogue, music, sound effects, ambience) to your clips and apply various presets or custom adjustments.</li>
</ul>
<h3>Step 5: Export your video in your desired format and quality</h3>
<p>After you have finished editing your video, you can export it in your desired format and quality. Here are some steps you can follow:</p>
<ul>
<li>Go to File > Export > Media or use the keyboard shortcut Ctrl + M (Windows) or Cmd + M (macOS) to open the Export Settings window.</li>
<li>Choose a format and a preset from the drop-down menus. You can use the default H.264 format for most web and mobile devices, or choose another format depending on your needs. You can also use the Match Source presets to match the settings of your source sequence.</li>
<li>Adjust the video and audio settings as desired. You can change the frame size, frame rate, bitrate, aspect ratio, audio codec, audio quality, etc. You can also use the Output tab to preview your video before exporting.</li>
<li>Choose a file name and a location to save your video. You can also choose to export as a single file or multiple files.</li>
<li>If you want to upload your video directly to social media platforms, such as YouTube, Vimeo, Facebook, or Behance, you can use the Publish tab to log in to your accounts and select the options you want.</li>
<li>Click Export to start exporting your video. You can see the progress and status of your export in the Queue panel.</li>
</ul>
<h2>Conclusion</h2>
<p>In this article, we have explained what xforce keygen adobe premiere pro cc torrentinstmank is and how it is used to get Adobe Premiere Pro CC for free. We have also shown you how to use Adobe Premiere Pro CC for video editing and exporting. However, we do not recommend this method of obtaining Adobe Premiere Pro CC, as it is illegal and unethical. It may also expose your computer to viruses, malware, or other security risks. If you want to use Adobe Premiere Pro CC legally and safely, you should purchase a subscription from the official Adobe website or use other free or low-cost alternatives.</p>
<h2>FAQs</h2>
<ol>
<li>What are the system requirements for Adobe Premiere Pro CC?</li>
<p>The system requirements for Adobe Premiere Pro CC vary depending on your operating system, processor, memory, graphics card, hard disk space, etc. You can check the minimum and recommended system requirements for Adobe Premiere Pro CC here: https://helpx.adobe.com/premiere-pro/system-requirements.html</p>
<li>What are some free or low-cost alternatives to Adobe Premiere Pro CC?</li>
<p>Some free or low-cost alternatives to Adobe Premiere Pro CC are:</p>
<ul>
<li>DaVinci Resolve: A powerful video editing program that also offers color correction, visual effects, motion graphics, and audio post-production.</li>
<li>HitFilm Express: A video editing program that also offers compositing, 3D animation, and special effects.</li>
<li>Lightworks: A video editing program that also offers multicam editing, color grading, and audio mixing.</li>
<li>iMovie: A video editing program for Mac and iOS devices that also offers themes, transitions, titles, and trailers.</li>
<li>Shotcut: A video editing program that also offers filters, transitions, audio mixing, and webcam capture.</li>
</ul>
<li>How can I learn more about Adobe Premiere Pro CC?</li>
<p>You can learn more about Adobe Premiere Pro CC by visiting the official Adobe website: https://www.adobe.com/products/premiere.html. You can also access various tutorials, guides, tips, and tricks from the Help menu in Adobe Premiere Pro CC or from these online resources:</p>
<ul>
<li>Premiere Pro User Guide: https://helpx.adobe.com/premiere-pro/user-guide.html</li>
<li>Premiere Pro Tutorials: https://helpx.adobe.com/premiere-pro/tutorials.html</li>
<li>Premiere Pro Learn & Support: https://helpx.adobe.com/support/premiere-pro.html</li>
<li>Premiere Pro Community Forum: https://community.adobe.com/t5/premiere-pro/ct-p/premiere-pro?page=1&sort=latest_replies&filter=all</li>
</ul>
<li>How can I contact Adobe customer support?</li>
<p>You can contact Adobe customer support by visiting this page: https://helpx.adobe.com/contact.html. You can also chat with an agent online or call them by phone.</p>
<li>How can I report a bug or request a feature for Adobe Premiere Pro CC?</li>
<p>You can report a bug or request a feature for Adobe Premiere Pro CC by visiting this page: https://www.adobe.com/products/wishform.html. You can also provide feedback or suggestions through the Help menu in Adobe Premiere Pro CC or through the UserVoice forum: https://adobe-video.uservoice.com/forums/911233-premiere-pro</p>
</ol>
</p> 0a6ba089eb<br />
<br />
<br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/El cuerpo habla joe navarro pdf 114 Aprende a interpretar las seales no verbales de los dems.md
DELETED
@@ -1,152 +0,0 @@
|
|
1 |
-
|
2 |
-
<h1>El cuerpo habla: cómo interpretar el lenguaje no verbal de Joe Navarro</h1>
|
3 |
-
<p>¿Te gustaría saber lo que piensan y sienten los demás con solo observar sus movimientos y expresiones? ¿Te gustaría mejorar tu capacidad de comunicarte con los demás y evitar malentendidos y conflictos? Si la respuesta es sí, entonces este artículo te interesa.</p>
|
4 |
-
<h2>el cuerpo habla joe navarro pdf 114</h2><br /><p><b><b>Download File</b> > <a href="https://byltly.com/2uKxeB">https://byltly.com/2uKxeB</a></b></p><br /><br />
|
5 |
-
<p>En este artículo te voy a hablar de un libro que te enseñará a dominar el arte de la comunicación no verbal. Se trata de <strong>El cuerpo habla</strong>, del autor Joe Navarro, un ex agente del FBI especializado en el análisis del comportamiento humano.</p>
|
6 |
-
<p>El cuerpo habla es un libro que te revelará los secretos del lenguaje no verbal, ese lenguaje silencioso pero poderoso que todos emitimos y recibimos inconscientemente. Aprenderás a interpretar las señales que los demás envían con su cuerpo, lo que te permitirá conocer sus intenciones y sentimientos reales y evitar así engaños y trampas. También aprenderás a utilizar el lenguaje no verbal para transmitir a los demás lo que realmente quieres comunicarles, ya sean familiares, amigos o jefes.</p>
|
7 |
-
<p>Si quieres saber más sobre este libro, sigue leyendo. Te voy a contar quién es Joe Navarro, qué es el lenguaje no verbal, qué nos enseña el libro El cuerpo habla y dónde puedes descargarlo en formato PDF.</p>
|
8 |
-
<h2>¿Quién es Joe Navarro?</h2>
|
9 |
-
<p>Joe Navarro es un reconocido experto en el campo de la comunicación no verbal. Nacido en Cuba, emigró a Estados Unidos cuando tenía ocho años y se convirtió en ciudadano estadounidense. Estudió justicia criminal en la Universidad Brigham Young y se unió al FBI como agente especial.</p>
|
10 |
-
<h3>Su trayectoria profesional como agente del FBI</h3>
|
11 |
-
<p>Durante 25 años, Joe Navarro trabajó como agente del FBI en diferentes áreas, como contraespionaje, contraterrorismo, crimen organizado y comportamiento criminal. Su labor consistía en interrogar e investigar a sospechosos, testigos y víctimas, utilizando sus habilidades para leer el lenguaje no verbal y detectar el engaño.</p>
|
12 |
-
<p>Joe Navarro fue uno de los fundadores del National Security Division's Behavioral Analysis Program, un programa que se encarga de analizar el comportamiento de individuos y grupos que suponen una amenaza para la seguridad nacional. También fue consultor para otros organismos gubernamentales y privados, como la CIA, el Departamento de Defensa o la NASA.</p>
|
13 |
-
<h3>Su experiencia como autor y conferenciante</h3>
|
14 |
-
<p>Tras retirarse del FBI en 2003, Joe Navarro se dedicó a escribir libros y artículos sobre el tema de la comunicación no verbal. Su obra más famosa es <strong>El cuerpo habla</strong>, publicada en 2008 y traducida a más de 30 idiomas. Otros libros suyos son <em>La biblia del lenguaje corporal</em>, <em>Mensajes peligrosos</em> o <em>Louder than words</em>.</p>
|
15 |
-
<p>In addition to writing, Joe Navarro gives lectures and courses on nonverbal language all over the world. His audiences range from students and teachers to business people and political leaders. His goal is to help people improve their social and professional skills through knowledge of nonverbal language.</p>
<h2>What is nonverbal language?</h2>
<p>Nonverbal language is the set of signals that we send and receive with our bodies without using words. These signals can be gestures, postures, facial expressions, eye contact, tone of voice, interpersonal distance, and so on.</p>
<h3>The importance of nonverbal communication in human interaction</h3>
<p>Nonverbal communication is a universal form of communication that we share with other animals. It is an instinctive, unconscious form of communication that originates in our primitive or limbic brain. This brain is in charge of regulating our emotions, our impulses, and our survival.</p>
<p>Nonverbal communication has a great influence on human interaction. According to some studies, 93% of communication between people is based on nonverbal language, while only 7% is based on words. This means that our body says much more than our words do.</p>
<p>Nonverbal communication allows us to convey information about ourselves, such as our personality, our mood, our attitude, or our intentions. It also allows us to pick up information about others, such as their emotions, their thoughts, or their motivations.</p>
<h3>The types of nonverbal signals: gestures, postures, facial expressions, etc.</h3>
<p>Nonverbal language is made up of different types of signals, which we can classify by the part of the body that emits them or by their function. Some examples are:</p>
<ul>
<li><strong>Gestures</strong>: the movements we make with our hands or arms to accompany or replace words. For example, nodding the head to say yes or shaking the head to say no.</li>
<li><strong>Postures</strong>: the positions we adopt with our body when sitting or standing. For example, crossing the arms or legs can signal defensiveness or rejection.</li>
<li><strong>Facial expressions</strong>: the configurations we make with the muscles of the face to show our emotions or reactions. For example, smiling to express joy or frowning to express anger.</li>
<li><strong>Eye contact</strong>: the degree and duration of the gaze we establish with another person. For example, staring can signal interest or defiance.</li>
<li><strong>Tone of voice</strong>: the variation in the volume, speed, and intonation of our voice when we speak. For example, speaking loudly can signal confidence or aggressiveness.</li>
<li><strong>Interpersonal distance</strong>: the physical space we keep between ourselves and others when we interact. For example, getting very close can signal intimacy or invasion.</li>
</ul>
<h3>The benefits of learning to read nonverbal language</h3>
<p>Learning to read nonverbal language has many advantages, both personally and professionally. Some of the benefits it can bring you are:</p>
<ul>
<li><strong>Improving your interpersonal relationships</strong>: nonverbal language helps you build trust, empathy, and connection with others. By better understanding what others feel and think, you can adapt your communication to their needs and avoid conflicts.</li>
<li><strong>Improving your verbal communication</strong>: nonverbal language helps you reinforce and clarify your verbal message. By using appropriate gestures, expressions, or tones of voice, you can make your message more convincing, memorable, and persuasive.</li>
<li><strong>Improving your ability to persuade and influence</strong>: nonverbal language helps you convey authority, credibility, and confidence. By projecting a positive, self-assured image, you can earn the respect and admiration of others.</li>
<li><strong>Improving your ability to detect deception</strong>: nonverbal language helps you identify the signals that indicate someone is lying or hiding something. By observing inconsistencies between what a person says and what they do, you can protect yourself from possible fraud or manipulation.</li>
<li><strong>Improving your self-knowledge and self-control</strong>: nonverbal language helps you get to know your own emotions and reactions better. By being aware of how you express yourself with your body, you can better control your impulses and improve your emotional intelligence.</li>
</ul>
<h2>What does the book El cuerpo habla teach us?</h2>
<p>El cuerpo habla is a practical, entertaining guide that teaches you to master the secrets of nonverbal communication. Through examples, anecdotes, and tips, Joe Navarro shows you how to interpret and use nonverbal language in different everyday situations.</p>
<p>The book is divided into nine chapters that cover the following topics:</p>
<h3>How to master the secrets of nonverbal communication</h3>
<p>In this chapter, Joe Navarro introduces you to the world of nonverbal language and explains why it is so important to learn to read and use it. He gives you some keys to improving your observation of, and attention to, the nonverbal signals that other people, and you yourself, emit.</p>
<h3>How to understand our limbic legacy and its implications for our behavior</h3>
<p>In this chapter, Joe Navarro explains how our limbic brain works, the part responsible for regulating our emotions and our instinctive behavior. He shows you how the limbic brain manifests itself through the body and how we can recognize its signals.</p>
<h3>How to use nonverbal language to convey trust, authority, and sincerity</h3>
<p>In this chapter, Joe Navarro teaches you how to use nonverbal language to project a positive image of yourself and make a good impression on others. He gives you tips for improving your posture, your gestures, your eye contact, and your tone of voice depending on the context and the goal you want to achieve.</p>
<h3>How to detect deception through nonverbal signals</h3>
<h2>Where can I download the book El cuerpo habla in PDF format?</h2>
<p>If the book El cuerpo habla has caught your interest and you want to read it in digital format, here are some options for downloading it to your computer, tablet, or phone.</p>
<h3>The advantages of reading the book in digital format</h3>
<p>Reading the book in digital format has some advantages over reading it in print. Some of them are:</p>
<ul>
<li><strong>Saving space and money</strong>: by downloading the book in PDF format, you do not have to take up space on your shelf or spend money buying it. In addition, you can access the book from any device and any place.</li>
<li><strong>Ease of reading and searching</strong>: when reading the book in PDF format, you can adjust the font size, brightness, and contrast to your liking. You can also search for specific words or phrases within the text and bookmark the pages that interest you.</li>
<li><strong>Respect for the environment</strong>: by reading the book in PDF format, you help reduce the consumption of paper and ink, which benefits the planet.</li>
</ul>
<h3>Websites where the book can be downloaded for free or for a fee</h3>
<p>There are several websites where the book El cuerpo habla can be downloaded for free or for a fee. Some of them are:</p>
<table>
<tr>
<th>Website</th>
<th>Description</th>
<th>Price</th>
</tr>
<tr>
<td>Zoboko.com</td>
<td>An e-book download platform covering different formats and categories. It offers a wide variety of free and paid books.</td>
<td>Free or €9.99</td>
</tr>
<tr>
<td>Scribd.com</td>
<td>An online reading and document publishing platform. It gives access to millions of books, audiobooks, magazines, and other content.</td>
<td>Free with registration, or €9.99 per month with a subscription</td>
</tr>
<tr>
<td>Idoc.pub</td>
<td>A document hosting and download platform covering different formats. It allows documents to be shared and downloaded for free.</td>
<td>Free</td>
</tr>
</table>
<p>These are just some examples of websites where the book El cuerpo habla can be downloaded. However, keep in mind that some of these sites may not hold the book's copyright or may contain viruses or malware. For that reason, it is advisable to verify the reliability and legality of the sites before downloading any content.</p>
<h2>Conclusion</h2>
<p>In this article I have told you about the book El cuerpo habla, by Joe Navarro, a former FBI agent and expert in nonverbal communication. I have covered who Joe Navarro is, what nonverbal language is, what the book El cuerpo habla teaches us, and where you can download it in PDF format.</p>
<p>El cuerpo habla is a book that will help you improve your social and professional skills through knowledge of nonverbal language. You will learn to interpret the signals that other people emit with their bodies and to use your own body to communicate better with others.</p>
<p>If you liked this article and want to know more about the subject, I invite you to read the book El cuerpo habla. I am sure you will find it very useful and interesting.</p>
<h2>Frequently asked questions</h2>
<ul>
<li><strong>What is nonverbal language?</strong></li>
<p>Nonverbal language is the set of signals that we send and receive with our bodies without using words. These signals can be gestures, postures, facial expressions, eye contact, tone of voice, interpersonal distance, and so on.</p>
<li><strong>Why is it important to learn to read nonverbal language?</strong></li>
<p>Learning to read nonverbal language is important because it allows us to convey and pick up information about ourselves and others that is not expressed in words. It helps us improve our interpersonal relationships, our verbal communication, our ability to persuade and influence, our ability to detect deception, and our self-knowledge and self-control.</p>
<li><strong>Who is Joe Navarro?</strong></li>
<p>Joe Navarro is a renowned expert in the field of nonverbal communication. He is a former FBI agent specialized in the analysis of human behavior. He is the author of several books on the subject, including El cuerpo habla. He is also a speaker and teacher on nonverbal language.</p>
<li><strong>What does the book El cuerpo habla teach us?</strong></li>
<p>The book El cuerpo habla teaches us to master the secrets of nonverbal communication. It shows us how to interpret and use nonverbal language in different everyday situations. It explains how our limbic brain works, how to convey trust, authority, and sincerity with our body, and how to detect deception through nonverbal signals.</p>
<li><strong>Where can I download the book El cuerpo habla in PDF format?</strong></li>
<p>You can download the book El cuerpo habla in PDF format from several websites, such as Zoboko.com, Scribd.com, or Idoc.pub. However, you should check the reliability and legality of the sites before downloading any content.</p>
</ul>
spaces/1gistliPinn/ChatGPT4/Examples/Brsobstetricsandgynecologypdffree11 UPDATED.md
DELETED
@@ -1,6 +0,0 @@
<h2>brsobstetricsandgynecologypdffree11</h2><br /><p><b><b>DOWNLOAD</b> · <a href="https://imgfil.com/2uxXQ1">https://imgfil.com/2uxXQ1</a></b></p><br /><br />
spaces/1gistliPinn/ChatGPT4/Examples/Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download [WORK].md
DELETED
@@ -1,24 +0,0 @@
<h1>Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download</h1>
<p>Daum PotPlayer is a powerful and versatile media player that supports various formats, codecs, and subtitles. It also offers advanced features such as 3D video support, screen capture, live streaming, and audio enhancement. If you are looking for a reliable and easy-to-use media player for your Windows PC, Daum PotPlayer is a great choice.</p>
<h2>Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download</h2><br /><p><b><b>Download File</b> ✅ <a href="https://imgfil.com/2uy0tk">https://imgfil.com/2uy0tk</a></b></p><br /><br />
<p>However, if you want to enjoy the full benefits of Daum PotPlayer, you need to activate it with a serial key. A serial key is a unique code that unlocks the premium features of the software. Without a serial key, you will be limited to the basic functions of Daum PotPlayer and miss out on some of the best features.</p>
<p>Fortunately, you can get Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download from our website. This is a cracked version of Daum PotPlayer that comes with a valid serial key that you can use to activate the software. By downloading and installing this cracked version, you will be able to enjoy Daum PotPlayer without any restrictions or limitations.</p>
<p>Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download is safe and secure to use. It does not contain any viruses, malware, or spyware that could harm your PC or compromise your privacy. It also does not require any registration or payment to use. All you need to do is follow these simple steps:</p>
<ol>
<li>Download Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download from the link below.</li>
<li>Extract the zip file and run the setup file.</li>
<li>Follow the installation instructions and agree to the terms and conditions.</li>
<li>Copy the serial key from the crack folder and paste it into the activation window.</li>
<li>Click on activate and enjoy Daum PotPlayer with all its features.</li>
</ol>
<p>That's it! You have successfully installed and activated Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download on your PC. Now you can play any media file with high quality and performance. You can also customize your preferences and settings according to your needs and preferences.</p>
<p>Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download is the best way to experience Daum PotPlayer without spending any money or risking any legal issues. It is compatible with Windows XP, Vista, 7, 8, 8.1, and 10 (32-bit and 64-bit). It also supports multiple languages and has a user-friendly interface.</p>
<p>So what are you waiting for? Download Daum PotPlayer 1.7.21124 Crack With Serial Key 2020 Free Download today and enjoy the ultimate media player for your PC.</p>
<p>Daum PotPlayer is not just a media player, it is also a media manager. You can organize your media files into playlists, folders, and categories. You can also sort them by name, date, size, type, and more. You can also search for your media files using keywords and filters. Daum PotPlayer makes it easy to find and access your media files anytime and anywhere.</p>
<p>Daum PotPlayer also supports online streaming and downloading. You can watch live TV channels, radio stations, podcasts, and webcams from around the world. You can also download online videos and audio files from various websites and platforms. Daum PotPlayer lets you enjoy online media content without any hassle or interruption.</p>
<p>Daum PotPlayer also has a built-in screen recorder and editor. You can capture your screen activity and save it as a video file. You can also edit your recorded videos using various tools and effects. You can crop, trim, rotate, resize, add text, watermark, and more. Daum PotPlayer allows you to create your own videos and share them with others.</p>
spaces/1gistliPinn/ChatGPT4/Examples/Download Ed Sheeran Plus Album Zip Mega.md
DELETED
@@ -1,10 +0,0 @@
<h2>Download Ed Sheeran Plus Album Zip Mega</h2><br /><p><b><b>Download File</b> ⇒⇒⇒ <a href="https://imgfil.com/2uxZmz">https://imgfil.com/2uxZmz</a></b></p><br /><br />
CHAOS PDF.pdf. Has he ever met Ed Sheeran? Find out what music master of everything he is able to do! Download the Ed Sheeran "+" album at the album page. "Ed Sheeran" was released on August 26, 2015, with the album number. Download CHAOS + Album Zip. CHAOS (crack) + Full Album Zip / PDF / MPG / M4A + MP3. We have provided below links from where you can download CHAOS (crack) + Full Album Zip / PDF / MPG / M4A + MP3. You can get the songs, music, and information for the Ed Sheeran + album track list. Chaos, by Ed Sheeran, has been downloaded by a lot of people. Chances are that you are one of them who has downloaded this album from the internet. Download Ed Sheeran, Live at Abbey Road, Limited Edition Download · Free Download - Ed Sheeran, + Album Download · Ed Sheeran (2015), Chaos [Plus Album Zip and Tracklist], [Full Tracklist] Download. (PDF) Download · Ed Sheeran (2015), Chaos (2015), PLUS ALBUM ZIP Download · Ed Sheeran - Chaos + Album (2015), plus Album Zip (3.2MB) · Ed Sheeran - Ed (2015) - PLUS. The albums containing the song "Chaos" are available for download at. Listen to this artist's songs and watch the videos. Listen to songs by Ed Sheeran - Chaos (2015) - plus album zip. Plus, download Ed Sheeran. iTunes:. 19.03.20. Chaos, Sheeran album tracklist, download. Chaos, a song by Ed Sheeran from the album Ch. Plus, download Ed Sheeran. 19.03.20.. 29.11.15.
Download
Ed Sheeran Chaos Plus Album Zip Mega. DOWNLOAD: . PrintMusic 2014 (crack CHAOS) [ChingLiu] Free !!BETTER!! Download. CHAOS PDF.pdf. Has he ever met Ed Sheeran? Find out what music master of everything he is able to do! DOWNLOAD ED SHEERAN PLUS ALBUM ZIP MEGA. DOWNLOAD: . PrintMusic 2014 (crack CHAOS) [ChingLiu]
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Cmo ver y descargar pelculas y series en Cuevana 3 para PC y Android.md
DELETED
@@ -1,135 +0,0 @@
<h1>Cuevana 3 Peliculas y Series APK: The Best App for Watching Content in Spanish</h1>
<p>If you are a fan of movies and series, you would surely like an application that lets you watch all the content you want on your mobile device, without paying anything and without having to put up with annoying ads. Is there such an app? Yes, it is called Cuevana 3, and it is one of the best options out there for enjoying film and television in Spanish.</p>
<h2>cuevana 3 peliculas y series apk</h2><br /><p><b><b>Download</b> ✺✺✺ <a href="https://urlin.us/2uSVVf">https://urlin.us/2uSVVf</a></b></p><br /><br />
<h2>What is Cuevana 3?</h2>
<p>Cuevana 3 is an Android application that lets you watch movies and series online for free, in HD quality and without cuts. It is the most recent version of Cuevana, a streaming platform that has been offering Spanish-language and subtitled content to millions of users for more than a decade.</p>
<p>Cuevana 3 has a varied, up-to-date catalog, with the latest releases and the most popular series of the moment. In addition, it has a simple, fast interface that makes it easy to search for and play content. It also gives you the option to download the movies and series you want to watch offline, or share them with your friends through social networks or messaging apps.</p>
<h2>How to download and install the Cuevana 3 APK?</h2>
<h3>Prerequisites</h3>
<p>To download and install the Cuevana 3 APK on your Android device, you need to meet a few prerequisites:</p>
<ul>
<li>Have a device running Android 4.1 or higher.</li>
<li>Have enough space in the device's internal or external memory.</li>
<li>Have a stable internet connection (preferably Wi-Fi).</li>
<li>Enable the "Unknown sources" option in the device's security settings. This will allow you to install applications that do not come from the official Google Play store.</li>
</ul>
<h3>Steps to follow</h3>
<p>Once you have met the prerequisites, you can follow these steps to download and install the Cuevana 3 APK:</p>
<ol>
<li>Download the Cuevana 3 APK file from our website by clicking the "Download" button. The file weighs about 34.7 MB.</li>
<li>Find the downloaded file in your device's "Downloads" folder, or wherever you chose to save it.</li>
<li>Tap the file to start the installation. Accept the permissions the application requests.</li>
<li>Wait for the installation to complete. It may take a few seconds or minutes, depending on the speed of your device and your internet connection.</li>
<li>Once installed, open the application and enjoy Cuevana 3.</li>
</ol>
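<p>For readers who prefer the command line, the same installation can also be scripted. The sketch below is a minimal, hypothetical example and is not part of the original article: it assumes Android's <code>adb</code> tool is installed and on the PATH, that USB debugging is enabled on the device, and that the APK has already been downloaded as <code>cuevana3.apk</code> (an illustrative filename).</p>
<pre><code># Minimal sketch: sideload a locally downloaded APK over adb.
# Assumptions: adb is installed and on PATH, USB debugging is enabled,
# and "cuevana3.apk" is a hypothetical name for the downloaded file.
import subprocess

def install_apk(path: str) -> None:
    # "adb install -r" installs the package, replacing it if already present.
    subprocess.run(["adb", "install", "-r", path], check=True)

if __name__ == "__main__":
    install_apk("cuevana3.apk")
</code></pre>
<p>The underlying command (<code>adb install -r &lt;file&gt;</code>) can of course also be run directly in a terminal; the script only wraps it for repeatability.</p>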
<h2>How to use the Cuevana 3 APK?</h2>
<p>Using the Cuevana 3 APK is easy and fun. You just have to follow these steps:</p>
<h3>Searching for movies and series</h3>
<p>Cuevana 3 has a built-in search engine that lets you find the content you want by title, genre, year, language, or quality. You just have to type what you are looking for in the top bar and press the magnifying glass icon.</p>
<p>You can also browse the Cuevana 3 catalog by category, such as "New releases", "Most watched", "Top rated", "Series", "Movies", and so on. Just swipe your finger across the screen to see the different options and tap the one that interests you.</p>
<h3>Streaming content</h3>
<p>When you find a movie or series you want to watch, just tap it to see its details, such as the title, synopsis, cast, genre, year, running time, rating, and comments from other users.</p>
<p>To stream the content, just tap the "Watch now" button and choose the server and quality you prefer. Cuevana 3 offers several servers and quality levels so you can choose the one that best suits your connection and your device.</p>
<p>Once playback starts, you can enjoy the content in full screen, with sound and subtitles in Spanish. You can also pause, fast-forward, rewind, or adjust the volume and brightness with the touch controls.</p>
<h3>Downloading content for offline viewing</h3>
<p>If you want to download a movie or series to watch offline, without an internet connection, just tap the "Download" button and choose the server and quality you prefer. Cuevana 3 will show you the file size and the estimated download time.</p>
<p>Once the download starts, you can see its progress in the bar at the bottom of the screen. You can also pause or cancel the download at any time.</p>
<p>When the download is complete, you will find the downloaded content in the "Downloads" section of Cuevana 3. There you can play it without an internet connection, delete it, or share it with your friends.</p>
<h2>Pros and cons of the Cuevana 3 APK</h2>
<p>The Cuevana 3 APK is a very complete and attractive application for lovers of film and television in Spanish. However, like any application, it has its pros and cons. Let's look at some of them:</p>
<h3>Pros</h3>
<ul>
<li>It is free and ad-free. You do not have to pay anything or register to use Cuevana 3. In addition, it has no annoying ads or pop-up windows that interrupt your experience.</li>
<li>It has content in Spanish and with subtitles. You can watch movies and series in Latin American or Castilian Spanish, or in their original language with Spanish subtitles. That way you can enjoy content in your preferred language or learn other languages.</li>
<li>It is compatible with several devices. You can use Cuevana 3 on your Android smartphone or tablet, or on your Smart TV or Chromecast if you connect them to your mobile device. That way you can watch content on a bigger, more comfortable screen.</li>
</ul>
<h3>Cons</h3>
<ul>
<li>It has no official license. Cuevana 3 does not hold the copyrights or legal licenses for the content it offers. It may therefore infringe intellectual property rules and be subject to shutdowns or blocks by the authorities or internet providers.</li>
<li>It can have errors or glitches. Cuevana 3 can present technical or operational problems, such as server outages, broken links, low image or sound quality, subtitle synchronization issues, and so on. These problems can affect the quality and continuity of your experience.</li>
<li>It can consume a lot of mobile data. Cuevana 3 requires an internet connection to work, and if you use mobile data instead of Wi-Fi, it can consume a large share of your data plan. This can generate extra charges or reduce your connection speed.</li>
</ul>
<h2>Alternatives to the Cuevana 3 APK</h2>
<p>If for some reason you cannot or do not want to use the Cuevana 3 APK, there are other alternatives that also let you watch movies and series online, for free or by paying a monthly subscription. Some of them are:</p>
<h3>Netflix</h3>
<p>Netflix is the most popular and recognized streaming platform in the world. It has a very broad and varied catalog, with original and exclusive movies and series, as well as content from other studios and producers. It has a very intuitive, personalized interface that recommends the content you are most likely to enjoy based on your tastes and preferences. It also lets you download content for offline viewing, create profiles for different users, and adjust image and sound quality to your connection. Netflix has a monthly cost that varies with the plan you choose, and you can try it free for a month.</p>
<h3>HBO Max</h3>
<p>HBO Max is HBO's streaming platform, offering all of that channel's content, as well as movies and series from Warner Bros, DC, Cartoon Network, Adult Swim, and more. It has a very attractive, up-to-date catalog, with releases simultaneous with theaters and series acclaimed by critics and audiences. It has a simple, functional interface that lets you browse content by category, genre, or collection. It also lets you download content for offline viewing, create profiles for different users, and adjust image and sound quality to your connection. HBO Max has a monthly cost that varies with the plan you choose, and you can try it free for a week.</p>
<h3>Disney+</h3>
<p>Disney+ is Disney's streaming platform, offering all of that studio's content, as well as movies and series from Pixar, Marvel, Star Wars, National Geographic, and more. It has a very complete and diverse catalog, with classic and modern movies and series, as well as original and exclusive content. It has a very attractive, dynamic interface that lets you browse content by franchise, genre, or theme. It also lets you download content for offline viewing, create profiles for different users, and adjust image and sound quality to your connection. Disney+ has a fixed monthly cost, and you can try it free for a week.</p>
<h2>Conclusion</h2>
<p>Cuevana 3 Peliculas y Series APK is an application that lets you watch movies and series online for free, in HD quality and without cuts. It is an excellent option for lovers of film and television in Spanish, since it has a very varied, up-to-date catalog, with the latest releases and the most popular series of the moment.</p>
<p>To use the Cuevana 3 APK you only need an Android device with a stable internet connection. In addition, you can download content to watch offline or share it with your friends. Cuevana 3 has a simple, fast interface that makes it easy to search for and play content.</p>
<p>However, Cuevana 3 also has some drawbacks, such as not having an official license, having technical errors or glitches, and consuming a lot of mobile data. For that reason, you should use it at your own risk and discretion.</p>
<p>If you want to try alternatives to the Cuevana 3 APK, you can opt for streaming platforms such as Netflix, HBO Max, or Disney+, which also offer a large catalog of movies and series online, but at a monthly cost.</p>
<p>We hope this article has been useful and informative. If you have any questions or comments about the Cuevana 3 APK or similar applications, feel free to leave your opinion in the section below.</p>
<h2>Frequently asked questions</h2>
<ul>
<li><b>Is the Cuevana 3 APK legal?</b></li>
<li>No. The Cuevana 3 APK does not hold the copyrights or the licenses required to offer the content it shows. It may therefore be violating intellectual property rules and be exposed to shutdowns or blocks by the authorities or internet providers. Using the Cuevana 3 APK is at your own risk and discretion.</li>
<li><b>Is the Cuevana 3 APK safe?</b></li>
<li>The Cuevana 3 APK is safe in the sense that it contains no viruses, malware, or malicious software that could damage your device or steal your information. However, since it is not an official application, it comes with no guarantees or technical support, so it may present errors or glitches that affect how it works. In addition, using the Cuevana 3 APK may break the laws of your country or region, so you should take precautions and use a VPN to protect your privacy and security.</li>
<li><b>Does the Cuevana 3 APK have ads?</b></li>
<li>No, the Cuevana 3 APK has no ads or pop-up windows to interrupt your experience. This is one of the advantages of the application, since it lets you watch content without distractions or annoyances. However, having no ads also means it has no revenue to maintain and update itself, so it depends on voluntary donations from users to keep running.</li>
<li><b>Does the Cuevana 3 APK have content in Spanish?</b></li>
<li>Yes, the Cuevana 3 APK has content in Latin American and Castilian Spanish, as well as in the original language with Spanish subtitles. You can choose the language you prefer when playing the content. The Cuevana 3 APK is one of the best applications for watching movies and series in Spanish, since it has a very broad and varied catalog, with the latest releases and the most popular series of the moment.</li>
<li><b>Does the Cuevana 3 APK work on a Smart TV or Chromecast?</b></li>
<li>Yes, the Cuevana 3 APK works on a Smart TV or Chromecast if you connect them to your Android device. That way, you can watch content on a bigger, more comfortable screen. To do so, just follow these steps:
<ol>
<li>Connect your Smart TV or Chromecast to the same Wi-Fi network as your Android device.</li>
<li>Open the Cuevana 3 application on your Android device and find the content you want to watch.</li>
<li>Tap the "Cast" icon at the top right of the screen and select your Smart TV or Chromecast as the destination.</li>
<li>Wait for the connection to be established and enjoy the content on your Smart TV or Chromecast.</li>
</ol>
</li>
</ul>
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download AR Emoji Stickers and Customize Them with Your Favorite Accessories and Backgrounds.md
DELETED
@@ -1,147 +0,0 @@
<h1>How to Download AR Emoji Stickers</h1>
<p>Emoji are everywhere and there are plenty to choose from. But what if you want to make your own personalized and animated emoji that look like you or your favorite characters? That's where AR emoji stickers come in. In this article, we'll show you what AR emoji stickers are, how to create them, and how to share and use them in your messages and social media.</p>
<h2>What are AR Emoji Stickers?</h2>
<p>AR emoji stickers are a type of augmented reality (AR) technology that allows you to create and animate your own avatars using your smartphone's camera. You can use these avatars to make custom emoji, stickers, GIFs, and videos that reflect your personality, mood, and style. They're fun and more personal than the standard emoji and stickers you might use.</p>
<h2>download ar emoji stickers</h2><br /><p><b><b>DOWNLOAD</b> ⭐ <a href="https://urlin.us/2uSXTD">https://urlin.us/2uSXTD</a></b></p><br /><br />
<h3>Definition and examples of AR emoji stickers</h3>
<p>AR stands for augmented reality, which means adding digital elements to the real world through your device's screen. AR emoji stickers are one example of this, as they let you overlay your virtual avatar on top of your surroundings or any background you choose. You can also animate them with your facial expressions and movements in real-time.</p>
<p>Some examples of AR emoji stickers are:</p>
<ul>
<li>Samsung's AR Emoji, which lets you create an animated version of yourself or wear masks of other characters on your Galaxy device.</li>
<li>iPhone's Memoji, which lets you create a custom avatar that looks like you or anyone else on your iOS device.</li>
<li>Other apps that let you create and use AR emoji stickers on any device, such as Filmora for Mobile, Mirror, or Yoji.</li>
</ul>
<h3>Benefits and uses of AR emoji stickers</h3>
<p>AR emoji stickers have many benefits and uses, such as:</p>
<ul>
<li>They allow you to express yourself in a more creative and fun way than regular emoji or text.</li>
<li>They help you communicate your emotions, reactions, and opinions more clearly and effectively.</li>
<li>They make your messages and social media posts more engaging and interactive.</li>
<li>They let you customize your avatar with different looks, styles, accessories, and backgrounds.</li>
<li>They let you have fun with your friends and family by creating and sharing funny and cute AR emoji stickers.</li>
</ul>
<h2>How to Create Your Own AR Emoji Stickers</h2>
<p>Creating your own AR emoji stickers is easy and fun. You just need a smartphone with a camera and an app that supports AR emoji stickers. Here are some of the most popular apps that let you create your own AR emoji stickers:</p>
<h3>Using Samsung AR Emoji Stickers app</h3>
<p>If you have a Samsung Galaxy device that supports AR Emoji, such as the Galaxy S9 or later, you can use the pre-installed app called "AR Zone" to create your own AR emoji stickers. Here's how:</p>
<ol>
<li>Open the "AR Zone" app on your Galaxy device.</li>
<li>Tap "AR Emoji Studio".</li>
<li>Select "Create My Emoji" and follow the instructions to scan your face and customize your avatar.</li>
<li>Tap "Sticker" and choose from the different categories of stickers, such as "Basic", "Emotion", or "Pose".</li>
<li>Tap the sticker you want to use and then tap the download icon to save it to your device.</li>
<li>You can also tap the share icon to send it directly to your contacts or social media apps.</li>
</ol>
<h3>Using iPhone Memoji AR Stickers app</h3>
<p>If you have an iPhone X or later, you can use the built-in app called "Messages" to create your own Memoji AR stickers. Here's how:</p>
<ol>
<li>Open the "Messages" app on your iPhone and start a new conversation or open an existing one.</li>
<li>Tap the "Animoji" icon (the monkey face) and then swipe left to find the "+" button.</li>
<li>Tap the "+" button and follow the instructions to create your Memoji avatar. You can customize its appearance, hairstyle, accessories, and more.</li>
<li>Tap "Done" when you're satisfied with your Memoji.</li>
<li>To use your Memoji as a sticker, tap the sticker icon (the square with a peeling corner) and then tap your Memoji. You can also swipe up and down to see different expressions and poses.</li>
<li>Tap the sticker you want to use and then drag it to the message bubble or the photo you want to attach it to.</li>
<li>You can also tap the send button to send it as a separate message.</li>
</ol>
<h3>Using other apps for AR emoji stickers</h3>
<p>If you don't have a Samsung Galaxy or an iPhone device, or if you want to try other apps for creating and using AR emoji stickers, there are many options available for both Android and iOS devices. Some of them are:</p>
<ul>
<li>Filmora for Mobile, which lets you create and edit videos with AR emoji stickers, filters, effects, music, and more.</li>
<li>Mirror, which lets you create personalized emoji that look like you or anyone else, and use them as stickers, GIFs, or avatars.</li>
<li>Yoji, which lets you create 3D animated emoji that mimic your facial expressions and voice, and share them as videos or GIFs.</li>
</ul>
<p>To use these apps, you need to download them from the Google Play Store or the App Store, depending on your device. Then, follow the instructions on each app to create your AR emoji stickers and share them with others.</p>
<h2>How to Share and Use Your AR Emoji Stickers</h2>
<p>Once you have created your AR emoji stickers, you can share and use them in various ways. Here are some of the most common ways to do so:</p>
<h3>Saving and downloading your AR emoji stickers</h3>
<p>If you want to save your AR emoji stickers for later use or download them to your device, you can do so by following these steps:</p>
<ol>
<li>Open the app that you used to create your AR emoji stickers.</li>
<li>Find the AR emoji sticker that you want to save or download.</li>
<li>Tap the download icon (usually a downward arrow) to save it to your device's gallery or file manager. You can also tap the menu icon (usually three dots) and select "Save" or "Export".</li>
<li>You can also tap the share icon (usually a paper plane) and select "Save Image" or "Save Video" if you want to save it as an image or a video file.</li>
</ol>
<h3>Adding your AR emoji stickers to messages and social media</h3>
<p>If you want to add your AR emoji stickers to your messages and social media posts, you can do so by following these steps:</p>
<ol>
<li>Open the app that you want to use, such as WhatsApp, Facebook Messenger, Instagram, Snapchat, etc.</li>
<li>Start a new conversation or open an existing one, or create a new post or story.</li>
<li>Tap the attachment icon (usually a paper clip) and select "Gallery" or "Photos".</li>
<li>Find the AR emoji sticker that you want to use from your device's gallery or file manager.</li>
<li>Select it and then tap the send button or the post button.</li>
</ol>
<h3>Tips and tricks for making your AR emoji stickers more fun and expressive</h3>
<p>To make your AR emoji stickers more fun and expressive, you can try these tips and tricks:</p>
<ul>
<li>Use different facial expressions and gestures when creating your AR emoji stickers.</li>
<li>Use different backgrounds and filters to change the mood and atmosphere of your AR emoji stickers.</li>
<li>Use different accessories and outfits to customize your AR emoji stickers and make them more unique and stylish.</li>
<li>Use different poses and movements to make your AR emoji stickers more dynamic and lively.</li>
<li>Use different text and captions to add more context and humor to your AR emoji stickers.</li>
</ul>
<h2>Conclusion</h2>
<p>AR emoji stickers are a great way to spice up your messages and social media posts with your own personalized and animated avatars. They're easy to create, share, and use, and they can help you express yourself in a more fun and creative way. Whether you use Samsung's AR Emoji, iPhone's Memoji, or any other app, you can enjoy making and using AR emoji stickers with your friends and family.</p>
<p>We hope you found this article helpful and informative. If you have any questions or feedback, please let us know in the comments below. And don't forget to share your AR emoji stickers with us too!</p>
<h2>FAQs</h2>
<h3>What is the difference between AR emoji stickers and regular emoji?</h3>
<p>Regular emoji are standard symbols that represent various emotions, objects, animals, etc. They are usually static and have a fixed appearance. AR emoji stickers are customized avatars that you can create and animate using your smartphone's camera. They are usually dynamic and have a variable appearance.</p>
<h3>How can I make my AR emoji stickers look more like me?</h3>
<p>You can make your AR emoji stickers look more like you by adjusting the facial features, skin tone, hair color, eye color, etc. of your avatar. You can also add accessories, such as glasses, hats, earrings, etc. that match your style. You can also use your facial expressions and movements to make your AR emoji stickers more realistic.</p>
<h3>Can I use AR emoji stickers on any device?</h3>
<p>You can use AR emoji stickers on any device that supports AR technology and has a camera. However, some apps may be exclusive to certain devices or operating systems. For example, Samsung's AR Emoji is only available on Galaxy devices, while iPhone's Memoji is only available on iOS devices.</p>
<h3>How can I delete or edit my AR emoji stickers?</h3>
<p>You can delete or edit your AR emoji stickers by following these steps:</p>
<ol>
<li>Open the app that you used to create your AR emoji stickers.</li>
<li>Find the AR emoji sticker that you want to delete or edit.</li>
<li>Tap the menu icon (usually three dots) and select "Delete" or "Edit".</li>
<li>Confirm your action or make the changes you want.</li>
</ol>
<h3>Where can I find more AR emoji stickers to download?</h3>
<p>You can find more AR emoji stickers to download by browsing the app store of your device or searching online for "AR emoji stickers". You can also check out the websites or social media pages of the apps that you use for creating AR emoji stickers, as they may offer more options or updates.</p>
spaces/1phancelerku/anime-remove-background/Bricks King APK The Best Brick Breaker Game for Android.md
DELETED
@@ -1,130 +0,0 @@
<h1>Bricks King APK Download: A Fun and Relaxing Brick Breaker Game</h1>
<p>If you are looking for a new, exciting, and addictive casual game to play on your Android device, you might want to check out Bricks King. This is a brick breaker game that offers smooth and fluid gameplay, amazing powerups, beautiful graphics, and hundreds of challenging levels. In this article, we will tell you what Bricks King is, how to download and install it on your device, and what the pros and cons of doing so are.</p>
<h2>bricks king apk download</h2><br /><p><b><b>Download File</b> ►►► <a href="https://jinyurl.com/2uNKxr">https://jinyurl.com/2uNKxr</a></b></p><br /><br />
<h2>What is Bricks King?</h2>
<p>Bricks King is a casual brick breaker game developed by Prota Games. It was released in January 2023 and has been downloaded over 1 million times from Google Play Store. The game is rated 4.4 out of 5 stars by more than 1,000 users.</p>
<p>The goal of the game is to break all the bricks on the screen by using a ball and a paddle. You can move the paddle left and right by swiping your finger on the screen. The ball will bounce off the paddle and the bricks, creating satisfying chain reactions. You can also use various powerups to enhance your gameplay, such as extra balls, fireballs, magnets, lasers, and more.</p>
<h3>Features of Bricks King</h3>
<p>Bricks King has many features that make it a fun and relaxing game to play. Here are some of them:</p>
<h4>Smooth and fluid gameplay</h4>
<p>The game has smooth and fluid gameplay that makes it easy to control the paddle and the ball. The game also has a clear user interface that shows you your score, level, lives, and powerups. The game runs smoothly on most Android devices without any lag or glitches.</p>
<h4>Amazing powerups and chain reactions</h4>
<p>The game has many powerups that you can collect by breaking certain bricks or hitting them with the ball. Some of the powerups are:</p>
<ul>
<li>Extra balls: This powerup gives you more balls to play with, increasing your chances of breaking more bricks.</li>
<li>Fireball: This powerup makes your ball burn through any brick it touches, creating a trail of fire.</li>
<li>Magnet: This powerup makes your paddle attract the ball, making it easier to catch it.</li>
<li>Laser: This powerup makes your paddle shoot lasers that can break bricks in a straight line.</li>
<li>And more!</li>
</ul>
<p>The game also has amazing chain reactions that happen when you break multiple bricks at once or use powerups. You can see sparks, explosions, flames, and other effects that make the game more enjoyable.</p>
|
22 |
-
<p>bricks king game free download apk<br />
|
23 |
-
bricks king android game apk<br />
|
24 |
-
bricks king casual brick breaker apk<br />
|
25 |
-
bricks king apk mod unlimited money<br />
|
26 |
-
bricks king apk latest version 2023<br />
|
27 |
-
bricks king apk for pc windows 10<br />
|
28 |
-
bricks king apk offline installer<br />
|
29 |
-
bricks king apk uptodown.com<br />
|
30 |
-
bricks king apk pure.com<br />
|
31 |
-
bricks king apk combo.com<br />
|
32 |
-
bricks king apk mirror.com<br />
|
33 |
-
bricks king apk no ads<br />
|
34 |
-
bricks king apk hack version<br />
|
35 |
-
bricks king apk old version<br />
|
36 |
-
bricks king apk new update<br />
|
37 |
-
bricks king apk full unlocked<br />
|
38 |
-
bricks king apk pro version<br />
|
39 |
-
bricks king apk cracked version<br />
|
40 |
-
bricks king apk premium version<br />
|
41 |
-
bricks king apk file download<br />
|
42 |
-
bricks king apk direct download link<br />
|
43 |
-
bricks king apk free download for android<br />
|
44 |
-
bricks king apk download for tablet<br />
|
45 |
-
bricks king apk download for android tv<br />
|
46 |
-
bricks king apk download from google play store<br />
|
47 |
-
bricks king game download apkpure<br />
|
48 |
-
bricks king game download uptodown<br />
|
49 |
-
bricks king game download apkmirror<br />
|
50 |
-
bricks king game download apkpure.com<br />
|
51 |
-
bricks king game download uptodown.com<br />
|
52 |
-
bricks king game download apkmirror.com<br />
|
53 |
-
bricks king game mod apk download<br />
|
54 |
-
bricks king game hack apk download<br />
|
55 |
-
bricks king game latest version apk download<br />
|
56 |
-
bricks king game old version apk download<br />
|
57 |
-
bricks king game new version apk download<br />
|
58 |
-
bricks king game update apk download<br />
|
59 |
-
bricks king game offline apk download<br />
|
60 |
-
bricks king game online apk download<br />
|
61 |
-
bricks king game free online play without downloading the app or the APK file.</p>
|
62 |
-
<h4>Beautiful graphics and sounds</h4>
|
63 |
-
<p>The game has beautiful graphics that are colorful and vibrant. The game also has relaxing sounds that match the gameplay. You can hear the sound of the ball bouncing off the bricks, the sound of the powerups activating, and the sound of the background music. The game also has different themes for each level, such as forest, desert, ocean, space, and more.</p>
|
64 |
-
<h4>Hundreds of challenging levels</h4>
|
65 |
-
<p>The game has hundreds of challenging levels for you to conquer. Each level has a different layout of bricks, different powerups, and different obstacles. Some levels have moving bricks, rotating bricks, invisible bricks, or unbreakable bricks. You have to use your skills and strategy to break all the bricks and complete the level. The game also has a star rating system that rewards you for completing the level with fewer balls or using fewer powerups. You can also replay the levels to improve your score and challenge yourself.</p>
|
66 |
-
<h3>How to download and install Bricks King APK on your Android device</h3>
|
67 |
-
<p>If you want to play Bricks King on your Android device, you can download and install it from Google Play Store. However, if you want to get the latest version of the game or access some features that are not available on the official app, you can download and install the Bricks King APK file from a trusted source. Here are the steps to do so:</p>
|
68 |
-
<h4>Step 1: Enable unknown sources</h4>
|
69 |
-
<p>Before you can install any APK file on your device, you need to enable unknown sources. This is a security setting that allows you to install apps from sources other than Google Play Store. To enable unknown sources, follow these steps:</p>
|
70 |
-
<ul>
|
71 |
-
<li>Go to your device's settings and tap on security or privacy.</li>
|
72 |
-
<li>Find the option that says unknown sources or install unknown apps and toggle it on.</li>
|
73 |
-
<li>A warning message will pop up. Read it carefully and tap on OK or allow.</li>
|
74 |
-
</ul>
|
75 |
-
<h4>Step 2: Download the APK file from a trusted source</h4>
|
76 |
-
<p>Next, you need to download the APK file of Bricks King from a trusted source. There are many websites that offer APK files for free, but not all of them are safe and reliable. Some of them may contain malware, viruses, or unwanted ads that can harm your device or compromise your privacy. To avoid this, you should only download APK files from reputable sources that have positive reviews and ratings from other users. One such source is [APKPure], which is a popular and trusted website that provides safe and updated APK files for various apps and games. To download the APK file of Bricks King from APKPure, follow these steps:</p>
|
77 |
-
<ul>
|
78 |
-
<li>Go to [APKPure] using your device's browser.</li>
|
79 |
-
<li>Type Bricks King in the search bar and tap on the search icon.</li>
|
80 |
-
<li>Find the app that matches the name and icon of Bricks King and tap on it.</li>
|
81 |
-
<li>Tap on the download button and wait for the download to finish.</li>
|
82 |
-
</ul>
|
83 |
-
<h4>Step 3: Locate and install the APK file</h4>
|
84 |
-
<p>After you have downloaded the APK file of Bricks King, you need to locate and install it on your device. To do this, follow these steps:</p>
|
85 |
-
<ul>
|
86 |
-
<li>Go to your device's file manager and find the folder where you saved the APK file. It is usually in the downloads folder.</li>
|
87 |
-
<li>Tap on the APK file and a prompt will appear. Tap on install and wait for the installation to complete.</li>
|
88 |
-
<li>If another prompt appears asking for permissions, tap on allow or accept.</li>
|
89 |
-
</ul>
|
90 |
-
<h4>Step 4: Launch and enjoy the game</h4>
|
91 |
-
<p>Once you have installed the APK file of Bricks King, you can launch and enjoy the game. To do this, follow these steps:</p>
|
92 |
-
<ul>
|
93 |
-
<li>Go to your device's app drawer and find the icon of Bricks King. Tap on it to open the game.</li>
|
94 |
-
<li>You may see a splash screen or an intro video. Wait for it to finish or skip it if possible.</li>
|
95 |
-
<li>You will see the main menu of the game. Tap on play or start to begin playing.</li>
|
96 |
-
<li>You can also adjust the settings, view your achievements, or access other features of the game from the main menu.</li>
|
97 |
-
</ul>
|
98 |
-
<h3>Pros and cons of Bricks King APK download</h3>
|
99 |
-
<p>Downloading and installing Bricks King APK on your device has some pros and cons that you should be aware of. Here are some of them:</p>
|
100 |
-
<h4>Pros</h4>
|
101 |
-
<ul>
|
102 |
-
<li>You can get the latest version of the game before it is available on Google Play Store.</li>
|
103 |
-
<li>You can access some features that are not available on the official app, such as unlimited coins, unlocked levels, or ad-free gameplay.</li>
|
104 |
-
<li>You can play the game even if it is not compatible with your device or region.</li>
|
105 |
-
<li>You can save some storage space by deleting the original app after installing the APK file.</li>
|
106 |
-
</ul>
|
107 |
-
<h4>Cons</h4>
|
108 |
-
<ul>
|
109 |
-
<li>You may encounter some bugs or errors that are not fixed yet by the developers.</li>
|
110 |
-
<li>You may not receive any updates or support from the developers if you encounter any problems with the game.</li>
|
111 |
-
<li>You may violate some terms and conditions of Google Play Store or the developers by installing an unofficial app.</li>
|
112 |
-
<li>You may expose your device or data to some risks by installing an app from an unknown source.</li>
|
113 |
-
<| <h2>Conclusion</h2>
|
114 |
-
<p>Bricks King is a fun and relaxing brick breaker game that you can play on your Android device. It has smooth and fluid gameplay, amazing powerups, beautiful graphics, and hundreds of challenging levels. You can download and install it from Google Play Store or from a trusted source like APKPure. However, you should also be aware of the pros and cons of doing so, and make sure you have enabled unknown sources on your device. If you are looking for a new, exciting, and addictive casual game to play, you should give Bricks King a try.</p>
|
115 |
-
<h3>FAQs</h3>
|
116 |
-
<p>Here are some frequently asked questions about Bricks King APK download:</p>
|
117 |
-
<ol>
|
118 |
-
<li>Is Bricks King APK download safe?</li>
|
119 |
-
<p>Bricks King APK download is safe if you download it from a trusted source like APKPure. However, you should always scan the APK file with an antivirus or malware scanner before installing it on your device. You should also avoid downloading APK files from unknown or suspicious sources that may contain harmful or unwanted content.</p>
|
120 |
-
<li>How can I update Bricks King APK?</li>
|
121 |
-
<p>If you have downloaded Bricks King APK from a trusted source like APKPure, you can update it by visiting the same website and downloading the latest version of the APK file. You can then install it over the existing app without losing your progress or data. However, you may not receive any notifications or alerts about the updates, so you have to check the website regularly for any new versions.</p>
|
122 |
-
<li>Can I play Bricks King offline?</li>
|
123 |
-
<p>Yes, you can play Bricks King offline without any internet connection. However, some features of the game may not work properly or at all, such as the leaderboard, achievements, or ads. You may also miss out on some updates or bug fixes that require an internet connection.</p>
|
124 |
-
<li>Can I play Bricks King on PC?</li>
|
125 |
-
<p>Yes, you can play Bricks King on PC by using an Android emulator. An Android emulator is a software that allows you to run Android apps and games on your PC. Some of the popular Android emulators are BlueStacks, NoxPlayer, and LDPlayer. You can download and install any of these emulators on your PC and then download and install Bricks King APK from a trusted source like APKPure. You can then launch and enjoy the game on your PC.</p>
|
126 |
-
<li>How can I contact the developers of Bricks King?</li>
|
127 |
-
<p>If you have any questions, feedback, suggestions, or issues with Bricks King, you can contact the developers of the game by sending them an email at [email protected]. You can also visit their website at https://protagames.com/ or follow them on Facebook at https://www.facebook.com/protagames/.</p>
|
128 |
-
</ol></p> 197e85843d<br />
|
129 |
-
<br />
|
130 |
-
<br />
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
spaces/1toTree/lora_test/ppdiffusers/models/cross_attention.py
DELETED
@@ -1,435 +0,0 @@
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Optional, Union

import paddle
import paddle.nn as nn
import paddle.nn.functional as F

from ..initializer import normal_, zeros_


class CrossAttention(nn.Layer):
    r"""
    A cross attention layer.

    Parameters:
        query_dim (`int`): The number of channels in the query.
        cross_attention_dim (`int`, *optional*):
            The number of channels in the encoder_hidden_states. If not given, defaults to `query_dim`.
        heads (`int`, *optional*, defaults to 8): The number of heads to use for multi-head attention.
        dim_head (`int`, *optional*, defaults to 64): The number of channels in each head.
        dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use.
        bias (`bool`, *optional*, defaults to False):
            Set to `True` for the query, key, and value linear layers to contain a bias parameter.
    """

    def __init__(
        self,
        query_dim: int,
        cross_attention_dim: Optional[int] = None,
        heads: int = 8,
        dim_head: int = 64,
        dropout: float = 0.0,
        bias=False,
        upcast_attention: bool = False,
        upcast_softmax: bool = False,
        added_kv_proj_dim: Optional[int] = None,
        norm_num_groups: Optional[int] = None,
        processor: Optional["AttnProcessor"] = None,
    ):
        super().__init__()
        inner_dim = dim_head * heads
        cross_attention_dim = cross_attention_dim if cross_attention_dim is not None else query_dim
        self.upcast_attention = upcast_attention
        self.upcast_softmax = upcast_softmax

        self.scale = dim_head**-0.5
        self.num_heads = heads
        self.head_dim = inner_dim // heads
        # for slice_size > 0 the attention score computation
        # is split across the batch axis to save memory
        # You can set slice_size with `set_attention_slice`
        self.sliceable_head_dim = heads

        self.added_kv_proj_dim = added_kv_proj_dim

        if norm_num_groups is not None:
            self.group_norm = nn.GroupNorm(num_channels=inner_dim, num_groups=norm_num_groups, epsilon=1e-5)
        else:
            self.group_norm = None

        self.to_q = nn.Linear(query_dim, inner_dim, bias_attr=bias)
        self.to_k = nn.Linear(cross_attention_dim, inner_dim, bias_attr=bias)
        self.to_v = nn.Linear(cross_attention_dim, inner_dim, bias_attr=bias)

        if self.added_kv_proj_dim is not None:
            self.add_k_proj = nn.Linear(added_kv_proj_dim, cross_attention_dim)
            self.add_v_proj = nn.Linear(added_kv_proj_dim, cross_attention_dim)

        self.to_out = nn.LayerList([])
        self.to_out.append(nn.Linear(inner_dim, query_dim))
        self.to_out.append(nn.Dropout(dropout))

        # set attention processor
        processor = processor if processor is not None else CrossAttnProcessor()
        self.set_processor(processor)

    def set_attention_slice(self, slice_size):
        if slice_size is not None and slice_size > self.sliceable_head_dim:
            raise ValueError(f"slice_size {slice_size} has to be smaller or equal to {self.sliceable_head_dim}.")

        if slice_size is not None and self.added_kv_proj_dim is not None:
            processor = SlicedAttnAddedKVProcessor(slice_size)
        elif slice_size is not None:
            processor = SlicedAttnProcessor(slice_size)
        elif self.added_kv_proj_dim is not None:
            processor = CrossAttnAddedKVProcessor()
        else:
            processor = CrossAttnProcessor()

        self.set_processor(processor)

    def set_processor(self, processor: "AttnProcessor"):
        self.processor = processor

    def forward(self, hidden_states, encoder_hidden_states=None, attention_mask=None, **cross_attention_kwargs):
        # The `CrossAttention` class can call different attention processors / attention functions
        # here we simply pass along all tensors to the selected processor class
        # For standard processors that are defined here, `**cross_attention_kwargs` is empty
        return self.processor(
            self,
            hidden_states,
            encoder_hidden_states=encoder_hidden_states,
            attention_mask=attention_mask,
            **cross_attention_kwargs,
        )

    def batch_to_head_dim(self, tensor):
        tensor = tensor.transpose([0, 2, 1, 3])
        tensor = tensor.reshape([0, 0, tensor.shape[2] * tensor.shape[3]])
        return tensor

    def head_to_batch_dim(self, tensor):
        tensor = tensor.reshape([0, 0, self.num_heads, self.head_dim])
        tensor = tensor.transpose([0, 2, 1, 3])
        return tensor

    def get_attention_scores(self, query, key, attention_mask=None):
        if self.upcast_attention:
            query = query.cast("float32")
            key = key.cast("float32")

        attention_scores = paddle.matmul(query, key, transpose_y=True) * self.scale

        if attention_mask is not None:
            attention_scores = attention_scores + attention_mask

        if self.upcast_softmax:
            attention_scores = attention_scores.cast("float32")

        attention_probs = F.softmax(attention_scores, axis=-1)
        if self.upcast_softmax:
            attention_probs = attention_probs.cast(query.dtype)

        return attention_probs

    def prepare_attention_mask(self, attention_mask, target_length):
        if attention_mask is None:
            return attention_mask

        if attention_mask.shape[-1] != target_length:
            attention_mask = F.pad(attention_mask, (0, target_length), value=0.0, data_format="NCL")
            attention_mask = attention_mask.repeat_interleave(self.num_heads, axis=0)
        return attention_mask


class CrossAttnProcessor:
    def __call__(self, attn: CrossAttention, hidden_states, encoder_hidden_states=None, attention_mask=None):
        batch_size, sequence_length, _ = hidden_states.shape
        attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length)
        attention_mask = (
            attention_mask.reshape([batch_size, attn.num_heads, -1, attention_mask.shape[-1]])
            if attention_mask is not None
            else None
        )

        query = attn.to_q(hidden_states)
        query = attn.head_to_batch_dim(query)

        encoder_hidden_states = encoder_hidden_states if encoder_hidden_states is not None else hidden_states
        key = attn.to_k(encoder_hidden_states)
        value = attn.to_v(encoder_hidden_states)
        key = attn.head_to_batch_dim(key)
        value = attn.head_to_batch_dim(value)

        attention_probs = attn.get_attention_scores(query, key, attention_mask)
        hidden_states = paddle.matmul(attention_probs, value)
        hidden_states = attn.batch_to_head_dim(hidden_states)

        # linear proj
        hidden_states = attn.to_out[0](hidden_states)
        # dropout
        hidden_states = attn.to_out[1](hidden_states)

        return hidden_states


class LoRALinearLayer(nn.Layer):
    def __init__(self, in_features, out_features, rank=4):
        super().__init__()

        if rank > min(in_features, out_features):
            raise ValueError(f"LoRA rank {rank} must be less or equal than {min(in_features, out_features)}")

        self.down = nn.Linear(in_features, rank, bias_attr=False)
        self.up = nn.Linear(rank, out_features, bias_attr=False)
        self.scale = 1.0

        normal_(self.down.weight, std=1 / rank)
        zeros_(self.up.weight)

    def forward(self, hidden_states):
        orig_dtype = hidden_states.dtype
        dtype = self.down.weight.dtype

        down_hidden_states = self.down(hidden_states.cast(dtype))
        up_hidden_states = self.up(down_hidden_states)

        return up_hidden_states.cast(orig_dtype)


class LoRACrossAttnProcessor(nn.Layer):
    def __init__(self, hidden_size, cross_attention_dim=None, rank=4):
        super().__init__()

        self.hidden_size = hidden_size
        self.cross_attention_dim = cross_attention_dim
        self.rank = rank

        self.to_q_lora = LoRALinearLayer(hidden_size, hidden_size, rank)
        self.to_k_lora = LoRALinearLayer(cross_attention_dim or hidden_size, hidden_size, rank)
        self.to_v_lora = LoRALinearLayer(cross_attention_dim or hidden_size, hidden_size, rank)
        self.to_out_lora = LoRALinearLayer(hidden_size, hidden_size, rank)

    def __call__(
        self, attn: CrossAttention, hidden_states, encoder_hidden_states=None, attention_mask=None, scale=1.0
    ):
        batch_size, sequence_length, _ = hidden_states.shape
        attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length)
        attention_mask = (
            attention_mask.reshape([batch_size, attn.num_heads, -1, attention_mask.shape[-1]])
            if attention_mask is not None
            else None
        )

        query = attn.to_q(hidden_states) + scale * self.to_q_lora(hidden_states)
        query = attn.head_to_batch_dim(query)

        encoder_hidden_states = encoder_hidden_states if encoder_hidden_states is not None else hidden_states

        key = attn.to_k(encoder_hidden_states) + scale * self.to_k_lora(encoder_hidden_states)
        value = attn.to_v(encoder_hidden_states) + scale * self.to_v_lora(encoder_hidden_states)

        key = attn.head_to_batch_dim(key)
        value = attn.head_to_batch_dim(value)

        attention_probs = attn.get_attention_scores(query, key, attention_mask)
        hidden_states = paddle.matmul(attention_probs, value)
        hidden_states = attn.batch_to_head_dim(hidden_states)

        # linear proj
        hidden_states = attn.to_out[0](hidden_states) + scale * self.to_out_lora(hidden_states)
        # dropout
        hidden_states = attn.to_out[1](hidden_states)

        return hidden_states


class CrossAttnAddedKVProcessor:
    def __call__(self, attn: CrossAttention, hidden_states, encoder_hidden_states=None, attention_mask=None):
        residual = hidden_states
        hidden_states = hidden_states.reshape([hidden_states.shape[0], hidden_states.shape[1], -1]).transpose(
            [0, 2, 1]
        )
        batch_size, sequence_length, _ = hidden_states.shape
        encoder_hidden_states = encoder_hidden_states.transpose([0, 2, 1])

        attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length)
        attention_mask = (
            attention_mask.reshape([batch_size, attn.num_heads, -1, attention_mask.shape[-1]])
            if attention_mask is not None
            else None
        )

        hidden_states = attn.group_norm(hidden_states.transpose([0, 2, 1])).transpose([0, 2, 1])

        query = attn.to_q(hidden_states)
        query = attn.head_to_batch_dim(query)

        key = attn.to_k(hidden_states)
        value = attn.to_v(hidden_states)
        key = attn.head_to_batch_dim(key)
        value = attn.head_to_batch_dim(value)

        encoder_hidden_states_key_proj = attn.add_k_proj(encoder_hidden_states)
        encoder_hidden_states_value_proj = attn.add_v_proj(encoder_hidden_states)
        encoder_hidden_states_key_proj = attn.head_to_batch_dim(encoder_hidden_states_key_proj)
        encoder_hidden_states_value_proj = attn.head_to_batch_dim(encoder_hidden_states_value_proj)

        key = paddle.concat([encoder_hidden_states_key_proj, key], axis=2)
        value = paddle.concat([encoder_hidden_states_value_proj, value], axis=2)

        attention_probs = attn.get_attention_scores(query, key, attention_mask)
        hidden_states = paddle.matmul(attention_probs, value)
        hidden_states = attn.batch_to_head_dim(hidden_states)

        # linear proj
        hidden_states = attn.to_out[0](hidden_states)
        # dropout
        hidden_states = attn.to_out[1](hidden_states)

        hidden_states = hidden_states.transpose([0, 2, 1]).reshape(residual.shape)
        hidden_states = hidden_states + residual

        return hidden_states


class SlicedAttnProcessor:
    def __init__(self, slice_size):
        self.slice_size = slice_size

    def __call__(self, attn: CrossAttention, hidden_states, encoder_hidden_states=None, attention_mask=None):
        batch_size, sequence_length, _ = hidden_states.shape

        attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length)

        query = attn.to_q(hidden_states)
        query = attn.head_to_batch_dim(query)

        encoder_hidden_states = encoder_hidden_states if encoder_hidden_states is not None else hidden_states
        key = attn.to_k(encoder_hidden_states)
        value = attn.to_v(encoder_hidden_states)
        key = attn.head_to_batch_dim(key)
        value = attn.head_to_batch_dim(value)

        query = query.flatten(0, 1)
        key = key.flatten(0, 1)
        value = value.flatten(0, 1)

        batch_size_attention = query.shape[0]
        hidden_states = paddle.zeros((batch_size_attention, sequence_length, attn.head_dim), dtype=query.dtype)

        for i in range(hidden_states.shape[0] // self.slice_size):
            start_idx = i * self.slice_size
            end_idx = (i + 1) * self.slice_size

            query_slice = query[start_idx:end_idx]
            key_slice = key[start_idx:end_idx]
            attn_mask_slice = attention_mask[start_idx:end_idx] if attention_mask is not None else None

            attn_slice = attn.get_attention_scores(query_slice, key_slice, attn_mask_slice)

            attn_slice = paddle.matmul(attn_slice, value[start_idx:end_idx])

            hidden_states[start_idx:end_idx] = attn_slice

        # reshape back to [bs, num_heads, seqlen, head_dim]
        hidden_states = hidden_states.reshape([-1, attn.num_heads, sequence_length, attn.head_dim])
        # reshape hidden_states
        hidden_states = attn.batch_to_head_dim(hidden_states)

        # linear proj
        hidden_states = attn.to_out[0](hidden_states)
        # dropout
        hidden_states = attn.to_out[1](hidden_states)

        return hidden_states


class SlicedAttnAddedKVProcessor:
    def __init__(self, slice_size):
        self.slice_size = slice_size

    def __call__(self, attn: "CrossAttention", hidden_states, encoder_hidden_states=None, attention_mask=None):
        residual = hidden_states
        hidden_states = hidden_states.reshape([hidden_states.shape[0], hidden_states.shape[1], -1]).transpose(
            [0, 2, 1]
        )
        encoder_hidden_states = encoder_hidden_states.transpose([0, 2, 1])

        batch_size, sequence_length, _ = hidden_states.shape

        attention_mask = attn.prepare_attention_mask(attention_mask, sequence_length)

        hidden_states = attn.group_norm(hidden_states.transpose([0, 2, 1])).transpose([0, 2, 1])

        query = attn.to_q(hidden_states)
        query = attn.head_to_batch_dim(query)

        key = attn.to_k(hidden_states)
        value = attn.to_v(hidden_states)
        encoder_hidden_states_key_proj = attn.add_k_proj(encoder_hidden_states)
        encoder_hidden_states_value_proj = attn.add_v_proj(encoder_hidden_states)

        key = attn.head_to_batch_dim(key)
        value = attn.head_to_batch_dim(value)
        encoder_hidden_states_key_proj = attn.head_to_batch_dim(encoder_hidden_states_key_proj)
        encoder_hidden_states_value_proj = attn.head_to_batch_dim(encoder_hidden_states_value_proj)

        key = paddle.concat([encoder_hidden_states_key_proj, key], axis=2)
        value = paddle.concat([encoder_hidden_states_value_proj, value], axis=2)

        query = query.flatten(0, 1)
        key = key.flatten(0, 1)
        value = value.flatten(0, 1)

        batch_size_attention = query.shape[0]
        hidden_states = paddle.zeros((batch_size_attention, sequence_length, attn.head_dim), dtype=query.dtype)
        for i in range(hidden_states.shape[0] // self.slice_size):
            start_idx = i * self.slice_size
            end_idx = (i + 1) * self.slice_size

            query_slice = query[start_idx:end_idx]
            key_slice = key[start_idx:end_idx]
            attn_mask_slice = attention_mask[start_idx:end_idx] if attention_mask is not None else None

            attn_slice = attn.get_attention_scores(query_slice, key_slice, attn_mask_slice)

            attn_slice = paddle.matmul(attn_slice, value[start_idx:end_idx])

            hidden_states[start_idx:end_idx] = attn_slice

        # reshape back to [bs, num_heads, seqlen, head_dim]
        hidden_states = hidden_states.reshape([-1, attn.num_heads, sequence_length, attn.head_dim])
        # reshape hidden_states
        hidden_states = attn.batch_to_head_dim(hidden_states)

        # linear proj
        hidden_states = attn.to_out[0](hidden_states)
        # dropout
        hidden_states = attn.to_out[1](hidden_states)

        hidden_states = hidden_states.transpose([0, 2, 1]).reshape(residual.shape)
        hidden_states = hidden_states + residual

        return hidden_states


AttnProcessor = Union[
    CrossAttnProcessor,
    SlicedAttnProcessor,
    CrossAttnAddedKVProcessor,
    SlicedAttnAddedKVProcessor,
]
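A minimal usage sketch for the module above (a reviewer's illustration, not part of the commit): it exercises CrossAttention in self-attention mode and then swaps in sliced attention. The import path and shapes are assumptions; because the module uses the relative import `..initializer`, it must be loaded from inside the ppdiffusers package.

# Illustrative only -- not from the commit. Assumes paddlepaddle and the
# ppdiffusers package are installed so the relative imports above resolve.
import paddle
from ppdiffusers.models.cross_attention import CrossAttention

attn = CrossAttention(query_dim=320, heads=8, dim_head=40)   # inner_dim = 8 * 40 = 320
x = paddle.randn([2, 64, 320])                               # [batch, sequence, channels]
out = attn(x)                 # encoder_hidden_states=None -> plain self-attention
assert out.shape == [2, 64, 320]

# Trade speed for memory: scores are now computed in slices of 4 head-batches.
attn.set_attention_slice(4)   # installs SlicedAttnProcessor(4)
out_sliced = attn(x)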
spaces/34we12er/newbing/Dockerfile
DELETED
@@ -1,34 +0,0 @@
# Build Stage
# Use golang:alpine as the base image for the build stage
FROM golang:alpine AS builder

# Add git so the project can be cloned from GitHub
RUN apk --no-cache add git

# Clone the go-proxy-bingai project from GitHub into /workspace/app
RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app

# Set the working directory to the cloned project
WORKDIR /workspace/app

# Build the Go project. -ldflags="-s -w" reduces the size of the compiled binary
RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go

# Runtime Stage
# Use the lightweight alpine image as the runtime base image
FROM alpine

# Set the working directory
WORKDIR /workspace/app

# Copy the compiled binary from the build stage into the runtime image
COPY --from=builder /workspace/app/go-proxy-bingai .

# Set the environment variable; the value here is a random string
ENV Go_Proxy_BingAI_USER_TOKEN_1="adhdadtbjxiuaj2562715zshyw38bjxy012hdy37bdola9"

# Expose port 8080
EXPOSE 8080

# Command to run when the container starts
CMD ["/workspace/app/go-proxy-bingai"]
spaces/4Taps/SadTalker/src/facerender/modules/keypoint_detector.py
DELETED
@@ -1,179 +0,0 @@
from torch import nn
import torch
import torch.nn.functional as F

from src.facerender.sync_batchnorm import SynchronizedBatchNorm2d as BatchNorm2d
from src.facerender.modules.util import KPHourglass, make_coordinate_grid, AntiAliasInterpolation2d, ResBottleneck


class KPDetector(nn.Module):
    """
    Detecting canonical keypoints. Return keypoint position and jacobian near each keypoint.
    """

    def __init__(self, block_expansion, feature_channel, num_kp, image_channel, max_features, reshape_channel, reshape_depth,
                 num_blocks, temperature, estimate_jacobian=False, scale_factor=1, single_jacobian_map=False):
        super(KPDetector, self).__init__()

        self.predictor = KPHourglass(block_expansion, in_features=image_channel,
                                     max_features=max_features, reshape_features=reshape_channel, reshape_depth=reshape_depth, num_blocks=num_blocks)

        # self.kp = nn.Conv3d(in_channels=self.predictor.out_filters, out_channels=num_kp, kernel_size=7, padding=3)
        self.kp = nn.Conv3d(in_channels=self.predictor.out_filters, out_channels=num_kp, kernel_size=3, padding=1)

        if estimate_jacobian:
            self.num_jacobian_maps = 1 if single_jacobian_map else num_kp
            # self.jacobian = nn.Conv3d(in_channels=self.predictor.out_filters, out_channels=9 * self.num_jacobian_maps, kernel_size=7, padding=3)
            self.jacobian = nn.Conv3d(in_channels=self.predictor.out_filters, out_channels=9 * self.num_jacobian_maps, kernel_size=3, padding=1)
            '''
            initial as:
            [[1 0 0]
             [0 1 0]
             [0 0 1]]
            '''
            self.jacobian.weight.data.zero_()
            self.jacobian.bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0, 0, 0, 1] * self.num_jacobian_maps, dtype=torch.float))
        else:
            self.jacobian = None

        self.temperature = temperature
        self.scale_factor = scale_factor
        if self.scale_factor != 1:
            self.down = AntiAliasInterpolation2d(image_channel, self.scale_factor)

    def gaussian2kp(self, heatmap):
        """
        Extract the mean from a heatmap
        """
        shape = heatmap.shape
        heatmap = heatmap.unsqueeze(-1)
        grid = make_coordinate_grid(shape[2:], heatmap.type()).unsqueeze_(0).unsqueeze_(0)
        value = (heatmap * grid).sum(dim=(2, 3, 4))
        kp = {'value': value}

        return kp

    def forward(self, x):
        if self.scale_factor != 1:
            x = self.down(x)

        feature_map = self.predictor(x)
        prediction = self.kp(feature_map)

        final_shape = prediction.shape
        heatmap = prediction.view(final_shape[0], final_shape[1], -1)
        heatmap = F.softmax(heatmap / self.temperature, dim=2)
        heatmap = heatmap.view(*final_shape)

        out = self.gaussian2kp(heatmap)

        if self.jacobian is not None:
            jacobian_map = self.jacobian(feature_map)
            jacobian_map = jacobian_map.reshape(final_shape[0], self.num_jacobian_maps, 9, final_shape[2],
                                                final_shape[3], final_shape[4])
            heatmap = heatmap.unsqueeze(2)

            jacobian = heatmap * jacobian_map
            jacobian = jacobian.view(final_shape[0], final_shape[1], 9, -1)
            jacobian = jacobian.sum(dim=-1)
            jacobian = jacobian.view(jacobian.shape[0], jacobian.shape[1], 3, 3)
            out['jacobian'] = jacobian

        return out


class HEEstimator(nn.Module):
    """
    Estimating head pose and expression.
    """

    def __init__(self, block_expansion, feature_channel, num_kp, image_channel, max_features, num_bins=66, estimate_jacobian=True):
        super(HEEstimator, self).__init__()

        self.conv1 = nn.Conv2d(in_channels=image_channel, out_channels=block_expansion, kernel_size=7, padding=3, stride=2)
        self.norm1 = BatchNorm2d(block_expansion, affine=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)

        self.conv2 = nn.Conv2d(in_channels=block_expansion, out_channels=256, kernel_size=1)
        self.norm2 = BatchNorm2d(256, affine=True)

        self.block1 = nn.Sequential()
        for i in range(3):
            self.block1.add_module('b1_' + str(i), ResBottleneck(in_features=256, stride=1))

        self.conv3 = nn.Conv2d(in_channels=256, out_channels=512, kernel_size=1)
        self.norm3 = BatchNorm2d(512, affine=True)
        self.block2 = ResBottleneck(in_features=512, stride=2)

        self.block3 = nn.Sequential()
        for i in range(3):
            self.block3.add_module('b3_' + str(i), ResBottleneck(in_features=512, stride=1))

        self.conv4 = nn.Conv2d(in_channels=512, out_channels=1024, kernel_size=1)
        self.norm4 = BatchNorm2d(1024, affine=True)
        self.block4 = ResBottleneck(in_features=1024, stride=2)

        self.block5 = nn.Sequential()
        for i in range(5):
            self.block5.add_module('b5_' + str(i), ResBottleneck(in_features=1024, stride=1))

        self.conv5 = nn.Conv2d(in_channels=1024, out_channels=2048, kernel_size=1)
        self.norm5 = BatchNorm2d(2048, affine=True)
        self.block6 = ResBottleneck(in_features=2048, stride=2)

        self.block7 = nn.Sequential()
        for i in range(2):
            self.block7.add_module('b7_' + str(i), ResBottleneck(in_features=2048, stride=1))

        self.fc_roll = nn.Linear(2048, num_bins)
        self.fc_pitch = nn.Linear(2048, num_bins)
        self.fc_yaw = nn.Linear(2048, num_bins)

        self.fc_t = nn.Linear(2048, 3)

        self.fc_exp = nn.Linear(2048, 3 * num_kp)

    def forward(self, x):
        out = self.conv1(x)
        out = self.norm1(out)
        out = F.relu(out)
        out = self.maxpool(out)

        out = self.conv2(out)
        out = self.norm2(out)
        out = F.relu(out)

        out = self.block1(out)

        out = self.conv3(out)
        out = self.norm3(out)
        out = F.relu(out)
        out = self.block2(out)

        out = self.block3(out)

        out = self.conv4(out)
        out = self.norm4(out)
        out = F.relu(out)
        out = self.block4(out)

        out = self.block5(out)

        out = self.conv5(out)
        out = self.norm5(out)
        out = F.relu(out)
        out = self.block6(out)

        out = self.block7(out)

        out = F.adaptive_avg_pool2d(out, 1)
        out = out.view(out.shape[0], -1)

        yaw = self.fc_roll(out)
        pitch = self.fc_pitch(out)
        roll = self.fc_yaw(out)
        t = self.fc_t(out)
        exp = self.fc_exp(out)

        return {'yaw': yaw, 'pitch': pitch, 'roll': roll, 't': t, 'exp': exp}
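For reference, a self-contained sketch (not from the repository) of the soft-argmax idea that `gaussian2kp` implements: a temperature-scaled softmax heatmap is reduced to keypoint coordinates by taking its expectation over a [-1, 1] coordinate grid, shown here in 2D rather than the module's 3D.

# Standalone illustration of soft-argmax keypoint extraction.
import torch
import torch.nn.functional as F

def soft_argmax_2d(logits, temperature=0.1):
    """logits: [batch, num_kp, H, W] -> coordinates in [-1, 1], shape [batch, num_kp, 2]."""
    b, k, h, w = logits.shape
    probs = F.softmax(logits.view(b, k, -1) / temperature, dim=2).view(b, k, h, w)
    ys = torch.linspace(-1, 1, h).view(1, 1, h, 1)
    xs = torch.linspace(-1, 1, w).view(1, 1, 1, w)
    y = (probs * ys).sum(dim=(2, 3))   # expected row coordinate
    x = (probs * xs).sum(dim=(2, 3))   # expected column coordinate
    return torch.stack([x, y], dim=-1)

coords = soft_argmax_2d(torch.randn(2, 15, 64, 64))
print(coords.shape)  # torch.Size([2, 15, 2])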
spaces/52Hz/SRMNet_thesis/app.py
DELETED
@@ -1,72 +0,0 @@
import os
import gradio as gr
from PIL import Image


os.system('wget https://github.com/FanChiMao/SRMNet-thesis/releases/download/v0.0/Deblurring_motionblur.pth -P experiments/pretrained_models')
os.system('wget https://github.com/FanChiMao/SRMNet-thesis/releases/download/v0.0/Dehaze_realworld.pth -P experiments/pretrained_models')
os.system('wget https://github.com/FanChiMao/SRMNet-thesis/releases/download/v0.0/Denoise_gaussian.pth -P experiments/pretrained_models')
os.system('wget https://github.com/FanChiMao/SRMNet-thesis/releases/download/v0.0/Denoise_realworld.pth -P experiments/pretrained_models')
os.system('wget https://github.com/FanChiMao/SRMNet-thesis/releases/download/v0.0/Deraining_raindrop.pth -P experiments/pretrained_models')
os.system('wget https://github.com/FanChiMao/SRMNet-thesis/releases/download/v0.0/Deraining_rainstreak.pth -P experiments/pretrained_models')
os.system('wget https://github.com/FanChiMao/SRMNet-thesis/releases/download/v0.0/LLEnhancement.pth -P experiments/pretrained_models')
os.system('wget https://github.com/FanChiMao/SRMNet-thesis/releases/download/v0.0/Retouching.pth -P experiments/pretrained_models')

def inference(img, model):
    os.system('mkdir test')
    img.save("test/1.png", "PNG")

    if model == 'Denoising (gaussian)':
        os.system('python main_test_SRMNet.py --input_dir test --task Denoise_gaussian')
    elif model == 'Denoising (real-world)':
        os.system('python main_test_SRMNet.py --input_dir test --task Denoise_realworld')
    elif model == 'Deblurring (motion-blur)':
        os.system('python main_test_SRMNet.py --input_dir test --task Deblurring_motionblur')
    elif model == 'Dehazing (dense haze)':
        os.system('python main_test_SRMNet.py --input_dir test --task Dehaze_realworld')
    elif model == 'Deraining (rainstreak)':
        os.system('python main_test_SRMNet.py --input_dir test --task Deraining_rainstreak')
    elif model == 'Deraining (raindrop)':
        os.system('python main_test_SRMNet.py --input_dir test --task Deraining_raindrop')
    elif model == 'Low-light Enhancement':
        os.system('python main_test_SRMNet.py --input_dir test --task LLEnhancement')
    elif model == 'Retouching':
        os.system('python main_test_SRMNet.py --input_dir test --task Retouching')

    return 'result/1.png'


title = "[NCHU thesis] Image Restoration by Selective Residual Block on Improved Hierarchical Encoder-Decoder Networks"
description = ""
article = "<p style='text-align: center'><a href='https://' target='_blank'>Image Restoration by Selective Residual Block on Improved Hierarchical Encoder-Decoder Networks</a> | <a href='https://github.com/FanChiMao/SRMNet-thesis' target='_blank'>Github Repo</a></p> <center><img src='https://visitor-badge.glitch.me/badge?page_id=52Hz_SRMNet_thesis' alt='visitor badge'></center>"

examples = [
    ['figures/noise_1.png', 'Denoising (gaussian)'],
    ['figures/noise_2.png', 'Denoising (real-world)'],
    ['figures/blur.png', 'Deblurring (motion-blur)'],
    ['figures/haze.png', 'Dehazing (dense haze)'],
    ['figures/rainstreak.png', 'Deraining (rainstreak)'],
    ['figures/raindrop.png', 'Deraining (raindrop)'],
    ['figures/LL.png', 'Low-light Enhancement'],
    ['figures/nchu.png', 'Retouching'],
]
gr.Interface(
    inference,
    [gr.inputs.Image(type="pil", label="Input"), gr.inputs.Dropdown(choices=[
        'Denoising (gaussian)',
        'Denoising (real-world)',
        'Deblurring (motion-blur)',
        'Dehazing (dense haze)',
        'Deraining (rainstreak)',
        'Deraining (raindrop)',
        'Low-light Enhancement',
        'Retouching',
    ], type="value", default='Denoising (gaussian)', label="model")],
    gr.outputs.Image(type="file", label="Output"),
    title=title,
    description=description,
    article=article,
    allow_flagging=False,
    allow_screenshot=False,
    examples=examples
).launch(debug=True)
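The if/elif chain in `inference` above maps each dropdown label to a `--task` flag. A table-driven sketch of the same dispatch (a reviewer's illustration, not the repository's code) keeps the mapping in one place:

# Illustrative only: same behavior as the elif chain, as a lookup table,
# so adding a task is a one-line change.
import os

TASKS = {
    'Denoising (gaussian)': 'Denoise_gaussian',
    'Denoising (real-world)': 'Denoise_realworld',
    'Deblurring (motion-blur)': 'Deblurring_motionblur',
    'Dehazing (dense haze)': 'Dehaze_realworld',
    'Deraining (rainstreak)': 'Deraining_rainstreak',
    'Deraining (raindrop)': 'Deraining_raindrop',
    'Low-light Enhancement': 'LLEnhancement',
    'Retouching': 'Retouching',
}

def run_task(model_label: str) -> None:
    task = TASKS[model_label]  # raises KeyError on an unknown label instead of a silent no-op
    os.system(f'python main_test_SRMNet.py --input_dir test --task {task}')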
spaces/A666sxr/Genshin_TTS/losses.py
DELETED
@@ -1,71 +0,0 @@
import torch
from torch.nn import functional as F
from stft_loss import MultiResolutionSTFTLoss


import commons


def feature_loss(fmap_r, fmap_g):
    loss = 0
    for dr, dg in zip(fmap_r, fmap_g):
        for rl, gl in zip(dr, dg):
            rl = rl.float().detach()
            gl = gl.float()
            loss += torch.mean(torch.abs(rl - gl))

    return loss * 2


def discriminator_loss(disc_real_outputs, disc_generated_outputs):
    loss = 0
    r_losses = []
    g_losses = []
    for dr, dg in zip(disc_real_outputs, disc_generated_outputs):
        dr = dr.float()
        dg = dg.float()
        r_loss = torch.mean((1 - dr) ** 2)
        g_loss = torch.mean(dg ** 2)
        loss += (r_loss + g_loss)
        r_losses.append(r_loss.item())
        g_losses.append(g_loss.item())

    return loss, r_losses, g_losses


def generator_loss(disc_outputs):
    loss = 0
    gen_losses = []
    for dg in disc_outputs:
        dg = dg.float()
        l = torch.mean((1 - dg) ** 2)
        gen_losses.append(l)
        loss += l

    return loss, gen_losses


def kl_loss(z_p, logs_q, m_p, logs_p, z_mask):
    """
    z_p, logs_q: [b, h, t_t]
    m_p, logs_p: [b, h, t_t]
    """
    z_p = z_p.float()
    logs_q = logs_q.float()
    m_p = m_p.float()
    logs_p = logs_p.float()
    z_mask = z_mask.float()

    kl = logs_p - logs_q - 0.5
    kl += 0.5 * ((z_p - m_p) ** 2) * torch.exp(-2. * logs_p)
    kl = torch.sum(kl * z_mask)
    l = kl / torch.sum(z_mask)
    return l


def subband_stft_loss(h, y_mb, y_hat_mb):
    sub_stft_loss = MultiResolutionSTFTLoss(h.train.fft_sizes, h.train.hop_sizes, h.train.win_lengths)
    y_mb = y_mb.view(-1, y_mb.size(2))
    y_hat_mb = y_hat_mb.view(-1, y_hat_mb.size(2))
    sub_sc_loss, sub_mag_loss = sub_stft_loss(y_hat_mb[:, :y_mb.size(-1)], y_mb)
    return sub_sc_loss + sub_mag_loss
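A quick smoke test for `kl_loss` above (a reviewer's illustration, not from the repository; it assumes the repo's `stft_loss` and `commons` modules are importable so that `losses` itself imports cleanly):

# Illustrative only: random tensors of the documented shapes [b, h, t_t];
# the mask broadcasts over the channel axis and zeroes out padded frames.
import torch
from losses import kl_loss

b, h, t = 2, 192, 50
z_p, m_p = torch.randn(b, h, t), torch.randn(b, h, t)
logs_q, logs_p = torch.randn(b, h, t), torch.randn(b, h, t)
z_mask = torch.ones(b, 1, t)
print(kl_loss(z_p, logs_q, m_p, logs_p, z_mask))  # scalar KL per valid frame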
spaces/AB-TW/team-ai/agents/tools/smart_domain/api_layer_code_tool.py
DELETED
@@ -1,96 +0,0 @@
from langchain import LLMChain, PromptTemplate
from langchain.agents import tool

from models import llm


API_LAYER = """You are a software developer. Your task is to generate the api layer tests and product code.

===TechStack
Java17、reactor、lombok、Junit5、reactor test、Mockito、 Spring WebFlux、Spring Boot Test
===END OF TechStack

===Architecture
the api layer includes 2 components:
* DTO: This component is used to define the data structures for api requests and responses.
* Controller: This component is used to define the interface to access the api.
---example code:
@RestController
@RequiredArgsConstructor
@RequestMapping("/features")
public class FeatureController {{
    private final Features features;

    @GetMapping()
    public Flux<Feature> findAll() {{
        return features.getAll();
    }}

    @PostMapping()
    public Mono<Feature> add(@RequestBody Feature feature) {{
        return features.add(feature);
    }}
}}
---end of example code
===END OF Architecture

===TestStrategy
For the Controller and DTO, we can write component tests to test the actual implementation of api operations; the test class relies on the Association interface and uses the WebFluxTest and WebTestClient abilities.
---example code:
@ExtendWith(SpringExtension.class)
@WebFluxTest(value = FeatureFlagApi.class, properties = "spring.main.lazy-initialization=true")
@ContextConfiguration(classes = TestConfiguration.class)
class FeatureControllerTest extends ControllerTestBase {{
    @Autowired
    WebTestClient webClient;

    @MockBean
    Features features;

    @Test
    void should_getAll_success_when_no_records() {{
        when(features.getAll(Mockito.any())).thenReturn(Flux.empty());

        webClient.get()
                .uri("/features")
                .exchange()
                .expectStatus()
                .isOk()
                .expectBodyList(FeatureFlagResponse.class)
                .hasSize(0);
    }}
}}
---end of example code
===END OF TestStrategy

Use the following format:
request: the request that you need to fulfill, including the Entity and Association of the domain layer

DTO:
```
the DTO code that you write to fulfill the request, follow TechStack and Architecture
```

Controller:
```
the Controller code that you write to fulfill the request, follow TechStack and Architecture
```

Test:
```
the test code that you write to fulfill the request, follow TechStack Architecture and TestStrategy
```

request: {input}"""

API_LAYER_PROMPT = PromptTemplate(input_variables=["input"], template=API_LAYER,)


apiChain = LLMChain(llm=llm(temperature=0.1), prompt=API_LAYER_PROMPT)


@tool("Generate API Layer Code", return_direct=True)
def apiLayerCodeGenerator(input: str) -> str:
    '''useful for when you need to generate API layer code'''
    response = apiChain.run(input)
    return response
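To see what `apiChain` will actually send to the model, the template can be rendered locally with `PromptTemplate.format` (a sketch with a stand-in template, not the repository's code):

# Illustrative only: rendering a prompt locally to inspect the final text.
# The template here is a stand-in for the full API_LAYER string above.
from langchain import PromptTemplate

demo_prompt = PromptTemplate(
    input_variables=["input"],
    template="request: {input}",
)
print(demo_prompt.format(input="Feature entity with a Features association"))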
spaces/AB-TW/team-ai/documents/bussiness_context/NOTION_DB/Engineering Wiki 2402f5396a3244fdb3f1d135bdb0f3d6/Getting Started 6bc871dcdd4a4554b5b22c0c40740841.md
DELETED
@@ -1,66 +0,0 @@
# Getting Started

Last edited time: March 31, 2023 1:49 PM
Owner: Anonymous
Tags: Guides and Processes

<aside>
💡 Notion Tip: When creating a page, it's important to give it a clear title and provide some content. This could include verifying the information, summarizing the topic, or sharing your thoughts and opinions on something that matters to you.
</aside>

# The Basics

## Create a Page

In your sidebar, click the `+` that appears next to the word **Workspace** on hover. A new page will appear. Give it a title and start typing like you would in any other document.

## Headings

You can add headings and subheadings in one of two ways:

- Type `/heading` or `/h1`, `/h2`, or `/h3` to choose the heading size you want.
- Use Markdown shortcuts, like `#`, `##`, and `###`.
- Create inline code by wrapping text with ``` (or with the shortcut `cmd/ctrl + e`).

## Toggle Lists

- Toggle lists streamline your content. Click the arrow to open.
- Click the arrow again to hide this content.
- Create a toggle by typing `/toggle` and pressing `enter`.
- You can add anything to toggles, including images and embeds.

## Callout Blocks

<aside>
💡 Create a callout block like this by typing `/call` and pressing `enter`.
Helpful for adding inline instructions, warnings, disclaimers, and tips.
Change the emoji icon by clicking on it.
</aside>

## Code Blocks

You can add code notation to any Notion page:

- Type `/code` and press `enter`.
- Choose the language from the dropdown in the bottom right corner.
- Here's an example:

```html
Hover over this block to see the <b>Copy to Clipboard</b> option!
```

- Your teammates can select any code to comment on it.

## Organizing Pages

Instead of using folders, Notion lets you nest pages inside pages.

- Type `/page` and press `enter` to create a sub-page inside a page. Like this:

[Example sub-page](Getting%20Started%206bc871dcdd4a4554b5b22c0c40740841/Example%20sub-page%2048f64d6186ec4428b2e4180475245a9c.md)

# Advanced Techniques

Check out this [Notion Editor 101](https://www.notion.so/68c7c67047494fdb87d50185429df93e) guide for more advanced tips and how-to's.
spaces/AIConsultant/MusicGen/tests/models/test_musicgen.py
DELETED
@@ -1,58 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

import pytest
import torch

from audiocraft.models import MusicGen


class TestMusicGenModel:
    def get_musicgen(self):
        mg = MusicGen.get_pretrained(name='debug', device='cpu')
        mg.set_generation_params(duration=2.0, extend_stride=2.)
        return mg

    def test_base(self):
        mg = self.get_musicgen()
        assert mg.frame_rate == 25
        assert mg.sample_rate == 32000
        assert mg.audio_channels == 1

    def test_generate_unconditional(self):
        mg = self.get_musicgen()
        wav = mg.generate_unconditional(3)
        assert list(wav.shape) == [3, 1, 64000]

    def test_generate_continuation(self):
        mg = self.get_musicgen()
        prompt = torch.randn(3, 1, 32000)
        wav = mg.generate_continuation(prompt, 32000)
        assert list(wav.shape) == [3, 1, 64000]

        prompt = torch.randn(2, 1, 32000)
        wav = mg.generate_continuation(
            prompt, 32000, ['youpi', 'lapin dort'])
        assert list(wav.shape) == [2, 1, 64000]

        prompt = torch.randn(2, 1, 32000)
        with pytest.raises(AssertionError):
            wav = mg.generate_continuation(
                prompt, 32000, ['youpi', 'lapin dort', 'one too many'])

    def test_generate(self):
        mg = self.get_musicgen()
        wav = mg.generate(
            ['youpi', 'lapin dort'])
        assert list(wav.shape) == [2, 1, 64000]

    def test_generate_long(self):
        mg = self.get_musicgen()
        mg.max_duration = 3.
        mg.set_generation_params(duration=4., extend_stride=2.)
        wav = mg.generate(
            ['youpi', 'lapin dort'])
        assert list(wav.shape) == [2, 1, 32000 * 4]
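
These tests exercise the high-level `MusicGen` API end to end on the tiny `debug` checkpoint. A minimal sketch of running just this module programmatically, assuming `audiocraft` and `pytest` are installed and the repo root is the working directory:

```python
import pytest

# Equivalent to `pytest -q tests/models/test_musicgen.py` on the command line.
pytest.main(["-q", "tests/models/test_musicgen.py"])
```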
spaces/AIWaves/Debate/src/agents/Environment/base_environment.py
DELETED
@@ -1,167 +0,0 @@
import json
import os

import torch

from utils import get_relevant_history, get_embedding
from LLM.base_LLM import *
from Memory import Memory
from Prompt import *


class Environment:
    """
    The place where agents act; responsible for storing shared memories.
    """

    def __init__(self, config) -> None:
        self.shared_memory = {"long_term_memory": [], "short_term_memory": None}
        self.agents = None

        self.summary_system_prompt = {}
        self.summary_last_prompt = {}
        self.environment_prompt = {}
        self.environment_type = config["environment_type"] if "environment_type" in config else "cooperative"
        self.current_chat_history_idx = 0
        self.LLMs = {}

        # Initialize the summary method for each state
        for state_name, state_dict in config["states"].items():
            if state_name != "end_state":
                self.summary_system_prompt[state_name] = (
                    state_dict["summary_system_prompt"]
                    if "summary_system_prompt" in state_dict
                    else eval(Default_environment_summary_system_prompt)
                )

                self.summary_last_prompt[state_name] = (
                    state_dict["summary_last_prompt"]
                    if "summary_last_prompt" in state_dict
                    else eval(Default_environment_summary_last_prompt)
                )

                self.environment_prompt[state_name] = (
                    state_dict["environment_prompt"]
                    if "environment_prompt" in state_dict
                    else " "
                )
                self.LLMs[state_name] = init_LLM(f"logs/{state_name}", **state_dict)
        self.roles_to_names = None
        self.names_to_roles = None

    @classmethod
    def from_config(cls, config_path):
        with open(config_path) as f:
            config = json.load(f)
        return cls(config)

    def summary(self, current_state):
        """
        Summarize the situation in the current environment every once in a while.
        """
        MAX_CHAT_HISTORY = eval(os.environ["MAX_CHAT_HISTORY"])
        current_state_name = current_state.name

        query = self.shared_memory["long_term_memory"][-1].content
        relevant_history = get_relevant_history(
            query,
            self.shared_memory["long_term_memory"][:-1],
            self.shared_memory["chat_embeddings"][:-1],
        )

        relevant_history = Memory.get_chat_history(relevant_history)
        chat_history = Memory.get_chat_history(
            self.shared_memory["long_term_memory"][-MAX_CHAT_HISTORY + 1 :]
        )
        summary = self.shared_memory["short_term_memory"]

        # system prompt = environment prompt + current memory + system prompt
        # current_memory = summary + chat history + relevant history
        current_memory = eval(Environment_summary_memory)
        environment_prompt = self.environment_prompt[current_state_name]
        summary_system_prompt = self.summary_system_prompt[current_state_name]

        environment_summary_system_prompt = eval(Environment_summary_system_prompt)
        response = self.LLMs[current_state_name].get_response(None, environment_summary_system_prompt, stream=False)
        return response

    def update_memory(self, memory, current_state):
        """
        Update chat embeddings, long-term memory, short-term memory,
        and the sending agent's long-term memory.
        """
        MAX_CHAT_HISTORY = eval(os.environ["MAX_CHAT_HISTORY"])
        self.shared_memory["long_term_memory"].append(memory)
        current_embedding = get_embedding(memory.content)
        if "chat_embeddings" not in self.shared_memory:
            self.shared_memory["chat_embeddings"] = current_embedding
        else:
            self.shared_memory["chat_embeddings"] = torch.cat(
                [self.shared_memory["chat_embeddings"], current_embedding], dim=0
            )
        if len(self.shared_memory["long_term_memory"]) % MAX_CHAT_HISTORY == 0:
            summary = self.summary(current_state)
            self.shared_memory["short_term_memory"] = summary

        self.agents[memory.send_name].update_memory(memory)

    def _get_agent_last_conversation_idx(self, agent, current_long_term_memory):
        last_conversation_idx = -1
        for i, history in enumerate(current_long_term_memory):
            if history.send_name == agent.name:
                last_conversation_idx = i
        return last_conversation_idx

    def _get_agent_new_memory(self, agent, current_long_term_memory):
        # get the conversation since the agent last spoke
        last_conversation_idx = self._get_agent_last_conversation_idx(agent, current_long_term_memory)

        if last_conversation_idx == -1:
            new_conversation = current_long_term_memory
        elif last_conversation_idx == len(current_long_term_memory) - 1:
            new_conversation = []
        else:
            new_conversation = current_long_term_memory[last_conversation_idx + 1 :]

        # get chat history from the new conversation
        return Memory.get_chat_history(new_conversation)

    def _observe(self, agent):
        MAX_CHAT_HISTORY = eval(os.environ["MAX_CHAT_HISTORY"])
        current_state = agent.current_state
        current_role = agent.state_roles[current_state.name]
        current_component_dict = current_state.components[current_role]

        # cooperative: information is shared between different states;
        # competitive: no information is shared between different states
        # (the "competive" spelling below is the value used by this codebase)
        current_chat_history_idx = self.current_chat_history_idx if self.environment_type == "competive" else 0
        current_long_term_memory = self.shared_memory["long_term_memory"][current_chat_history_idx:]
        current_chat_embeddings = self.shared_memory["chat_embeddings"][current_chat_history_idx:]

        # relevant memory
        query = current_long_term_memory[-1].content

        relevant_memory = get_relevant_history(
            query,
            current_long_term_memory[:-1],
            current_chat_embeddings[:-1],
        )
        relevant_memory = Memory.get_chat_history(relevant_memory, agent.name)

        relevant_memory = eval(Agent_observe_relevant_memory)
        agent.relevant_memory = relevant_memory

        # get chat history from the new conversation
        conversations = self._get_agent_new_memory(agent, current_long_term_memory)

        # memory = relevant_memory + summary + history + query
        query = current_long_term_memory[-1]
        current_memory = eval(Agent_observe_memory)

        return {"role": "user", "content": current_memory}
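
Since the class ships a `from_config` helper, the whole environment can be built from a JSON file. A minimal sketch, assuming `src/agents` is on `sys.path` and using a hypothetical `debate_config.json` that defines the required `states` mapping:

```python
from Environment.base_environment import Environment

# "debate_config.json" is a hypothetical path; the JSON must contain a
# "states" mapping and may set "environment_type" ("cooperative" by default).
env = Environment.from_config("debate_config.json")
print(env.environment_type)
```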
spaces/Aadarsh4all/ChatWithBear/app.py
DELETED
@@ -1,63 +0,0 @@
import os
import openai
import gradio as gr

#if you have OpenAI API key as an environment variable, enable the below
#openai.api_key = os.getenv("OPENAI_API_KEY")

#if you have OpenAI API key as a string, enable the below
openai.api_key = "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"

start_sequence = "\nAI:"
restart_sequence = "\nHuman: "

prompt = "Send A Message "

def openai_create(prompt):
    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        temperature=0.9,
        max_tokens=150,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0.6,
        stop=[" Human:", " AI:"]
    )
    return response.choices[0].text


def chatgpt_clone(input, history):
    history = history or []
    s = list(sum(history, ()))  # flatten the list of (user, bot) tuples
    s.append(input)
    inp = ' '.join(s)
    output = openai_create(inp)
    history.append((input, output))
    return history, history


block = gr.Blocks()

with block:
    gr.Markdown("""<h1 ><center>ChatWithBear</center></h1>
    <style>
    h1{font-family: monospace;}
    </style>
    """)
    chatbot = gr.Chatbot()
    message = gr.Textbox(placeholder=prompt)
    state = gr.State()
    submit = gr.Button("SEND")
    submit.click(chatgpt_clone, inputs=[message, state], outputs=[chatbot, state])
    gr.Markdown("""<h4 ><center>Made by Aadarsh with 💕</center></h4>
    <style>
    h1{font-family: monospace;}
    </style>
    """)

block.launch()
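
The `sum(history, ())` idiom in `chatgpt_clone` flattens Gradio's list of `(user, bot)` tuples into one flat sequence before joining it into a prompt. A standalone illustration:

```python
history = [("hi", "hello!"), ("how are you?", "great")]
flat = list(sum(history, ()))  # sum with a tuple start concatenates the tuples
assert flat == ["hi", "hello!", "how are you?", "great"]
```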
spaces/Abdllh/poetry/app.py
DELETED
@@ -1,53 +0,0 @@
import gc
import gradio as gr
from transformers import pipeline, set_seed

pipe = pipeline('text-generation', framework='pt', model='akhooli/ap2023', tokenizer='akhooli/ap2023')
#gc.collect()
samples = [
    ['أنت', 1.0, 50, 1.0, 1.0, 114],
    ['هل غادر', 1.0, 50, 1.0, 1.0, 114],
    ['ألا ليت', 1.0, 50, 1.0, 1.0, 114],
    ['يا قدس', 1.0, 50, 1.0, 1.0, 114],
    ['عيد بأية حال', 1.0, 50, 1.0, 1.0, 114],
    ['لكل شيء إذا ما', 1.0, 50, 1.0, 1.0, 114],
    ['.', 1.0, 50, 1.0, 1.0, 114],
]

notes = """
- Enter a short prompt or select (click) one of the examples and click SEND
- Adjust parameters (temperature, top k, top p and penalty) through the sliders (keep close to default values).
- For the same seed (randomness), the same output is regenerated if other parameters are fixed. Seed should be 0 or more (not empty)
- Clear and enter a new prompt, or select another example and SEND to regenerate
- The '.' means start a new line from no prompt (your prompt need not be long)
- Be patient: this runs on CPU (free tier)
- Feedback (Twitter): @akhooli (https://twitter.com/akhooli/status/1611025232201977859)
- Note/Disclaimer: may generate unaccepted or inappropriate content. Use at your own risk.
"""

def sayPoetry(prompt, temp=1.0, topk=50, topp=1.0, penalty=1.0, seed=114):
    if not int(seed) >= 0: seed = 114
    set_seed(seed)
    gen = pipe(prompt, max_length=96, do_sample=True, temperature=temp, top_k=topk, top_p=topp, repetition_penalty=penalty,
               min_length=64, no_repeat_ngram_size=3, return_full_text=True,
               num_beams=5, num_return_sequences=1)[0]["generated_text"]
    poetry = ""
    for line in gen.split('.')[:-1]:
        poetry += line  # + "\n"
    return poetry

poetry = gr.Interface(fn=sayPoetry,
    inputs=[
        gr.Textbox(label="Enter short prompt or select from examples:"),
        gr.Slider(0.70, 1.2, step=0.01, value=1.0, label='control temperature'),
        gr.Slider(25, 100, step=1, value=50, label='control top k'),
        gr.Slider(0.80, 1.0, step=0.01, value=1.0, label='control top p'),
        gr.Slider(0.90, 1.50, step=0.01, value=1.0, label='control penalty'),
        gr.Number(value=139750, precision=0, label='Seed'),
    ],
    outputs=[gr.Textbox(label="Generated Poetry:")],
    allow_flagging='never',
    title='Arabic Poetry Generation Demo (updated Jan. 2023)',
    description="A simple demo of AI generated poetry based on 1M poems fine-tuned using AraGPT2 (be patient, runs on cpu)",
    examples=samples,
    cache_examples=False,
    article=notes)
poetry.launch()
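
`sayPoetry` is a plain function, so it can also be exercised without the Gradio UI. A minimal sketch (the first call downloads the `akhooli/ap2023` model and runs slowly on CPU):

```python
# Same defaults as the demo's examples.
print(sayPoetry('أنت', temp=1.0, topk=50, topp=1.0, penalty=1.0, seed=114))
```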
spaces/Abdulkader/HumanMotionsDetector/app.py
DELETED
@@ -1,24 +0,0 @@
import math
import numpy as np
import requests
import gradio as gr
import tensorflow as tf
from tensorflow import keras
import tensorflow_addons as tfa
import matplotlib.pyplot as plt
from tensorflow.keras import layers
from tensorflow.keras.models import load_model

# NOTE: keras.models.load_model expects a local SavedModel path;
# this GitHub blob URL would need to be downloaded locally first.
model = keras.models.load_model('https://github.com/abdulkader902017/CervixNet/blob/6217a51b73ff30724d50712545b2b62bec8a754e/my_model/saved_model.pb')
response = requests.get("https://github.com/abdulkader902017/CervixNet/blob/main/labels.txt")
labels = response.text.split("\n")

def classify_image(inp):
    inp = inp.reshape((-1, 32, 32, 3))
    inp = tf.keras.applications.mobilenet_v2.preprocess_input(inp)
    prediction = model.predict(inp).flatten()
    confidences = {labels[i]: float(prediction[i]) for i in range(3)}
    return confidences

gr.Interface(fn=classify_image,
             inputs=gr.Image(shape=(32, 32)),
             outputs=gr.Label(num_top_classes=3)).launch()
spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/generated/client/nodes/2.js
DELETED
@@ -1 +0,0 @@
export { default as component } from "../../../../src/routes/+page.svelte";
spaces/Adapter/CoAdapter/ldm/modules/ema.py
DELETED
@@ -1,80 +0,0 @@
import torch
from torch import nn


class LitEma(nn.Module):
    def __init__(self, model, decay=0.9999, use_num_upates=True):
        super().__init__()
        if decay < 0.0 or decay > 1.0:
            raise ValueError('Decay must be between 0 and 1')

        self.m_name2s_name = {}
        self.register_buffer('decay', torch.tensor(decay, dtype=torch.float32))
        self.register_buffer('num_updates', torch.tensor(0, dtype=torch.int) if use_num_upates
                             else torch.tensor(-1, dtype=torch.int))

        for name, p in model.named_parameters():
            if p.requires_grad:
                # remove as '.'-character is not allowed in buffers
                s_name = name.replace('.', '')
                self.m_name2s_name.update({name: s_name})
                self.register_buffer(s_name, p.clone().detach().data)

        self.collected_params = []

    def reset_num_updates(self):
        del self.num_updates
        self.register_buffer('num_updates', torch.tensor(0, dtype=torch.int))

    def forward(self, model):
        decay = self.decay

        if self.num_updates >= 0:
            self.num_updates += 1
            decay = min(self.decay, (1 + self.num_updates) / (10 + self.num_updates))

        one_minus_decay = 1.0 - decay

        with torch.no_grad():
            m_param = dict(model.named_parameters())
            shadow_params = dict(self.named_buffers())

            for key in m_param:
                if m_param[key].requires_grad:
                    sname = self.m_name2s_name[key]
                    shadow_params[sname] = shadow_params[sname].type_as(m_param[key])
                    shadow_params[sname].sub_(one_minus_decay * (shadow_params[sname] - m_param[key]))
                else:
                    assert key not in self.m_name2s_name

    def copy_to(self, model):
        m_param = dict(model.named_parameters())
        shadow_params = dict(self.named_buffers())
        for key in m_param:
            if m_param[key].requires_grad:
                m_param[key].data.copy_(shadow_params[self.m_name2s_name[key]].data)
            else:
                assert key not in self.m_name2s_name

    def store(self, parameters):
        """
        Save the current parameters for restoring later.
        Args:
            parameters: Iterable of `torch.nn.Parameter`; the parameters to be
                temporarily stored.
        """
        self.collected_params = [param.clone() for param in parameters]

    def restore(self, parameters):
        """
        Restore the parameters stored with the `store` method.
        Useful to validate the model with EMA parameters without affecting the
        original optimization process. Store the parameters before the
        `copy_to` method. After validation (or model saving), use this to
        restore the former parameters.
        Args:
            parameters: Iterable of `torch.nn.Parameter`; the parameters to be
                updated with the stored parameters.
        """
        for c_param, param in zip(self.collected_params, parameters):
            param.data.copy_(c_param.data)
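
A sketch of the usual training-loop pattern for `LitEma`, assuming a toy model and eliding the optimizer steps:

```python
import torch
from torch import nn

model = nn.Linear(4, 2)
ema = LitEma(model, decay=0.9999)

for _ in range(3):
    model(torch.randn(8, 4)).sum().backward()
    # ... optimizer.step(); optimizer.zero_grad() ...
    ema(model)  # fold the freshly updated weights into the shadow copy

# Swap in the EMA weights for evaluation, then restore the raw weights.
ema.store(model.parameters())
ema.copy_to(model)
# ... run validation here ...
ema.restore(model.parameters())
```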
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/sizer/GetExpandedChildHeight.js
DELETED
@@ -1,22 +0,0 @@
var GetExpandedChildHeight = function (child, parentHeight) {
    if (parentHeight === undefined) {
        parentHeight = this.height;
    }

    var childHeight;
    var childConfig = child.rexSizer;
    var padding = childConfig.padding;
    if (this.orientation === 0) { // x
        if (childConfig.expand) {
            var innerHeight = parentHeight - this.space.top - this.space.bottom;
            childHeight = innerHeight - padding.top - padding.bottom;
        }
    } else { // y
        if ((childConfig.proportion > 0) && (this.proportionLength > 0)) {
            childHeight = (childConfig.proportion * this.proportionLength);
        }
    }
    return childHeight;
}

export default GetExpandedChildHeight;
spaces/Aishwini/myfirstaigen/app.py
DELETED
@@ -1,34 +0,0 @@
import os
import gradio as gr
from langchain.chat_models import ChatOpenAI
from langchain import LLMChain, PromptTemplate
from langchain.memory import ConversationBufferMemory

OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')

template = """You are a helpful assistant to answer all user queries.
{chat_history}
User: {user_message}
Chatbot:"""

prompt = PromptTemplate(
    input_variables=["chat_history", "user_message"], template=template
)

memory = ConversationBufferMemory(memory_key="chat_history")

llm_chain = LLMChain(
    llm=ChatOpenAI(temperature=0.5, model_name="gpt-3.5-turbo"),
    prompt=prompt,
    verbose=True,
    memory=memory,
)

def get_text_response(user_message, history):
    response = llm_chain.predict(user_message=user_message)
    return response

demo = gr.ChatInterface(get_text_response)

if __name__ == "__main__":
    demo.launch()  # To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`.
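
`get_text_response` ignores Gradio's `history` argument because `ConversationBufferMemory` already tracks the conversation inside the chain. A quick sketch of calling it directly, assuming `OPENAI_API_KEY` is set in the environment:

```python
print(get_text_response("What is LangChain?", history=[]))
print(get_text_response("Summarize your last answer.", history=[]))  # memory carries over
```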
spaces/AkitoP/umamusume_bert_vits2/commons.py
DELETED
@@ -1,160 +0,0 @@
import math
import torch
from torch.nn import functional as F


def init_weights(m, mean=0.0, std=0.01):
    classname = m.__class__.__name__
    if classname.find("Conv") != -1:
        m.weight.data.normal_(mean, std)


def get_padding(kernel_size, dilation=1):
    return int((kernel_size * dilation - dilation) / 2)


def convert_pad_shape(pad_shape):
    layer = pad_shape[::-1]
    pad_shape = [item for sublist in layer for item in sublist]
    return pad_shape


def intersperse(lst, item):
    result = [item] * (len(lst) * 2 + 1)
    result[1::2] = lst
    return result


def kl_divergence(m_p, logs_p, m_q, logs_q):
    """KL(P||Q)"""
    kl = (logs_q - logs_p) - 0.5
    kl += (
        0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q)
    )
    return kl


def rand_gumbel(shape):
    """Sample from the Gumbel distribution, protect from overflows."""
    uniform_samples = torch.rand(shape) * 0.99998 + 0.00001
    return -torch.log(-torch.log(uniform_samples))


def rand_gumbel_like(x):
    g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device)
    return g


def slice_segments(x, ids_str, segment_size=4):
    ret = torch.zeros_like(x[:, :, :segment_size])
    for i in range(x.size(0)):
        idx_str = ids_str[i]
        idx_end = idx_str + segment_size
        ret[i] = x[i, :, idx_str:idx_end]
    return ret


def rand_slice_segments(x, x_lengths=None, segment_size=4):
    b, d, t = x.size()
    if x_lengths is None:
        x_lengths = t
    ids_str_max = x_lengths - segment_size + 1
    ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long)
    ret = slice_segments(x, ids_str, segment_size)
    return ret, ids_str


def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4):
    position = torch.arange(length, dtype=torch.float)
    num_timescales = channels // 2
    log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / (
        num_timescales - 1
    )
    inv_timescales = min_timescale * torch.exp(
        torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment
    )
    scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1)
    signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0)
    signal = F.pad(signal, [0, 0, 0, channels % 2])
    signal = signal.view(1, channels, length)
    return signal


def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4):
    b, channels, length = x.size()
    signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
    return x + signal.to(dtype=x.dtype, device=x.device)


def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1):
    b, channels, length = x.size()
    signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale)
    return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis)


def subsequent_mask(length):
    mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0)
    return mask


@torch.jit.script
def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels):
    n_channels_int = n_channels[0]
    in_act = input_a + input_b
    t_act = torch.tanh(in_act[:, :n_channels_int, :])
    s_act = torch.sigmoid(in_act[:, n_channels_int:, :])
    acts = t_act * s_act
    return acts


def shift_1d(x):
    x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1]
    return x


def sequence_mask(length, max_length=None):
    if max_length is None:
        max_length = length.max()
    x = torch.arange(max_length, dtype=length.dtype, device=length.device)
    return x.unsqueeze(0) < length.unsqueeze(1)


def generate_path(duration, mask):
    """
    duration: [b, 1, t_x]
    mask: [b, 1, t_y, t_x]
    """
    b, _, t_y, t_x = mask.shape
    cum_duration = torch.cumsum(duration, -1)

    cum_duration_flat = cum_duration.view(b * t_x)
    path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype)
    path = path.view(b, t_x, t_y)
    path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1]
    path = path.unsqueeze(1).transpose(2, 3) * mask
    return path


def clip_grad_value_(parameters, clip_value, norm_type=2):
    if isinstance(parameters, torch.Tensor):
        parameters = [parameters]
    parameters = list(filter(lambda p: p.grad is not None, parameters))
    norm_type = float(norm_type)
    if clip_value is not None:
        clip_value = float(clip_value)

    total_norm = 0
    for p in parameters:
        if clip_value is not None:
            p.grad.data.clamp_(min=-clip_value, max=clip_value)
        param_norm = p.grad.data.norm(norm_type)
        total_norm += param_norm.item() ** norm_type
    total_norm = total_norm ** (1.0 / norm_type)
    return total_norm
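
Two of these helpers are easy to sanity-check in isolation: `intersperse` weaves a separator item between (and around) sequence elements, and `sequence_mask` turns lengths into a boolean padding mask:

```python
import torch

assert intersperse([1, 2, 3], 0) == [0, 1, 0, 2, 0, 3, 0]

mask = sequence_mask(torch.tensor([2, 4]), max_length=4)
print(mask)
# tensor([[ True,  True, False, False],
#         [ True,  True,  True,  True]])
```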
spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/latex/attention/background.tex
DELETED
@@ -1,58 +0,0 @@
The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU \citep{extendedngpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions \citep{hochreiter2001gradient}. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section~\ref{sec:attention}.

Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations \citep{cheng2016long, decomposableAttnModel, paulus2017deep, lin2017structured}.

End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks \citep{sukhbaatar2015}.

To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution.
In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as \citep{neural_gpu, NalBytenet2017} and \citep{JonasFaceNet2017}.


%\citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), English-to-French (EnFr) and English-to-Romanian language pairs.

%For example, in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at low computation cost, making it an essential ingredient in competitive recurrent models for machine translation.

%A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture.

%After the seminal models introduced in \citep{sutskever14, bahdanau2014neural, cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation (MT) and language modeling with recurrent encoder-decoder and recurrent language models. Recent effort \citep{shazeer2017outrageously} has successfully combined the power of conditional computation with sequence models to train very large models for MT, pushing SOTA at lower computational cost.

%Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_t$. This dependence on the previous hidden state precludes processing all timesteps at once, instead requiring long sequences of sequential operations. In practice, this results in greatly reduced computational efficiency, as on modern computing hardware, a single operation on a large batch is much faster than a large number of operations on small batches. The problem gets worse at longer sequence lengths. Although sequential computation is not a severe bottleneck at inference time, as autoregressively generating each output requires all previous outputs, the inability to compute scores at all output positions at once hinders us from rapidly training our models over large datasets. Although impressive work such as \citep{Kuchaiev2017Factorization} is able to significantly accelerate the training of LSTMs with factorization tricks, we are still bound by the linear dependence on sequence length.

%If the model could compute hidden states at each time step using only the inputs and outputs, it would be liberated from the dependence on results from previous time steps during training. This line of thought is the foundation of recent efforts such as the Markovian neural GPU \citep{neural_gpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as a building block to compute hidden representations simultaneously for all timesteps, resulting in $O(1)$ sequential time complexity. \citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), English-to-French (EnFr) and English-to-Romanian language pairs.

%A crucial component for accurate sequence prediction is modeling cross-positional communication. For example, in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at a low computation cost, also $O(1)$ sequential time complexity, making it an essential ingredient in recurrent encoder-decoder architectures for MT. A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture.


%Note: Facebook model is no better than RNNs in this regard, since it requires a number of layers proportional to the distance you want to communicate. Bytenet is more promising, since it requires a logarithmnic number of layers (does bytenet have SOTA results)?

%Note: An attention layer can connect a very large number of positions at a low computation cost in O(1) sequential operations. This is why encoder-decoder attention has been so successful in seq-to-seq models so far. It is only natural, then, to also use attention to connect the timesteps of the same sequence.

%Note: I wouldn't say that long sequences are not a problem during inference. It would be great if we could infer with no long sequences. We could just say later on that, while our training graph is constant-depth, our model still requires sequential operations in the decoder part during inference due to the autoregressive nature of the model.

%\begin{table}[h!]
%\caption{Attention models are quite efficient for cross-positional communications when sequence length is smaller than channel depth. $n$ represents the sequence length and $d$ represents the channel depth.}
%\label{tab:op_complexities}
%\begin{center}
%\vspace{-5pt}
%\scalebox{0.75}{

%\begin{tabular}{l|c|c|c}
%\hline \hline
%Layer Type & Receptive & Complexity & Sequential \\
% & Field & & Operations \\
%\hline
%Pointwise Feed-Forward & $1$ & $O(n \cdot d^2)$ & $O(1)$ \\
%\hline
%Recurrent & $n$ & $O(n \cdot d^2)$ & $O(n)$ \\
%\hline
%Convolutional & $r$ & $O(r \cdot n \cdot d^2)$ & $O(1)$ \\
%\hline
%Convolutional (separable) & $r$ & $O(r \cdot n \cdot d + n %\cdot d^2)$ & $O(1)$ \\
%\hline
%Attention & $r$ & $O(r \cdot n \cdot d)$ & $O(1)$ \\
%\hline \hline
%\end{tabular}
%}
%\end{center}
%\end{table}
spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/dataset/ray_utils.py
DELETED
@@ -1,289 +0,0 @@
import re

import numpy as np
import torch
import torchvision
from torch import searchsorted
from kornia import create_meshgrid


# from utils import index_point_feature

def depth2dist(z_vals, cos_angle):
    # z_vals: [N_ray N_sample]
    device = z_vals.device
    dists = z_vals[..., 1:] - z_vals[..., :-1]
    dists = torch.cat([dists, torch.Tensor([1e10]).to(device).expand(dists[..., :1].shape)], -1)  # [N_rays, N_samples]
    dists = dists * cos_angle.unsqueeze(-1)
    return dists


def ndc2dist(ndc_pts, cos_angle):
    dists = torch.norm(ndc_pts[:, 1:] - ndc_pts[:, :-1], dim=-1)
    dists = torch.cat([dists, 1e10 * cos_angle.unsqueeze(-1)], -1)  # [N_rays, N_samples]
    return dists


def get_ray_directions(H, W, focal, center=None):
    """
    Get ray directions for all pixels in camera coordinate.
    Reference: https://www.scratchapixel.com/lessons/3d-basic-rendering/
    ray-tracing-generating-camera-rays/standard-coordinate-systems
    Inputs:
        H, W, focal: image height, width and focal length
    Outputs:
        directions: (H, W, 3), the direction of the rays in camera coordinate
    """
    grid = create_meshgrid(H, W, normalized_coordinates=False)[0] + 0.5

    i, j = grid.unbind(-1)
    # the direction here is without +0.5 pixel centering as calibration is not so accurate
    # see https://github.com/bmild/nerf/issues/24
    cent = center if center is not None else [W / 2, H / 2]
    directions = torch.stack([(i - cent[0]) / focal[0], (j - cent[1]) / focal[1], torch.ones_like(i)], -1)  # (H, W, 3)

    return directions


def get_ray_directions_blender(H, W, focal, center=None):
    """
    Get ray directions for all pixels in camera coordinate.
    Reference: https://www.scratchapixel.com/lessons/3d-basic-rendering/
    ray-tracing-generating-camera-rays/standard-coordinate-systems
    Inputs:
        H, W, focal: image height, width and focal length
    Outputs:
        directions: (H, W, 3), the direction of the rays in camera coordinate
    """
    grid = create_meshgrid(H, W, normalized_coordinates=False)[0] + 0.5
    i, j = grid.unbind(-1)
    # the direction here is without +0.5 pixel centering as calibration is not so accurate
    # see https://github.com/bmild/nerf/issues/24
    cent = center if center is not None else [W / 2, H / 2]
    directions = torch.stack([(i - cent[0]) / focal[0], -(j - cent[1]) / focal[1], -torch.ones_like(i)],
                             -1)  # (H, W, 3)

    return directions


def get_rays(directions, c2w):
    """
    Get ray origin and normalized directions in world coordinate for all pixels in one image.
    Reference: https://www.scratchapixel.com/lessons/3d-basic-rendering/
    ray-tracing-generating-camera-rays/standard-coordinate-systems
    Inputs:
        directions: (H, W, 3) precomputed ray directions in camera coordinate
        c2w: (3, 4) transformation matrix from camera coordinate to world coordinate
    Outputs:
        rays_o: (H*W, 3), the origin of the rays in world coordinate
        rays_d: (H*W, 3), the normalized direction of the rays in world coordinate
    """
    # Rotate ray directions from camera coordinate to the world coordinate
    rays_d = directions @ c2w[:3, :3].T  # (H, W, 3)
    # rays_d = rays_d / torch.norm(rays_d, dim=-1, keepdim=True)
    # The origin of all rays is the camera origin in world coordinate
    rays_o = c2w[:3, 3].expand(rays_d.shape)  # (H, W, 3)

    rays_d = rays_d.view(-1, 3)
    rays_o = rays_o.view(-1, 3)

    return rays_o, rays_d


def ndc_rays_blender(H, W, focal, near, rays_o, rays_d):
    # Shift ray origins to near plane
    t = -(near + rays_o[..., 2]) / rays_d[..., 2]
    rays_o = rays_o + t[..., None] * rays_d

    # Projection
    o0 = -1. / (W / (2. * focal)) * rays_o[..., 0] / rays_o[..., 2]
    o1 = -1. / (H / (2. * focal)) * rays_o[..., 1] / rays_o[..., 2]
    o2 = 1. + 2. * near / rays_o[..., 2]

    d0 = -1. / (W / (2. * focal)) * (rays_d[..., 0] / rays_d[..., 2] - rays_o[..., 0] / rays_o[..., 2])
    d1 = -1. / (H / (2. * focal)) * (rays_d[..., 1] / rays_d[..., 2] - rays_o[..., 1] / rays_o[..., 2])
    d2 = -2. * near / rays_o[..., 2]

    rays_o = torch.stack([o0, o1, o2], -1)
    rays_d = torch.stack([d0, d1, d2], -1)

    return rays_o, rays_d

def ndc_rays(H, W, focal, near, rays_o, rays_d):
    # Shift ray origins to near plane
    t = (near - rays_o[..., 2]) / rays_d[..., 2]
    rays_o = rays_o + t[..., None] * rays_d

    # Projection
    o0 = 1. / (W / (2. * focal)) * rays_o[..., 0] / rays_o[..., 2]
    o1 = 1. / (H / (2. * focal)) * rays_o[..., 1] / rays_o[..., 2]
    o2 = 1. - 2. * near / rays_o[..., 2]

    d0 = 1. / (W / (2. * focal)) * (rays_d[..., 0] / rays_d[..., 2] - rays_o[..., 0] / rays_o[..., 2])
    d1 = 1. / (H / (2. * focal)) * (rays_d[..., 1] / rays_d[..., 2] - rays_o[..., 1] / rays_o[..., 2])
    d2 = 2. * near / rays_o[..., 2]

    rays_o = torch.stack([o0, o1, o2], -1)
    rays_d = torch.stack([d0, d1, d2], -1)

    return rays_o, rays_d

# Hierarchical sampling (section 5.2)
def sample_pdf(bins, weights, N_samples, det=False, pytest=False):
    device = weights.device
    # Get pdf
    weights = weights + 1e-5  # prevent nans
    pdf = weights / torch.sum(weights, -1, keepdim=True)
    cdf = torch.cumsum(pdf, -1)
    cdf = torch.cat([torch.zeros_like(cdf[..., :1]), cdf], -1)  # (batch, len(bins))

    # Take uniform samples
    if det:
        u = torch.linspace(0., 1., steps=N_samples, device=device)
        u = u.expand(list(cdf.shape[:-1]) + [N_samples])
    else:
        u = torch.rand(list(cdf.shape[:-1]) + [N_samples], device=device)

    # Pytest, overwrite u with numpy's fixed random numbers
    if pytest:
        np.random.seed(0)
        new_shape = list(cdf.shape[:-1]) + [N_samples]
        if det:
            u = np.linspace(0., 1., N_samples)
            u = np.broadcast_to(u, new_shape)
        else:
            u = np.random.rand(*new_shape)
        u = torch.Tensor(u)

    # Invert CDF
    u = u.contiguous()
    inds = searchsorted(cdf.detach(), u, right=True)
    below = torch.max(torch.zeros_like(inds - 1), inds - 1)
    above = torch.min((cdf.shape[-1] - 1) * torch.ones_like(inds), inds)
    inds_g = torch.stack([below, above], -1)  # (batch, N_samples, 2)

    matched_shape = [inds_g.shape[0], inds_g.shape[1], cdf.shape[-1]]
    cdf_g = torch.gather(cdf.unsqueeze(1).expand(matched_shape), 2, inds_g)
    bins_g = torch.gather(bins.unsqueeze(1).expand(matched_shape), 2, inds_g)

    denom = (cdf_g[..., 1] - cdf_g[..., 0])
    denom = torch.where(denom < 1e-5, torch.ones_like(denom), denom)
    t = (u - cdf_g[..., 0]) / denom
    samples = bins_g[..., 0] + t * (bins_g[..., 1] - bins_g[..., 0])

    return samples


def dda(rays_o, rays_d, bbox_3D):
    inv_ray_d = 1.0 / (rays_d + 1e-6)
    t_min = (bbox_3D[:1] - rays_o) * inv_ray_d  # N_rays 3
    t_max = (bbox_3D[1:] - rays_o) * inv_ray_d
    t = torch.stack((t_min, t_max))  # 2 N_rays 3
    t_min = torch.max(torch.min(t, dim=0)[0], dim=-1, keepdim=True)[0]
    t_max = torch.min(torch.max(t, dim=0)[0], dim=-1, keepdim=True)[0]
    return t_min, t_max


def ray_marcher(rays,
                N_samples=64,
                lindisp=False,
                perturb=0,
                bbox_3D=None):
    """
    Sample points along the rays.
    """
    # Decompose the inputs
    N_rays = rays.shape[0]
    rays_o, rays_d = rays[:, 0:3], rays[:, 3:6]  # both (N_rays, 3)
    near, far = rays[:, 6:7], rays[:, 7:8]  # both (N_rays, 1)

    if bbox_3D is not None:
        # compute AABB bounds
        near, far = dda(rays_o, rays_d, bbox_3D)

    # Sample depth points
    z_steps = torch.linspace(0, 1, N_samples, device=rays.device)  # (N_samples)
    if not lindisp:  # use linear sampling in depth space
        z_vals = near * (1 - z_steps) + far * z_steps
    else:  # use linear sampling in disparity space
        z_vals = 1 / (1 / near * (1 - z_steps) + 1 / far * z_steps)

    z_vals = z_vals.expand(N_rays, N_samples)

    if perturb > 0:  # perturb sampling depths (z_vals)
        z_vals_mid = 0.5 * (z_vals[:, :-1] + z_vals[:, 1:])  # (N_rays, N_samples-1) interval mid points
        # get intervals between samples
        upper = torch.cat([z_vals_mid, z_vals[:, -1:]], -1)
        lower = torch.cat([z_vals[:, :1], z_vals_mid], -1)

        perturb_rand = perturb * torch.rand(z_vals.shape, device=rays.device)
        z_vals = lower + (upper - lower) * perturb_rand

    xyz_coarse_sampled = rays_o.unsqueeze(1) + \
                         rays_d.unsqueeze(1) * z_vals.unsqueeze(2)  # (N_rays, N_samples, 3)

    return xyz_coarse_sampled, rays_o, rays_d, z_vals


def read_pfm(filename):
    file = open(filename, 'rb')
    color = None
    width = None
    height = None
    scale = None
    endian = None

    header = file.readline().decode('utf-8').rstrip()
    if header == 'PF':
        color = True
    elif header == 'Pf':
        color = False
    else:
        raise Exception('Not a PFM file.')

    dim_match = re.match(r'^(\d+)\s(\d+)\s$', file.readline().decode('utf-8'))
    if dim_match:
        width, height = map(int, dim_match.groups())
    else:
        raise Exception('Malformed PFM header.')

    scale = float(file.readline().rstrip())
    if scale < 0:  # little-endian
        endian = '<'
        scale = -scale
    else:
        endian = '>'  # big-endian

    data = np.fromfile(file, endian + 'f')
    shape = (height, width, 3) if color else (height, width)

    data = np.reshape(data, shape)
    data = np.flipud(data)
    file.close()
    return data, scale


def ndc_bbox(all_rays):
    near_min = torch.min(all_rays[..., :3].view(-1, 3), dim=0)[0]
    near_max = torch.max(all_rays[..., :3].view(-1, 3), dim=0)[0]
    far_min = torch.min((all_rays[..., :3] + all_rays[..., 3:6]).view(-1, 3), dim=0)[0]
    far_max = torch.max((all_rays[..., :3] + all_rays[..., 3:6]).view(-1, 3), dim=0)[0]
    print(f'===> ndc bbox near_min:{near_min} near_max:{near_max} far_min:{far_min} far_max:{far_max}')
    return torch.stack((torch.minimum(near_min, far_min), torch.maximum(near_max, far_max)))


normalize_vgg = torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                                 std=[0.229, 0.224, 0.225])

def denormalize_vgg(img):
    im = img.clone()
    im[:, 0, :, :] *= 0.229
    im[:, 1, :, :] *= 0.224
    im[:, 2, :, :] *= 0.225
    im[:, 0, :, :] += 0.485
    im[:, 1, :, :] += 0.456
    im[:, 2, :, :] += 0.406
    return im
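
A minimal sketch of composing the camera-ray helpers, using a toy 4x4 image and an identity camera-to-world transform:

```python
import torch

H, W = 4, 4
directions = get_ray_directions(H, W, focal=[100.0, 100.0])  # (H, W, 3)
c2w = torch.eye(3, 4)  # camera at the origin, axes aligned with the world
rays_o, rays_d = get_rays(directions, c2w)  # each (H*W, 3)
print(rays_o.shape, rays_d.shape)
```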
spaces/Andy1621/uniformer_image_detection/configs/vfnet/vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco.py
DELETED
@@ -1,6 +0,0 @@
_base_ = './vfnet_r50_fpn_mstrain_2x_coco.py'
model = dict(
    backbone=dict(
        dcn=dict(type='DCNv2', deform_groups=1, fallback_on_stride=False),
        stage_with_dcn=(False, True, True, True)),
    bbox_head=dict(dcn_on_last_conv=True))
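
This config only overrides the deformable-convolution settings of its `_base_`. A sketch of loading it for inference through MMDetection's high-level API; the checkpoint and image paths are hypothetical:

```python
from mmdet.apis import inference_detector, init_detector

config = 'configs/vfnet/vfnet_r50_fpn_mdconv_c3-c5_mstrain_2x_coco.py'
checkpoint = 'checkpoints/vfnet_r50_mdconv.pth'  # hypothetical local path
model = init_detector(config, checkpoint, device='cuda:0')
result = inference_detector(model, 'demo.jpg')  # hypothetical image
```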
spaces/Andy1621/uniformer_image_detection/mmdet/datasets/samplers/group_sampler.py
DELETED
@@ -1,148 +0,0 @@
|
|
1 |
-
from __future__ import division
|
2 |
-
import math
|
3 |
-
|
4 |
-
import numpy as np
|
5 |
-
import torch
|
6 |
-
from mmcv.runner import get_dist_info
|
7 |
-
from torch.utils.data import Sampler
|
8 |
-
|
9 |
-
|
10 |
-
class GroupSampler(Sampler):
|
11 |
-
|
12 |
-
def __init__(self, dataset, samples_per_gpu=1):
|
13 |
-
assert hasattr(dataset, 'flag')
|
14 |
-
self.dataset = dataset
|
15 |
-
self.samples_per_gpu = samples_per_gpu
|
16 |
-
self.flag = dataset.flag.astype(np.int64)
|
17 |
-
self.group_sizes = np.bincount(self.flag)
|
18 |
-
self.num_samples = 0
|
19 |
-
for i, size in enumerate(self.group_sizes):
|
20 |
-
self.num_samples += int(np.ceil(
|
21 |
-
size / self.samples_per_gpu)) * self.samples_per_gpu
|
22 |
-
|
23 |
-
def __iter__(self):
|
24 |
-
indices = []
|
25 |
-
for i, size in enumerate(self.group_sizes):
|
26 |
-
if size == 0:
|
27 |
-
continue
|
28 |
-
indice = np.where(self.flag == i)[0]
|
29 |
-
assert len(indice) == size
|
30 |
-
np.random.shuffle(indice)
|
31 |
-
num_extra = int(np.ceil(size / self.samples_per_gpu)
|
32 |
-
) * self.samples_per_gpu - len(indice)
|
33 |
-
indice = np.concatenate(
|
34 |
-
[indice, np.random.choice(indice, num_extra)])
|
35 |
-
indices.append(indice)
|
36 |
-
indices = np.concatenate(indices)
|
37 |
-
indices = [
|
38 |
-
indices[i * self.samples_per_gpu:(i + 1) * self.samples_per_gpu]
|
39 |
-
for i in np.random.permutation(
|
40 |
-
range(len(indices) // self.samples_per_gpu))
|
41 |
-
]
|
42 |
-
        indices = np.concatenate(indices)
        indices = indices.astype(np.int64).tolist()
        assert len(indices) == self.num_samples
        return iter(indices)

    def __len__(self):
        return self.num_samples


class DistributedGroupSampler(Sampler):
    """Sampler that restricts data loading to a subset of the dataset.

    It is especially useful in conjunction with
    :class:`torch.nn.parallel.DistributedDataParallel`. In such case, each
    process can pass a DistributedSampler instance as a DataLoader sampler,
    and load a subset of the original dataset that is exclusive to it.

    .. note::
        Dataset is assumed to be of constant size.

    Arguments:
        dataset: Dataset used for sampling.
        num_replicas (optional): Number of processes participating in
            distributed training.
        rank (optional): Rank of the current process within num_replicas.
        seed (int, optional): random seed used to shuffle the sampler if
            ``shuffle=True``. This number should be identical across all
            processes in the distributed group. Default: 0.
    """

    def __init__(self,
                 dataset,
                 samples_per_gpu=1,
                 num_replicas=None,
                 rank=None,
                 seed=0):
        _rank, _num_replicas = get_dist_info()
        if num_replicas is None:
            num_replicas = _num_replicas
        if rank is None:
            rank = _rank
        self.dataset = dataset
        self.samples_per_gpu = samples_per_gpu
        self.num_replicas = num_replicas
        self.rank = rank
        self.epoch = 0
        self.seed = seed if seed is not None else 0

        assert hasattr(self.dataset, 'flag')
        self.flag = self.dataset.flag
        self.group_sizes = np.bincount(self.flag)

        self.num_samples = 0
        for i, j in enumerate(self.group_sizes):
            self.num_samples += int(
                math.ceil(self.group_sizes[i] * 1.0 / self.samples_per_gpu /
                          self.num_replicas)) * self.samples_per_gpu
        self.total_size = self.num_samples * self.num_replicas

    def __iter__(self):
        # deterministically shuffle based on epoch
        g = torch.Generator()
        g.manual_seed(self.epoch + self.seed)

        indices = []
        for i, size in enumerate(self.group_sizes):
            if size > 0:
                indice = np.where(self.flag == i)[0]
                assert len(indice) == size
                # add .numpy() to avoid bug when selecting indice in parrots.
                # TODO: check whether torch.randperm() can be replaced by
                # numpy.random.permutation().
                indice = indice[list(
                    torch.randperm(int(size), generator=g).numpy())].tolist()
                extra = int(
                    math.ceil(
                        size * 1.0 / self.samples_per_gpu / self.num_replicas)
                ) * self.samples_per_gpu * self.num_replicas - len(indice)
                # pad indice
                tmp = indice.copy()
                for _ in range(extra // size):
                    indice.extend(tmp)
                indice.extend(tmp[:extra % size])
                indices.extend(indice)

        assert len(indices) == self.total_size

        indices = [
            indices[j] for i in list(
                torch.randperm(
                    len(indices) // self.samples_per_gpu, generator=g))
            for j in range(i * self.samples_per_gpu, (i + 1) *
                           self.samples_per_gpu)
        ]

        # subsample
        offset = self.num_samples * self.rank
        indices = indices[offset:offset + self.num_samples]
        assert len(indices) == self.num_samples

        return iter(indices)

    def __len__(self):
        return self.num_samples

    def set_epoch(self, epoch):
        self.epoch = epoch
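
Usage note: a minimal sketch of how this sampler was typically wired into a DataLoader. The toy dataset below is an illustrative assumption, not part of the deleted file; any dataset exposing an integer per-sample `flag` array (aspect-ratio group in mmdet) works, and `get_dist_info()` falls back to rank 0 / world size 1 when torch.distributed is not initialized.

import numpy as np
from torch.utils.data import DataLoader, Dataset

class ToyDataset(Dataset):
    """Toy dataset exposing the `flag` array the sampler requires."""
    def __init__(self, n=16):
        self.flag = np.array([i % 2 for i in range(n)], dtype=np.int64)
    def __len__(self):
        return len(self.flag)
    def __getitem__(self, idx):
        return idx

dataset = ToyDataset()
sampler = DistributedGroupSampler(
    dataset, samples_per_gpu=4, num_replicas=2, rank=0, seed=0)
sampler.set_epoch(0)  # call each epoch so every rank shuffles identically
loader = DataLoader(dataset, batch_size=4, sampler=sampler)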
spaces/AnimalEquality/chatbot/_proc/_docs/ingredient_vision.html
DELETED
@@ -1,802 +0,0 @@
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"><head>

<meta charset="utf-8">
<meta name="generator" content="quarto-1.3.361">

<meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes">

<meta name="description" content="Exploring computer vision for vegan ingredient inferencing.">

<title>lv-recipe-chatbot - ingredient_vision</title>
<style>
code{white-space: pre-wrap;}
span.smallcaps{font-variant: small-caps;}
div.columns{display: flex; gap: min(4vw, 1.5em);}
div.column{flex: auto; overflow-x: auto;}
div.hanging-indent{margin-left: 1.5em; text-indent: -1.5em;}
ul.task-list{list-style: none;}
ul.task-list li input[type="checkbox"] {
width: 0.8em;
margin: 0 0.8em 0.2em -1em; /* quarto-specific, see https://github.com/quarto-dev/quarto-cli/issues/4556 */
vertical-align: middle;
}
/* CSS for syntax highlighting */
pre > code.sourceCode { white-space: pre; position: relative; }
pre > code.sourceCode > span { display: inline-block; line-height: 1.25; }
pre > code.sourceCode > span:empty { height: 1.2em; }
.sourceCode { overflow: visible; }
code.sourceCode > span { color: inherit; text-decoration: inherit; }
div.sourceCode { margin: 1em 0; }
pre.sourceCode { margin: 0; }
@media screen {
div.sourceCode { overflow: auto; }
}
@media print {
pre > code.sourceCode { white-space: pre-wrap; }
pre > code.sourceCode > span { text-indent: -5em; padding-left: 5em; }
}
pre.numberSource code
{ counter-reset: source-line 0; }
pre.numberSource code > span
{ position: relative; left: -4em; counter-increment: source-line; }
pre.numberSource code > span > a:first-child::before
{ content: counter(source-line);
position: relative; left: -1em; text-align: right; vertical-align: baseline;
border: none; display: inline-block;
-webkit-touch-callout: none; -webkit-user-select: none;
-khtml-user-select: none; -moz-user-select: none;
-ms-user-select: none; user-select: none;
padding: 0 4px; width: 4em;
}
pre.numberSource { margin-left: 3em; padding-left: 4px; }
div.sourceCode
{ }
@media screen {
pre > code.sourceCode > span > a:first-child::before { text-decoration: underline; }
}
</style>


<script src="site_libs/quarto-nav/quarto-nav.js"></script>
<script src="site_libs/quarto-nav/headroom.min.js"></script>
<script src="site_libs/clipboard/clipboard.min.js"></script>
<script src="site_libs/quarto-search/autocomplete.umd.js"></script>
<script src="site_libs/quarto-search/fuse.min.js"></script>
<script src="site_libs/quarto-search/quarto-search.js"></script>
<meta name="quarto:offset" content="./">
<script src="site_libs/quarto-html/quarto.js"></script>
<script src="site_libs/quarto-html/popper.min.js"></script>
<script src="site_libs/quarto-html/tippy.umd.min.js"></script>
<script src="site_libs/quarto-html/anchor.min.js"></script>
<link href="site_libs/quarto-html/tippy.css" rel="stylesheet">
<link href="site_libs/quarto-html/quarto-syntax-highlighting.css" rel="stylesheet" id="quarto-text-highlighting-styles">
<script src="site_libs/bootstrap/bootstrap.min.js"></script>
<link href="site_libs/bootstrap/bootstrap-icons.css" rel="stylesheet">
<link href="site_libs/bootstrap/bootstrap.min.css" rel="stylesheet" id="quarto-bootstrap" data-mode="light">
<script id="quarto-search-options" type="application/json">{
"location": "navbar",
"copy-button": false,
"collapse-after": 3,
"panel-placement": "end",
"type": "overlay",
"limit": 20,
"language": {
"search-no-results-text": "No results",
"search-matching-documents-text": "matching documents",
"search-copy-link-title": "Copy link to search",
"search-hide-matches-text": "Hide additional matches",
"search-more-match-text": "more match in this document",
"search-more-matches-text": "more matches in this document",
"search-clear-button-title": "Clear",
"search-detached-cancel-button-title": "Cancel",
"search-submit-button-title": "Submit",
"search-label": "Search"
}
}</script>


<link rel="stylesheet" href="styles.css">
<meta property="og:title" content="lv-recipe-chatbot - ingredient_vision">
<meta property="og:description" content="Exploring computer vision for vegan ingredient inferencing.">
<meta property="og:image" content="https://animalequality.github.io/lv-recipe-chatbot/03_ingredient_vision_files/figure-html/cell-8-output-1.png">
<meta property="og:site-name" content="lv-recipe-chatbot">
<meta property="og:image:height" content="256">
<meta property="og:image:width" content="512">
<meta name="twitter:title" content="lv-recipe-chatbot - ingredient_vision">
<meta name="twitter:description" content="Exploring computer vision for vegan ingredient inferencing.">
<meta name="twitter:image" content="https://animalequality.github.io/lv-recipe-chatbot/03_ingredient_vision_files/figure-html/cell-8-output-1.png">
<meta name="twitter:image-height" content="256">
<meta name="twitter:image-width" content="512">
<meta name="twitter:card" content="summary_large_image">
</head>

<body class="nav-sidebar floating nav-fixed">

<div id="quarto-search-results"></div>
<header id="quarto-header" class="headroom fixed-top">
<nav class="navbar navbar-expand-lg navbar-dark ">
<div class="navbar-container container-fluid">
<div class="navbar-brand-container">
<a class="navbar-brand" href="./index.html">
<span class="navbar-title">lv-recipe-chatbot</span>
</a>
</div>
<div class="quarto-navbar-tools ms-auto">
</div>
<div id="quarto-search" class="" title="Search"></div>
</div> <!-- /container-fluid -->
</nav>
<nav class="quarto-secondary-nav">
<div class="container-fluid d-flex">
<button type="button" class="quarto-btn-toggle btn" data-bs-toggle="collapse" data-bs-target="#quarto-sidebar,#quarto-sidebar-glass" aria-controls="quarto-sidebar" aria-expanded="false" aria-label="Toggle sidebar navigation" onclick="if (window.quartoToggleHeadroom) { window.quartoToggleHeadroom(); }">
<i class="bi bi-layout-text-sidebar-reverse"></i>
</button>
<nav class="quarto-page-breadcrumbs" aria-label="breadcrumb"><ol class="breadcrumb"><li class="breadcrumb-item"><a href="./ingredient_vision.html">ingredient_vision</a></li></ol></nav>
<a class="flex-grow-1" role="button" data-bs-toggle="collapse" data-bs-target="#quarto-sidebar,#quarto-sidebar-glass" aria-controls="quarto-sidebar" aria-expanded="false" aria-label="Toggle sidebar navigation" onclick="if (window.quartoToggleHeadroom) { window.quartoToggleHeadroom(); }">
</a>
</div>
</nav>
</header>
<!-- content -->
<div id="quarto-content" class="quarto-container page-columns page-rows-contents page-layout-article page-navbar">
<!-- sidebar -->
<nav id="quarto-sidebar" class="sidebar collapse collapse-horizontal sidebar-navigation floating overflow-auto">
<div class="sidebar-menu-container">
<ul class="list-unstyled mt-1">
<li class="sidebar-item">
<div class="sidebar-item-container">
<a href="./index.html" class="sidebar-item-text sidebar-link">
<span class="menu-text">lv-recipe-chatbot</span></a>
</div>
</li>
<li class="sidebar-item">
<div class="sidebar-item-container">
<a href="./engineer_prompt.html" class="sidebar-item-text sidebar-link">
<span class="menu-text">engineer_prompt</span></a>
</div>
</li>
<li class="sidebar-item">
<div class="sidebar-item-container">
<a href="./app.html" class="sidebar-item-text sidebar-link">
<span class="menu-text">app</span></a>
</div>
</li>
<li class="sidebar-item">
<div class="sidebar-item-container">
<a href="./vegan_recipe_tools.html" class="sidebar-item-text sidebar-link">
<span class="menu-text">vegan_recipe_tools</span></a>
</div>
</li>
<li class="sidebar-item">
<div class="sidebar-item-container">
<a href="./ingredient_vision.html" class="sidebar-item-text sidebar-link active">
<span class="menu-text">ingredient_vision</span></a>
</div>
</li>
</ul>
</div>
</nav>
<div id="quarto-sidebar-glass" data-bs-toggle="collapse" data-bs-target="#quarto-sidebar,#quarto-sidebar-glass"></div>
<!-- margin-sidebar -->
<div id="quarto-margin-sidebar" class="sidebar margin-sidebar">
<nav id="TOC" role="doc-toc" class="toc-active">
<h2 id="toc-title">On this page</h2>

<ul>
<li><a href="#format_image" id="toc-format_image" class="nav-link active" data-scroll-target="#format_image">format_image</a></li>
<li><a href="#blipimagecaptioning" id="toc-blipimagecaptioning" class="nav-link" data-scroll-target="#blipimagecaptioning">BlipImageCaptioning</a></li>
<li><a href="#blipimagecaptioning.inference" id="toc-blipimagecaptioning.inference" class="nav-link" data-scroll-target="#blipimagecaptioning.inference">BlipImageCaptioning.inference</a></li>
<li><a href="#blipvqa" id="toc-blipvqa" class="nav-link" data-scroll-target="#blipvqa">BlipVQA</a></li>
<li><a href="#blipvqa.inference" id="toc-blipvqa.inference" class="nav-link" data-scroll-target="#blipvqa.inference">BlipVQA.inference</a></li>
<li><a href="#veganingredientfinder" id="toc-veganingredientfinder" class="nav-link" data-scroll-target="#veganingredientfinder">VeganIngredientFinder</a></li>
<li><a href="#veganingredientfinder.list_ingredients" id="toc-veganingredientfinder.list_ingredients" class="nav-link" data-scroll-target="#veganingredientfinder.list_ingredients">VeganIngredientFinder.list_ingredients</a></li>
</ul>
<div class="toc-actions"><div><i class="bi bi-git"></i></div><div class="action-links"><p><a href="https://gitlab.com/animalequality/lv-recipe-chatbot/issues/new" class="toc-action">Report an issue</a></p></div></div></nav>
</div>
<!-- main -->
<main class="content" id="quarto-document-content">

<header id="title-block-header" class="quarto-title-block default">
<div class="quarto-title">
<h1 class="title">ingredient_vision</h1>
</div>

<div>
<div class="description">
Exploring computer vision for vegan ingredient inferencing.
</div>
</div>


<div class="quarto-title-meta">




</div>


</header>

<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->
<p>Inspiration drawn from <a href="https://github.com/microsoft/TaskMatrix">TaskMartix aka Visual ChatGPT</a></p>
<hr>
<p><a href="https://gitlab.com/animalequality/lv-recipe-chatbot/blob/main/lv_recipe_chatbot/ingredient_vision.py#L26" target="_blank" style="float:right; font-size:smaller">source</a></p>
<section id="format_image" class="level3">
<h3 class="anchored" data-anchor-id="format_image">format_image</h3>
<blockquote class="blockquote">
<pre><code> format_image (image:str)</code></pre>
</blockquote>
<table class="table">
<thead>
<tr class="header">
<th></th>
<th><strong>Type</strong></th>
<th><strong>Details</strong></th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>image</td>
<td>str</td>
<td>Image file path</td>
</tr>
</tbody>
</table>
<hr>
<p><a href="https://gitlab.com/animalequality/lv-recipe-chatbot/blob/main/lv_recipe_chatbot/ingredient_vision.py#L41" target="_blank" style="float:right; font-size:smaller">source</a></p>
</section>
<section id="blipimagecaptioning" class="level3">
<h3 class="anchored" data-anchor-id="blipimagecaptioning">BlipImageCaptioning</h3>
<blockquote class="blockquote">
<pre><code> BlipImageCaptioning (device:str)</code></pre>
</blockquote>
<p>Useful when you want to know what is inside the photo.</p>
<hr>
<p><a href="https://gitlab.com/animalequality/lv-recipe-chatbot/blob/main/lv_recipe_chatbot/ingredient_vision.py#L60" target="_blank" style="float:right; font-size:smaller">source</a></p>
</section>
<section id="blipimagecaptioning.inference" class="level3">
<h3 class="anchored" data-anchor-id="blipimagecaptioning.inference">BlipImageCaptioning.inference</h3>
<blockquote class="blockquote">
<pre><code> BlipImageCaptioning.inference
(image:<module'PIL.Image'from'/home/evylz/
AnimalEquality/lv-recipe-
chatbot/env/lib/python3.10/site-
packages/PIL/Image.py'>)</code></pre>
</blockquote>
<table class="table">
<thead>
<tr class="header">
<th></th>
<th><strong>Type</strong></th>
<th><strong>Details</strong></th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>image</td>
<td>PIL.Image</td>
<td></td>
</tr>
<tr class="even">
<td><strong>Returns</strong></td>
<td><strong>str</strong></td>
<td><strong>Caption for the image</strong></td>
</tr>
</tbody>
</table>
<hr>
<p><a href="https://gitlab.com/animalequality/lv-recipe-chatbot/blob/main/lv_recipe_chatbot/ingredient_vision.py#L71" target="_blank" style="float:right; font-size:smaller">source</a></p>
</section>
<section id="blipvqa" class="level3">
<h3 class="anchored" data-anchor-id="blipvqa">BlipVQA</h3>
<blockquote class="blockquote">
<pre><code> BlipVQA (device:str)</code></pre>
</blockquote>
<p>BLIP Visual Question Answering Useful when you need an answer for a question based on an image. Examples: what is the background color of this image, how many cats are in this figure, what is in this figure?</p>
<hr>
<p><a href="https://gitlab.com/animalequality/lv-recipe-chatbot/blob/main/lv_recipe_chatbot/ingredient_vision.py#L89" target="_blank" style="float:right; font-size:smaller">source</a></p>
</section>
<section id="blipvqa.inference" class="level3">
<h3 class="anchored" data-anchor-id="blipvqa.inference">BlipVQA.inference</h3>
<blockquote class="blockquote">
<pre><code> BlipVQA.inference
(image:<module'PIL.Image'from'/home/evylz/AnimalEquali
ty/lv-recipe-chatbot/env/lib/python3.10/site-
packages/PIL/Image.py'>, question:str)</code></pre>
</blockquote>
<table class="table">
<thead>
<tr class="header">
<th></th>
<th><strong>Type</strong></th>
<th><strong>Details</strong></th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>image</td>
<td>PIL.Image</td>
<td></td>
</tr>
<tr class="even">
<td>question</td>
<td>str</td>
<td></td>
</tr>
<tr class="odd">
<td><strong>Returns</strong></td>
<td><strong>str</strong></td>
<td><strong>Answer to the query on the image</strong></td>
</tr>
</tbody>
</table>
<div class="cell">
<div class="sourceCode cell-code" id="cb6"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb6-1"><a href="#cb6-1" aria-hidden="true" tabindex="-1"></a>sample_images <span class="op">=</span> os.listdir(SAMPLE_IMG_DIR)</span>
<span id="cb6-2"><a href="#cb6-2" aria-hidden="true" tabindex="-1"></a>sample_images</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<div class="cell-output cell-output-display">
<pre><code>['veggie-fridge.jpeg',
'veg-groceries-table.jpg',
'fridge-splendid.jpg',
'neat-veg-groceries.jpg',
'veg-groceries-table.jpeg',
'Fruits-and-vegetables-one-a-table.jpg']</code></pre>
</div>
</div>
<div class="cell">
<div class="sourceCode cell-code" id="cb8"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb8-1"><a href="#cb8-1" aria-hidden="true" tabindex="-1"></a><span class="cf">for</span> img <span class="kw">in</span> sample_images:</span>
<span id="cb8-2"><a href="#cb8-2" aria-hidden="true" tabindex="-1"></a>    display(format_image(SAMPLE_IMG_DIR <span class="op">/</span> img))</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<div class="cell-output cell-output-display">
<p><img src="03_ingredient_vision_files/figure-html/cell-8-output-1.png" class="img-fluid"></p>
</div>
<div class="cell-output cell-output-display">
<p><img src="03_ingredient_vision_files/figure-html/cell-8-output-2.png" class="img-fluid"></p>
</div>
<div class="cell-output cell-output-display">
<p><img src="03_ingredient_vision_files/figure-html/cell-8-output-3.png" class="img-fluid"></p>
</div>
<div class="cell-output cell-output-display">
<p><img src="03_ingredient_vision_files/figure-html/cell-8-output-4.png" class="img-fluid"></p>
</div>
<div class="cell-output cell-output-display">
<p><img src="03_ingredient_vision_files/figure-html/cell-8-output-5.png" class="img-fluid"></p>
</div>
<div class="cell-output cell-output-display">
<p><img src="03_ingredient_vision_files/figure-html/cell-8-output-6.png" class="img-fluid"></p>
</div>
</div>
<p>The process:</p>
<ol type="1">
<li>Format image</li>
<li>Get description (caption)</li>
<li>Pass caption and ingredient queries to VQA</li>
</ol>
<div class="cell">
<div class="sourceCode cell-code" id="cb9"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb9-1"><a href="#cb9-1" aria-hidden="true" tabindex="-1"></a>vqa <span class="op">=</span> BlipVQA(<span class="st">"cpu"</span>)</span>
<span id="cb9-2"><a href="#cb9-2" aria-hidden="true" tabindex="-1"></a>img_cap <span class="op">=</span> BlipImageCaptioning(<span class="st">"cpu"</span>)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
</div>
<div class="cell">
<div class="sourceCode cell-code" id="cb10"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb10-1"><a href="#cb10-1" aria-hidden="true" tabindex="-1"></a><span class="cf">for</span> img <span class="kw">in</span> sample_images:</span>
<span id="cb10-2"><a href="#cb10-2" aria-hidden="true" tabindex="-1"></a>    img <span class="op">=</span> format_image(SAMPLE_IMG_DIR <span class="op">/</span> img)</span>
<span id="cb10-3"><a href="#cb10-3" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb10-4"><a href="#cb10-4" aria-hidden="true" tabindex="-1"></a>    display(desc, img.resize((<span class="bu">int</span>(img.size[<span class="dv">0</span>] <span class="op">*</span> <span class="fl">0.5</span>), <span class="bu">int</span>(img.size[<span class="dv">1</span>] <span class="op">*</span> <span class="fl">0.5</span>))))</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<div class="cell-output cell-output-stdout">
<pre><code>CPU times: user 11.4 s, sys: 7.42 ms, total: 11.4 s
Wall time: 1.19 s
CPU times: user 13.5 s, sys: 7.5 ms, total: 13.5 s
Wall time: 1.36 s
CPU times: user 12 s, sys: 0 ns, total: 12 s
Wall time: 1.21 s
CPU times: user 12.5 s, sys: 0 ns, total: 12.5 s
Wall time: 1.27 s
CPU times: user 9.25 s, sys: 7.71 ms, total: 9.25 s
Wall time: 936 ms
CPU times: user 15.7 s, sys: 7.66 ms, total: 15.7 s
Wall time: 1.58 s</code></pre>
</div>
<div class="cell-output cell-output-display">
<pre><code>'a refrigerator with food inside'</code></pre>
</div>
<div class="cell-output cell-output-display">
<p><img src="03_ingredient_vision_files/figure-html/cell-10-output-3.png" class="img-fluid"></p>
</div>
<div class="cell-output cell-output-display">
<pre><code>'a table with a variety of fruits and vegetables'</code></pre>
</div>
<div class="cell-output cell-output-display">
<p><img src="03_ingredient_vision_files/figure-html/cell-10-output-5.png" class="img-fluid"></p>
</div>
<div class="cell-output cell-output-display">
<pre><code>'a refrigerator filled with food and drinks'</code></pre>
</div>
<div class="cell-output cell-output-display">
<p><img src="03_ingredient_vision_files/figure-html/cell-10-output-7.png" class="img-fluid"></p>
</div>
<div class="cell-output cell-output-display">
<pre><code>'a counter with various foods on it'</code></pre>
</div>
<div class="cell-output cell-output-display">
<p><img src="03_ingredient_vision_files/figure-html/cell-10-output-9.png" class="img-fluid"></p>
</div>
<div class="cell-output cell-output-display">
<pre><code>'a wooden table'</code></pre>
</div>
<div class="cell-output cell-output-display">
<p><img src="03_ingredient_vision_files/figure-html/cell-10-output-11.png" class="img-fluid"></p>
</div>
<div class="cell-output cell-output-display">
<pre><code>'a table with a variety of fruits and vegetables'</code></pre>
</div>
<div class="cell-output cell-output-display">
<p><img src="03_ingredient_vision_files/figure-html/cell-10-output-13.png" class="img-fluid"></p>
</div>
</div>
<div class="cell">
<div class="sourceCode cell-code" id="cb18"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb18-1"><a href="#cb18-1" aria-hidden="true" tabindex="-1"></a><span class="cf">for</span> img <span class="kw">in</span> sample_images:</span>
<span id="cb18-2"><a href="#cb18-2" aria-hidden="true" tabindex="-1"></a>    img <span class="op">=</span> format_image(SAMPLE_IMG_DIR <span class="op">/</span> img)</span>
<span id="cb18-3"><a href="#cb18-3" aria-hidden="true" tabindex="-1"></a>    desc <span class="op">=</span> img_cap.inference(img)</span>
<span id="cb18-4"><a href="#cb18-4" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb18-5"><a href="#cb18-5" aria-hidden="true" tabindex="-1"></a>    answer <span class="op">+=</span> <span class="st">"</span><span class="ch">\n</span><span class="st">"</span> <span class="op">+</span> vqa.inference(</span>
<span id="cb18-6"><a href="#cb18-6" aria-hidden="true" tabindex="-1"></a>        img, <span class="ss">f"What are three of the fruits seen in the image if any?"</span></span>
<span id="cb18-7"><a href="#cb18-7" aria-hidden="true" tabindex="-1"></a>    )</span>
<span id="cb18-8"><a href="#cb18-8" aria-hidden="true" tabindex="-1"></a>    answer <span class="op">+=</span> <span class="st">"</span><span class="ch">\n</span><span class="st">"</span> <span class="op">+</span> vqa.inference(</span>
<span id="cb18-9"><a href="#cb18-9" aria-hidden="true" tabindex="-1"></a>        img, <span class="ss">f"What grains and starches are in the image if any?"</span></span>
<span id="cb18-10"><a href="#cb18-10" aria-hidden="true" tabindex="-1"></a>    )</span>
<span id="cb18-11"><a href="#cb18-11" aria-hidden="true" tabindex="-1"></a>    answer <span class="op">+=</span> <span class="st">"</span><span class="ch">\n</span><span class="st">"</span> <span class="op">+</span> vqa.inference(img, <span class="ss">f"Is there plant-based milk in the image?"</span>)</span>
<span id="cb18-12"><a href="#cb18-12" aria-hidden="true" tabindex="-1"></a>    <span class="bu">print</span>(</span>
<span id="cb18-13"><a href="#cb18-13" aria-hidden="true" tabindex="-1"></a>        <span class="ss">f"""</span><span class="sc">{</span>desc<span class="sc">}</span></span>
<span id="cb18-14"><a href="#cb18-14" aria-hidden="true" tabindex="-1"></a><span class="sc">{</span>answer<span class="sc">}</span><span class="ss">"""</span></span>
<span id="cb18-15"><a href="#cb18-15" aria-hidden="true" tabindex="-1"></a>    )</span>
<span id="cb18-16"><a href="#cb18-16" aria-hidden="true" tabindex="-1"></a>    display(img.resize((<span class="bu">int</span>(img.size[<span class="dv">0</span>] <span class="op">*</span> <span class="fl">0.75</span>), <span class="bu">int</span>(img.size[<span class="dv">1</span>] <span class="op">*</span> <span class="fl">0.75</span>))))</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<div class="cell-output cell-output-stdout">
<pre><code>CPU times: user 7.67 s, sys: 12.1 ms, total: 7.68 s
Wall time: 779 ms
a refrigerator with food inside
cabbage lettuce onion
apples
rice
yes
CPU times: user 10.5 s, sys: 8.13 ms, total: 10.5 s
Wall time: 1.06 s
a table with a variety of fruits and vegetables
broccoli and tomatoes
bananas apples oranges
potatoes
yes
CPU times: user 11.7 s, sys: 0 ns, total: 11.7 s
Wall time: 1.18 s
a refrigerator filled with food and drinks
broccoli and zucchini
bananas
rice
yes
CPU times: user 11.5 s, sys: 12.2 ms, total: 11.5 s
Wall time: 1.16 s
a counter with various foods on it
carrots and broccoli
apples bananas and tomatoes
rice
yes
CPU times: user 9.62 s, sys: 4.22 ms, total: 9.63 s
Wall time: 973 ms
a wooden table
potatoes and carrots
apples
potatoes
yes
CPU times: user 11.1 s, sys: 8.23 ms, total: 11.1 s
Wall time: 1.12 s
a table with a variety of fruits and vegetables
peppers broccoli and squash
watermelon limes and pineapple
rice
no</code></pre>
</div>
<div class="cell-output cell-output-display">
<p><img src="03_ingredient_vision_files/figure-html/cell-11-output-2.png" class="img-fluid"></p>
</div>
<div class="cell-output cell-output-display">
<p><img src="03_ingredient_vision_files/figure-html/cell-11-output-3.png" class="img-fluid"></p>
</div>
<div class="cell-output cell-output-display">
<p><img src="03_ingredient_vision_files/figure-html/cell-11-output-4.png" class="img-fluid"></p>
</div>
<div class="cell-output cell-output-display">
<p><img src="03_ingredient_vision_files/figure-html/cell-11-output-5.png" class="img-fluid"></p>
</div>
<div class="cell-output cell-output-display">
<p><img src="03_ingredient_vision_files/figure-html/cell-11-output-6.png" class="img-fluid"></p>
</div>
<div class="cell-output cell-output-display">
<p><img src="03_ingredient_vision_files/figure-html/cell-11-output-7.png" class="img-fluid"></p>
</div>
</div>
<hr>
<p><a href="https://gitlab.com/animalequality/lv-recipe-chatbot/blob/main/lv_recipe_chatbot/ingredient_vision.py#L106" target="_blank" style="float:right; font-size:smaller">source</a></p>
</section>
<section id="veganingredientfinder" class="level3">
<h3 class="anchored" data-anchor-id="veganingredientfinder">VeganIngredientFinder</h3>
<blockquote class="blockquote">
<pre><code> VeganIngredientFinder ()</code></pre>
</blockquote>
<p>Initialize self. See help(type(self)) for accurate signature.</p>
<hr>
<p><a href="https://gitlab.com/animalequality/lv-recipe-chatbot/blob/main/lv_recipe_chatbot/ingredient_vision.py#L111" target="_blank" style="float:right; font-size:smaller">source</a></p>
</section>
<section id="veganingredientfinder.list_ingredients" class="level3">
<h3 class="anchored" data-anchor-id="veganingredientfinder.list_ingredients">VeganIngredientFinder.list_ingredients</h3>
<blockquote class="blockquote">
<pre><code> VeganIngredientFinder.list_ingredients (img:str)</code></pre>
</blockquote>
<table class="table">
<thead>
<tr class="header">
<th></th>
<th><strong>Type</strong></th>
<th><strong>Details</strong></th>
</tr>
</thead>
<tbody>
<tr class="odd">
<td>img</td>
<td>str</td>
<td>Image file path</td>
</tr>
<tr class="even">
<td><strong>Returns</strong></td>
<td><strong>str</strong></td>
<td></td>
</tr>
</tbody>
</table>
<div class="cell">
<div class="sourceCode cell-code" id="cb22"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb22-1"><a href="#cb22-1" aria-hidden="true" tabindex="-1"></a>vegan_ingred_finder <span class="op">=</span> VeganIngredientFinder()</span>
<span id="cb22-2"><a href="#cb22-2" aria-hidden="true" tabindex="-1"></a>vegan_ingred_finder.list_ingredients(SAMPLE_IMG_DIR <span class="op">/</span> sample_images[<span class="dv">0</span>])</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<div class="cell-output cell-output-display">
<pre><code>'cabbage lettuce onion\napples\nrice\nplant-based milk'</code></pre>
</div>
</div>


</section>

</main> <!-- /main -->
<script id="quarto-html-after-body" type="application/javascript">
window.document.addEventListener("DOMContentLoaded", function (event) {
const toggleBodyColorMode = (bsSheetEl) => {
const mode = bsSheetEl.getAttribute("data-mode");
const bodyEl = window.document.querySelector("body");
if (mode === "dark") {
bodyEl.classList.add("quarto-dark");
bodyEl.classList.remove("quarto-light");
} else {
bodyEl.classList.add("quarto-light");
bodyEl.classList.remove("quarto-dark");
}
}
const toggleBodyColorPrimary = () => {
const bsSheetEl = window.document.querySelector("link#quarto-bootstrap");
if (bsSheetEl) {
toggleBodyColorMode(bsSheetEl);
}
}
toggleBodyColorPrimary();
const icon = "";
const anchorJS = new window.AnchorJS();
anchorJS.options = {
placement: 'right',
icon: icon
};
anchorJS.add('.anchored');
const isCodeAnnotation = (el) => {
for (const clz of el.classList) {
if (clz.startsWith('code-annotation-')) {
return true;
}
}
return false;
}
const clipboard = new window.ClipboardJS('.code-copy-button', {
text: function(trigger) {
const codeEl = trigger.previousElementSibling.cloneNode(true);
for (const childEl of codeEl.children) {
if (isCodeAnnotation(childEl)) {
childEl.remove();
}
}
return codeEl.innerText;
}
});
clipboard.on('success', function(e) {
// button target
const button = e.trigger;
// don't keep focus
button.blur();
// flash "checked"
button.classList.add('code-copy-button-checked');
var currentTitle = button.getAttribute("title");
button.setAttribute("title", "Copied!");
let tooltip;
if (window.bootstrap) {
button.setAttribute("data-bs-toggle", "tooltip");
button.setAttribute("data-bs-placement", "left");
button.setAttribute("data-bs-title", "Copied!");
tooltip = new bootstrap.Tooltip(button,
{ trigger: "manual",
customClass: "code-copy-button-tooltip",
offset: [0, -8]});
tooltip.show();
}
setTimeout(function() {
if (tooltip) {
tooltip.hide();
button.removeAttribute("data-bs-title");
button.removeAttribute("data-bs-toggle");
button.removeAttribute("data-bs-placement");
}
button.setAttribute("title", currentTitle);
button.classList.remove('code-copy-button-checked');
}, 1000);
// clear code selection
e.clearSelection();
});
function tippyHover(el, contentFn) {
const config = {
allowHTML: true,
content: contentFn,
maxWidth: 500,
delay: 100,
arrow: false,
appendTo: function(el) {
return el.parentElement;
},
interactive: true,
interactiveBorder: 10,
theme: 'quarto',
placement: 'bottom-start'
};
window.tippy(el, config);
}
const noterefs = window.document.querySelectorAll('a[role="doc-noteref"]');
for (var i=0; i<noterefs.length; i++) {
const ref = noterefs[i];
tippyHover(ref, function() {
// use id or data attribute instead here
let href = ref.getAttribute('data-footnote-href') || ref.getAttribute('href');
try { href = new URL(href).hash; } catch {}
const id = href.replace(/^#\/?/, "");
const note = window.document.getElementById(id);
return note.innerHTML;
});
}
let selectedAnnoteEl;
const selectorForAnnotation = ( cell, annotation) => {
let cellAttr = 'data-code-cell="' + cell + '"';
let lineAttr = 'data-code-annotation="' + annotation + '"';
const selector = 'span[' + cellAttr + '][' + lineAttr + ']';
return selector;
}
const selectCodeLines = (annoteEl) => {
const doc = window.document;
const targetCell = annoteEl.getAttribute("data-target-cell");
const targetAnnotation = annoteEl.getAttribute("data-target-annotation");
const annoteSpan = window.document.querySelector(selectorForAnnotation(targetCell, targetAnnotation));
const lines = annoteSpan.getAttribute("data-code-lines").split(",");
const lineIds = lines.map((line) => {
return targetCell + "-" + line;
})
let top = null;
let height = null;
let parent = null;
if (lineIds.length > 0) {
//compute the position of the single el (top and bottom and make a div)
const el = window.document.getElementById(lineIds[0]);
top = el.offsetTop;
height = el.offsetHeight;
parent = el.parentElement.parentElement;
if (lineIds.length > 1) {
const lastEl = window.document.getElementById(lineIds[lineIds.length - 1]);
const bottom = lastEl.offsetTop + lastEl.offsetHeight;
height = bottom - top;
}
if (top !== null && height !== null && parent !== null) {
// cook up a div (if necessary) and position it
let div = window.document.getElementById("code-annotation-line-highlight");
if (div === null) {
div = window.document.createElement("div");
div.setAttribute("id", "code-annotation-line-highlight");
div.style.position = 'absolute';
parent.appendChild(div);
}
div.style.top = top - 2 + "px";
div.style.height = height + 4 + "px";
let gutterDiv = window.document.getElementById("code-annotation-line-highlight-gutter");
if (gutterDiv === null) {
gutterDiv = window.document.createElement("div");
gutterDiv.setAttribute("id", "code-annotation-line-highlight-gutter");
gutterDiv.style.position = 'absolute';
const codeCell = window.document.getElementById(targetCell);
const gutter = codeCell.querySelector('.code-annotation-gutter');
gutter.appendChild(gutterDiv);
}
gutterDiv.style.top = top - 2 + "px";
gutterDiv.style.height = height + 4 + "px";
}
selectedAnnoteEl = annoteEl;
}
};
const unselectCodeLines = () => {
const elementsIds = ["code-annotation-line-highlight", "code-annotation-line-highlight-gutter"];
elementsIds.forEach((elId) => {
const div = window.document.getElementById(elId);
if (div) {
div.remove();
}
});
selectedAnnoteEl = undefined;
};
// Attach click handler to the DT
const annoteDls = window.document.querySelectorAll('dt[data-target-cell]');
for (const annoteDlNode of annoteDls) {
annoteDlNode.addEventListener('click', (event) => {
const clickedEl = event.target;
if (clickedEl !== selectedAnnoteEl) {
unselectCodeLines();
const activeEl = window.document.querySelector('dt[data-target-cell].code-annotation-active');
if (activeEl) {
activeEl.classList.remove('code-annotation-active');
}
selectCodeLines(clickedEl);
clickedEl.classList.add('code-annotation-active');
} else {
// Unselect the line
unselectCodeLines();
clickedEl.classList.remove('code-annotation-active');
}
});
}
const findCites = (el) => {
const parentEl = el.parentElement;
if (parentEl) {
const cites = parentEl.dataset.cites;
if (cites) {
return {
el,
cites: cites.split(' ')
};
} else {
return findCites(el.parentElement)
}
} else {
return undefined;
}
};
var bibliorefs = window.document.querySelectorAll('a[role="doc-biblioref"]');
for (var i=0; i<bibliorefs.length; i++) {
const ref = bibliorefs[i];
const citeInfo = findCites(ref);
if (citeInfo) {
tippyHover(citeInfo.el, function() {
var popup = window.document.createElement('div');
citeInfo.cites.forEach(function(cite) {
var citeDiv = window.document.createElement('div');
citeDiv.classList.add('hanging-indent');
citeDiv.classList.add('csl-entry');
var biblioDiv = window.document.getElementById('ref-' + cite);
if (biblioDiv) {
citeDiv.innerHTML = biblioDiv.innerHTML;
}
popup.appendChild(citeDiv);
});
return popup.innerHTML;
});
}
}
});
</script>
</div> <!-- /content -->



</body></html>
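
Usage note: the page above documents thin wrappers around BLIP captioning and VQA. For orientation, a rough equivalent with the Hugging Face transformers API is sketched below; the model IDs and classes are assumptions drawn from the transformers library, not from this repository.

# Sketch of the captioning/VQA calls the page documents (assumed API).
from PIL import Image
from transformers import (BlipForConditionalGeneration,
                          BlipForQuestionAnswering, BlipProcessor)

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base")

image = Image.open("veggie-fridge.jpeg").convert("RGB")  # sample image name from the page
inputs = processor(image, return_tensors="pt")
caption = processor.decode(captioner.generate(**inputs)[0],
                           skip_special_tokens=True)

vqa_processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
vqa_model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")
q_inputs = vqa_processor(image, "Is there plant-based milk in the image?",
                         return_tensors="pt")
answer = vqa_processor.decode(vqa_model.generate(**q_inputs)[0],
                              skip_special_tokens=True)
print(caption, answer)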
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/bricks/drop.py
DELETED
@@ -1,65 +0,0 @@
# Copyright (c) OpenMMLab. All rights reserved.
import torch
import torch.nn as nn

from annotator.uniformer.mmcv import build_from_cfg
from .registry import DROPOUT_LAYERS


def drop_path(x, drop_prob=0., training=False):
    """Drop paths (Stochastic Depth) per sample (when applied in main path of
    residual blocks).

    We follow the implementation
    https://github.com/rwightman/pytorch-image-models/blob/a2727c1bf78ba0d7b5727f5f95e37fb7f8866b1f/timm/models/layers/drop.py  # noqa: E501
    """
    if drop_prob == 0. or not training:
        return x
    keep_prob = 1 - drop_prob
    # handle tensors with different dimensions, not just 4D tensors.
    shape = (x.shape[0], ) + (1, ) * (x.ndim - 1)
    random_tensor = keep_prob + torch.rand(
        shape, dtype=x.dtype, device=x.device)
    output = x.div(keep_prob) * random_tensor.floor()
    return output


@DROPOUT_LAYERS.register_module()
class DropPath(nn.Module):
    """Drop paths (Stochastic Depth) per sample (when applied in main path of
    residual blocks).

    We follow the implementation
    https://github.com/rwightman/pytorch-image-models/blob/a2727c1bf78ba0d7b5727f5f95e37fb7f8866b1f/timm/models/layers/drop.py  # noqa: E501

    Args:
        drop_prob (float): Probability of the path to be zeroed. Default: 0.1
    """

    def __init__(self, drop_prob=0.1):
        super(DropPath, self).__init__()
        self.drop_prob = drop_prob

    def forward(self, x):
        return drop_path(x, self.drop_prob, self.training)


@DROPOUT_LAYERS.register_module()
class Dropout(nn.Dropout):
    """A wrapper for ``torch.nn.Dropout``, We rename the ``p`` of
    ``torch.nn.Dropout`` to ``drop_prob`` so as to be consistent with
    ``DropPath``

    Args:
        drop_prob (float): Probability of the elements to be
            zeroed. Default: 0.5.
        inplace (bool): Do the operation inplace or not. Default: False.
    """

    def __init__(self, drop_prob=0.5, inplace=False):
        super().__init__(p=drop_prob, inplace=inplace)


def build_dropout(cfg, default_args=None):
    """Builder for drop out layers."""
    return build_from_cfg(cfg, DROPOUT_LAYERS, default_args)
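
Usage note: a short sketch of building one of the registered layers through the factory above (the cfg values are illustrative):

import torch

# Build a DropPath via the registry-driven factory; 'DropPath' is the
# name registered by @DROPOUT_LAYERS.register_module() above.
drop_layer = build_dropout(dict(type='DropPath', drop_prob=0.2))
drop_layer.train()  # stochastic depth is only active in training mode

x = torch.randn(8, 64, 16, 16)
out = drop_layer(x)  # zeroes whole samples with prob 0.2, rescales the rest
assert out.shape == x.shape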
spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/ops/knn.py
DELETED
@@ -1,77 +0,0 @@
import torch
from torch.autograd import Function

from ..utils import ext_loader

ext_module = ext_loader.load_ext('_ext', ['knn_forward'])


class KNN(Function):
    r"""KNN (CUDA) based on heap data structure.
    Modified from `PAConv <https://github.com/CVMI-Lab/PAConv/tree/main/
    scene_seg/lib/pointops/src/knnquery_heap>`_.

    Find k-nearest points.
    """

    @staticmethod
    def forward(ctx,
                k: int,
                xyz: torch.Tensor,
                center_xyz: torch.Tensor = None,
                transposed: bool = False) -> torch.Tensor:
        """
        Args:
            k (int): number of nearest neighbors.
            xyz (Tensor): (B, N, 3) if transposed == False, else (B, 3, N).
                xyz coordinates of the features.
            center_xyz (Tensor, optional): (B, npoint, 3) if transposed ==
                False, else (B, 3, npoint). centers of the knn query.
                Default: None.
            transposed (bool, optional): whether the input tensors are
                transposed. Should not explicitly use this keyword when
                calling knn (=KNN.apply), just add the fourth param.
                Default: False.

        Returns:
            Tensor: (B, k, npoint) tensor with the indices of
                the features that form k-nearest neighbours.
        """
        assert (k > 0) & (k < 100), 'k should be in range(0, 100)'

        if center_xyz is None:
            center_xyz = xyz

        if transposed:
            xyz = xyz.transpose(2, 1).contiguous()
            center_xyz = center_xyz.transpose(2, 1).contiguous()

        assert xyz.is_contiguous()  # [B, N, 3]
        assert center_xyz.is_contiguous()  # [B, npoint, 3]

        center_xyz_device = center_xyz.get_device()
        assert center_xyz_device == xyz.get_device(), \
            'center_xyz and xyz should be put on the same device'
        if torch.cuda.current_device() != center_xyz_device:
            torch.cuda.set_device(center_xyz_device)

        B, npoint, _ = center_xyz.shape
        N = xyz.shape[1]

        idx = center_xyz.new_zeros((B, npoint, k)).int()
        dist2 = center_xyz.new_zeros((B, npoint, k)).float()

        ext_module.knn_forward(
            xyz, center_xyz, idx, dist2, b=B, n=N, m=npoint, nsample=k)
        # idx shape to [B, k, npoint]
        idx = idx.transpose(2, 1).contiguous()
        if torch.__version__ != 'parrots':
            ctx.mark_non_differentiable(idx)
        return idx

    @staticmethod
    def backward(ctx, a=None):
        return None, None, None


knn = KNN.apply
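
Usage note: an illustrative call following the docstring above. This needs the compiled `_ext` CUDA extension and GPU tensors, so it is a sketch rather than something runnable on CPU:

import torch

xyz = torch.rand(2, 256, 3).cuda()        # (B, N, 3) point cloud
center_xyz = torch.rand(2, 32, 3).cuda()  # (B, npoint, 3) query centers
idx = knn(8, xyz, center_xyz)             # (B, 8, npoint) neighbour indices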
spaces/ArtGAN/Diffusion-API/diffusion_webui/__init__.py
DELETED
@@ -1,17 +0,0 @@
from diffusion_webui.diffusion_models.controlnet_inpaint_pipeline import (
    StableDiffusionControlNetInpaintGenerator,
)
from diffusion_webui.diffusion_models.controlnet_pipeline import (
    StableDiffusionControlNetGenerator,
)
from diffusion_webui.diffusion_models.img2img_app import (
    StableDiffusionImage2ImageGenerator,
)
from diffusion_webui.diffusion_models.inpaint_app import (
    StableDiffusionInpaintGenerator,
)
from diffusion_webui.diffusion_models.text2img_app import (
    StableDiffusionText2ImageGenerator,
)

__version__ = "2.5.0"
spaces/Ash58947/Jan/Dockerfile
DELETED
@@ -1,21 +0,0 @@
FROM node:18-bullseye-slim

RUN apt-get update && \

    apt-get install -y git

RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app

WORKDIR /app

RUN npm install

COPY Dockerfile greeting.md* .env* ./

RUN npm run build

EXPOSE 7860

ENV NODE_ENV=production

CMD [ "npm", "start" ]
spaces/Aspik101/Polish_Llama2/app.py
DELETED
@@ -1,63 +0,0 @@
import gradio as gr
import random
import time
from ctransformers import AutoModelForCausalLM
import datetime
import os


params = {
    "max_new_tokens": 512,
    "stop": ["<end>", "<|endoftext|>"],
    "temperature": 0.7,
    "top_p": 0.8,
    "stream": True,
    "batch_size": 8}


def save_log(task, to_save):
    with open("logs.txt", "a") as log_file:
        current_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        log_file.write(f"[{current_time}] - {task}: {to_save}\n")
    print(to_save)


llm = AutoModelForCausalLM.from_pretrained("Aspik101/Llama-2-7b-chat-hf-pl-lora_GGML", model_type="llama")

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    msg = gr.Textbox()
    clear = gr.Button("Clear")

    def user(user_message, history):
        return "", history + [[user_message, None]]

    def parse_history(hist):
        history_ = ""
        for q, a in hist:
            history_ += f"<user>: {q} \n"
            if a:
                history_ += f"<assistant>: {a} \n"
        return history_

    def bot(history):
        print("history: ", history)
        # Polish system prompt: "You are an AI assistant. Answer in Polish."
        prompt = f"Jesteś AI assystentem. Odpowiadaj po polsku. {parse_history(history)}. <assistant>:"
        print("prompt: ", prompt)
        stream = llm(prompt, **params)
        history[-1][1] = ""
        answer_save = ""
        for character in stream:
            history[-1][1] += character
            answer_save += character
            time.sleep(0.005)
            yield history

        print("answer_save: ", answer_save)
    msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then(
        bot, chatbot, chatbot
    )
    clear.click(lambda: None, None, chatbot, queue=False)

demo.queue()
demo.launch()
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/packaging/__init__.py
DELETED
@@ -1,25 +0,0 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.

from .__about__ import (
    __author__,
    __copyright__,
    __email__,
    __license__,
    __summary__,
    __title__,
    __uri__,
    __version__,
)

__all__ = [
    "__title__",
    "__summary__",
    "__uri__",
    "__version__",
    "__author__",
    "__email__",
    "__license__",
    "__copyright__",
]
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/config/_validate_pyproject/extra_validations.py
DELETED
@@ -1,36 +0,0 @@
"""The purpose of this module is implement PEP 621 validations that are
difficult to express as a JSON Schema (or that are not supported by the current
JSON Schema library).
"""

from typing import Mapping, TypeVar

from .error_reporting import ValidationError

T = TypeVar("T", bound=Mapping)


class RedefiningStaticFieldAsDynamic(ValidationError):
    """According to PEP 621:

    Build back-ends MUST raise an error if the metadata specifies a field
    statically as well as being listed in dynamic.
    """


def validate_project_dynamic(pyproject: T) -> T:
    project_table = pyproject.get("project", {})
    dynamic = project_table.get("dynamic", [])

    for field in dynamic:
        if field in project_table:
            msg = f"You cannot provide a value for `project.{field}` and "
            msg += "list it under `project.dynamic` at the same time"
            name = f"data.project.{field}"
            value = {field: project_table[field], "...": " # ...", "dynamic": dynamic}
            raise RedefiningStaticFieldAsDynamic(msg, value, name, rule="PEP 621")

    return pyproject


EXTRA_VALIDATIONS = (validate_project_dynamic,)
|
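Note: a quick illustration of the rule this vendored module enforces. Setting `version` statically while also listing it under `project.dynamic` is exactly the case PEP 621 forbids; the import path is taken from the file itself:

```python
from setuptools.config._validate_pyproject.extra_validations import (
    validate_project_dynamic,
)

pyproject = {
    "project": {
        "name": "demo",
        "version": "1.0",        # static value...
        "dynamic": ["version"],  # ...also declared dynamic -> invalid per PEP 621
    }
}
validate_project_dynamic(pyproject)  # raises RedefiningStaticFieldAsDynamic
```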
spaces/Awiny/Image2Paragraph/models/image_text_transformation.py DELETED
@@ -1,71 +0,0 @@
-from models.blip2_model import ImageCaptioning
-from models.grit_model import DenseCaptioning
-from models.gpt_model import ImageToText
-from models.controlnet_model import TextToImage
-from models.region_semantic import RegionSemantic
-from utils.util import read_image_width_height, display_images_and_text, resize_long_edge
-import argparse
-from PIL import Image
-import base64
-from io import BytesIO
-import os
-
-def pil_image_to_base64(image):
-    buffered = BytesIO()
-    image.save(buffered, format="JPEG")
-    img_str = base64.b64encode(buffered.getvalue()).decode()
-    return img_str
-
-
-class ImageTextTransformation:
-    def __init__(self, args):
-        # Load your big model here
-        self.args = args
-        self.init_models()
-        self.ref_image = None
-
-    def init_models(self):
-        openai_key = os.environ['OPENAI_KEY']
-        print(self.args)
-        print('\033[1;34m' + "Welcome to the Image2Paragraph toolbox...".center(50, '-') + '\033[0m')
-        print('\033[1;33m' + "Initializing models...".center(50, '-') + '\033[0m')
-        print('\033[1;31m' + "This is time-consuming, please wait...".center(50, '-') + '\033[0m')
-        self.image_caption_model = ImageCaptioning(device=self.args.image_caption_device, captioner_base_model=self.args.captioner_base_model)
-        self.dense_caption_model = DenseCaptioning(device=self.args.dense_caption_device)
-        self.gpt_model = ImageToText(openai_key)
-        self.controlnet_model = TextToImage(device=self.args.contolnet_device)
-        self.region_semantic_model = RegionSemantic(device=self.args.semantic_segment_device, image_caption_model=self.image_caption_model, region_classify_model=self.args.region_classify_model, sam_arch=self.args.sam_arch)
-        print('\033[1;32m' + "Model initialization finished!".center(50, '-') + '\033[0m')
-
-
-    def image_to_text(self, img_src):
-        # the information to generate paragraph based on the context
-        self.ref_image = Image.open(img_src)
-        # resize image to long edge 384
-        self.ref_image = resize_long_edge(self.ref_image, 384)
-        width, height = read_image_width_height(img_src)
-        print(self.args)
-        if self.args.image_caption:
-            image_caption = self.image_caption_model.image_caption(img_src)
-        else:
-            image_caption = " "
-        if self.args.dense_caption:
-            dense_caption = self.dense_caption_model.image_dense_caption(img_src)
-        else:
-            dense_caption = " "
-        if self.args.semantic_segment:
-            region_semantic = self.region_semantic_model.region_semantic(img_src)
-        else:
-            region_semantic = " "
-        generated_text = self.gpt_model.paragraph_summary_with_gpt(image_caption, dense_caption, region_semantic, width, height)
-        return image_caption, dense_caption, region_semantic, generated_text
-
-    def text_to_image(self, text):
-        generated_image = self.controlnet_model.text_to_image(text, self.ref_image)
-        return generated_image
-
-    def text_to_image_retrieval(self, text):
-        pass
-
-    def image_to_text_retrieval(self, image):
-        pass
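Note: a rough usage sketch for the deleted pipeline. The attribute names are read off `init_models` and `image_to_text` above; the values are guesses, `OPENAI_KEY` must be set, and the misspelled `contolnet_device` is kept because that is the name the code reads:

```python
import argparse
from models.image_text_transformation import ImageTextTransformation

# Hypothetical settings standing in for the repo's real argparse defaults.
args = argparse.Namespace(
    image_caption=True, dense_caption=True, semantic_segment=False,
    image_caption_device="cuda", dense_caption_device="cuda",
    semantic_segment_device="cuda", contolnet_device="cuda",  # sic, from the source
    captioner_base_model="blip2", region_classify_model="clip", sam_arch="vit_b",
)
pipeline = ImageTextTransformation(args)
caption, dense, regions, paragraph = pipeline.image_to_text("example.jpg")
print(paragraph)
```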
spaces/Bart92/RVC_HF/infer/lib/uvr5_pack/lib_v5/layers_537238KB.py DELETED
@@ -1,126 +0,0 @@
-import torch
-import torch.nn.functional as F
-from torch import nn
-
-from . import spec_utils
-
-
-class Conv2DBNActiv(nn.Module):
-    def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
-        super(Conv2DBNActiv, self).__init__()
-        self.conv = nn.Sequential(
-            nn.Conv2d(
-                nin,
-                nout,
-                kernel_size=ksize,
-                stride=stride,
-                padding=pad,
-                dilation=dilation,
-                bias=False,
-            ),
-            nn.BatchNorm2d(nout),
-            activ(),
-        )
-
-    def __call__(self, x):
-        return self.conv(x)
-
-
-class SeperableConv2DBNActiv(nn.Module):
-    def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
-        super(SeperableConv2DBNActiv, self).__init__()
-        self.conv = nn.Sequential(
-            nn.Conv2d(
-                nin,
-                nin,
-                kernel_size=ksize,
-                stride=stride,
-                padding=pad,
-                dilation=dilation,
-                groups=nin,
-                bias=False,
-            ),
-            nn.Conv2d(nin, nout, kernel_size=1, bias=False),
-            nn.BatchNorm2d(nout),
-            activ(),
-        )
-
-    def __call__(self, x):
-        return self.conv(x)
-
-
-class Encoder(nn.Module):
-    def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
-        super(Encoder, self).__init__()
-        self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
-        self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
-
-    def __call__(self, x):
-        skip = self.conv1(x)
-        h = self.conv2(skip)
-
-        return h, skip
-
-
-class Decoder(nn.Module):
-    def __init__(
-        self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
-    ):
-        super(Decoder, self).__init__()
-        self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
-        self.dropout = nn.Dropout2d(0.1) if dropout else None
-
-    def __call__(self, x, skip=None):
-        x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
-        if skip is not None:
-            skip = spec_utils.crop_center(skip, x)
-            x = torch.cat([x, skip], dim=1)
-        h = self.conv(x)
-
-        if self.dropout is not None:
-            h = self.dropout(h)
-
-        return h
-
-
-class ASPPModule(nn.Module):
-    def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU):
-        super(ASPPModule, self).__init__()
-        self.conv1 = nn.Sequential(
-            nn.AdaptiveAvgPool2d((1, None)),
-            Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
-        )
-        self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
-        self.conv3 = SeperableConv2DBNActiv(
-            nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
-        )
-        self.conv4 = SeperableConv2DBNActiv(
-            nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
-        )
-        self.conv5 = SeperableConv2DBNActiv(
-            nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
-        )
-        self.conv6 = SeperableConv2DBNActiv(
-            nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
-        )
-        self.conv7 = SeperableConv2DBNActiv(
-            nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
-        )
-        self.bottleneck = nn.Sequential(
-            Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
-        )
-
-    def forward(self, x):
-        _, _, h, w = x.size()
-        feat1 = F.interpolate(
-            self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
-        )
-        feat2 = self.conv2(x)
-        feat3 = self.conv3(x)
-        feat4 = self.conv4(x)
-        feat5 = self.conv5(x)
-        feat6 = self.conv6(x)
-        feat7 = self.conv7(x)
-        out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1)
-        bottle = self.bottleneck(out)
-        return bottle
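Note: a shape sanity check for `ASPPModule`, run from the repo root so the relative `spec_utils` import resolves (tensor sizes are arbitrary). Every branch preserves the spatial dims, the seven branches are concatenated on the channel axis, and the 1x1 bottleneck maps `7 * nin` channels to `nout`:

```python
import torch
from infer.lib.uvr5_pack.lib_v5.layers_537238KB import ASPPModule

aspp = ASPPModule(nin=32, nout=64)
x = torch.randn(1, 32, 16, 128)  # (batch, channels, freq bins, time frames)
print(aspp(x).shape)             # -> torch.Size([1, 64, 16, 128])
```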
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/_spinners.py DELETED
@@ -1,482 +0,0 @@
-"""
-Spinners are from:
-* cli-spinners:
-    MIT License
-    Copyright (c) Sindre Sorhus <[email protected]> (sindresorhus.com)
-    Permission is hereby granted, free of charge, to any person obtaining a copy
-    of this software and associated documentation files (the "Software"), to deal
-    in the Software without restriction, including without limitation the rights to
-    use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
-    the Software, and to permit persons to whom the Software is furnished to do so,
-    subject to the following conditions:
-    The above copyright notice and this permission notice shall be included
-    in all copies or substantial portions of the Software.
-    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
-    INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
-    PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE
-    FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
-    ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
-    IN THE SOFTWARE.
-"""
-
-SPINNERS = {
-    "dots": {
-        "interval": 80,
-        "frames": "⠋⠙⠹⠸⠼⠴⠦⠧⠇⠏",
-    },
-    "dots2": {"interval": 80, "frames": "⣾⣽⣻⢿⡿⣟⣯⣷"},
-    "dots3": {
-        "interval": 80,
-        "frames": "⠋⠙⠚⠞⠖⠦⠴⠲⠳⠓",
-    },
-    "dots4": {
-        "interval": 80,
-        "frames": "⠄⠆⠇⠋⠙⠸⠰⠠⠰⠸⠙⠋⠇⠆",
-    },
-    "dots5": {
-        "interval": 80,
-        "frames": "⠋⠙⠚⠒⠂⠂⠒⠲⠴⠦⠖⠒⠐⠐⠒⠓⠋",
-    },
-    "dots6": {
-        "interval": 80,
-        "frames": "⠁⠉⠙⠚⠒⠂⠂⠒⠲⠴⠤⠄⠄⠤⠴⠲⠒⠂⠂⠒⠚⠙⠉⠁",
-    },
-    "dots7": {
-        "interval": 80,
-        "frames": "⠈⠉⠋⠓⠒⠐⠐⠒⠖⠦⠤⠠⠠⠤⠦⠖⠒⠐⠐⠒⠓⠋⠉⠈",
-    },
-    "dots8": {
-        "interval": 80,
-        "frames": "⠁⠁⠉⠙⠚⠒⠂⠂⠒⠲⠴⠤⠄⠄⠤⠠⠠⠤⠦⠖⠒⠐⠐⠒⠓⠋⠉⠈⠈",
-    },
-    "dots9": {"interval": 80, "frames": "⢹⢺⢼⣸⣇⡧⡗⡏"},
-    "dots10": {"interval": 80, "frames": "⢄⢂⢁⡁⡈⡐⡠"},
-    "dots11": {"interval": 100, "frames": "⠁⠂⠄⡀⢀⠠⠐⠈"},
-    "dots12": {
-        "interval": 80,
-        "frames": [
-            "⢀⠀",
-            "⡀⠀",
-            "⠄⠀",
-            "⢂⠀",
-            "⡂⠀",
-            "⠅⠀",
-            "⢃⠀",
-            "⡃⠀",
-            "⠍⠀",
-            "⢋⠀",
-            "⡋⠀",
-            "⠍⠁",
-            "⢋⠁",
-            "⡋⠁",
-            "⠍⠉",
-            "⠋⠉",
-            "⠋⠉",
-            "⠉⠙",
-            "⠉⠙",
-            "⠉⠩",
-            "⠈⢙",
-            "⠈⡙",
-            "⢈⠩",
-            "⡀⢙",
-            "⠄⡙",
-            "⢂⠩",
-            "⡂⢘",
-            "⠅⡘",
-            "⢃⠨",
-            "⡃⢐",
-            "⠍⡐",
-            "⢋⠠",
-            "⡋⢀",
-            "⠍⡁",
-            "⢋⠁",
-            "⡋⠁",
-            "⠍⠉",
-            "⠋⠉",
-            "⠋⠉",
-            "⠉⠙",
-            "⠉⠙",
-            "⠉⠩",
-            "⠈⢙",
-            "⠈⡙",
-            "⠈⠩",
-            "⠀⢙",
-            "⠀⡙",
-            "⠀⠩",
-            "⠀⢘",
-            "⠀⡘",
-            "⠀⠨",
-            "⠀⢐",
-            "⠀⡐",
-            "⠀⠠",
-            "⠀⢀",
-            "⠀⡀",
-        ],
-    },
-    "dots8Bit": {
-        "interval": 80,
-        "frames": "⠀⠁⠂⠃⠄⠅⠆⠇⡀⡁⡂⡃⡄⡅⡆⡇⠈⠉⠊⠋⠌⠍⠎⠏⡈⡉⡊⡋⡌⡍⡎⡏⠐⠑⠒⠓⠔⠕⠖⠗⡐⡑⡒⡓⡔⡕⡖⡗⠘⠙⠚⠛⠜⠝⠞⠟⡘⡙"
-        "⡚⡛⡜⡝⡞⡟⠠⠡⠢⠣⠤⠥⠦⠧⡠⡡⡢⡣⡤⡥⡦⡧⠨⠩⠪⠫⠬⠭⠮⠯⡨⡩⡪⡫⡬⡭⡮⡯⠰⠱⠲⠳⠴⠵⠶⠷⡰⡱⡲⡳⡴⡵⡶⡷⠸⠹⠺⠻"
-        "⠼⠽⠾⠿⡸⡹⡺⡻⡼⡽⡾⡿⢀⢁⢂⢃⢄⢅⢆⢇⣀⣁⣂⣃⣄⣅⣆⣇⢈⢉⢊⢋⢌⢍⢎⢏⣈⣉⣊⣋⣌⣍⣎⣏⢐⢑⢒⢓⢔⢕⢖⢗⣐⣑⣒⣓⣔⣕"
-        "⣖⣗⢘⢙⢚⢛⢜⢝⢞⢟⣘⣙⣚⣛⣜⣝⣞⣟⢠⢡⢢⢣⢤⢥⢦⢧⣠⣡⣢⣣⣤⣥⣦⣧⢨⢩⢪⢫⢬⢭⢮⢯⣨⣩⣪⣫⣬⣭⣮⣯⢰⢱⢲⢳⢴⢵⢶⢷"
-        "⣰⣱⣲⣳⣴⣵⣶⣷⢸⢹⢺⢻⢼⢽⢾⢿⣸⣹⣺⣻⣼⣽⣾⣿",
-    },
-    "line": {"interval": 130, "frames": ["-", "\\", "|", "/"]},
-    "line2": {"interval": 100, "frames": "⠂-–—–-"},
-    "pipe": {"interval": 100, "frames": "┤┘┴└├┌┬┐"},
-    "simpleDots": {"interval": 400, "frames": [".  ", ".. ", "...", "   "]},
-    "simpleDotsScrolling": {
-        "interval": 200,
-        "frames": [".  ", ".. ", "...", " ..", "  .", "   "],
-    },
-    "star": {"interval": 70, "frames": "✶✸✹✺✹✷"},
-    "star2": {"interval": 80, "frames": "+x*"},
-    "flip": {
-        "interval": 70,
-        "frames": "___-``'´-___",
-    },
-    "hamburger": {"interval": 100, "frames": "☱☲☴"},
-    "growVertical": {
-        "interval": 120,
-        "frames": "▁▃▄▅▆▇▆▅▄▃",
-    },
-    "growHorizontal": {
-        "interval": 120,
-        "frames": "▏▎▍▌▋▊▉▊▋▌▍▎",
-    },
-    "balloon": {"interval": 140, "frames": " .oO@* "},
-    "balloon2": {"interval": 120, "frames": ".oO°Oo."},
-    "noise": {"interval": 100, "frames": "▓▒░"},
-    "bounce": {"interval": 120, "frames": "⠁⠂⠄⠂"},
-    "boxBounce": {"interval": 120, "frames": "▖▘▝▗"},
-    "boxBounce2": {"interval": 100, "frames": "▌▀▐▄"},
-    "triangle": {"interval": 50, "frames": "◢◣◤◥"},
-    "arc": {"interval": 100, "frames": "◜◠◝◞◡◟"},
-    "circle": {"interval": 120, "frames": "◡⊙◠"},
-    "squareCorners": {"interval": 180, "frames": "◰◳◲◱"},
-    "circleQuarters": {"interval": 120, "frames": "◴◷◶◵"},
-    "circleHalves": {"interval": 50, "frames": "◐◓◑◒"},
-    "squish": {"interval": 100, "frames": "╫╪"},
-    "toggle": {"interval": 250, "frames": "⊶⊷"},
-    "toggle2": {"interval": 80, "frames": "▫▪"},
-    "toggle3": {"interval": 120, "frames": "□■"},
-    "toggle4": {"interval": 100, "frames": "■□▪▫"},
-    "toggle5": {"interval": 100, "frames": "▮▯"},
-    "toggle6": {"interval": 300, "frames": "ဝ၀"},
-    "toggle7": {"interval": 80, "frames": "⦾⦿"},
-    "toggle8": {"interval": 100, "frames": "◍◌"},
-    "toggle9": {"interval": 100, "frames": "◉◎"},
-    "toggle10": {"interval": 100, "frames": "㊂㊀㊁"},
-    "toggle11": {"interval": 50, "frames": "⧇⧆"},
-    "toggle12": {"interval": 120, "frames": "☗☖"},
-    "toggle13": {"interval": 80, "frames": "=*-"},
-    "arrow": {"interval": 100, "frames": "←↖↑↗→↘↓↙"},
-    "arrow2": {
-        "interval": 80,
-        "frames": ["⬆️ ", "↗️ ", "➡️ ", "↘️ ", "⬇️ ", "↙️ ", "⬅️ ", "↖️ "],
-    },
-    "arrow3": {
-        "interval": 120,
-        "frames": ["▹▹▹▹▹", "▸▹▹▹▹", "▹▸▹▹▹", "▹▹▸▹▹", "▹▹▹▸▹", "▹▹▹▹▸"],
-    },
-    "bouncingBar": {
-        "interval": 80,
-        "frames": [
-            "[    ]",
-            "[=   ]",
-            "[==  ]",
-            "[=== ]",
-            "[ ===]",
-            "[  ==]",
-            "[   =]",
-            "[    ]",
-            "[   =]",
-            "[  ==]",
-            "[ ===]",
-            "[====]",
-            "[=== ]",
-            "[==  ]",
-            "[=   ]",
-        ],
-    },
-    "bouncingBall": {
-        "interval": 80,
-        "frames": [
-            "( ●    )",
-            "(  ●   )",
-            "(   ●  )",
-            "(    ● )",
-            "(     ●)",
-            "(    ● )",
-            "(   ●  )",
-            "(  ●   )",
-            "( ●    )",
-            "(●     )",
-        ],
-    },
-    "smiley": {"interval": 200, "frames": ["😄 ", "😝 "]},
-    "monkey": {"interval": 300, "frames": ["🙈 ", "🙈 ", "🙉 ", "🙊 "]},
-    "hearts": {"interval": 100, "frames": ["💛 ", "💙 ", "💜 ", "💚 ", "❤️ "]},
-    "clock": {
-        "interval": 100,
-        "frames": [
-            "🕛 ",
-            "🕐 ",
-            "🕑 ",
-            "🕒 ",
-            "🕓 ",
-            "🕔 ",
-            "🕕 ",
-            "🕖 ",
-            "🕗 ",
-            "🕘 ",
-            "🕙 ",
-            "🕚 ",
-        ],
-    },
-    "earth": {"interval": 180, "frames": ["🌍 ", "🌎 ", "🌏 "]},
-    "material": {
-        "interval": 17,
-        "frames": [
-            "█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-            "██▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-            "███▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-            "████▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-            "██████▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-            "██████▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-            "███████▁▁▁▁▁▁▁▁▁▁▁▁▁",
-            "████████▁▁▁▁▁▁▁▁▁▁▁▁",
-            "█████████▁▁▁▁▁▁▁▁▁▁▁",
-            "█████████▁▁▁▁▁▁▁▁▁▁▁",
-            "██████████▁▁▁▁▁▁▁▁▁▁",
-            "███████████▁▁▁▁▁▁▁▁▁",
-            "█████████████▁▁▁▁▁▁▁",
-            "██████████████▁▁▁▁▁▁",
-            "██████████████▁▁▁▁▁▁",
-            "▁██████████████▁▁▁▁▁",
-            "▁██████████████▁▁▁▁▁",
-            "▁██████████████▁▁▁▁▁",
-            "▁▁██████████████▁▁▁▁",
-            "▁▁▁██████████████▁▁▁",
-            "▁▁▁▁█████████████▁▁▁",
-            "▁▁▁▁██████████████▁▁",
-            "▁▁▁▁██████████████▁▁",
-            "▁▁▁▁▁██████████████▁",
-            "▁▁▁▁▁██████████████▁",
-            "▁▁▁▁▁██████████████▁",
-            "▁▁▁▁▁▁██████████████",
-            "▁▁▁▁▁▁██████████████",
-            "▁▁▁▁▁▁▁█████████████",
-            "▁▁▁▁▁▁▁█████████████",
-            "▁▁▁▁▁▁▁▁████████████",
-            "▁▁▁▁▁▁▁▁████████████",
-            "▁▁▁▁▁▁▁▁▁███████████",
-            "▁▁▁▁▁▁▁▁▁███████████",
-            "▁▁▁▁▁▁▁▁▁▁██████████",
-            "▁▁▁▁▁▁▁▁▁▁██████████",
-            "▁▁▁▁▁▁▁▁▁▁▁▁████████",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁███████",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁▁██████",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█████",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█████",
-            "█▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁████",
-            "██▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁███",
-            "██▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁███",
-            "███▁▁▁▁▁▁▁▁▁▁▁▁▁▁███",
-            "████▁▁▁▁▁▁▁▁▁▁▁▁▁▁██",
-            "█████▁▁▁▁▁▁▁▁▁▁▁▁▁▁█",
-            "█████▁▁▁▁▁▁▁▁▁▁▁▁▁▁█",
-            "██████▁▁▁▁▁▁▁▁▁▁▁▁▁█",
-            "████████▁▁▁▁▁▁▁▁▁▁▁▁",
-            "█████████▁▁▁▁▁▁▁▁▁▁▁",
-            "█████████▁▁▁▁▁▁▁▁▁▁▁",
-            "█████████▁▁▁▁▁▁▁▁▁▁▁",
-            "█████████▁▁▁▁▁▁▁▁▁▁▁",
-            "███████████▁▁▁▁▁▁▁▁▁",
-            "████████████▁▁▁▁▁▁▁▁",
-            "████████████▁▁▁▁▁▁▁▁",
-            "██████████████▁▁▁▁▁▁",
-            "██████████████▁▁▁▁▁▁",
-            "▁██████████████▁▁▁▁▁",
-            "▁██████████████▁▁▁▁▁",
-            "▁▁▁█████████████▁▁▁▁",
-            "▁▁▁▁▁████████████▁▁▁",
-            "▁▁▁▁▁████████████▁▁▁",
-            "▁▁▁▁▁▁███████████▁▁▁",
-            "▁▁▁▁▁▁▁▁█████████▁▁▁",
-            "▁▁▁▁▁▁▁▁█████████▁▁▁",
-            "▁▁▁▁▁▁▁▁▁█████████▁▁",
-            "▁▁▁▁▁▁▁▁▁█████████▁▁",
-            "▁▁▁▁▁▁▁▁▁▁█████████▁",
-            "▁▁▁▁▁▁▁▁▁▁▁████████▁",
-            "▁▁▁▁▁▁▁▁▁▁▁████████▁",
-            "▁▁▁▁▁▁▁▁▁▁▁▁███████▁",
-            "▁▁▁▁▁▁▁▁▁▁▁▁███████▁",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁███████",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁███████",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█████",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁████",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁████",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁████",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁███",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁███",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁██",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁██",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁██",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁█",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-            "▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁",
-        ],
-    },
-    "moon": {
-        "interval": 80,
-        "frames": ["🌑 ", "🌒 ", "🌓 ", "🌔 ", "🌕 ", "🌖 ", "🌗 ", "🌘 "],
-    },
-    "runner": {"interval": 140, "frames": ["🚶 ", "🏃 "]},
-    "pong": {
-        "interval": 80,
-        "frames": [
-            "▐⠂       ▌",
-            "▐⠈       ▌",
-            "▐ ⠂      ▌",
-            "▐ ⠠      ▌",
-            "▐  ⡀     ▌",
-            "▐  ⠠     ▌",
-            "▐   ⠂    ▌",
-            "▐   ⠈    ▌",
-            "▐    ⠂   ▌",
-            "▐    ⠠   ▌",
-            "▐     ⡀  ▌",
-            "▐     ⠠  ▌",
-            "▐      ⠂ ▌",
-            "▐      ⠈ ▌",
-            "▐       ⠂▌",
-            "▐       ⠠▌",
-            "▐       ⡀▌",
-            "▐      ⠠ ▌",
-            "▐      ⠂ ▌",
-            "▐     ⠈  ▌",
-            "▐     ⠂  ▌",
-            "▐    ⠠   ▌",
-            "▐    ⡀   ▌",
-            "▐   ⠠    ▌",
-            "▐   ⠂    ▌",
-            "▐  ⠈     ▌",
-            "▐  ⠂     ▌",
-            "▐ ⠠      ▌",
-            "▐ ⡀      ▌",
-            "▐⠠       ▌",
-        ],
-    },
-    "shark": {
-        "interval": 120,
-        "frames": [
-            "▐|\\____________▌",
-            "▐_|\\___________▌",
-            "▐__|\\__________▌",
-            "▐___|\\_________▌",
-            "▐____|\\________▌",
-            "▐_____|\\_______▌",
-            "▐______|\\______▌",
-            "▐_______|\\_____▌",
-            "▐________|\\____▌",
-            "▐_________|\\___▌",
-            "▐__________|\\__▌",
-            "▐___________|\\_▌",
-            "▐____________|\\▌",
-            "▐____________/|▌",
-            "▐___________/|_▌",
-            "▐__________/|__▌",
-            "▐_________/|___▌",
-            "▐________/|____▌",
-            "▐_______/|_____▌",
-            "▐______/|______▌",
-            "▐_____/|_______▌",
-            "▐____/|________▌",
-            "▐___/|_________▌",
-            "▐__/|__________▌",
-            "▐_/|___________▌",
-            "▐/|____________▌",
-        ],
-    },
-    "dqpb": {"interval": 100, "frames": "dqpb"},
-    "weather": {
-        "interval": 100,
-        "frames": [
-            "☀️ ",
-            "☀️ ",
-            "☀️ ",
-            "🌤 ",
-            "⛅️ ",
-            "🌥 ",
-            "☁️ ",
-            "🌧 ",
-            "🌨 ",
-            "🌧 ",
-            "🌨 ",
-            "🌧 ",
-            "🌨 ",
-            "⛈ ",
-            "🌨 ",
-            "🌧 ",
-            "🌨 ",
-            "☁️ ",
-            "🌥 ",
-            "⛅️ ",
-            "🌤 ",
-            "☀️ ",
-            "☀️ ",
-        ],
-    },
-    "christmas": {"interval": 400, "frames": "🌲🎄"},
-    "grenade": {
-        "interval": 80,
-        "frames": [
-            "،   ",
-            "′   ",
-            " ´ ",
-            " ‾ ",
-            "  ⸌",
-            "  ⸊",
-            "  |",
-            "  ⁎",
-            "  ⁕",
-            " ෴ ",
-            "  ⁓",
-            "   ",
-            "   ",
-            "   ",
-        ],
-    },
-    "point": {"interval": 125, "frames": ["∙∙∙", "●∙∙", "∙●∙", "∙∙●", "∙∙∙"]},
-    "layer": {"interval": 150, "frames": "-=≡"},
-    "betaWave": {
-        "interval": 80,
-        "frames": [
-            "ρββββββ",
-            "βρβββββ",
-            "ββρββββ",
-            "βββρβββ",
-            "ββββρββ",
-            "βββββρβ",
-            "ββββββρ",
-        ],
-    },
-    "aesthetic": {
-        "interval": 80,
-        "frames": [
-            "▰▱▱▱▱▱▱",
-            "▰▰▱▱▱▱▱",
-            "▰▰▰▱▱▱▱",
-            "▰▰▰▰▱▱▱",
-            "▰▰▰▰▰▱▱",
-            "▰▰▰▰▰▰▱",
-            "▰▰▰▰▰▰▰",
-            "▰▱▱▱▱▱▱",
-        ],
-    },
-}
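Note: `SPINNERS` is plain data; `interval` is milliseconds per frame and `frames` is a string or a list of strings. A minimal sketch animating one entry in a terminal, assuming the standalone `rich` distribution (the vendored copy above is identical):

```python
import itertools
import time

from rich._spinners import SPINNERS

spinner = SPINNERS["dots"]
delay = spinner["interval"] / 1000.0  # milliseconds -> seconds
for frame in itertools.islice(itertools.cycle(spinner["frames"]), 40):
    print(f"\r{frame}", end="", flush=True)  # redraw in place
    time.sleep(delay)
print()
```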
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/spinner.py DELETED
@@ -1,137 +0,0 @@
-from typing import cast, List, Optional, TYPE_CHECKING, Union
-
-from ._spinners import SPINNERS
-from .measure import Measurement
-from .table import Table
-from .text import Text
-
-if TYPE_CHECKING:
-    from .console import Console, ConsoleOptions, RenderResult, RenderableType
-    from .style import StyleType
-
-
-class Spinner:
-    """A spinner animation.
-
-    Args:
-        name (str): Name of spinner (run python -m rich.spinner).
-        text (RenderableType, optional): A renderable to display at the right of the spinner (str or Text typically). Defaults to "".
-        style (StyleType, optional): Style for spinner animation. Defaults to None.
-        speed (float, optional): Speed factor for animation. Defaults to 1.0.
-
-    Raises:
-        KeyError: If name isn't one of the supported spinner animations.
-    """
-
-    def __init__(
-        self,
-        name: str,
-        text: "RenderableType" = "",
-        *,
-        style: Optional["StyleType"] = None,
-        speed: float = 1.0,
-    ) -> None:
-        try:
-            spinner = SPINNERS[name]
-        except KeyError:
-            raise KeyError(f"no spinner called {name!r}")
-        self.text: "Union[RenderableType, Text]" = (
-            Text.from_markup(text) if isinstance(text, str) else text
-        )
-        self.frames = cast(List[str], spinner["frames"])[:]
-        self.interval = cast(float, spinner["interval"])
-        self.start_time: Optional[float] = None
-        self.style = style
-        self.speed = speed
-        self.frame_no_offset: float = 0.0
-        self._update_speed = 0.0
-
-    def __rich_console__(
-        self, console: "Console", options: "ConsoleOptions"
-    ) -> "RenderResult":
-        yield self.render(console.get_time())
-
-    def __rich_measure__(
-        self, console: "Console", options: "ConsoleOptions"
-    ) -> Measurement:
-        text = self.render(0)
-        return Measurement.get(console, options, text)
-
-    def render(self, time: float) -> "RenderableType":
-        """Render the spinner for a given time.
-
-        Args:
-            time (float): Time in seconds.
-
-        Returns:
-            RenderableType: A renderable containing animation frame.
-        """
-        if self.start_time is None:
-            self.start_time = time
-
-        frame_no = ((time - self.start_time) * self.speed) / (
-            self.interval / 1000.0
-        ) + self.frame_no_offset
-        frame = Text(
-            self.frames[int(frame_no) % len(self.frames)], style=self.style or ""
-        )
-
-        if self._update_speed:
-            self.frame_no_offset = frame_no
-            self.start_time = time
-            self.speed = self._update_speed
-            self._update_speed = 0.0
-
-        if not self.text:
-            return frame
-        elif isinstance(self.text, (str, Text)):
-            return Text.assemble(frame, " ", self.text)
-        else:
-            table = Table.grid(padding=1)
-            table.add_row(frame, self.text)
-            return table
-
-    def update(
-        self,
-        *,
-        text: "RenderableType" = "",
-        style: Optional["StyleType"] = None,
-        speed: Optional[float] = None,
-    ) -> None:
-        """Updates attributes of a spinner after it has been started.
-
-        Args:
-            text (RenderableType, optional): A renderable to display at the right of the spinner (str or Text typically). Defaults to "".
-            style (StyleType, optional): Style for spinner animation. Defaults to None.
-            speed (float, optional): Speed factor for animation. Defaults to None.
-        """
-        if text:
-            self.text = Text.from_markup(text) if isinstance(text, str) else text
-        if style:
-            self.style = style
-        if speed:
-            self._update_speed = speed
-
-
-if __name__ == "__main__":  # pragma: no cover
-    from time import sleep
-
-    from .columns import Columns
-    from .panel import Panel
-    from .live import Live
-
-    all_spinners = Columns(
-        [
-            Spinner(spinner_name, text=Text(repr(spinner_name), style="green"))
-            for spinner_name in sorted(SPINNERS.keys())
-        ],
-        column_first=True,
-        expand=True,
-    )
-
-    with Live(
-        Panel(all_spinners, title="Spinners", border_style="blue"),
-        refresh_per_second=20,
-    ) as live:
-        while True:
-            sleep(0.1)
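Note: the frame selection in `Spinner.render` reduces to one line of arithmetic. A standalone paraphrase (not rich's API) for clarity: elapsed time divided by the per-frame interval (milliseconds converted to seconds), scaled by speed, modulo the frame count:

```python
def frame_index(elapsed_s: float, interval_ms: float, speed: float, n_frames: int) -> int:
    """Mirror of the frame_no computation in Spinner.render, ignoring offsets."""
    return int((elapsed_s * speed) / (interval_ms / 1000.0)) % n_frames

# With a 1000 ms interval the spinner advances one frame per second and wraps:
print([frame_index(float(t), 1000.0, 1.0, 4) for t in range(6)])  # [0, 1, 2, 3, 0, 1]
```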
spaces/Boadiwaa/Recipes/openai/api_resources/__init__.py DELETED
@@ -1,13 +0,0 @@
-from openai.api_resources.answer import Answer  # noqa: F401
-from openai.api_resources.classification import Classification  # noqa: F401
-from openai.api_resources.completion import Completion  # noqa: F401
-from openai.api_resources.customer import Customer  # noqa: F401
-from openai.api_resources.edit import Edit  # noqa: F401
-from openai.api_resources.deployment import Deployment  # noqa: F401
-from openai.api_resources.embedding import Embedding  # noqa: F401
-from openai.api_resources.engine import Engine  # noqa: F401
-from openai.api_resources.error_object import ErrorObject  # noqa: F401
-from openai.api_resources.file import File  # noqa: F401
-from openai.api_resources.fine_tune import FineTune  # noqa: F401
-from openai.api_resources.model import Model  # noqa: F401
-from openai.api_resources.search import Search  # noqa: F401
spaces/CVPR/LIVE/thrust/thrust/detail/complex/ctanhf.h DELETED
@@ -1,124 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- * Copyright 2013 Filipe RNC Maia
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-/*-
- * Copyright (c) 2011 David Schultz
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice unmodified, this list of conditions, and the following
- *    disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
- * IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
- * OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
- * IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
- * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
- * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
- * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-/*
- * Adapted from FreeBSD by Filipe Maia, [email protected]:
- *    freebsd/lib/msun/src/s_ctanhf.c
- */
-
-/*
- * Hyperbolic tangent of a complex argument z. See ctanh.c for details.
- */
-
-#pragma once
-
-#include <thrust/complex.h>
-#include <thrust/detail/complex/math_private.h>
-#include <cmath>
-
-namespace thrust{
-namespace detail{
-namespace complex{
-
-using thrust::complex;
-
-__host__ __device__ inline
-complex<float> ctanhf(const complex<float>& z){
-  float x, y;
-  float t, beta, s, rho, denom;
-  uint32_t hx, ix;
-
-  x = z.real();
-  y = z.imag();
-
-  get_float_word(hx, x);
-  ix = hx & 0x7fffffff;
-
-  if (ix >= 0x7f800000) {
-    if (ix & 0x7fffff)
-      return (complex<float>(x, (y == 0.0f ? y : x * y)));
-    set_float_word(x, hx - 0x40000000);
-    return (complex<float>(x,
-                           copysignf(0, isinf(y) ? y : sinf(y) * cosf(y))));
-  }
-
-  if (!isfinite(y))
-    return (complex<float>(y - y, y - y));
-
-  if (ix >= 0x41300000) { /* x >= 11 */
-    float exp_mx = expf(-fabsf(x));
-    return (complex<float>(copysignf(1.0f, x),
-                           4.0f * sinf(y) * cosf(y) * exp_mx * exp_mx));
-  }
-
-  t = tanf(y);
-  beta = 1.0f + t * t;
-  s = sinhf(x);
-  rho = sqrtf(1.0f + s * s);
-  denom = 1.0f + beta * s * s;
-  return (complex<float>((beta * rho * s) / denom, t / denom));
-}
-
-__host__ __device__ inline
-complex<float> ctanf(complex<float> z){
-  z = ctanhf(complex<float>(-z.imag(), z.real()));
-  return (complex<float>(z.imag(), -z.real()));
-}
-
-} // namespace complex
-
-} // namespace detail
-
-template <>
-__host__ __device__
-inline complex<float> tan(const complex<float>& z){
-  return detail::complex::ctanf(z);
-}
-
-template <>
-__host__ __device__
-inline complex<float> tanh(const complex<float>& z){
-  return detail::complex::ctanhf(z);
-}
-
-} // namespace thrust
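Note: the finite-argument branch of `ctanhf` implements the standard rewrite of the complex hyperbolic tangent; in the code's variable names (my transcription of the identity the FreeBSD source is built on, with t = tan y and s = sinh x):

```latex
\beta = 1 + t^{2}, \qquad \rho = \sqrt{1 + s^{2}}, \qquad
\tanh(x + iy) = \frac{\beta \rho s + i\,t}{1 + \beta s^{2}}
```

The `ix >= 0x41300000` guard short-circuits this for |x| >= 11, where tanh(x) is 1 to within float precision and the small imaginary part is computed directly from exp(-|x|).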
spaces/CVPR/LIVE/thrust/thrust/detail/config.h DELETED
@@ -1,24 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-/*! \file config.h
- *  \brief Defines platform configuration.
- */
-
-#pragma once
-
-#include <thrust/version.h>
-#include <thrust/detail/config/config.h>
-
spaces/CVPR/LIVE/thrust/thrust/system/tbb/detail/partition.h DELETED
@@ -1,87 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/system/tbb/detail/execution_policy.h>
-#include <thrust/pair.h>
-
-namespace thrust
-{
-namespace system
-{
-namespace tbb
-{
-namespace detail
-{
-
-
-template<typename DerivedPolicy,
-         typename ForwardIterator,
-         typename Predicate>
-  ForwardIterator stable_partition(execution_policy<DerivedPolicy> &exec,
-                                   ForwardIterator first,
-                                   ForwardIterator last,
-                                   Predicate pred);
-
-template<typename DerivedPolicy,
-         typename ForwardIterator,
-         typename InputIterator,
-         typename Predicate>
-  ForwardIterator stable_partition(execution_policy<DerivedPolicy> &exec,
-                                   ForwardIterator first,
-                                   ForwardIterator last,
-                                   InputIterator stencil,
-                                   Predicate pred);
-
-template<typename DerivedPolicy,
-         typename InputIterator,
-         typename OutputIterator1,
-         typename OutputIterator2,
-         typename Predicate>
-  thrust::pair<OutputIterator1,OutputIterator2>
-    stable_partition_copy(execution_policy<DerivedPolicy> &exec,
-                          InputIterator first,
-                          InputIterator last,
-                          OutputIterator1 out_true,
-                          OutputIterator2 out_false,
-                          Predicate pred);
-
-
-template<typename DerivedPolicy,
-         typename InputIterator1,
-         typename InputIterator2,
-         typename OutputIterator1,
-         typename OutputIterator2,
-         typename Predicate>
-  thrust::pair<OutputIterator1,OutputIterator2>
-    stable_partition_copy(execution_policy<DerivedPolicy> &exec,
-                          InputIterator1 first,
-                          InputIterator1 last,
-                          InputIterator2 stencil,
-                          OutputIterator1 out_true,
-                          OutputIterator2 out_false,
-                          Predicate pred);
-
-
-} // end namespace detail
-} // end namespace tbb
-} // end namespace system
-} // end namespace thrust
-
-#include <thrust/system/tbb/detail/partition.inl>
-
spaces/CVPR/MonoScene/monoscene/.ipynb_checkpoints/unet3d_nyu-checkpoint.py DELETED
@@ -1,90 +0,0 @@
-# encoding: utf-8
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import numpy as np
-from monoscene.CRP3D import CPMegaVoxels
-from monoscene.modules import (
-    Process,
-    Upsample,
-    Downsample,
-    SegmentationHead,
-    ASPP,
-)
-
-
-class UNet3D(nn.Module):
-    def __init__(
-        self,
-        class_num,
-        norm_layer,
-        feature,
-        full_scene_size,
-        n_relations=4,
-        project_res=[],
-        context_prior=True,
-        bn_momentum=0.1,
-    ):
-        super(UNet3D, self).__init__()
-        self.business_layer = []
-        self.project_res = project_res
-
-        self.feature_1_4 = feature
-        self.feature_1_8 = feature * 2
-        self.feature_1_16 = feature * 4
-
-        self.feature_1_16_dec = self.feature_1_16
-        self.feature_1_8_dec = self.feature_1_8
-        self.feature_1_4_dec = self.feature_1_4
-
-        self.process_1_4 = nn.Sequential(
-            Process(self.feature_1_4, norm_layer, bn_momentum, dilations=[1, 2, 3]),
-            Downsample(self.feature_1_4, norm_layer, bn_momentum),
-        )
-        self.process_1_8 = nn.Sequential(
-            Process(self.feature_1_8, norm_layer, bn_momentum, dilations=[1, 2, 3]),
-            Downsample(self.feature_1_8, norm_layer, bn_momentum),
-        )
-        self.up_1_16_1_8 = Upsample(
-            self.feature_1_16_dec, self.feature_1_8_dec, norm_layer, bn_momentum
-        )
-        self.up_1_8_1_4 = Upsample(
-            self.feature_1_8_dec, self.feature_1_4_dec, norm_layer, bn_momentum
-        )
-        self.ssc_head_1_4 = SegmentationHead(
-            self.feature_1_4_dec, self.feature_1_4_dec, class_num, [1, 2, 3]
-        )
-
-        self.context_prior = context_prior
-        size_1_16 = tuple(np.ceil(i / 4).astype(int) for i in full_scene_size)
-
-        if context_prior:
-            self.CP_mega_voxels = CPMegaVoxels(
-                self.feature_1_16,
-                size_1_16,
-                n_relations=n_relations,
-                bn_momentum=bn_momentum,
-            )
-
-    #
-    def forward(self, input_dict):
-        res = {}
-
-        x3d_1_4 = input_dict["x3d"]
-        x3d_1_8 = self.process_1_4(x3d_1_4)
-        x3d_1_16 = self.process_1_8(x3d_1_8)
-
-        if self.context_prior:
-            ret = self.CP_mega_voxels(x3d_1_16)
-            x3d_1_16 = ret["x"]
-            for k in ret.keys():
-                res[k] = ret[k]
-
-        x3d_up_1_8 = self.up_1_16_1_8(x3d_1_16) + x3d_1_8
-        x3d_up_1_4 = self.up_1_8_1_4(x3d_up_1_8) + x3d_1_4
-
-        ssc_logit_1_4 = self.ssc_head_1_4(x3d_up_1_4)
-
-        res["ssc_logit"] = ssc_logit_1_4
-
-        return res
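Note: a hedged smoke test for this checkpoint module. The sizes below are assumptions (an NYU-style 60x36x60 grid at 1:4 scale and 12 classes; the real values live in the repo's configs), and `monoscene` with its `CRP3D`/`modules` dependencies must be importable:

```python
import torch
import torch.nn as nn
from monoscene.unet3d_nyu import UNet3D  # non-checkpoint copy of the file above

net = UNet3D(
    class_num=12,                  # assumed NYUv2 class count
    norm_layer=nn.BatchNorm3d,
    feature=64,                    # assumed channel width
    full_scene_size=(60, 36, 60),  # assumed voxel grid at the 1:4 input scale
)
x3d = torch.randn(1, 64, 60, 36, 60)  # (B, feature, X, Y, Z)
out = net({"x3d": x3d})
# Two stride-2 downsamples reach the 1:16 grid (15, 9, 15) = ceil(size / 4),
# matching size_1_16 above; two upsamples bring it back for the SSC head.
print(out["ssc_logit"].shape)  # expected: torch.Size([1, 12, 60, 36, 60])
```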
spaces/CVPR/Text2Human/Text2Human/README.md
DELETED
@@ -1,255 +0,0 @@
|
|
1 |
-
# Text2Human - Official PyTorch Implementation
|
2 |
-
|
3 |
-
<!-- <img src="./doc_images/overview.jpg" width="96%" height="96%"> -->
|
4 |
-
|
5 |
-
This repository provides the official PyTorch implementation for the following paper:
|
6 |
-
|
7 |
-
**Text2Human: Text-Driven Controllable Human Image Generation**</br>
|
8 |
-
[Yuming Jiang](https://yumingj.github.io/), [Shuai Yang](https://williamyang1991.github.io/), [Haonan Qiu](http://haonanqiu.com/), [Wayne Wu](https://dblp.org/pid/50/8731.html), [Chen Change Loy](https://www.mmlab-ntu.com/person/ccloy/) and [Ziwei Liu](https://liuziwei7.github.io/)</br>
|
9 |
-
In ACM Transactions on Graphics (Proceedings of SIGGRAPH), 2022.
|
10 |
-
|
11 |
-
From [MMLab@NTU](https://www.mmlab-ntu.com/index.html) affliated with S-Lab, Nanyang Technological University and SenseTime Research.
|
12 |
-
|
13 |
-
<table>
|
14 |
-
<tr>
|
15 |
-
<td><img src="assets/1.png" width="100%"/></td>
|
16 |
-
<td><img src="assets/2.png" width="100%"/></td>
|
17 |
-
<td><img src="assets/3.png" width="100%"/></td>
|
18 |
-
<td><img src="assets/4.png" width="100%"/></td>
|
19 |
-
</tr>
|
20 |
-
<tr>
|
21 |
-
<td align='center' width='24%'>The lady wears a short-sleeve T-shirt with pure color pattern, and a short and denim skirt.</td>
|
22 |
-
<td align='center' width='24%'>The man wears a long and floral shirt, and long pants with the pure color pattern.</td>
|
23 |
-
<td align='center' width='24%'>A lady is wearing a sleeveless pure-color shirt and long jeans</td>
|
24 |
-
<td align='center' width='24%'>The man wears a short-sleeve T-shirt with the pure color pattern and a short pants with the pure color pattern.</td>
|
25 |
-
<tr>
|
26 |
-
</table>
|
27 |
-
|
28 |
-
[**[Project Page]**](https://yumingj.github.io/projects/Text2Human.html) | [**[Paper]**](https://arxiv.org/pdf/2205.15996.pdf) | [**[Dataset]**](https://github.com/yumingj/DeepFashion-MultiModal) | [**[Demo Video]**](https://youtu.be/yKh4VORA_E0)
|
29 |
-
|
30 |
-
|
31 |
-
## Updates
|
32 |
-
|
33 |
-
- [05/2022] Paper and demo video are released.
|
34 |
-
- [05/2022] Code is released.
|
35 |
-
- [05/2022] This website is created.
|
36 |
-
|
37 |
-
## Installation
|
38 |
-
**Clone this repo:**
|
39 |
-
```bash
|
40 |
-
git clone https://github.com/yumingj/Text2Human.git
|
41 |
-
cd Text2Human
|
42 |
-
```
|
43 |
-
**Dependencies:**
|
44 |
-
|
45 |
-
All dependencies for defining the environment are provided in `environment/text2human_env.yaml`.
|
46 |
-
We recommend using [Anaconda](https://docs.anaconda.com/anaconda/install/) to manage the python environment:
|
47 |
-
```bash
|
48 |
-
conda env create -f ./environment/text2human_env.yaml
|
49 |
-
conda activate text2human
|
50 |
-
conda install -c huggingface tokenizers=0.9.4
|
51 |
-
conda install -c huggingface transformers=4.0.0
|
52 |
-
conda install -c conda-forge sentence-transformers=2.0.0
|
53 |
-
```
|
54 |
-
|
55 |
-
If it doesn't work, you may need to install the following packages on your own:
|
56 |
-
- Python 3.6
|
57 |
-
- PyTorch 1.7.1
|
58 |
-
- CUDA 10.1
|
59 |
-
- [sentence-transformers](https://huggingface.co/sentence-transformers) 2.0.0
|
60 |
-
- [tokenizers](https://pypi.org/project/tokenizers/) 0.9.4
|
61 |
-
- [transformers](https://huggingface.co/docs/transformers/installation) 4.0.0
|
62 |
-
|
63 |
-
## (1) Dataset Preparation
|
64 |
-
|
65 |
-
In this work, we contribute a large-scale high-quality dataset with rich multi-modal annotations named [DeepFashion-MultiModal](https://github.com/yumingj/DeepFashion-MultiModal) Dataset.
|
66 |
-
Here we pre-processed the raw annotations of the original dataset for the task of text-driven controllable human image generation. The pre-processing pipeline consists of:
|
67 |
-
- align the human body in the center of the images according to the human pose
|
68 |
-
- fuse the clothing color and clothing fabric annotations into one texture annotation
|
69 |
-
- do some annotation cleaning and image filtering
|
70 |
-
- split the whole dataset into the training set and testing set
|
71 |
-
|
72 |
-
You can download our processed dataset from this [Google Drive](https://drive.google.com/file/d/1KIoFfRZNQVn6RV_wTxG2wZmY8f2T_84B/view?usp=sharing). If you want to access the raw annotations, please refer to the [DeepFashion-MultiModal](https://github.com/yumingj/DeepFashion-MultiModal) Dataset.
|
73 |
-
|
74 |
-
After downloading the dataset, unzip the file and put them under the dataset folder with the following structure:
|
75 |
-
```
|
76 |
-
./datasets
|
77 |
-
├── train_images
|
78 |
-
├── xxx.png
|
79 |
-
...
|
80 |
-
├── xxx.png
|
81 |
-
└── xxx.png
|
82 |
-
├── test_images
|
83 |
-
% the same structure as in train_images
|
84 |
-
├── densepose
|
85 |
-
% the same structure as in train_images
|
86 |
-
├── segm
|
87 |
-
% the same structure as in train_images
|
88 |
-
├── shape_ann
|
89 |
-
├── test_ann_file.txt
|
90 |
-
├── train_ann_file.txt
|
91 |
-
└── val_ann_file.txt
|
92 |
-
└── texture_ann
|
93 |
-
├── test
|
94 |
-
├── lower_fused.txt
|
95 |
-
├── outer_fused.txt
|
96 |
-
└── upper_fused.txt
|
97 |
-
├── train
|
98 |
-
% the same files as in test
|
99 |
-
└── val
|
100 |
-
% the same files as in test
|
101 |
-
```
|
102 |
-
|
103 |
-
## (2) Sampling
|
104 |
-
|
105 |
-
### Inference Notebook
|
106 |
-
<img src="https://colab.research.google.com/assets/colab-badge.svg" height=22.5></a></br>
|
107 |
-
Coming soon.
|
108 |
-
|
109 |
-
|
110 |
-
### Pretrained Models
|
111 |
-
|
112 |
-
Pretrained models can be downloaded from this [Google Drive](https://drive.google.com/file/d/1VyI8_AbPwAUaZJPaPba8zxsFIWumlDen/view?usp=sharing). Unzip the file and put them under the dataset folder with the following structure:
|
113 |
-
```
|
114 |
-
pretrained_models
|
115 |
-
├── index_pred_net.pth
|
116 |
-
├── parsing_gen.pth
|
117 |
-
├── parsing_token.pth
|
118 |
-
├── sampler.pth
|
119 |
-
├── vqvae_bottom.pth
|
120 |
-
└── vqvae_top.pth
|
121 |
-
```
|
122 |
-
|
123 |
-
### Generation from Paring Maps
|
124 |
-
You can generate images from given parsing maps and pre-defined texture annotations:
|
125 |
-
```python
|
126 |
-
python sample_from_parsing.py -opt ./configs/sample_from_parsing.yml
|
127 |
-
```
|
128 |
-
The results are saved in the folder `./results/sampling_from_parsing`.
|
129 |
-
|
130 |
-
### Generation from Poses
|
131 |
-
You can generate images from given human poses and pre-defined clothing shape and texture annotations:
|
132 |
-
```python
|
133 |
-
python sample_from_pose.py -opt ./configs/sample_from_pose.yml
|
134 |
-
```
|
135 |
-
|
136 |
-
**Remarks**: The above two scripts generate images without language interactions. If you want to generate images using texts, you can use the notebook or our user interface.
|
137 |
-
|
138 |
-
### User Interface
|
139 |
-
|
140 |
-
```python
|
141 |
-
python ui_demo.py
|
142 |
-
```
|
143 |
-
<img src="./assets/ui.png" width="100%">
|
144 |
-
|
145 |
-
The descriptions for shapes should follow the following format:
|
146 |
-
```
|
147 |
-
<gender>, <sleeve length>, <length of lower clothing>, <outer clothing type>, <other accessories1>, ...
|
148 |
-
|
149 |
-
Note: The outer clothing type and accessories can be omitted.
|
150 |
-
|
151 |
-
Examples:
|
152 |
-
man, sleeveless T-shirt, long pants
|
153 |
-
woman, short-sleeve T-shirt, short jeans
|
154 |
-
```
|
155 |
-
|
156 |
-
The descriptions for textures should follow the following format:
|
157 |
-
```
|
158 |
-
<upper clothing texture>, <lower clothing texture>, <outer clothing texture>
|
159 |
-
|
160 |
-
Note: Currently, we only support 5 types of textures, i.e., pure color, stripe/spline, plaid/lattice,
|
161 |
-
floral, denim. Your inputs should be restricted to these textures.
|
162 |
-
```
|
163 |
-
|
164 |
-
## (3) Training Text2Human
|
165 |
-
|
166 |
-
### Stage I: Pose to Parsing
|
167 |
-
Train the parsing generation network. If you want to skip the training of this network, you can download our pretrained model from [here](https://drive.google.com/file/d/1MNyFLGqIQcOMg_HhgwCmKqdwfQSjeg_6/view?usp=sharing).
|
168 |
-
```python
|
169 |
-
python train_parsing_gen.py -opt ./configs/parsing_gen.yml
|
170 |
-
```
|
171 |
-
|
172 |
-
### Stage II: Parsing to Human
|
173 |
-
|
174 |
-
**Step 1: Train the top level of the hierarchical VQVAE.**
|
175 |
-
We provide our pretrained model [here](https://drive.google.com/file/d/1TwypUg85gPFJtMwBLUjVS66FKR3oaTz8/view?usp=sharing). This model is trained by:
|
176 |
-
```python
|
177 |
-
python train_vqvae.py -opt ./configs/vqvae_top.yml
|
178 |
-
```
|
179 |
-
|
180 |
-
**Step 2: Train the bottom level of the hierarchical VQVAE.**
|
181 |
-
We provide our pretrained model [here](https://drive.google.com/file/d/15hzbY-RG-ILgzUqqGC0qMzlS4OayPdRH/view?usp=sharing). This model is trained by:
|
182 |
-
```python
|
183 |
-
python train_vqvae.py -opt ./configs/vqvae_bottom.yml
|
184 |
-
```
|
185 |
-
|
186 |
-
**Stage 3 & 4: Train the sampler with mixture-of-experts.** To train the sampler, we first need to train a model to tokenize the parsing maps. You can access our pretrained parsing maps [here](https://drive.google.com/file/d/1GLHoOeCP6sMao1-R63ahJMJF7-J00uir/view?usp=sharing).
|
187 |
-
```python
|
188 |
-
python train_parsing_token.py -opt ./configs/parsing_token.yml
|
189 |
-
```
|
190 |
-
|
191 |
-
With the parsing tokenization model, the sampler is trained by:
|
192 |
-
```python
|
193 |
-
python train_sampler.py -opt ./configs/sampler.yml
|
194 |
-
```
|
195 |
-
Our pretrained sampler is provided [here](https://drive.google.com/file/d/1OQO_kG2fK7eKiG1VJH1OL782X71UQAmS/view?usp=sharing).
|
196 |
-
|
197 |
-
**Stage 5: Train the index prediction network.**
|
198 |
-
We provide our pretrained index prediction network [here](https://drive.google.com/file/d/1rqhkQD-JGd7YBeIfDvMV-vjfbNHpIhYm/view?usp=sharing). It is trained by:
|
199 |
-
```python
|
200 |
-
python train_index_prediction.py -opt ./configs/index_pred_net.yml
|
201 |
-
```
|
202 |
-
|
203 |
-
|
204 |
-
**Remarks**: In the config files, we use the path to our models as the required pretrained models. If you want to train the models from scratch, please replace the path to your own one. We set the numbers of the training epochs as large numbers and you can choose the best epoch for each model. For your reference, our pretrained parsing generation network is trained for 50 epochs, top-level VQVAE is trained for 135 epochs, bottom-level VQVAE is trained for 70 epochs, parsing tokenization network is trained for 20 epochs, sampler is trained for 95 epochs, and the index prediction network is trained for 70 epochs.
|
205 |
-
|
206 |
-
## (4) Results
|
207 |
-
|
208 |
-
Please visit our [Project Page](https://yumingj.github.io/projects/Text2Human.html#results) to view more results.</br>
|
209 |
-
You can select the attribtues to customize the desired human images.
|
210 |
-
[<img src="./assets/results.png" width="90%">
|
211 |
-
](https://yumingj.github.io/projects/Text2Human.html#results)

## DeepFashion-MultiModal Dataset

<img src="./assets/dataset_logo.png" width="90%">

In this work, we also propose **DeepFashion-MultiModal**, a large-scale, high-quality human dataset with rich multi-modal annotations. It has the following properties:
1. It contains 44,096 high-resolution human images, including 12,701 full-body human images.
2. For each full-body image, we **manually annotate** the human parsing labels of 24 classes.
3. For each full-body image, we **manually annotate** the keypoints.
4. We extract DensePose for each human image.
5. Each image is **manually annotated** with attributes for both clothes shapes and textures.
6. We provide a textual description for each image.

<img src="./assets/dataset_overview.png" width="100%">

Please refer to [this repo](https://github.com/yumingj/DeepFashion-MultiModal) for more details about our proposed dataset.
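
As a quick illustration of consuming paired image/annotation data like this, here is a generic loading sketch. The directory layout and file names below are hypothetical; the actual structure is documented in the dataset repo above.

```python
from pathlib import Path
from PIL import Image

# Hypothetical layout; see the dataset repo for the real directory names.
root = Path("./DeepFashion-MultiModal")
image_path = root / "images" / "example.jpg"
parsing_path = root / "parsing" / "example.png"
caption_path = root / "captions" / "example.txt"

image = Image.open(image_path).convert("RGB")
parsing = Image.open(parsing_path)          # per-pixel labels, 24 classes
caption = caption_path.read_text().strip()  # textual description

print(image.size, parsing.size)
print(caption)
```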

## TODO List

- [ ] Release the 1024x512 version of Text2Human.
- [ ] Train Text2Human on the [SHHQ dataset](https://stylegan-human.github.io/).

## Citation

If you find this work useful for your research, please consider citing our paper:

```bibtex
@article{jiang2022text2human,
  title={Text2Human: Text-Driven Controllable Human Image Generation},
  author={Jiang, Yuming and Yang, Shuai and Qiu, Haonan and Wu, Wayne and Loy, Chen Change and Liu, Ziwei},
  journal={ACM Transactions on Graphics (TOG)},
  volume={41},
  number={4},
  articleno={162},
  pages={1--11},
  year={2022},
  publisher={ACM New York, NY, USA},
  doi={10.1145/3528223.3530104},
}
```

## Acknowledgments

Part of the code is borrowed from [unleashing-transformers](https://github.com/samb-t/unleashing-transformers), [taming-transformers](https://github.com/CompVis/taming-transformers) and [mmsegmentation](https://github.com/open-mmlab/mmsegmentation).
spaces/CVPR/lama-example/bin/paper_runfiles/generate_test_celeba-hq.sh
DELETED
@@ -1,17 +0,0 @@
#!/usr/bin/env bash

# paths to data are valid for mml-ws01
OUT_DIR="/media/inpainting/paper_data/CelebA-HQ_val_test"

# env.sh is expected to define BINDIR, the directory holding the scripts below
source "$(dirname "$0")/env.sh"

for datadir in "val" "test"
do
    for conf in random_thin_256 random_medium_256 random_thick_256 random_thin_512 random_medium_512 random_thick_512
    do
        # generate a masked test set for this split and mask configuration
        "$BINDIR/gen_mask_dataset_hydra.py" -cn $conf datadir=$datadir location=mml-ws01-celeba-hq \
            location.out_dir=$OUT_DIR cropping.out_square_crop=False

        # summarize the generated dataset over 20 samples
        "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats"
    done
done