Commit b4ea192 · Parent(s): 05c9856
Update parquet files (step 35 of 476)
This view is limited to 50 files because it contains too many changes; see the raw diff for the full change set.
- spaces/1111u/oai-reverse-proxy/Dockerfile +0 -11
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/ACDSee Photo Studio Professional 2020 Crack PATCHED.md +0 -117
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download DLL-Files Fixer Full Crack 2023 - The Ultimate Solution for DLL Problems.md +0 -28
- spaces/1gistliPinn/ChatGPT4/Examples/Big Boobs Sexy Video Com.md +0 -15
- spaces/1gistliPinn/ChatGPT4/Examples/Face To Face Mat Book Free Download.md +0 -9
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android 12 Emojis Whats New and How to Get Them on Your Phone.md +0 -153
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Clash Mini APK and Play the Beta in Any Country.md +0 -102
- spaces/1toTree/lora_test/ppdiffusers/commands/__init__.py +0 -28
- spaces/52Hz/CMFNet_deraindrop/model/CMFNet.py +0 -193
- spaces/52Hz/SUNet_AWGN_denoising/README.md +0 -37
- spaces/AI-ZTH-03-23/8.Datasets-NER-Biomed-ClinicalTerms/backup.app.py +0 -268
- spaces/AIGC-Audio/Make_An_Audio/vocoder/bigvgan/alias_free_torch/filter.py +0 -95
- spaces/Aaaaaaaabdualh/poetry2023/app.py +0 -53
- spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/quantization/__init__.py +0 -9
- spaces/AgentVerse/agentVerse/agentverse/agents/simulation_agent/conversation.py +0 -107
- spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/evaluator/__init__.py +0 -6
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorpicker/ColorPicker.d.ts +0 -38
- spaces/AlexZou/Deploy_Restoration/model/IAT_main.py +0 -133
- spaces/Altinas/vits-uma-genshin-honkais/attentions.py +0 -300
- spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/cppipc/prod_cons.h +0 -433
- spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/upfirdn2d.cpp +0 -23
- spaces/Andy1621/uniformer_image_detection/configs/libra_rcnn/libra_retinanet_r50_fpn_1x_coco.py +0 -26
- spaces/Andy1621/uniformer_image_detection/mmdet/datasets/xml_style.py +0 -170
- spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_80k_pascal_context_59.py +0 -2
- spaces/Ariharasudhan/YoloV5/utils/aws/mime.sh +0 -26
- spaces/Artificio/AdversarialArt/src/.ipynb_checkpoints/utils-checkpoint.py +0 -35
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/__init__.py +0 -2
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/token.py +0 -213
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/bar.py +0 -94
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/_structures.py +0 -61
- spaces/Awesimo/jojogan/e4e/utils/alignment.py +0 -115
- spaces/BAAI/vid2vid-zero/vid2vid_zero/p2p/null_text_w_ptp.py +0 -504
- spaces/BairaS/Tabular_ML/README.md +0 -12
- spaces/Benson/text-generation/Examples/Chicos De La Escuela Apk.md +0 -27
- spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/msgpack/ext.py +0 -193
- spaces/Big-Web/MMSD/env/Lib/site-packages/s3transfer/subscribers.py +0 -92
- spaces/CVPR/LIVE/thrust/thrust/type_traits/logical_metafunctions.h +0 -179
- spaces/CVPR/lama-example/saicinpainting/evaluation/losses/base_loss.py +0 -528
- spaces/CVPR/lama-example/saicinpainting/training/losses/feature_matching.py +0 -33
- spaces/CVPR/lama-example/saicinpainting/training/modules/base.py +0 -80
- spaces/CVPR/regionclip-demo/detectron2/utils/colormap.py +0 -140
- spaces/ClassCat/YOLOS-Object-Detection/app.py +0 -130
- spaces/Codecooker/rvcapi/README.md +0 -13
- spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/rpn.py +0 -321
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_b_s_l_n.py +0 -6
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otData.py +0 -0
- spaces/Danielito/webui/README.md +0 -20
- spaces/Dify-AI/Baichuan2-13B-Chat/README.md +0 -13
- spaces/DrBenjamin/AI_Demo/pages/💁 Open_Assistant.py +0 -359
- spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/__init__.py +0 -0
spaces/1111u/oai-reverse-proxy/Dockerfile
DELETED
@@ -1,11 +0,0 @@
-FROM node:18-bullseye-slim
-RUN apt-get update && \
-    apt-get install -y git
-RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app
-WORKDIR /app
-RUN npm install
-COPY Dockerfile greeting.md* .env* ./
-RUN npm run build
-EXPOSE 7860
-ENV NODE_ENV=production
-CMD [ "npm", "start" ]
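The deleted Dockerfile clones khanon's oai-reverse-proxy repository and builds it on a Node 18 slim base image. As a minimal sketch of how such an image is typically built and run locally (the image tag is an assumption; the published port is taken from the EXPOSE line above):

    # Build the image from the directory containing the Dockerfile (tag name is illustrative).
    docker build -t oai-reverse-proxy .
    # Run the container and publish the port the Dockerfile exposes.
    docker run --rm -p 7860:7860 oai-reverse-proxy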
spaces/1acneusushi/gradio-2dmoleculeeditor/data/ACDSee Photo Studio Professional 2020 Crack PATCHED.md
DELETED
@@ -1,117 +0,0 @@
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download DLL-Files Fixer Full Crack 2023 - The Ultimate Solution for DLL Problems.md
DELETED
@@ -1,28 +0,0 @@
spaces/1gistliPinn/ChatGPT4/Examples/Big Boobs Sexy Video Com.md
DELETED
@@ -1,15 +0,0 @@
spaces/1gistliPinn/ChatGPT4/Examples/Face To Face Mat Book Free Download.md
DELETED
@@ -1,9 +0,0 @@
-<br />
-<p>this site is great for kids, students, and teachers. they have a large variety of books, videos, music, and games. they have a large selection of kids books and educational materials. you can find the children's books in the books, toys, and games section.</p>
-<h2>face to face mat book free download</h2><br /><p><b><b>Download</b> - <a href="https://imgfil.com/2uxYaf">https://imgfil.com/2uxYaf</a></b></p><br /><br />
-<p>this site is a great resource for those looking for inspiration to start, and those already active in, the digital arts. you will find tutorials, tips, and more. there are even plenty of free projects and resources to use for your own projects. if you are just getting started in the arts, this site can be a great resource for you. they have plenty of projects and tutorials for beginners.</p>
-<p>this book addresses these weaknesses and gaps. it provides evidence of the capability shortfalls that currently exist in many countries, analyses this evidence and identifies capability traps that hold many governments back, particularly related to isomorphic mimicry and premature load-bearing. the book then describes a process that governments can use to escape these capability traps. called pdia (problem driven iterative adaptation), this process empowers people working in governments to find and fit solutions to the problems they face. this process is explained in a practical manner so that readers can actually apply tools and ideas to the capability challenges they face in their own contexts. these applications will help readers implement policies and reforms that have more impact than those of the past. </p>
-<p>the book describes the pdia process, outlining the steps to take for governments to find and address problems, escape capability traps, and turn their capabilities into their greatest strengths. the problems that governments face need to be identified, be the need be addressed through policy reform, and be implemented by governments.</p>
-<p></p> 899543212b<br />
-<br />
-<br />
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android 12 Emojis Whats New and How to Get Them on Your Phone.md
DELETED
@@ -1,153 +0,0 @@
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Clash Mini APK and Play the Beta in Any Country.md
DELETED
@@ -1,102 +0,0 @@
89 |
-
<ul>
|
90 |
-
<li>Q: How much does Clash Mini cost?</li>
|
91 |
-
<li>A: Clash Mini is free to download and play, but it offers in-app purchases for some items and features.</li>
|
92 |
-
<li>Q: How can I contact Supercell for feedback or support?</li>
|
93 |
-
<li>A: You can contact Supercell through the in-game settings menu or by visiting their website or social media accounts.</li>
|
94 |
-
<li>Q: How can I join a clan or create my own clan in Clash Mini?</li>
|
95 |
-
<li>A: You can join a clan or create your own clan by tapping on the clan icon on the main screen. You can invite your friends or other players to join your clan or search for an existing clan to join.</li>
|
96 |
-
<li>Q: How can I earn rewards and chests in Clash Mini?</li>
|
97 |
-
<li>A: You can earn rewards and chests by winning battles, completing quests, participating in events, ranking up in leagues, and opening the free chest every four hours.</li>
|
98 |
-
<li>Q: How can I watch replays or share my battles in Clash Mini?</li>
|
99 |
-
<li>A: You can watch replays or share your battles by tapping on the battle log icon on the main screen. You can also watch live battles of other players or top players by tapping on the TV icon on the main screen.</li>
|
100 |
-
</ul></p> 197e85843d<br />
|
101 |
-
<br />
|
102 |
-
<br />
|
spaces/1toTree/lora_test/ppdiffusers/commands/__init__.py
DELETED
@@ -1,28 +0,0 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
# Copyright 2022 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from abc import ABC, abstractmethod
from argparse import ArgumentParser


class BasePPDiffusersCLICommand(ABC):
    @staticmethod
    @abstractmethod
    def register_subcommand(parser: ArgumentParser):
        raise NotImplementedError()

    @abstractmethod
    def run(self):
        raise NotImplementedError()
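As context, a concrete subcommand would implement both abstract methods above. The sketch below is a hypothetical illustration only; the EnvCommand name and its wiring are assumptions, not part of the deleted file:

# Hypothetical example (not from the deleted file): a minimal subcommand.
class EnvCommand(BasePPDiffusersCLICommand):
    @staticmethod
    def register_subcommand(parser: ArgumentParser):
        # Register an "env" subcommand; the CLI entry point is assumed to pass
        # the object returned by ArgumentParser.add_subparsers() here.
        env_parser = parser.add_parser("env")
        env_parser.set_defaults(func=lambda args: EnvCommand())

    def run(self):
        import platform
        print("Python version:", platform.python_version())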
spaces/52Hz/CMFNet_deraindrop/model/CMFNet.py
DELETED
@@ -1,193 +0,0 @@
import torch
import torch.nn as nn
from model.block import SAB, CAB, PAB, conv, SAM, conv3x3, conv_down

##########################################################################
## U-Net
bn = 2  # block number-1

class Encoder(nn.Module):
    def __init__(self, n_feat, kernel_size, reduction, act, bias, scale_unetfeats, block):
        super(Encoder, self).__init__()
        if block == 'CAB':
            self.encoder_level1 = [CAB(n_feat, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
            self.encoder_level2 = [CAB(n_feat + scale_unetfeats, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
            self.encoder_level3 = [CAB(n_feat + (scale_unetfeats * 2), kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
        elif block == 'PAB':
            self.encoder_level1 = [PAB(n_feat, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
            self.encoder_level2 = [PAB(n_feat + scale_unetfeats, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
            self.encoder_level3 = [PAB(n_feat + (scale_unetfeats * 2), kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
        elif block == 'SAB':
            self.encoder_level1 = [SAB(n_feat, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
            self.encoder_level2 = [SAB(n_feat + scale_unetfeats, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
            self.encoder_level3 = [SAB(n_feat + (scale_unetfeats * 2), kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
        self.encoder_level1 = nn.Sequential(*self.encoder_level1)
        self.encoder_level2 = nn.Sequential(*self.encoder_level2)
        self.encoder_level3 = nn.Sequential(*self.encoder_level3)
        self.down12 = DownSample(n_feat, scale_unetfeats)
        self.down23 = DownSample(n_feat + scale_unetfeats, scale_unetfeats)

    def forward(self, x):
        enc1 = self.encoder_level1(x)
        x = self.down12(enc1)
        enc2 = self.encoder_level2(x)
        x = self.down23(enc2)
        enc3 = self.encoder_level3(x)
        return [enc1, enc2, enc3]

class Decoder(nn.Module):
    def __init__(self, n_feat, kernel_size, reduction, act, bias, scale_unetfeats, block):
        super(Decoder, self).__init__()
        if block == 'CAB':
            self.decoder_level1 = [CAB(n_feat, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
            self.decoder_level2 = [CAB(n_feat + scale_unetfeats, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
            self.decoder_level3 = [CAB(n_feat + (scale_unetfeats * 2), kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
        elif block == 'PAB':
            self.decoder_level1 = [PAB(n_feat, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
            self.decoder_level2 = [PAB(n_feat + scale_unetfeats, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
            self.decoder_level3 = [PAB(n_feat + (scale_unetfeats * 2), kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
        elif block == 'SAB':
            self.decoder_level1 = [SAB(n_feat, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
            self.decoder_level2 = [SAB(n_feat + scale_unetfeats, kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
            self.decoder_level3 = [SAB(n_feat + (scale_unetfeats * 2), kernel_size, reduction, bias=bias, act=act) for _ in range(bn)]
        self.decoder_level1 = nn.Sequential(*self.decoder_level1)
        self.decoder_level2 = nn.Sequential(*self.decoder_level2)
        self.decoder_level3 = nn.Sequential(*self.decoder_level3)
        if block == 'CAB':
            self.skip_attn1 = CAB(n_feat, kernel_size, reduction, bias=bias, act=act)
            self.skip_attn2 = CAB(n_feat + scale_unetfeats, kernel_size, reduction, bias=bias, act=act)
        if block == 'PAB':
            self.skip_attn1 = PAB(n_feat, kernel_size, reduction, bias=bias, act=act)
            self.skip_attn2 = PAB(n_feat + scale_unetfeats, kernel_size, reduction, bias=bias, act=act)
        if block == 'SAB':
            self.skip_attn1 = SAB(n_feat, kernel_size, reduction, bias=bias, act=act)
            self.skip_attn2 = SAB(n_feat + scale_unetfeats, kernel_size, reduction, bias=bias, act=act)
        self.up21 = SkipUpSample(n_feat, scale_unetfeats)
        self.up32 = SkipUpSample(n_feat + scale_unetfeats, scale_unetfeats)

    def forward(self, outs):
        enc1, enc2, enc3 = outs
        dec3 = self.decoder_level3(enc3)
        x = self.up32(dec3, self.skip_attn2(enc2))
        dec2 = self.decoder_level2(x)
        x = self.up21(dec2, self.skip_attn1(enc1))
        dec1 = self.decoder_level1(x)
        return [dec1, dec2, dec3]

##########################################################################
##---------- Resizing Modules ----------
class DownSample(nn.Module):
    def __init__(self, in_channels, s_factor):
        super(DownSample, self).__init__()
        self.down = nn.Sequential(nn.Upsample(scale_factor=0.5, mode='bilinear', align_corners=False),
                                  nn.Conv2d(in_channels, in_channels + s_factor, 1, stride=1, padding=0, bias=False))

    def forward(self, x):
        x = self.down(x)
        return x

class UpSample(nn.Module):
    def __init__(self, in_channels, s_factor):
        super(UpSample, self).__init__()
        self.up = nn.Sequential(nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
                                nn.Conv2d(in_channels + s_factor, in_channels, 1, stride=1, padding=0, bias=False))

    def forward(self, x):
        x = self.up(x)
        return x

class SkipUpSample(nn.Module):
    def __init__(self, in_channels, s_factor):
        super(SkipUpSample, self).__init__()
        self.up = nn.Sequential(nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
                                nn.Conv2d(in_channels + s_factor, in_channels, 1, stride=1, padding=0, bias=False))

    def forward(self, x, y):
        x = self.up(x)
        x = x + y
        return x

##########################################################################
# Mixed Residual Module
class Mix(nn.Module):
    def __init__(self, m=1):
        super(Mix, self).__init__()
        w = nn.Parameter(torch.FloatTensor([m]), requires_grad=True)
        w = nn.Parameter(w, requires_grad=True)
        self.w = w
        self.mix_block = nn.Sigmoid()

    def forward(self, fea1, fea2, feat3):
        factor = self.mix_block(self.w)
        other = (1 - factor) / 2
        output = fea1 * other.expand_as(fea1) + fea2 * factor.expand_as(fea2) + feat3 * other.expand_as(feat3)
        return output, factor

##########################################################################
# Architecture
class CMFNet(nn.Module):
    def __init__(self, in_c=3, out_c=3, n_feat=96, scale_unetfeats=48, kernel_size=3, reduction=4, bias=False):
        super(CMFNet, self).__init__()

        p_act = nn.PReLU()
        self.shallow_feat1 = nn.Sequential(conv(in_c, n_feat // 2, kernel_size, bias=bias), p_act,
                                           conv(n_feat // 2, n_feat, kernel_size, bias=bias))
        self.shallow_feat2 = nn.Sequential(conv(in_c, n_feat // 2, kernel_size, bias=bias), p_act,
                                           conv(n_feat // 2, n_feat, kernel_size, bias=bias))
        self.shallow_feat3 = nn.Sequential(conv(in_c, n_feat // 2, kernel_size, bias=bias), p_act,
                                           conv(n_feat // 2, n_feat, kernel_size, bias=bias))

        self.stage1_encoder = Encoder(n_feat, kernel_size, reduction, p_act, bias, scale_unetfeats, 'CAB')
        self.stage1_decoder = Decoder(n_feat, kernel_size, reduction, p_act, bias, scale_unetfeats, 'CAB')

        self.stage2_encoder = Encoder(n_feat, kernel_size, reduction, p_act, bias, scale_unetfeats, 'PAB')
        self.stage2_decoder = Decoder(n_feat, kernel_size, reduction, p_act, bias, scale_unetfeats, 'PAB')

        self.stage3_encoder = Encoder(n_feat, kernel_size, reduction, p_act, bias, scale_unetfeats, 'SAB')
        self.stage3_decoder = Decoder(n_feat, kernel_size, reduction, p_act, bias, scale_unetfeats, 'SAB')

        self.sam1o = SAM(n_feat, kernel_size=3, bias=bias)
        self.sam2o = SAM(n_feat, kernel_size=3, bias=bias)
        self.sam3o = SAM(n_feat, kernel_size=3, bias=bias)

        self.mix = Mix(1)
        self.add123 = conv(out_c, out_c, kernel_size, bias=bias)
        self.concat123 = conv(n_feat * 3, n_feat, kernel_size, bias=bias)
        self.tail = conv(n_feat, out_c, kernel_size, bias=bias)

    def forward(self, x):
        ## Compute Shallow Features
        shallow1 = self.shallow_feat1(x)
        shallow2 = self.shallow_feat2(x)
        shallow3 = self.shallow_feat3(x)

        ## Enter the UNet-CAB
        x1 = self.stage1_encoder(shallow1)
        x1_D = self.stage1_decoder(x1)
        ## Apply SAM
        x1_out, x1_img = self.sam1o(x1_D[0], x)

        ## Enter the UNet-PAB
        x2 = self.stage2_encoder(shallow2)
        x2_D = self.stage2_decoder(x2)
        ## Apply SAM
        x2_out, x2_img = self.sam2o(x2_D[0], x)

        ## Enter the UNet-SAB
        x3 = self.stage3_encoder(shallow3)
        x3_D = self.stage3_decoder(x3)
        ## Apply SAM
        x3_out, x3_img = self.sam3o(x3_D[0], x)

        ## Aggregate SAM features of Stage 1, Stage 2 and Stage 3
        mix_r = self.mix(x1_img, x2_img, x3_img)
        mixed_img = self.add123(mix_r[0])

        ## Concat SAM features of Stage 1, Stage 2 and Stage 3
        concat_feat = self.concat123(torch.cat([x1_out, x2_out, x3_out], 1))
        x_final = self.tail(concat_feat)

        return x_final + mixed_img
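For orientation, a minimal usage sketch of the CMFNet module above follows. It assumes the sibling model.block module (providing conv, SAM and the attention blocks) is importable from the deleted Space, which is not shown in this diff:

import torch
from model.CMFNet import CMFNet  # repo-relative import, as in the deleted Space

# Build the three-branch restoration network with its default widths and run a dummy image through it.
net = CMFNet(in_c=3, out_c=3, n_feat=96, scale_unetfeats=48)
dummy = torch.randn(1, 3, 256, 256)   # NCHW input
with torch.no_grad():
    restored = net(dummy)
print(restored.shape)                 # torch.Size([1, 3, 256, 256]); output keeps the input resolution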
spaces/52Hz/SUNet_AWGN_denoising/README.md
DELETED
@@ -1,37 +0,0 @@
---
title: SUNet_AWGN_denoising
emoji: 🌪
colorFrom: red
colorTo: yellow
sdk: gradio
app_file: app.py
pinned: false
---

# Configuration

`title`: _string_
Display title for the Space

`emoji`: _string_
Space emoji (emoji-only character allowed)

`colorFrom`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)

`colorTo`: _string_
Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray)

`sdk`: _string_
Can be either `gradio`, `streamlit`, or `static`

`sdk_version`: _string_
Only applicable for `streamlit` SDK.
See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions.

`app_file`: _string_
Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code).
Path is relative to the root of the repository.

`pinned`: _boolean_
Whether the Space stays on top of your list.
spaces/AI-ZTH-03-23/8.Datasets-NER-Biomed-ClinicalTerms/backup.app.py
DELETED
@@ -1,268 +0,0 @@
import gradio as gr
import pandas as pd
import json
from collections import defaultdict

# Create tokenizer for biomed model
from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("d4data/biomedical-ner-all")  # https://huggingface.co/d4data/biomedical-ner-all?text=asthma
model = AutoModelForTokenClassification.from_pretrained("d4data/biomedical-ner-all")
pipe = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")

# Matplotlib for entity graph
import matplotlib.pyplot as plt
plt.switch_backend("Agg")

# Load examples from JSON
import os

# Load terminology datasets:
basedir = os.path.dirname(__file__)
#dataLOINC = pd.read_csv(basedir + "\\" + f'LoincTableCore.csv')
#dataPanels = pd.read_csv(basedir + "\\" + f'PanelsAndForms-ACW1208Labeled.csv')
#dataSNOMED = pd.read_csv(basedir + "\\" + f'sct2_TextDefinition_Full-en_US1000124_20220901.txt',sep='\t')
#dataOMS = pd.read_csv(basedir + "\\" + f'SnomedOMS.csv')
#dataICD10 = pd.read_csv(basedir + "\\" + f'ICD10Diagnosis.csv')

dataLOINC = pd.read_csv(f'LoincTableCore.csv')
dataPanels = pd.read_csv(f'PanelsAndForms-ACW1208Labeled.csv')
dataSNOMED = pd.read_csv(f'sct2_TextDefinition_Full-en_US1000124_20220901.txt', sep='\t')
dataOMS = pd.read_csv(f'SnomedOMS.csv')
dataICD10 = pd.read_csv(f'ICD10Diagnosis.csv')

dir_path = os.path.dirname(os.path.realpath(__file__))
EXAMPLES = {}
#with open(dir_path + "\\" + "examples.json", "r") as f:
with open("examples.json", "r") as f:
    example_json = json.load(f)
    EXAMPLES = {x["text"]: x["label"] for x in example_json}

def MatchLOINC(name):
    #basedir = os.path.dirname(__file__)
    pd.set_option("display.max_rows", None)
    #data = pd.read_csv(basedir + "\\" + f'LoincTableCore.csv')
    data = dataLOINC
    swith = data.loc[data['COMPONENT'].str.contains(name, case=False, na=False)]
    return swith

def MatchLOINCPanelsandForms(name):
    #basedir = os.path.dirname(__file__)
    #data = pd.read_csv(basedir + "\\" + f'PanelsAndForms-ACW1208Labeled.csv')
    data = dataPanels
    # Assessment Name:
    #swith = data.loc[data['ParentName'].str.contains(name, case=False, na=False)]
    # Assessment Question:
    swith = data.loc[data['LoincName'].str.contains(name, case=False, na=False)]
    return swith

def MatchSNOMED(name):
    #basedir = os.path.dirname(__file__)
    #data = pd.read_csv(basedir + "\\" + f'sct2_TextDefinition_Full-en_US1000124_20220901.txt',sep='\t')
    data = dataSNOMED
    swith = data.loc[data['term'].str.contains(name, case=False, na=False)]
    return swith

def MatchOMS(name):
    #basedir = os.path.dirname(__file__)
    #data = pd.read_csv(basedir + "\\" + f'SnomedOMS.csv')
    data = dataOMS
    swith = data.loc[data['SNOMED CT'].str.contains(name, case=False, na=False)]
    return swith

def MatchICD10(name):
    #basedir = os.path.dirname(__file__)
    #data = pd.read_csv(basedir + "\\" + f'ICD10Diagnosis.csv')
    data = dataICD10
    swith = data.loc[data['Description'].str.contains(name, case=False, na=False)]
    return swith

def SaveResult(text, outputfileName):
    #try:
    basedir = os.path.dirname(__file__)
    savePath = outputfileName
    print("Saving: " + text + " to " + savePath)
    from os.path import exists
    file_exists = exists(savePath)
    if file_exists:
        with open(outputfileName, "a") as f:  # append
            #for line in text:
            f.write(str(text.replace("\n", " ")))
            f.write('\n')
    else:
        with open(outputfileName, "w") as f:  # write
            #for line in text:
            f.write(str(text.replace("\n", " ")))
            f.write('\n')
    #except ValueError as err:
    #    raise ValueError("File Save Error in SaveResult \n" + format_tb(err.__traceback__)[0] + err.args[0] + "\nEnd of error message.") from None

    return

def loadFile(filename):
    try:
        basedir = os.path.dirname(__file__)
        loadPath = basedir + "\\" + filename

        print("Loading: " + loadPath)

        from os.path import exists
        file_exists = exists(loadPath)

        if file_exists:
            with open(loadPath, "r") as f:  # read
                contents = f.read()
                print(contents)
                return contents

    except ValueError as err:
        raise ValueError("File Save Error in SaveResult \n" + format_tb(err.__traceback__)[0] + err.args[0] + "\nEnd of error message.") from None

    return ""

def get_today_filename():
    from datetime import datetime
    date = datetime.now().strftime("%Y_%m_%d-%I.%M.%S.%p")
    #print(f"filename_{date}") 'filename_2023_01_12-03-29-22_AM'
    return f"MedNER_{date}.csv"

def get_base(filename):
    basedir = os.path.dirname(__file__)
    loadPath = basedir + "\\" + filename
    #print("Loading: " + loadPath)
    return loadPath

def group_by_entity(raw):
    outputFile = get_base(get_today_filename())
    out = defaultdict(int)

    for ent in raw:
        out[ent["entity_group"]] += 1
        myEntityGroup = ent["entity_group"]
        print("Found entity group type: " + myEntityGroup)

        if (myEntityGroup in ['Sign_symptom', 'Detailed_description', 'History', 'Activity', 'Medication']):
            eterm = ent["word"].replace('#', '')
            minlength = 3
            if len(eterm) > minlength:
                print("Found eterm: " + eterm)
                eterm.replace("#", "")
                g1 = MatchLOINC(eterm)
                g2 = MatchLOINCPanelsandForms(eterm)
                g3 = MatchSNOMED(eterm)
                g4 = MatchOMS(eterm)
                g5 = MatchICD10(eterm)
                sAll = ""

                print("Saving to output file " + outputFile)
                # Create harmonisation output format of input to output code, name, Text

                try:  # 18 fields, output to labeled CSV dataset for results teaching on scored regret changes to action plan with data inputs
                    col = " 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19"

                    #LOINC
                    g11 = g1['LOINC_NUM'].to_string().replace(",", " ").replace("\n", " ")
                    g12 = g1['COMPONENT'].to_string().replace(",", " ").replace("\n", " ")
                    s1 = ("LOINC," + myEntityGroup + "," + eterm + ",questions of ," + g12 + "," + g11 + ", Label,Value, Label,Value, Label,Value ")
                    if g11 != 'Series([] )': SaveResult(s1, outputFile)

                    #LOINC Panels
                    g21 = g2['Loinc'].to_string().replace(",", " ").replace("\n", " ")
                    g22 = g2['LoincName'].to_string().replace(",", " ").replace("\n", " ")
                    g23 = g2['ParentLoinc'].to_string().replace(",", " ").replace("\n", " ")
                    g24 = g2['ParentName'].to_string().replace(",", " ").replace("\n", " ")
                    # s2 = ("LOINC Panel," + myEntityGroup + "," + eterm + ",name of ," + g22 + "," + g21 + ", and Parent codes of ," + g23 + ", with Parent names of ," + g24 + ", Label,Value ")
                    s2 = ("LOINC Panel," + myEntityGroup + "," + eterm + ",name of ," + g22 + "," + g21 + "," + g24 + ", and Parent codes of ," + g23 + "," + ", Label,Value ")
                    if g21 != 'Series([] )': SaveResult(s2, outputFile)

                    #SNOMED
                    g31 = g3['conceptId'].to_string().replace(",", " ").replace("\n", " ").replace("\l", " ").replace("\r", " ")
                    g32 = g3['term'].to_string().replace(",", " ").replace("\n", " ").replace("\l", " ").replace("\r", " ")
                    s3 = ("SNOMED Concept," + myEntityGroup + "," + eterm + ",terms of ," + g32 + "," + g31 + ", Label,Value, Label,Value, Label,Value ")
                    if g31 != 'Series([] )': SaveResult(s3, outputFile)

                    #OMS
                    g41 = g4['Omaha Code'].to_string().replace(",", " ").replace("\n", " ")
                    g42 = g4['SNOMED CT concept ID'].to_string().replace(",", " ").replace("\n", " ")
                    g43 = g4['SNOMED CT'].to_string().replace(",", " ").replace("\n", " ")
                    g44 = g4['PR'].to_string().replace(",", " ").replace("\n", " ")
                    g45 = g4['S&S'].to_string().replace(",", " ").replace("\n", " ")
                    s4 = ("OMS," + myEntityGroup + "," + eterm + ",concepts of ," + g44 + "," + g45 + ", and SNOMED codes of ," + g43 + ", and OMS problem of ," + g42 + ", and OMS Sign Symptom of ," + g41)
                    if g41 != 'Series([] )': SaveResult(s4, outputFile)

                    #ICD10
                    g51 = g5['Code'].to_string().replace(",", " ").replace("\n", " ")
                    g52 = g5['Description'].to_string().replace(",", " ").replace("\n", " ")
                    s5 = ("ICD10," + myEntityGroup + "," + eterm + ",descriptions of ," + g52 + "," + g51 + ", Label,Value, Label,Value, Label,Value ")
                    if g51 != 'Series([] )': SaveResult(s5, outputFile)

                except ValueError as err:
                    raise ValueError("Error in group by entity \n" + format_tb(err.__traceback__)[0] + err.args[0] + "\nEnd of error message.") from None

    return outputFile


def plot_to_figure(grouped):
    fig = plt.figure()
    plt.bar(x=list(grouped.keys()), height=list(grouped.values()))
    plt.margins(0.2)
    plt.subplots_adjust(bottom=0.4)
    plt.xticks(rotation=90)
    return fig


def ner(text):
    raw = pipe(text)
    ner_content = {
        "text": text,
        "entities": [
            {
                "entity": x["entity_group"],
                "word": x["word"],
                "score": x["score"],
                "start": x["start"],
                "end": x["end"],
            }
            for x in raw
        ],
    }

    outputFile = group_by_entity(raw)
    label = EXAMPLES.get(text, "Unknown")
    outputDataframe = pd.read_csv(outputFile)
    return (ner_content, outputDataframe, outputFile)

demo = gr.Blocks()
with demo:
    gr.Markdown(
        """
        # 🩺⚕️NLP Clinical Ontology Biomedical NER
        """
    )
    input = gr.Textbox(label="Note text", value="")

    with gr.Tab("Biomedical Entity Recognition"):
        output = [
            gr.HighlightedText(label="NER", combine_adjacent=True),
            #gr.JSON(label="Entity Counts"),
            #gr.Label(label="Rating"),
            #gr.Plot(label="Bar"),
            gr.Dataframe(label="Dataframe"),
            gr.File(label="File"),
        ]
        examples = list(EXAMPLES.keys())
        gr.Examples(examples, inputs=input)
        input.change(fn=ner, inputs=input, outputs=output)

    with gr.Tab("Clinical Terminology Resolution"):
        with gr.Row(variant="compact"):
            btnLOINC = gr.Button("LOINC")
            btnPanels = gr.Button("Panels")
            btnSNOMED = gr.Button("SNOMED")
            btnOMS = gr.Button("OMS")
            btnICD10 = gr.Button("ICD10")

        examples = list(EXAMPLES.keys())
        gr.Examples(examples, inputs=input)
        input.change(fn=ner, inputs=input, outputs=output)
#layout="vertical"
demo.launch(debug=True)
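For reference, the core of the app above is the Hugging Face token-classification pipeline; a minimal standalone sketch (the example sentence is illustrative only):

from transformers import pipeline

# "simple" aggregation merges word pieces into whole entity spans, each with an
# entity_group label, which is what group_by_entity() above iterates over.
ner = pipeline("ner", model="d4data/biomedical-ner-all", aggregation_strategy="simple")
for ent in ner("Patient reports chest pain and shortness of breath."):
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 3))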
spaces/AIGC-Audio/Make_An_Audio/vocoder/bigvgan/alias_free_torch/filter.py
DELETED
@@ -1,95 +0,0 @@
# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0
#   LICENSE is in incl_licenses directory.

import torch
import torch.nn as nn
import torch.nn.functional as F
import math

if 'sinc' in dir(torch):
    sinc = torch.sinc
else:
    # This code is adopted from adefossez's julius.core.sinc under the MIT License
    # https://adefossez.github.io/julius/julius/core.html
    #   LICENSE is in incl_licenses directory.
    def sinc(x: torch.Tensor):
        """
        Implementation of sinc, i.e. sin(pi * x) / (pi * x)
        __Warning__: Different to julius.sinc, the input is multiplied by `pi`!
        """
        return torch.where(x == 0,
                           torch.tensor(1., device=x.device, dtype=x.dtype),
                           torch.sin(math.pi * x) / math.pi / x)


# This code is adopted from adefossez's julius.lowpass.LowPassFilters under the MIT License
# https://adefossez.github.io/julius/julius/lowpass.html
#   LICENSE is in incl_licenses directory.
def kaiser_sinc_filter1d(cutoff, half_width, kernel_size):  # return filter [1,1,kernel_size]
    even = (kernel_size % 2 == 0)
    half_size = kernel_size // 2

    # For kaiser window
    delta_f = 4 * half_width
    A = 2.285 * (half_size - 1) * math.pi * delta_f + 7.95
    if A > 50.:
        beta = 0.1102 * (A - 8.7)
    elif A >= 21.:
        beta = 0.5842 * (A - 21)**0.4 + 0.07886 * (A - 21.)
    else:
        beta = 0.
    window = torch.kaiser_window(kernel_size, beta=beta, periodic=False)

    # ratio = 0.5/cutoff -> 2 * cutoff = 1 / ratio
    if even:
        time = (torch.arange(-half_size, half_size) + 0.5)
    else:
        time = torch.arange(kernel_size) - half_size
    if cutoff == 0:
        filter_ = torch.zeros_like(time)
    else:
        filter_ = 2 * cutoff * window * sinc(2 * cutoff * time)
        # Normalize filter to have sum = 1, otherwise we will have a small leakage
        # of the constant component in the input signal.
        filter_ /= filter_.sum()
    filter = filter_.view(1, 1, kernel_size)

    return filter


class LowPassFilter1d(nn.Module):
    def __init__(self,
                 cutoff=0.5,
                 half_width=0.6,
                 stride: int = 1,
                 padding: bool = True,
                 padding_mode: str = 'replicate',
                 kernel_size: int = 12):
        # kernel_size should be even number for stylegan3 setup,
        # in this implementation, odd number is also possible.
        super().__init__()
        if cutoff < -0.:
            raise ValueError("Minimum cutoff must be larger than zero.")
        if cutoff > 0.5:
            raise ValueError("A cutoff above 0.5 does not make sense.")
        self.kernel_size = kernel_size
        self.even = (kernel_size % 2 == 0)
        self.pad_left = kernel_size // 2 - int(self.even)
        self.pad_right = kernel_size // 2
        self.stride = stride
        self.padding = padding
        self.padding_mode = padding_mode
        filter = kaiser_sinc_filter1d(cutoff, half_width, kernel_size)
        self.register_buffer("filter", filter)

    # input [B, C, T]
    def forward(self, x):
        _, C, _ = x.shape

        if self.padding:
            x = F.pad(x, (self.pad_left, self.pad_right),
                      mode=self.padding_mode)
        out = F.conv1d(x, self.filter.expand(C, -1, -1),
                       stride=self.stride, groups=C)

        return out
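A minimal smoke test of the LowPassFilter1d module above; the shapes and cutoff are chosen for illustration only:

import torch

lpf = LowPassFilter1d(cutoff=0.25, half_width=0.6, kernel_size=12)
signal = torch.randn(2, 4, 1024)   # [batch, channels, time]
smoothed = lpf(signal)             # filtered channel-by-channel with the Kaiser-windowed sinc kernel
print(smoothed.shape)              # padding=True preserves the time length: torch.Size([2, 4, 1024])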
spaces/Aaaaaaaabdualh/poetry2023/app.py
DELETED
@@ -1,53 +0,0 @@
import gc
import gradio as gr
from transformers import pipeline, set_seed

pipe = pipeline('text-generation', framework='pt', model='akhooli/ap2023', tokenizer='akhooli/ap2023')
#gc.collect()
samples = [['أنت', 1.0, 50, 1.0, 1.0, 114],
           ['هل غادر', 1.0, 50, 1.0, 1.0, 114],
           ['ألا ليت', 1.0, 50, 1.0, 1.0, 114],
           ['يا قدس', 1.0, 50, 1.0, 1.0, 114],
           ['عيد بأية حال', 1.0, 50, 1.0, 1.0, 114],
           ['لكل شيء إذا ما', 1.0, 50, 1.0, 1.0, 114],
           ['.', 1.0, 50, 1.0, 1.0, 114]]

notes = """
- Enter a short prompt or select (click) one of the examples and click SEND
- Adjust parameters (temperture, top k, top p and penalty) through the slider (keep close to default values).
- For the same seed (randomness), the same output is regenerated if other parameters are fixed
- Clear and enter new prompt or select another example and SEND to regenerate
- The '.' means start a new line from no prompt (your prompt need not be long)
- Be patient: this runs on CPU (free tier)
- Feedback (Twitter): @akhooli (https://twitter.com/akhooli/status/1611025232201977859)
- Note/Disclaimer: may generate unaccepted or inappropriate content. Use at your own risk.
"""

def sayPoetry(prompt, temp=1.0, topk=50, topp=1.0, penalty=1.0, seed=114):
    if not int(seed) >= 0: seed = 114
    set_seed(seed)
    gen = pipe(prompt, max_length=96, do_sample=True, temperature=temp, top_k=topk, top_p=topp, repetition_penalty=penalty,
               min_length=64, no_repeat_ngram_size=3, return_full_text=True,
               num_beams=5, num_return_sequences=1)[0]["generated_text"]
    poetry = ""
    for line in gen.split('.')[:-1]:
        poetry += line  #+ "\n"
    return poetry

poetry = gr.Interface(fn=sayPoetry,
    inputs=[
        gr.Textbox(label="Enter short prompt or select from examples:"),
        gr.Slider(0.70, 1.2, step=0.01, value=1.0, label='control temperature'),
        gr.Slider(25, 100, step=1, value=50, label='control top k'),
        gr.Slider(0.80, 1.0, step=0.01, value=1.0, label='control top p'),
        gr.Slider(0.90, 1.50, step=0.01, value=1.0, label='control penalty'),
        gr.Number(value=139750, precision=0, label='Seed'),
    ],
    outputs=[gr.Textbox(label="Generated Poetry:")],

    allow_flagging='never',
    title='Arabic Poetry Generation Demo (updated Jan. 2023)',
    description="A simple demo of AI generated poetry based on 1M poems fine-tuned using AraGPT2 (be patient, runs on cpu)",
    examples=samples,
    cache_examples=False,
    article=notes)
poetry.launch()  # show_error = True, debug=True
spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/quantization/__init__.py
DELETED
@@ -1,9 +0,0 @@
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

# flake8: noqa
from .vq import ResidualVectorQuantizer
from .base import BaseQuantizer, DummyQuantizer, QuantizedResult
spaces/AgentVerse/agentVerse/agentverse/agents/simulation_agent/conversation.py
DELETED
@@ -1,107 +0,0 @@
from __future__ import annotations
from colorama import Fore

# import logging
from agentverse.logging import get_logger
import bdb
from string import Template
from typing import TYPE_CHECKING, List

from agentverse.message import Message

#from . import agent_registry
#from .base import BaseAgent
from agentverse.agents import agent_registry
from agentverse.agents.base import BaseAgent

logger = get_logger()


@agent_registry.register("conversation")
class ConversationAgent(BaseAgent):
    def step(self, env_description: str = "") -> Message:
        prompt = self._fill_prompt_template(env_description)

        parsed_response = None
        for i in range(self.max_retry):
            try:
                response = self.llm.generate_response(prompt)
                parsed_response = self.output_parser.parse(response)
                break
            except KeyboardInterrupt:
                raise
            except Exception as e:
                logger.error(e)
                logger.warn("Retrying...")
                continue

        if parsed_response is None:
            logger.error(f"{self.name} failed to generate valid response.")

        message = Message(
            content=""
            if parsed_response is None
            else parsed_response.return_values["output"],
            sender=self.name,
            receiver=self.get_receiver(),
        )
        return message

    async def astep(self, env_description: str = "") -> Message:
        """Asynchronous version of step"""
        prompt = self._fill_prompt_template(env_description)

        parsed_response = None
        for i in range(self.max_retry):
            try:
                # if self.name == "Code Reviewer":
                logger.debug(prompt, "Prompt", Fore.CYAN)
                response = await self.llm.agenerate_response(prompt)

                # logging.info(f"{self.name}'s request result:"
                #              f" {response.content}")
                parsed_response = self.output_parser.parse(response)
                break
            except (KeyboardInterrupt, bdb.BdbQuit):
                raise
            except Exception as e:
                logger.error(e)
                logger.warning("Retrying...")
                continue

        if parsed_response is None:
            logger.error(f"{self.name} failed to generate valid response.")

        message = Message(
            content=""
            if parsed_response is None
            else parsed_response.return_values["output"],
            sender=self.name,
            receiver=self.get_receiver(),
        )
        return message

    def _fill_prompt_template(self, env_description: str = "") -> str:
        """Fill the placeholders in the prompt template

        In the conversation agent, these placeholders are supported:
        - ${agent_name}: the name of the agent
        - ${env_description}: the description of the environment
        - ${role_description}: the description of the role of the agent
        - ${chat_history}: the chat history of the agent
        """
        input_arguments = {
            "agent_name": self.name,
            "env_description": env_description,
            "role_description": self.role_description,
            "chat_history": self.memory.to_string(add_sender_prefix=True),
        }
        return Template(self.prompt_template).safe_substitute(input_arguments)

    def add_message_to_memory(self, messages: List[Message]) -> None:
        self.memory.add_message(messages)

    def reset(self) -> None:
        """Reset the agent"""
        self.memory.reset()
        # TODO: reset receiver
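As background for _fill_prompt_template above, Python's string.Template.safe_substitute leaves unknown placeholders untouched instead of raising. A standalone sketch (the template text is an illustrative assumption, not taken from AgentVerse):

from string import Template

prompt_template = "You are ${agent_name}. ${role_description}\nChat history:\n${chat_history}"
filled = Template(prompt_template).safe_substitute({
    "agent_name": "Alice",
    "role_description": "You review pull requests.",
    # "chat_history" omitted on purpose: safe_substitute keeps ${chat_history} in place.
})
print(filled)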
spaces/AgentVerse/agentVerse/agentverse/environments/tasksolving_env/rules/evaluator/__init__.py
DELETED
@@ -1,6 +0,0 @@
from agentverse.registry import Registry

evaluator_registry = Registry(name="EvaluatorRegistry")

from .base import BaseEvaluator, NoneEvaluator
from .basic import BasicEvaluator
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorpicker/ColorPicker.d.ts
DELETED
@@ -1,38 +0,0 @@
import Sizer from '../../sizer/Sizer';

export default ColorPicker;

declare namespace ColorPicker {
    interface IConfig extends Sizer.IConfig {
        background?: Phaser.GameObjects.GameObject,

        hPalette?: {
            position?: 0 | 1 | 2 | 3 | 'bottom' | 'left' | 'top' | 'right',
            size?: number, width?: number, height?: number,
        },

        svPalette?: {
            width?: number, height?: number,
        },

        valuechangeCallback: (newValue: number, oldValue: number, colorPicker: ColorPicker) => void,
        valuechangeCallbackScope?: Object,

        value?: number,
    }
}

declare class ColorPicker extends Sizer {
    constructor(
        scene: Phaser.Scene,
        config?: ColorPicker.IConfig
    );

    setValue(value: number): this;
    value: number;

    setColor(color: number): this;
    color: number;
}
spaces/AlexZou/Deploy_Restoration/model/IAT_main.py
DELETED
@@ -1,133 +0,0 @@
import torch
import numpy as np
from torch import nn
import torch.nn.functional as F
import os
import math

from timm.models.layers import trunc_normal_
from model.blocks import CBlock_ln, SwinTransformerBlock
from model.global_net import Global_pred

class Local_pred(nn.Module):
    def __init__(self, dim=16, number=4, type='ccc'):
        super(Local_pred, self).__init__()
        # initial convolution
        self.conv1 = nn.Conv2d(3, dim, 3, padding=1, groups=1)
        self.relu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
        # main blocks
        block = CBlock_ln(dim)
        block_t = SwinTransformerBlock(dim)  # head number
        if type == 'ccc':
            #blocks1, blocks2 = [block for _ in range(number)], [block for _ in range(number)]
            blocks1 = [CBlock_ln(16, drop_path=0.01), CBlock_ln(16, drop_path=0.05), CBlock_ln(16, drop_path=0.1)]
            blocks2 = [CBlock_ln(16, drop_path=0.01), CBlock_ln(16, drop_path=0.05), CBlock_ln(16, drop_path=0.1)]
        elif type == 'ttt':
            blocks1, blocks2 = [block_t for _ in range(number)], [block_t for _ in range(number)]
        elif type == 'cct':
            blocks1, blocks2 = [block, block, block_t], [block, block, block_t]
        # block1 = [CBlock_ln(16), nn.Conv2d(16,24,3,1,1)]
        self.mul_blocks = nn.Sequential(*blocks1, nn.Conv2d(dim, 3, 3, 1, 1), nn.ReLU())
        self.add_blocks = nn.Sequential(*blocks2, nn.Conv2d(dim, 3, 3, 1, 1), nn.Tanh())

    def forward(self, img):
        img1 = self.relu(self.conv1(img))
        mul = self.mul_blocks(img1)
        add = self.add_blocks(img1)

        return mul, add

# Short Cut Connection on Final Layer
class Local_pred_S(nn.Module):
    def __init__(self, in_dim=3, dim=16, number=4, type='ccc'):
        super(Local_pred_S, self).__init__()
        # initial convolution
        self.conv1 = nn.Conv2d(in_dim, dim, 3, padding=1, groups=1)
        self.relu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
        # main blocks
        block = CBlock_ln(dim)
        block_t = SwinTransformerBlock(dim)  # head number
        if type == 'ccc':
            blocks1 = [CBlock_ln(16, drop_path=0.01), CBlock_ln(16, drop_path=0.05), CBlock_ln(16, drop_path=0.1)]
            blocks2 = [CBlock_ln(16, drop_path=0.01), CBlock_ln(16, drop_path=0.05), CBlock_ln(16, drop_path=0.1)]
        elif type == 'ttt':
            blocks1, blocks2 = [block_t for _ in range(number)], [block_t for _ in range(number)]
        elif type == 'cct':
            blocks1, blocks2 = [block, block, block_t], [block, block, block_t]
        # block1 = [CBlock_ln(16), nn.Conv2d(16,24,3,1,1)]
        self.mul_blocks = nn.Sequential(*blocks1)
        self.add_blocks = nn.Sequential(*blocks2)

        self.mul_end = nn.Sequential(nn.Conv2d(dim, 3, 3, 1, 1), nn.ReLU())
        self.add_end = nn.Sequential(nn.Conv2d(dim, 3, 3, 1, 1), nn.Tanh())
        self.apply(self._init_weights)

    def _init_weights(self, m):
        if isinstance(m, nn.Linear):
            trunc_normal_(m.weight, std=.02)
            if isinstance(m, nn.Linear) and m.bias is not None:
                nn.init.constant_(m.bias, 0)
        elif isinstance(m, nn.LayerNorm):
            nn.init.constant_(m.bias, 0)
            nn.init.constant_(m.weight, 1.0)
        elif isinstance(m, nn.Conv2d):
            fan_out = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
            fan_out //= m.groups
            m.weight.data.normal_(0, math.sqrt(2.0 / fan_out))
            if m.bias is not None:
                m.bias.data.zero_()

    def forward(self, img):
        img1 = self.relu(self.conv1(img))
        # short cut connection
        mul = self.mul_blocks(img1) + img1
        add = self.add_blocks(img1) + img1
        mul = self.mul_end(mul)
        add = self.add_end(add)

        return mul, add

class IAT(nn.Module):
    def __init__(self, in_dim=3, with_global=True, type='lol'):
        super(IAT, self).__init__()
        #self.local_net = Local_pred()

        self.local_net = Local_pred_S(in_dim=in_dim)

        self.with_global = with_global
        if self.with_global:
            self.global_net = Global_pred(in_channels=in_dim, type=type)

    def apply_color(self, image, ccm):
        shape = image.shape
        image = image.view(-1, 3)
        image = torch.tensordot(image, ccm, dims=[[-1], [-1]])
        image = image.view(shape)
        return torch.clamp(image, 1e-8, 1.0)

    def forward(self, img_low):
        #print(self.with_global)
        mul, add = self.local_net(img_low)
        img_high = (img_low.mul(mul)).add(add)

        if not self.with_global:
            return img_high

        else:
            gamma, color = self.global_net(img_low)
            b = img_high.shape[0]
            img_high = img_high.permute(0, 2, 3, 1)  # (B,C,H,W) -- (B,H,W,C)
            img_high = torch.stack([self.apply_color(img_high[i, :, :, :], color[i, :, :])**gamma[i, :] for i in range(b)], dim=0)
            img_high = img_high.permute(0, 3, 1, 2)  # (B,H,W,C) -- (B,C,H,W)
            return img_high


if __name__ == "__main__":
    os.environ['CUDA_VISIBLE_DEVICES'] = '3'
    img = torch.Tensor(1, 3, 400, 600)
    net = IAT()
    print('total parameters:', sum(param.numel() for param in net.parameters()))
    _, _, high = net(img)
spaces/Altinas/vits-uma-genshin-honkais/attentions.py
DELETED
@@ -1,300 +0,0 @@
import math
import torch
from torch import nn
from torch.nn import functional as F

import commons
from modules import LayerNorm


class Encoder(nn.Module):
  def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs):
    super().__init__()
    self.hidden_channels = hidden_channels
    self.filter_channels = filter_channels
    self.n_heads = n_heads
    self.n_layers = n_layers
    self.kernel_size = kernel_size
    self.p_dropout = p_dropout
    self.window_size = window_size

    self.drop = nn.Dropout(p_dropout)
    self.attn_layers = nn.ModuleList()
    self.norm_layers_1 = nn.ModuleList()
    self.ffn_layers = nn.ModuleList()
    self.norm_layers_2 = nn.ModuleList()
    for i in range(self.n_layers):
      self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size))
      self.norm_layers_1.append(LayerNorm(hidden_channels))
      self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout))
      self.norm_layers_2.append(LayerNorm(hidden_channels))

  def forward(self, x, x_mask):
    attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
    x = x * x_mask
    for i in range(self.n_layers):
      y = self.attn_layers[i](x, x, attn_mask)
      y = self.drop(y)
      x = self.norm_layers_1[i](x + y)

      y = self.ffn_layers[i](x, x_mask)
      y = self.drop(y)
      x = self.norm_layers_2[i](x + y)
    x = x * x_mask
    return x


class Decoder(nn.Module):
  def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs):
    super().__init__()
    self.hidden_channels = hidden_channels
    self.filter_channels = filter_channels
    self.n_heads = n_heads
    self.n_layers = n_layers
    self.kernel_size = kernel_size
    self.p_dropout = p_dropout
    self.proximal_bias = proximal_bias
    self.proximal_init = proximal_init

    self.drop = nn.Dropout(p_dropout)
    self.self_attn_layers = nn.ModuleList()
    self.norm_layers_0 = nn.ModuleList()
    self.encdec_attn_layers = nn.ModuleList()
    self.norm_layers_1 = nn.ModuleList()
    self.ffn_layers = nn.ModuleList()
    self.norm_layers_2 = nn.ModuleList()
    for i in range(self.n_layers):
      self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init))
      self.norm_layers_0.append(LayerNorm(hidden_channels))
      self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout))
      self.norm_layers_1.append(LayerNorm(hidden_channels))
      self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True))
      self.norm_layers_2.append(LayerNorm(hidden_channels))

  def forward(self, x, x_mask, h, h_mask):
    """
    x: decoder input
    h: encoder output
    """
    self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype)
    encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1)
    x = x * x_mask
    for i in range(self.n_layers):
      y = self.self_attn_layers[i](x, x, self_attn_mask)
      y = self.drop(y)
      x = self.norm_layers_0[i](x + y)

      y = self.encdec_attn_layers[i](x, h, encdec_attn_mask)
      y = self.drop(y)
      x = self.norm_layers_1[i](x + y)

      y = self.ffn_layers[i](x, x_mask)
      y = self.drop(y)
      x = self.norm_layers_2[i](x + y)
    x = x * x_mask
    return x


class MultiHeadAttention(nn.Module):
  def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False):
    super().__init__()
    assert channels % n_heads == 0

    self.channels = channels
    self.out_channels = out_channels
    self.n_heads = n_heads
    self.p_dropout = p_dropout
    self.window_size = window_size
    self.heads_share = heads_share
    self.block_length = block_length
    self.proximal_bias = proximal_bias
    self.proximal_init = proximal_init
    self.attn = None

    self.k_channels = channels // n_heads
    self.conv_q = nn.Conv1d(channels, channels, 1)
    self.conv_k = nn.Conv1d(channels, channels, 1)
    self.conv_v = nn.Conv1d(channels, channels, 1)
    self.conv_o = nn.Conv1d(channels, out_channels, 1)
    self.drop = nn.Dropout(p_dropout)

    if window_size is not None:
      n_heads_rel = 1 if heads_share else n_heads
      rel_stddev = self.k_channels**-0.5
      self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)
      self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev)

    nn.init.xavier_uniform_(self.conv_q.weight)
    nn.init.xavier_uniform_(self.conv_k.weight)
    nn.init.xavier_uniform_(self.conv_v.weight)
    if proximal_init:
      with torch.no_grad():
        self.conv_k.weight.copy_(self.conv_q.weight)
        self.conv_k.bias.copy_(self.conv_q.bias)

  def forward(self, x, c, attn_mask=None):
    q = self.conv_q(x)
    k = self.conv_k(c)
    v = self.conv_v(c)

    x, self.attn = self.attention(q, k, v, mask=attn_mask)

    x = self.conv_o(x)
    return x

  def attention(self, query, key, value, mask=None):
    # reshape [b, d, t] -> [b, n_h, t, d_k]
    b, d, t_s, t_t = (*key.size(), query.size(2))
    query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3)
    key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)
    value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3)

    scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1))
    if self.window_size is not None:
      assert t_s == t_t, "Relative attention is only available for self-attention."
      key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s)
      rel_logits = self._matmul_with_relative_keys(query / math.sqrt(self.k_channels), key_relative_embeddings)
      scores_local = self._relative_position_to_absolute_position(rel_logits)
      scores = scores + scores_local
    if self.proximal_bias:
      assert t_s == t_t, "Proximal bias is only available for self-attention."
      scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype)
    if mask is not None:
      scores = scores.masked_fill(mask == 0, -1e4)
      if self.block_length is not None:
        assert t_s == t_t, "Local attention is only available for self-attention."
        block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length)
        scores = scores.masked_fill(block_mask == 0, -1e4)
    p_attn = F.softmax(scores, dim=-1)  # [b, n_h, t_t, t_s]
    p_attn = self.drop(p_attn)
    output = torch.matmul(p_attn, value)
    if self.window_size is not None:
      relative_weights = self._absolute_position_to_relative_position(p_attn)
|
173 |
-
value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s)
|
174 |
-
output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings)
|
175 |
-
output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t]
|
176 |
-
return output, p_attn
|
177 |
-
|
178 |
-
def _matmul_with_relative_values(self, x, y):
|
179 |
-
"""
|
180 |
-
x: [b, h, l, m]
|
181 |
-
y: [h or 1, m, d]
|
182 |
-
ret: [b, h, l, d]
|
183 |
-
"""
|
184 |
-
ret = torch.matmul(x, y.unsqueeze(0))
|
185 |
-
return ret
|
186 |
-
|
187 |
-
def _matmul_with_relative_keys(self, x, y):
|
188 |
-
"""
|
189 |
-
x: [b, h, l, d]
|
190 |
-
y: [h or 1, m, d]
|
191 |
-
ret: [b, h, l, m]
|
192 |
-
"""
|
193 |
-
ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1))
|
194 |
-
return ret
|
195 |
-
|
196 |
-
def _get_relative_embeddings(self, relative_embeddings, length):
|
197 |
-
max_relative_position = 2 * self.window_size + 1
|
198 |
-
# Pad first before slice to avoid using cond ops.
|
199 |
-
pad_length = max(length - (self.window_size + 1), 0)
|
200 |
-
slice_start_position = max((self.window_size + 1) - length, 0)
|
201 |
-
slice_end_position = slice_start_position + 2 * length - 1
|
202 |
-
if pad_length > 0:
|
203 |
-
padded_relative_embeddings = F.pad(
|
204 |
-
relative_embeddings,
|
205 |
-
commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]))
|
206 |
-
else:
|
207 |
-
padded_relative_embeddings = relative_embeddings
|
208 |
-
used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position]
|
209 |
-
return used_relative_embeddings
|
210 |
-
|
211 |
-
def _relative_position_to_absolute_position(self, x):
|
212 |
-
"""
|
213 |
-
x: [b, h, l, 2*l-1]
|
214 |
-
ret: [b, h, l, l]
|
215 |
-
"""
|
216 |
-
batch, heads, length, _ = x.size()
|
217 |
-
# Concat columns of pad to shift from relative to absolute indexing.
|
218 |
-
x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]]))
|
219 |
-
|
220 |
-
# Concat extra elements so to add up to shape (len+1, 2*len-1).
|
221 |
-
x_flat = x.view([batch, heads, length * 2 * length])
|
222 |
-
x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]]))
|
223 |
-
|
224 |
-
# Reshape and slice out the padded elements.
|
225 |
-
x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:]
|
226 |
-
return x_final
|
227 |
-
|
228 |
-
def _absolute_position_to_relative_position(self, x):
|
229 |
-
"""
|
230 |
-
x: [b, h, l, l]
|
231 |
-
ret: [b, h, l, 2*l-1]
|
232 |
-
"""
|
233 |
-
batch, heads, length, _ = x.size()
|
234 |
-
# padd along column
|
235 |
-
x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]]))
|
236 |
-
x_flat = x.view([batch, heads, length**2 + length*(length -1)])
|
237 |
-
# add 0's in the beginning that will skew the elements after reshape
|
238 |
-
x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]]))
|
239 |
-
x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:]
|
240 |
-
return x_final
|
241 |
-
|
242 |
-
def _attention_bias_proximal(self, length):
|
243 |
-
"""Bias for self-attention to encourage attention to close positions.
|
244 |
-
Args:
|
245 |
-
length: an integer scalar.
|
246 |
-
Returns:
|
247 |
-
a Tensor with shape [1, 1, length, length]
|
248 |
-
"""
|
249 |
-
r = torch.arange(length, dtype=torch.float32)
|
250 |
-
diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1)
|
251 |
-
return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0)
|
252 |
-
|
253 |
-
|
254 |
-
class FFN(nn.Module):
|
255 |
-
def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False):
|
256 |
-
super().__init__()
|
257 |
-
self.in_channels = in_channels
|
258 |
-
self.out_channels = out_channels
|
259 |
-
self.filter_channels = filter_channels
|
260 |
-
self.kernel_size = kernel_size
|
261 |
-
self.p_dropout = p_dropout
|
262 |
-
self.activation = activation
|
263 |
-
self.causal = causal
|
264 |
-
|
265 |
-
if causal:
|
266 |
-
self.padding = self._causal_padding
|
267 |
-
else:
|
268 |
-
self.padding = self._same_padding
|
269 |
-
|
270 |
-
self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size)
|
271 |
-
self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size)
|
272 |
-
self.drop = nn.Dropout(p_dropout)
|
273 |
-
|
274 |
-
def forward(self, x, x_mask):
|
275 |
-
x = self.conv_1(self.padding(x * x_mask))
|
276 |
-
if self.activation == "gelu":
|
277 |
-
x = x * torch.sigmoid(1.702 * x)
|
278 |
-
else:
|
279 |
-
x = torch.relu(x)
|
280 |
-
x = self.drop(x)
|
281 |
-
x = self.conv_2(self.padding(x * x_mask))
|
282 |
-
return x * x_mask
|
283 |
-
|
284 |
-
def _causal_padding(self, x):
|
285 |
-
if self.kernel_size == 1:
|
286 |
-
return x
|
287 |
-
pad_l = self.kernel_size - 1
|
288 |
-
pad_r = 0
|
289 |
-
padding = [[0, 0], [0, 0], [pad_l, pad_r]]
|
290 |
-
x = F.pad(x, commons.convert_pad_shape(padding))
|
291 |
-
return x
|
292 |
-
|
293 |
-
def _same_padding(self, x):
|
294 |
-
if self.kernel_size == 1:
|
295 |
-
return x
|
296 |
-
pad_l = (self.kernel_size - 1) // 2
|
297 |
-
pad_r = self.kernel_size // 2
|
298 |
-
padding = [[0, 0], [0, 0], [pad_l, pad_r]]
|
299 |
-
x = F.pad(x, commons.convert_pad_shape(padding))
|
300 |
-
return x
|
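Note (editor's sketch, not part of the deleted file): the modules above take inputs of shape [batch, channels, time] and masks of shape [batch, 1, time] with 1 marking valid frames. A minimal usage sketch, assuming the Encoder defined earlier in this file and arbitrary illustrative dimensions:

import torch

B, C, T = 2, 192, 50                # batch, hidden channels, frames (illustrative)
x = torch.randn(B, C, T)            # encoder input, [b, d, t]
x_mask = torch.ones(B, 1, T)        # all frames valid in this toy example

enc = Encoder(hidden_channels=C, filter_channels=768, n_heads=2,
              n_layers=6, kernel_size=3, p_dropout=0.1, window_size=4)
h = enc(x, x_mask)                  # [2, 192, 50]; masked positions are zeroed
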
spaces/Ameaou/academic-chatgpt3.1/crazy_functions/test_project/cpp/cppipc/prod_cons.h
DELETED
@@ -1,433 +0,0 @@
#pragma once

#include <atomic>
#include <utility>
#include <cstring>
#include <type_traits>
#include <cstdint>

#include "libipc/def.h"

#include "libipc/platform/detail.h"
#include "libipc/circ/elem_def.h"
#include "libipc/utility/log.h"
#include "libipc/utility/utility.h"

namespace ipc {

////////////////////////////////////////////////////////////////
/// producer-consumer implementation
////////////////////////////////////////////////////////////////

template <typename Flag>
struct prod_cons_impl;

template <>
struct prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {

    template <std::size_t DataSize, std::size_t AlignSize>
    struct elem_t {
        std::aligned_storage_t<DataSize, AlignSize> data_ {};
    };

    alignas(cache_line_size) std::atomic<circ::u2_t> rd_; // read index
    alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index

    constexpr circ::u2_t cursor() const noexcept {
        return 0;
    }

    template <typename W, typename F, typename E>
    bool push(W* /*wrapper*/, F&& f, E* elems) {
        auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed));
        if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) {
            return false; // full
        }
        std::forward<F>(f)(&(elems[cur_wt].data_));
        wt_.fetch_add(1, std::memory_order_release);
        return true;
    }

    /**
     * In single-single-unicast, 'force_push' means 'no reader' or 'the only one reader is dead'.
     * So we could just disconnect all connections of receiver, and return false.
     */
    template <typename W, typename F, typename E>
    bool force_push(W* wrapper, F&&, E*) {
        wrapper->elems()->disconnect_receiver(~static_cast<circ::cc_t>(0u));
        return false;
    }

    template <typename W, typename F, typename R, typename E>
    bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) {
        auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed));
        if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) {
            return false; // empty
        }
        std::forward<F>(f)(&(elems[cur_rd].data_));
        std::forward<R>(out)(true);
        rd_.fetch_add(1, std::memory_order_release);
        return true;
    }
};

template <>
struct prod_cons_impl<wr<relat::single, relat::multi , trans::unicast>>
    : prod_cons_impl<wr<relat::single, relat::single, trans::unicast>> {

    template <typename W, typename F, typename E>
    bool force_push(W* wrapper, F&&, E*) {
        wrapper->elems()->disconnect_receiver(1);
        return false;
    }

    template <typename W, typename F, typename R,
              template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
    bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
        byte_t buff[DS];
        for (unsigned k = 0;;) {
            auto cur_rd = rd_.load(std::memory_order_relaxed);
            if (circ::index_of(cur_rd) ==
                circ::index_of(wt_.load(std::memory_order_acquire))) {
                return false; // empty
            }
            std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
            if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
                std::forward<F>(f)(buff);
                std::forward<R>(out)(true);
                return true;
            }
            ipc::yield(k);
        }
    }
};

template <>
struct prod_cons_impl<wr<relat::multi , relat::multi, trans::unicast>>
    : prod_cons_impl<wr<relat::single, relat::multi, trans::unicast>> {

    using flag_t = std::uint64_t;

    template <std::size_t DataSize, std::size_t AlignSize>
    struct elem_t {
        std::aligned_storage_t<DataSize, AlignSize> data_ {};
        std::atomic<flag_t> f_ct_ { 0 }; // commit flag
    };

    alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index

    template <typename W, typename F, typename E>
    bool push(W* /*wrapper*/, F&& f, E* elems) {
        circ::u2_t cur_ct, nxt_ct;
        for (unsigned k = 0;;) {
            cur_ct = ct_.load(std::memory_order_relaxed);
            if (circ::index_of(nxt_ct = cur_ct + 1) ==
                circ::index_of(rd_.load(std::memory_order_acquire))) {
                return false; // full
            }
            if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) {
                break;
            }
            ipc::yield(k);
        }
        auto* el = elems + circ::index_of(cur_ct);
        std::forward<F>(f)(&(el->data_));
        // set flag & try update wt
        el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
        while (1) {
            auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
            if (cur_ct != wt_.load(std::memory_order_relaxed)) {
                return true;
            }
            if ((~cac_ct) != cur_ct) {
                return true;
            }
            if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) {
                return true;
            }
            wt_.store(nxt_ct, std::memory_order_release);
            cur_ct = nxt_ct;
            nxt_ct = cur_ct + 1;
            el = elems + circ::index_of(cur_ct);
        }
        return true;
    }

    template <typename W, typename F, typename E>
    bool force_push(W* wrapper, F&&, E*) {
        wrapper->elems()->disconnect_receiver(1);
        return false;
    }

    template <typename W, typename F, typename R,
              template <std::size_t, std::size_t> class E, std::size_t DS, std::size_t AS>
    bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E<DS, AS>* elems) {
        byte_t buff[DS];
        for (unsigned k = 0;;) {
            auto cur_rd = rd_.load(std::memory_order_relaxed);
            auto cur_wt = wt_.load(std::memory_order_acquire);
            auto id_rd = circ::index_of(cur_rd);
            auto id_wt = circ::index_of(cur_wt);
            if (id_rd == id_wt) {
                auto* el = elems + id_wt;
                auto cac_ct = el->f_ct_.load(std::memory_order_acquire);
                if ((~cac_ct) != cur_wt) {
                    return false; // empty
                }
                if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) {
                    wt_.store(cur_wt + 1, std::memory_order_release);
                }
                k = 0;
            }
            else {
                std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff));
                if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) {
                    std::forward<F>(f)(buff);
                    std::forward<R>(out)(true);
                    return true;
                }
                ipc::yield(k);
            }
        }
    }
};

template <>
struct prod_cons_impl<wr<relat::single, relat::multi, trans::broadcast>> {

    using rc_t = std::uint64_t;

    enum : rc_t {
        ep_mask = 0x00000000ffffffffull,
        ep_incr = 0x0000000100000000ull
    };

    template <std::size_t DataSize, std::size_t AlignSize>
    struct elem_t {
        std::aligned_storage_t<DataSize, AlignSize> data_ {};
        std::atomic<rc_t> rc_ { 0 }; // read-counter
    };

    alignas(cache_line_size) std::atomic<circ::u2_t> wt_; // write index
    alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer

    circ::u2_t cursor() const noexcept {
        return wt_.load(std::memory_order_acquire);
    }

    template <typename W, typename F, typename E>
    bool push(W* wrapper, F&& f, E* elems) {
        E* el;
        for (unsigned k = 0;;) {
            circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
            if (cc == 0) return false; // no reader
            el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
            // check all consumers have finished reading this element
            auto cur_rc = el->rc_.load(std::memory_order_acquire);
            circ::cc_t rem_cc = cur_rc & ep_mask;
            if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) {
                return false; // has not finished yet
            }
            // consider rem_cc to be 0 here
            if (el->rc_.compare_exchange_weak(
                    cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
                break;
            }
            ipc::yield(k);
        }
        std::forward<F>(f)(&(el->data_));
        wt_.fetch_add(1, std::memory_order_release);
        return true;
    }

    template <typename W, typename F, typename E>
    bool force_push(W* wrapper, F&& f, E* elems) {
        E* el;
        epoch_ += ep_incr;
        for (unsigned k = 0;;) {
            circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
            if (cc == 0) return false; // no reader
            el = elems + circ::index_of(wt_.load(std::memory_order_relaxed));
            // check all consumers have finished reading this element
            auto cur_rc = el->rc_.load(std::memory_order_acquire);
            circ::cc_t rem_cc = cur_rc & ep_mask;
            if (cc & rem_cc) {
                ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
                cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
                if (cc == 0) return false; // no reader
            }
            // just compare & exchange
            if (el->rc_.compare_exchange_weak(
                    cur_rc, epoch_ | static_cast<rc_t>(cc), std::memory_order_release)) {
                break;
            }
            ipc::yield(k);
        }
        std::forward<F>(f)(&(el->data_));
        wt_.fetch_add(1, std::memory_order_release);
        return true;
    }

    template <typename W, typename F, typename R, typename E>
    bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) {
        if (cur == cursor()) return false; // acquire
        auto* el = elems + circ::index_of(cur++);
        std::forward<F>(f)(&(el->data_));
        for (unsigned k = 0;;) {
            auto cur_rc = el->rc_.load(std::memory_order_acquire);
            if ((cur_rc & ep_mask) == 0) {
                std::forward<R>(out)(true);
                return true;
            }
            auto nxt_rc = cur_rc & ~static_cast<rc_t>(wrapper->connected_id());
            if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
                std::forward<R>(out)((nxt_rc & ep_mask) == 0);
                return true;
            }
            ipc::yield(k);
        }
    }
};

template <>
struct prod_cons_impl<wr<relat::multi, relat::multi, trans::broadcast>> {

    using rc_t   = std::uint64_t;
    using flag_t = std::uint64_t;

    enum : rc_t {
        rc_mask = 0x00000000ffffffffull,
        ep_mask = 0x00ffffffffffffffull,
        ep_incr = 0x0100000000000000ull,
        ic_mask = 0xff000000ffffffffull,
        ic_incr = 0x0000000100000000ull
    };

    template <std::size_t DataSize, std::size_t AlignSize>
    struct elem_t {
        std::aligned_storage_t<DataSize, AlignSize> data_ {};
        std::atomic<rc_t  > rc_   { 0 }; // read-counter
        std::atomic<flag_t> f_ct_ { 0 }; // commit flag
    };

    alignas(cache_line_size) std::atomic<circ::u2_t> ct_; // commit index
    alignas(cache_line_size) std::atomic<rc_t> epoch_ { 0 };

    circ::u2_t cursor() const noexcept {
        return ct_.load(std::memory_order_acquire);
    }

    constexpr static rc_t inc_rc(rc_t rc) noexcept {
        return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask);
    }

    constexpr static rc_t inc_mask(rc_t rc) noexcept {
        return inc_rc(rc) & ~rc_mask;
    }

    template <typename W, typename F, typename E>
    bool push(W* wrapper, F&& f, E* elems) {
        E* el;
        circ::u2_t cur_ct;
        rc_t epoch = epoch_.load(std::memory_order_acquire);
        for (unsigned k = 0;;) {
            circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
            if (cc == 0) return false; // no reader
            el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
            // check all consumers have finished reading this element
            auto cur_rc = el->rc_.load(std::memory_order_relaxed);
            circ::cc_t rem_cc = cur_rc & rc_mask;
            if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) {
                return false; // has not finished yet
            }
            else if (!rem_cc) {
                auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
                if ((cur_fl != cur_ct) && cur_fl) {
                    return false; // full
                }
            }
            // consider rem_cc to be 0 here
            if (el->rc_.compare_exchange_weak(
                    cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed) &&
                epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) {
                break;
            }
            ipc::yield(k);
        }
        // only one thread/process would touch here at one time
        ct_.store(cur_ct + 1, std::memory_order_release);
        std::forward<F>(f)(&(el->data_));
        // set flag & try update wt
        el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
        return true;
    }

    template <typename W, typename F, typename E>
    bool force_push(W* wrapper, F&& f, E* elems) {
        E* el;
        circ::u2_t cur_ct;
        rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
        for (unsigned k = 0;;) {
            circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed);
            if (cc == 0) return false; // no reader
            el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed));
            // check all consumers have finished reading this element
            auto cur_rc = el->rc_.load(std::memory_order_acquire);
            circ::cc_t rem_cc = cur_rc & rc_mask;
            if (cc & rem_cc) {
                ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc);
                cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers
                if (cc == 0) return false; // no reader
            }
            // just compare & exchange
            if (el->rc_.compare_exchange_weak(
                    cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast<rc_t>(cc), std::memory_order_relaxed)) {
                if (epoch == epoch_.load(std::memory_order_acquire)) {
                    break;
                }
                else if (push(wrapper, std::forward<F>(f), elems)) {
                    return true;
                }
                epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr;
            }
            ipc::yield(k);
        }
        // only one thread/process would touch here at one time
        ct_.store(cur_ct + 1, std::memory_order_release);
        std::forward<F>(f)(&(el->data_));
        // set flag & try update wt
        el->f_ct_.store(~static_cast<flag_t>(cur_ct), std::memory_order_release);
        return true;
    }

    template <typename W, typename F, typename R, typename E, std::size_t N>
    bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) {
        auto* el = elems + circ::index_of(cur);
        auto cur_fl = el->f_ct_.load(std::memory_order_acquire);
        if (cur_fl != ~static_cast<flag_t>(cur)) {
            return false; // empty
        }
        ++cur;
        std::forward<F>(f)(&(el->data_));
        for (unsigned k = 0;;) {
            auto cur_rc = el->rc_.load(std::memory_order_acquire);
            if ((cur_rc & rc_mask) == 0) {
                std::forward<R>(out)(true);
                el->f_ct_.store(cur + N - 1, std::memory_order_release);
                return true;
            }
            auto nxt_rc = inc_rc(cur_rc) & ~static_cast<rc_t>(wrapper->connected_id());
            bool last_one = false;
            if ((last_one = (nxt_rc & rc_mask) == 0)) {
                el->f_ct_.store(cur + N - 1, std::memory_order_release);
            }
            if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) {
                std::forward<R>(out)(last_one);
                return true;
            }
            ipc::yield(k);
        }
    }
};

} // namespace ipc

spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/op/upfirdn2d.cpp
DELETED
@@ -1,23 +0,0 @@
#include <torch/extension.h>


torch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel,
                           int up_x, int up_y, int down_x, int down_y,
                           int pad_x0, int pad_x1, int pad_y0, int pad_y1);

#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous")
#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)

torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel,
                        int up_x, int up_y, int down_x, int down_y,
                        int pad_x0, int pad_x1, int pad_y0, int pad_y1) {
    CHECK_CUDA(input);
    CHECK_CUDA(kernel);

    return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1);
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
    m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)");
}

spaces/Andy1621/uniformer_image_detection/configs/libra_rcnn/libra_retinanet_r50_fpn_1x_coco.py
DELETED
@@ -1,26 +0,0 @@
_base_ = '../retinanet/retinanet_r50_fpn_1x_coco.py'
# model settings
model = dict(
    neck=[
        dict(
            type='FPN',
            in_channels=[256, 512, 1024, 2048],
            out_channels=256,
            start_level=1,
            add_extra_convs='on_input',
            num_outs=5),
        dict(
            type='BFP',
            in_channels=256,
            num_levels=5,
            refine_level=1,
            refine_type='non_local')
    ],
    bbox_head=dict(
        loss_bbox=dict(
            _delete_=True,
            type='BalancedL1Loss',
            alpha=0.5,
            gamma=1.5,
            beta=0.11,
            loss_weight=1.0)))

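Note (editor's sketch, not part of the deleted file): this config inherits the RetinaNet base and overrides the neck and the box loss. A minimal sketch of inspecting the merged config, assuming an mmcv/mmdet 1.x–2.x environment where mmcv.Config is available and the _base_ file exists at the relative path above:

from mmcv import Config

cfg = Config.fromfile('configs/libra_rcnn/libra_retinanet_r50_fpn_1x_coco.py')
print(cfg.model.neck)                  # FPN followed by the BFP refinement block
print(cfg.model.bbox_head.loss_bbox)   # BalancedL1Loss replacing the inherited loss
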
spaces/Andy1621/uniformer_image_detection/mmdet/datasets/xml_style.py
DELETED
@@ -1,170 +0,0 @@
import os.path as osp
import xml.etree.ElementTree as ET

import mmcv
import numpy as np
from PIL import Image

from .builder import DATASETS
from .custom import CustomDataset


@DATASETS.register_module()
class XMLDataset(CustomDataset):
    """XML dataset for detection.

    Args:
        min_size (int | float, optional): The minimum size of bounding
            boxes in the images. If the size of a bounding box is less than
            ``min_size``, it would be add to ignored field.
    """

    def __init__(self, min_size=None, **kwargs):
        assert self.CLASSES or kwargs.get(
            'classes', None), 'CLASSES in `XMLDataset` can not be None.'
        super(XMLDataset, self).__init__(**kwargs)
        self.cat2label = {cat: i for i, cat in enumerate(self.CLASSES)}
        self.min_size = min_size

    def load_annotations(self, ann_file):
        """Load annotation from XML style ann_file.

        Args:
            ann_file (str): Path of XML file.

        Returns:
            list[dict]: Annotation info from XML file.
        """

        data_infos = []
        img_ids = mmcv.list_from_file(ann_file)
        for img_id in img_ids:
            filename = f'JPEGImages/{img_id}.jpg'
            xml_path = osp.join(self.img_prefix, 'Annotations',
                                f'{img_id}.xml')
            tree = ET.parse(xml_path)
            root = tree.getroot()
            size = root.find('size')
            if size is not None:
                width = int(size.find('width').text)
                height = int(size.find('height').text)
            else:
                img_path = osp.join(self.img_prefix, 'JPEGImages',
                                    '{}.jpg'.format(img_id))
                img = Image.open(img_path)
                width, height = img.size
            data_infos.append(
                dict(id=img_id, filename=filename, width=width, height=height))

        return data_infos

    def _filter_imgs(self, min_size=32):
        """Filter images too small or without annotation."""
        valid_inds = []
        for i, img_info in enumerate(self.data_infos):
            if min(img_info['width'], img_info['height']) < min_size:
                continue
            if self.filter_empty_gt:
                img_id = img_info['id']
                xml_path = osp.join(self.img_prefix, 'Annotations',
                                    f'{img_id}.xml')
                tree = ET.parse(xml_path)
                root = tree.getroot()
                for obj in root.findall('object'):
                    name = obj.find('name').text
                    if name in self.CLASSES:
                        valid_inds.append(i)
                        break
            else:
                valid_inds.append(i)
        return valid_inds

    def get_ann_info(self, idx):
        """Get annotation from XML file by index.

        Args:
            idx (int): Index of data.

        Returns:
            dict: Annotation info of specified index.
        """

        img_id = self.data_infos[idx]['id']
        xml_path = osp.join(self.img_prefix, 'Annotations', f'{img_id}.xml')
        tree = ET.parse(xml_path)
        root = tree.getroot()
        bboxes = []
        labels = []
        bboxes_ignore = []
        labels_ignore = []
        for obj in root.findall('object'):
            name = obj.find('name').text
            if name not in self.CLASSES:
                continue
            label = self.cat2label[name]
            difficult = obj.find('difficult')
            difficult = 0 if difficult is None else int(difficult.text)
            bnd_box = obj.find('bndbox')
            # TODO: check whether it is necessary to use int
            # Coordinates may be float type
            bbox = [
                int(float(bnd_box.find('xmin').text)),
                int(float(bnd_box.find('ymin').text)),
                int(float(bnd_box.find('xmax').text)),
                int(float(bnd_box.find('ymax').text))
            ]
            ignore = False
            if self.min_size:
                assert not self.test_mode
                w = bbox[2] - bbox[0]
                h = bbox[3] - bbox[1]
                if w < self.min_size or h < self.min_size:
                    ignore = True
            if difficult or ignore:
                bboxes_ignore.append(bbox)
                labels_ignore.append(label)
            else:
                bboxes.append(bbox)
                labels.append(label)
        if not bboxes:
            bboxes = np.zeros((0, 4))
            labels = np.zeros((0, ))
        else:
            bboxes = np.array(bboxes, ndmin=2) - 1
            labels = np.array(labels)
        if not bboxes_ignore:
            bboxes_ignore = np.zeros((0, 4))
            labels_ignore = np.zeros((0, ))
        else:
            bboxes_ignore = np.array(bboxes_ignore, ndmin=2) - 1
            labels_ignore = np.array(labels_ignore)
        ann = dict(
            bboxes=bboxes.astype(np.float32),
            labels=labels.astype(np.int64),
            bboxes_ignore=bboxes_ignore.astype(np.float32),
            labels_ignore=labels_ignore.astype(np.int64))
        return ann

    def get_cat_ids(self, idx):
        """Get category ids in XML file by index.

        Args:
            idx (int): Index of data.

        Returns:
            list[int]: All categories in the image of specified index.
        """

        cat_ids = []
        img_id = self.data_infos[idx]['id']
        xml_path = osp.join(self.img_prefix, 'Annotations', f'{img_id}.xml')
        tree = ET.parse(xml_path)
        root = tree.getroot()
        for obj in root.findall('object'):
            name = obj.find('name').text
            if name not in self.CLASSES:
                continue
            label = self.cat2label[name]
            cat_ids.append(label)

        return cat_ids

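Note (editor's sketch, not part of the deleted file): get_ann_info() above walks Pascal VOC-style XML. A self-contained sketch of the same parsing pattern on an invented annotation (class name and coordinates are illustrative only):

import xml.etree.ElementTree as ET

xml_text = """
<annotation>
  <size><width>640</width><height>480</height></size>
  <object>
    <name>dog</name>
    <difficult>0</difficult>
    <bndbox><xmin>48</xmin><ymin>240</ymin><xmax>195</xmax><ymax>371</ymax></bndbox>
  </object>
</annotation>
"""
root = ET.fromstring(xml_text)
for obj in root.findall('object'):
    name = obj.find('name').text
    box = [int(float(obj.find('bndbox').find(k).text))
           for k in ('xmin', 'ymin', 'xmax', 'ymax')]
    print(name, box)   # dog [48, 240, 195, 371]
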
spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3plus/deeplabv3plus_r101-d8_480x480_80k_pascal_context_59.py
DELETED
@@ -1,2 +0,0 @@
_base_ = './deeplabv3plus_r50-d8_480x480_80k_pascal_context_59.py'
model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))

spaces/Ariharasudhan/YoloV5/utils/aws/mime.sh
DELETED
@@ -1,26 +0,0 @@
# AWS EC2 instance startup 'MIME' script https://aws.amazon.com/premiumsupport/knowledge-center/execute-user-data-ec2/
# This script will run on every instance restart, not only on first start
# --- DO NOT COPY ABOVE COMMENTS WHEN PASTING INTO USERDATA ---

Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"

#!/bin/bash
# --- paste contents of userdata.sh here ---
--//

spaces/Artificio/AdversarialArt/src/.ipynb_checkpoints/utils-checkpoint.py
DELETED
@@ -1,35 +0,0 @@
from PIL import Image
import torch
import torch.nn as nn
from typing import Dict, Iterable, Callable
from torch import Tensor
import glob
from tqdm import tqdm
import numpy as np
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
Image.MAX_IMAGE_PIXELS = None


# +
class RobustModel(nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x, *args, **kwargs):
        return self.model(x)


class CustomArt(torch.utils.data.Dataset):
    def __init__(self, image, transforms=None):
        self.transforms = transforms
        self.image = image
        self.mean = torch.tensor([0.4850, 0.4560, 0.4060])
        self.std = torch.tensor([0.2290, 0.2240, 0.2250])

    def __getitem__(self, idx):
        if self.transforms:
            img = self.transforms(self.image)
            return torch.as_tensor(img, dtype=torch.float)

    def __len__(self):
        return len(self.image)

spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/models/__init__.py
DELETED
@@ -1,2 +0,0 @@
"""A package that contains models that represent entities.
"""

spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pygments/token.py
DELETED
@@ -1,213 +0,0 @@
"""
    pygments.token
    ~~~~~~~~~~~~~~

    Basic token types and the standard tokens.

    :copyright: Copyright 2006-2022 by the Pygments team, see AUTHORS.
    :license: BSD, see LICENSE for details.
"""


class _TokenType(tuple):
    parent = None

    def split(self):
        buf = []
        node = self
        while node is not None:
            buf.append(node)
            node = node.parent
        buf.reverse()
        return buf

    def __init__(self, *args):
        # no need to call super.__init__
        self.subtypes = set()

    def __contains__(self, val):
        return self is val or (
            type(val) is self.__class__ and
            val[:len(self)] == self
        )

    def __getattr__(self, val):
        if not val or not val[0].isupper():
            return tuple.__getattribute__(self, val)
        new = _TokenType(self + (val,))
        setattr(self, val, new)
        self.subtypes.add(new)
        new.parent = self
        return new

    def __repr__(self):
        return 'Token' + (self and '.' or '') + '.'.join(self)

    def __copy__(self):
        # These instances are supposed to be singletons
        return self

    def __deepcopy__(self, memo):
        # These instances are supposed to be singletons
        return self


Token = _TokenType()

# Special token types
Text = Token.Text
Whitespace = Text.Whitespace
Escape = Token.Escape
Error = Token.Error
# Text that doesn't belong to this lexer (e.g. HTML in PHP)
Other = Token.Other

# Common token types for source code
Keyword = Token.Keyword
Name = Token.Name
Literal = Token.Literal
String = Literal.String
Number = Literal.Number
Punctuation = Token.Punctuation
Operator = Token.Operator
Comment = Token.Comment

# Generic types for non-source code
Generic = Token.Generic

# String and some others are not direct children of Token.
# alias them:
Token.Token = Token
Token.String = String
Token.Number = Number


def is_token_subtype(ttype, other):
    """
    Return True if ``ttype`` is a subtype of ``other``.

    exists for backwards compatibility. use ``ttype in other`` now.
    """
    return ttype in other


def string_to_tokentype(s):
    """
    Convert a string into a token type::

        >>> string_to_token('String.Double')
        Token.Literal.String.Double
        >>> string_to_token('Token.Literal.Number')
        Token.Literal.Number
        >>> string_to_token('')
        Token

    Tokens that are already tokens are returned unchanged:

        >>> string_to_token(String)
        Token.Literal.String
    """
    if isinstance(s, _TokenType):
        return s
    if not s:
        return Token
    node = Token
    for item in s.split('.'):
        node = getattr(node, item)
    return node


# Map standard token types to short names, used in CSS class naming.
# If you add a new item, please be sure to run this file to perform
# a consistency check for duplicate values.
STANDARD_TYPES = {
    Token: '',

    Text: '',
    Whitespace: 'w',
    Escape: 'esc',
    Error: 'err',
    Other: 'x',

    Keyword: 'k',
    Keyword.Constant: 'kc',
    Keyword.Declaration: 'kd',
    Keyword.Namespace: 'kn',
    Keyword.Pseudo: 'kp',
    Keyword.Reserved: 'kr',
    Keyword.Type: 'kt',

    Name: 'n',
    Name.Attribute: 'na',
    Name.Builtin: 'nb',
    Name.Builtin.Pseudo: 'bp',
    Name.Class: 'nc',
    Name.Constant: 'no',
    Name.Decorator: 'nd',
    Name.Entity: 'ni',
    Name.Exception: 'ne',
    Name.Function: 'nf',
    Name.Function.Magic: 'fm',
    Name.Property: 'py',
    Name.Label: 'nl',
    Name.Namespace: 'nn',
    Name.Other: 'nx',
    Name.Tag: 'nt',
    Name.Variable: 'nv',
    Name.Variable.Class: 'vc',
    Name.Variable.Global: 'vg',
    Name.Variable.Instance: 'vi',
    Name.Variable.Magic: 'vm',

    Literal: 'l',
    Literal.Date: 'ld',

    String: 's',
    String.Affix: 'sa',
    String.Backtick: 'sb',
    String.Char: 'sc',
    String.Delimiter: 'dl',
    String.Doc: 'sd',
    String.Double: 's2',
    String.Escape: 'se',
    String.Heredoc: 'sh',
    String.Interpol: 'si',
    String.Other: 'sx',
    String.Regex: 'sr',
    String.Single: 's1',
    String.Symbol: 'ss',

    Number: 'm',
    Number.Bin: 'mb',
    Number.Float: 'mf',
    Number.Hex: 'mh',
    Number.Integer: 'mi',
    Number.Integer.Long: 'il',
    Number.Oct: 'mo',

    Operator: 'o',
    Operator.Word: 'ow',

    Punctuation: 'p',
    Punctuation.Marker: 'pm',

    Comment: 'c',
    Comment.Hashbang: 'ch',
    Comment.Multiline: 'cm',
    Comment.Preproc: 'cp',
    Comment.PreprocFile: 'cpf',
    Comment.Single: 'c1',
    Comment.Special: 'cs',

    Generic: 'g',
    Generic.Deleted: 'gd',
    Generic.Emph: 'ge',
    Generic.Error: 'gr',
    Generic.Heading: 'gh',
    Generic.Inserted: 'gi',
    Generic.Output: 'go',
    Generic.Prompt: 'gp',
    Generic.Strong: 'gs',
    Generic.Subheading: 'gu',
    Generic.Traceback: 'gt',
}

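Note (editor's sketch, not part of the deleted file): this is pip's vendored copy of pygments.token; the canonical module behaves the same. A short sketch of the singleton token hierarchy, assuming the pygments package is installed:

from pygments.token import Token, Name, string_to_tokentype

func = Token.Name.Function
print(func in Name)                                   # True: subtype containment via __contains__
print(string_to_tokentype('Literal.String.Double'))   # Token.Literal.String.Double
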
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/rich/bar.py
DELETED
@@ -1,94 +0,0 @@
from typing import Optional, Union

from .color import Color
from .console import Console, ConsoleOptions, RenderResult
from .jupyter import JupyterMixin
from .measure import Measurement
from .segment import Segment
from .style import Style

# There are left-aligned characters for 1/8 to 7/8, but
# the right-aligned characters exist only for 1/8 and 4/8.
BEGIN_BLOCK_ELEMENTS = ["█", "█", "█", "▐", "▐", "▐", "▕", "▕"]
END_BLOCK_ELEMENTS = [" ", "▏", "▎", "▍", "▌", "▋", "▊", "▉"]
FULL_BLOCK = "█"


class Bar(JupyterMixin):
    """Renders a solid block bar.

    Args:
        size (float): Value for the end of the bar.
        begin (float): Begin point (between 0 and size, inclusive).
        end (float): End point (between 0 and size, inclusive).
        width (int, optional): Width of the bar, or ``None`` for maximum width. Defaults to None.
        color (Union[Color, str], optional): Color of the bar. Defaults to "default".
        bgcolor (Union[Color, str], optional): Color of bar background. Defaults to "default".
    """

    def __init__(
        self,
        size: float,
        begin: float,
        end: float,
        *,
        width: Optional[int] = None,
        color: Union[Color, str] = "default",
        bgcolor: Union[Color, str] = "default",
    ):
        self.size = size
        self.begin = max(begin, 0)
        self.end = min(end, size)
        self.width = width
        self.style = Style(color=color, bgcolor=bgcolor)

    def __repr__(self) -> str:
        return f"Bar({self.size}, {self.begin}, {self.end})"

    def __rich_console__(
        self, console: Console, options: ConsoleOptions
    ) -> RenderResult:

        width = min(
            self.width if self.width is not None else options.max_width,
            options.max_width,
        )

        if self.begin >= self.end:
            yield Segment(" " * width, self.style)
            yield Segment.line()
            return

        prefix_complete_eights = int(width * 8 * self.begin / self.size)
        prefix_bar_count = prefix_complete_eights // 8
        prefix_eights_count = prefix_complete_eights % 8

        body_complete_eights = int(width * 8 * self.end / self.size)
        body_bar_count = body_complete_eights // 8
        body_eights_count = body_complete_eights % 8

        # When start and end fall into the same cell, we ideally should render
        # a symbol that's "center-aligned", but there is no good symbol in Unicode.
        # In this case, we fall back to right-aligned block symbol for simplicity.

        prefix = " " * prefix_bar_count
        if prefix_eights_count:
            prefix += BEGIN_BLOCK_ELEMENTS[prefix_eights_count]

        body = FULL_BLOCK * body_bar_count
        if body_eights_count:
            body += END_BLOCK_ELEMENTS[body_eights_count]

        suffix = " " * (width - len(body))

        yield Segment(prefix + body[len(prefix) :] + suffix, self.style)
        yield Segment.line()

    def __rich_measure__(
        self, console: Console, options: ConsoleOptions
    ) -> Measurement:
        return (
            Measurement(self.width, self.width)
            if self.width is not None
            else Measurement(4, options.max_width)
        )

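Note (editor's sketch, not part of the deleted file): this is pip's vendored copy of rich's Bar renderable; the upstream rich.bar.Bar has the same signature. A minimal rendering sketch, assuming the rich package is installed:

from rich.bar import Bar
from rich.console import Console

console = Console()
# A bar spanning 25..80 out of 100, drawn 40 cells wide with eighth-block precision.
console.print(Bar(size=100, begin=25, end=80, width=40, color="green"))
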
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pkg_resources/_vendor/packaging/_structures.py
DELETED
@@ -1,61 +0,0 @@
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.


class InfinityType:
    def __repr__(self) -> str:
        return "Infinity"

    def __hash__(self) -> int:
        return hash(repr(self))

    def __lt__(self, other: object) -> bool:
        return False

    def __le__(self, other: object) -> bool:
        return False

    def __eq__(self, other: object) -> bool:
        return isinstance(other, self.__class__)

    def __gt__(self, other: object) -> bool:
        return True

    def __ge__(self, other: object) -> bool:
        return True

    def __neg__(self: object) -> "NegativeInfinityType":
        return NegativeInfinity


Infinity = InfinityType()


class NegativeInfinityType:
    def __repr__(self) -> str:
        return "-Infinity"

    def __hash__(self) -> int:
        return hash(repr(self))

    def __lt__(self, other: object) -> bool:
        return True

    def __le__(self, other: object) -> bool:
        return True

    def __eq__(self, other: object) -> bool:
        return isinstance(other, self.__class__)

    def __gt__(self, other: object) -> bool:
        return False

    def __ge__(self, other: object) -> bool:
        return False

    def __neg__(self: object) -> InfinityType:
        return Infinity


NegativeInfinity = NegativeInfinityType()

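Note (editor's sketch, not part of the deleted file): the two sentinels above compare consistently against any other object, which is why packaging uses them as extreme sort keys for version components. A short sketch, assuming the Infinity and NegativeInfinity objects defined above are in scope:

print(Infinity > 10**9)            # True: greater than everything
print(NegativeInfinity < -10**9)   # True: less than everything
print(min(42, Infinity), max(42, NegativeInfinity))   # 42 42
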
spaces/Awesimo/jojogan/e4e/utils/alignment.py
DELETED
@@ -1,115 +0,0 @@
-import numpy as np
-import PIL
-import PIL.Image
-import scipy
-import scipy.ndimage
-import dlib
-
-
-def get_landmark(filepath, predictor):
-    """get landmark with dlib
-    :return: np.array shape=(68, 2)
-    """
-    detector = dlib.get_frontal_face_detector()
-
-    img = dlib.load_rgb_image(filepath)
-    dets = detector(img, 1)
-
-    for k, d in enumerate(dets):
-        shape = predictor(img, d)
-
-    t = list(shape.parts())
-    a = []
-    for tt in t:
-        a.append([tt.x, tt.y])
-    lm = np.array(a)
-    return lm
-
-
-def align_face(filepath, predictor):
-    """
-    :param filepath: str
-    :return: PIL Image
-    """
-
-    lm = get_landmark(filepath, predictor)
-
-    lm_chin = lm[0: 17]  # left-right
-    lm_eyebrow_left = lm[17: 22]  # left-right
-    lm_eyebrow_right = lm[22: 27]  # left-right
-    lm_nose = lm[27: 31]  # top-down
-    lm_nostrils = lm[31: 36]  # top-down
-    lm_eye_left = lm[36: 42]  # left-clockwise
-    lm_eye_right = lm[42: 48]  # left-clockwise
-    lm_mouth_outer = lm[48: 60]  # left-clockwise
-    lm_mouth_inner = lm[60: 68]  # left-clockwise
-
-    # Calculate auxiliary vectors.
-    eye_left = np.mean(lm_eye_left, axis=0)
-    eye_right = np.mean(lm_eye_right, axis=0)
-    eye_avg = (eye_left + eye_right) * 0.5
-    eye_to_eye = eye_right - eye_left
-    mouth_left = lm_mouth_outer[0]
-    mouth_right = lm_mouth_outer[6]
-    mouth_avg = (mouth_left + mouth_right) * 0.5
-    eye_to_mouth = mouth_avg - eye_avg
-
-    # Choose oriented crop rectangle.
-    x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1]
-    x /= np.hypot(*x)
-    x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8)
-    y = np.flipud(x) * [-1, 1]
-    c = eye_avg + eye_to_mouth * 0.1
-    quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y])
-    qsize = np.hypot(*x) * 2
-
-    # read image
-    img = PIL.Image.open(filepath)
-
-    output_size = 256
-    transform_size = 256
-    enable_padding = True
-
-    # Shrink.
-    shrink = int(np.floor(qsize / output_size * 0.5))
-    if shrink > 1:
-        rsize = (int(np.rint(float(img.size[0]) / shrink)), int(np.rint(float(img.size[1]) / shrink)))
-        img = img.resize(rsize, PIL.Image.ANTIALIAS)
-        quad /= shrink
-        qsize /= shrink
-
-    # Crop.
-    border = max(int(np.rint(qsize * 0.1)), 3)
-    crop = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),
-            int(np.ceil(max(quad[:, 1]))))
-    crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]),
-            min(crop[3] + border, img.size[1]))
-    if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]:
-        img = img.crop(crop)
-        quad -= crop[0:2]
-
-    # Pad.
-    pad = (int(np.floor(min(quad[:, 0]))), int(np.floor(min(quad[:, 1]))), int(np.ceil(max(quad[:, 0]))),
-           int(np.ceil(max(quad[:, 1]))))
-    pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0),
-           max(pad[3] - img.size[1] + border, 0))
-    if enable_padding and max(pad) > border - 4:
-        pad = np.maximum(pad, int(np.rint(qsize * 0.3)))
-        img = np.pad(np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect')
-        h, w, _ = img.shape
-        y, x, _ = np.ogrid[:h, :w, :1]
-        mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w - 1 - x) / pad[2]),
-                          1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h - 1 - y) / pad[3]))
-        blur = qsize * 0.02
-        img += (scipy.ndimage.gaussian_filter(img, [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0)
-        img += (np.median(img, axis=(0, 1)) - img) * np.clip(mask, 0.0, 1.0)
-        img = PIL.Image.fromarray(np.uint8(np.clip(np.rint(img), 0, 255)), 'RGB')
-        quad += pad[:2]
-
-    # Transform.
-    img = img.transform((transform_size, transform_size), PIL.Image.QUAD, (quad + 0.5).flatten(), PIL.Image.BILINEAR)
-    if output_size < transform_size:
-        img = img.resize((output_size, output_size), PIL.Image.ANTIALIAS)
-
-    # Return aligned image.
-    return img
spaces/BAAI/vid2vid-zero/vid2vid_zero/p2p/null_text_w_ptp.py
DELETED
@@ -1,504 +0,0 @@
-# Copyright 2022 Google LLC
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-from typing import Optional, Union, Tuple, List, Callable, Dict
-from tqdm import tqdm
-import torch
-import torch.nn.functional as nnf
-import numpy as np
-import abc
-from . import ptp_utils
-from . import seq_aligner
-import shutil
-from torch.optim.adam import Adam
-from PIL import Image
-
-
-LOW_RESOURCE = False
-NUM_DDIM_STEPS = 50
-MAX_NUM_WORDS = 77
-device = torch.device('cuda')
-from transformers import CLIPTextModel, CLIPTokenizer
-
-pretrained_model_path = "checkpoints/CompVis/stable-diffusion-v1-4/"
-
-ldm_stable = None
-tokenizer = CLIPTokenizer.from_pretrained(pretrained_model_path, subfolder="tokenizer")
-
-
-class LocalBlend:
-
-    def get_mask(self, maps, alpha, use_pool):
-        k = 1
-        maps = (maps * alpha).sum(-1).mean(1)
-        if use_pool:
-            maps = nnf.max_pool2d(maps, (k * 2 + 1, k * 2 +1), (1, 1), padding=(k, k))
-        mask = nnf.interpolate(maps, size=(x_t.shape[2:]))
-        mask = mask / mask.max(2, keepdims=True)[0].max(3, keepdims=True)[0]
-        mask = mask.gt(self.th[1-int(use_pool)])
-        mask = mask[:1] + mask
-        return mask
-
-    def __call__(self, x_t, attention_store):
-        self.counter += 1
-        if self.counter > self.start_blend:
-
-            maps = attention_store["down_cross"][2:4] + attention_store["up_cross"][:3]
-            maps = [item.reshape(self.alpha_layers.shape[0], -1, 1, 16, 16, MAX_NUM_WORDS) for item in maps]
-            maps = torch.cat(maps, dim=1)
-            mask = self.get_mask(maps, self.alpha_layers, True)
-            if self.substruct_layers is not None:
-                maps_sub = ~self.get_mask(maps, self.substruct_layers, False)
-                mask = mask * maps_sub
-            mask = mask.float()
-            x_t = x_t[:1] + mask * (x_t - x_t[:1])
-        return x_t
-
-    def __init__(self, prompts: List[str], words: List[List[str]], substruct_words=None, start_blend=0.2, th=(.3, .3)):
-        alpha_layers = torch.zeros(len(prompts), 1, 1, 1, 1, MAX_NUM_WORDS)
-        for i, (prompt, words_) in enumerate(zip(prompts, words)):
-            if type(words_) is str:
-                words_ = [words_]
-            for word in words_:
-                ind = ptp_utils.get_word_inds(prompt, word, tokenizer)
-                alpha_layers[i, :, :, :, :, ind] = 1
-
-        if substruct_words is not None:
-            substruct_layers = torch.zeros(len(prompts), 1, 1, 1, 1, MAX_NUM_WORDS)
-            for i, (prompt, words_) in enumerate(zip(prompts, substruct_words)):
-                if type(words_) is str:
-                    words_ = [words_]
-                for word in words_:
-                    ind = ptp_utils.get_word_inds(prompt, word, tokenizer)
-                    substruct_layers[i, :, :, :, :, ind] = 1
-            self.substruct_layers = substruct_layers.to(device)
-        else:
-            self.substruct_layers = None
-        self.alpha_layers = alpha_layers.to(device)
-        self.start_blend = int(start_blend * NUM_DDIM_STEPS)
-        self.counter = 0
-        self.th=th
-
-
-class EmptyControl:
-
-
-    def step_callback(self, x_t):
-        return x_t
-
-    def between_steps(self):
-        return
-
-    def __call__(self, attn, is_cross: bool, place_in_unet: str):
-        return attn
-
-
-class AttentionControl(abc.ABC):
-
-    def step_callback(self, x_t):
-        return x_t
-
-    def between_steps(self):
-        return
-
-    @property
-    def num_uncond_att_layers(self):
-        return self.num_att_layers if LOW_RESOURCE else 0
-
-    @abc.abstractmethod
-    def forward (self, attn, is_cross: bool, place_in_unet: str):
-        raise NotImplementedError
-
-    def __call__(self, attn, is_cross: bool, place_in_unet: str):
-        if self.cur_att_layer >= self.num_uncond_att_layers:
-            if LOW_RESOURCE:
-                attn = self.forward(attn, is_cross, place_in_unet)
-            else:
-                h = attn.shape[0]
-                attn[h // 2:] = self.forward(attn[h // 2:], is_cross, place_in_unet)
-        self.cur_att_layer += 1
-        if self.cur_att_layer == self.num_att_layers + self.num_uncond_att_layers:
-            self.cur_att_layer = 0
-            self.cur_step += 1
-            self.between_steps()
-        return attn
-
-    def reset(self):
-        self.cur_step = 0
-        self.cur_att_layer = 0
-
-    def __init__(self):
-        self.cur_step = 0
-        self.num_att_layers = -1
-        self.cur_att_layer = 0
-
-
-class SpatialReplace(EmptyControl):
-
-    def step_callback(self, x_t):
-        if self.cur_step < self.stop_inject:
-            b = x_t.shape[0]
-            x_t = x_t[:1].expand(b, *x_t.shape[1:])
-        return x_t
-
-    def __init__(self, stop_inject: float):
-        super(SpatialReplace, self).__init__()
-        self.stop_inject = int((1 - stop_inject) * NUM_DDIM_STEPS)
-
-
-class AttentionStore(AttentionControl):
-
-    @staticmethod
-    def get_empty_store():
-        return {"down_cross": [], "mid_cross": [], "up_cross": [],
-                "down_self": [], "mid_self": [], "up_self": []}
-
-    def forward(self, attn, is_cross: bool, place_in_unet: str):
-        key = f"{place_in_unet}_{'cross' if is_cross else 'self'}"
-        if attn.shape[1] <= 32 ** 2:  # avoid memory overhead
-            self.step_store[key].append(attn)
-        return attn
-
-    def between_steps(self):
-        if len(self.attention_store) == 0:
-            self.attention_store = self.step_store
-        else:
-            for key in self.attention_store:
-                for i in range(len(self.attention_store[key])):
-                    self.attention_store[key][i] += self.step_store[key][i]
-        self.step_store = self.get_empty_store()
-
-    def get_average_attention(self):
-        average_attention = {key: [item / self.cur_step for item in self.attention_store[key]] for key in self.attention_store}
-        return average_attention
-
-
-    def reset(self):
-        super(AttentionStore, self).reset()
-        self.step_store = self.get_empty_store()
-        self.attention_store = {}
-
-    def __init__(self):
-        super(AttentionStore, self).__init__()
-        self.step_store = self.get_empty_store()
-        self.attention_store = {}
-
-
-class AttentionControlEdit(AttentionStore, abc.ABC):
-
-    def step_callback(self, x_t):
-        if self.local_blend is not None:
-            x_t = self.local_blend(x_t, self.attention_store)
-        return x_t
-
-    def replace_self_attention(self, attn_base, att_replace, place_in_unet):
-        if att_replace.shape[2] <= 32 ** 2:
-            attn_base = attn_base.unsqueeze(0).expand(att_replace.shape[0], *attn_base.shape)
-            return attn_base
-        else:
-            return att_replace
-
-    @abc.abstractmethod
-    def replace_cross_attention(self, attn_base, att_replace):
-        raise NotImplementedError
-
-    def forward(self, attn, is_cross: bool, place_in_unet: str):
-        super(AttentionControlEdit, self).forward(attn, is_cross, place_in_unet)
-        if is_cross or (self.num_self_replace[0] <= self.cur_step < self.num_self_replace[1]):
-            h = attn.shape[0] // (self.batch_size)
-            attn = attn.reshape(self.batch_size, h, *attn.shape[1:])
-            attn_base, attn_repalce = attn[0], attn[1:]
-            if is_cross:
-                alpha_words = self.cross_replace_alpha[self.cur_step]
-                attn_repalce_new = self.replace_cross_attention(attn_base, attn_repalce) * alpha_words + (1 - alpha_words) * attn_repalce
-                attn[1:] = attn_repalce_new
-            else:
-                attn[1:] = self.replace_self_attention(attn_base, attn_repalce, place_in_unet)
-            attn = attn.reshape(self.batch_size * h, *attn.shape[2:])
-        return attn
-
-    def __init__(self, prompts, num_steps: int,
-                 cross_replace_steps: Union[float, Tuple[float, float], Dict[str, Tuple[float, float]]],
-                 self_replace_steps: Union[float, Tuple[float, float]],
-                 local_blend: Optional[LocalBlend]):
-        super(AttentionControlEdit, self).__init__()
-        self.batch_size = len(prompts)
-        self.cross_replace_alpha = ptp_utils.get_time_words_attention_alpha(prompts, num_steps, cross_replace_steps, tokenizer).to(device)
-        if type(self_replace_steps) is float:
-            self_replace_steps = 0, self_replace_steps
-        self.num_self_replace = int(num_steps * self_replace_steps[0]), int(num_steps * self_replace_steps[1])
-        self.local_blend = local_blend
-
-class AttentionReplace(AttentionControlEdit):
-
-    def replace_cross_attention(self, attn_base, att_replace):
-        return torch.einsum('hpw,bwn->bhpn', attn_base, self.mapper)
-
-    def __init__(self, prompts, num_steps: int, cross_replace_steps: float, self_replace_steps: float,
-                 local_blend: Optional[LocalBlend] = None):
-        super(AttentionReplace, self).__init__(prompts, num_steps, cross_replace_steps, self_replace_steps, local_blend)
-        self.mapper = seq_aligner.get_replacement_mapper(prompts, tokenizer).to(device)
-
-
-class AttentionRefine(AttentionControlEdit):
-
-    def replace_cross_attention(self, attn_base, att_replace):
-        attn_base_replace = attn_base[:, :, self.mapper].permute(2, 0, 1, 3)
-        attn_replace = attn_base_replace * self.alphas + att_replace * (1 - self.alphas)
-        # attn_replace = attn_replace / attn_replace.sum(-1, keepdims=True)
-        return attn_replace
-
-    def __init__(self, prompts, num_steps: int, cross_replace_steps: float, self_replace_steps: float,
-                 local_blend: Optional[LocalBlend] = None):
-        super(AttentionRefine, self).__init__(prompts, num_steps, cross_replace_steps, self_replace_steps, local_blend)
-        self.mapper, alphas = seq_aligner.get_refinement_mapper(prompts, tokenizer)
-        self.mapper, alphas = self.mapper.to(device), alphas.to(device)
-        self.alphas = alphas.reshape(alphas.shape[0], 1, 1, alphas.shape[1])
-
-
-class AttentionReweight(AttentionControlEdit):
-
-    def replace_cross_attention(self, attn_base, att_replace):
-        if self.prev_controller is not None:
-            attn_base = self.prev_controller.replace_cross_attention(attn_base, att_replace)
-        attn_replace = attn_base[None, :, :, :] * self.equalizer[:, None, None, :]
-        # attn_replace = attn_replace / attn_replace.sum(-1, keepdims=True)
-        return attn_replace
-
-    def __init__(self, prompts, num_steps: int, cross_replace_steps: float, self_replace_steps: float, equalizer,
-                 local_blend: Optional[LocalBlend] = None, controller: Optional[AttentionControlEdit] = None):
-        super(AttentionReweight, self).__init__(prompts, num_steps, cross_replace_steps, self_replace_steps, local_blend)
-        self.equalizer = equalizer.to(device)
-        self.prev_controller = controller
-
-
-def get_equalizer(text: str, word_select: Union[int, Tuple[int, ...]], values: Union[List[float],
-                  Tuple[float, ...]]):
-    if type(word_select) is int or type(word_select) is str:
-        word_select = (word_select,)
-    equalizer = torch.ones(1, 77)
-
-    for word, val in zip(word_select, values):
-        inds = ptp_utils.get_word_inds(text, word, tokenizer)
-        equalizer[:, inds] = val
-    return equalizer
-
-def aggregate_attention(attention_store: AttentionStore, res: int, from_where: List[str], is_cross: bool, select: int):
-    out = []
-    attention_maps = attention_store.get_average_attention()
-    num_pixels = res ** 2
-    for location in from_where:
-        for item in attention_maps[f"{location}_{'cross' if is_cross else 'self'}"]:
-            if item.shape[1] == num_pixels:
-                cross_maps = item.reshape(len(prompts), -1, res, res, item.shape[-1])[select]
-                out.append(cross_maps)
-    out = torch.cat(out, dim=0)
-    out = out.sum(0) / out.shape[0]
-    return out.cpu()
-
-
-def make_controller(prompts: List[str], is_replace_controller: bool, cross_replace_steps: Dict[str, float], self_replace_steps: float, blend_words=None, equilizer_params=None) -> AttentionControlEdit:
-    if blend_words is None:
-        lb = None
-    else:
-        lb = LocalBlend(prompts, blend_word)
-    if is_replace_controller:
-        controller = AttentionReplace(prompts, NUM_DDIM_STEPS, cross_replace_steps=cross_replace_steps, self_replace_steps=self_replace_steps, local_blend=lb)
-    else:
-        controller = AttentionRefine(prompts, NUM_DDIM_STEPS, cross_replace_steps=cross_replace_steps, self_replace_steps=self_replace_steps, local_blend=lb)
-    if equilizer_params is not None:
-        eq = get_equalizer(prompts[1], equilizer_params["words"], equilizer_params["values"])
-        controller = AttentionReweight(prompts, NUM_DDIM_STEPS, cross_replace_steps=cross_replace_steps,
-                                       self_replace_steps=self_replace_steps, equalizer=eq, local_blend=lb, controller=controller)
-    return controller
-
-
-def show_cross_attention(attention_store: AttentionStore, res: int, from_where: List[str], select: int = 0):
-    tokens = tokenizer.encode(prompts[select])
-    decoder = tokenizer.decode
-    attention_maps = aggregate_attention(attention_store, res, from_where, True, select)
-    images = []
-    for i in range(len(tokens)):
-        image = attention_maps[:, :, i]
-        image = 255 * image / image.max()
-        image = image.unsqueeze(-1).expand(*image.shape, 3)
-        image = image.numpy().astype(np.uint8)
-        image = np.array(Image.fromarray(image).resize((256, 256)))
-        image = ptp_utils.text_under_image(image, decoder(int(tokens[i])))
-        images.append(image)
-    ptp_utils.view_images(np.stack(images, axis=0))
-
-
-class NullInversion:
-
-    def prev_step(self, model_output: Union[torch.FloatTensor, np.ndarray], timestep: int, sample: Union[torch.FloatTensor, np.ndarray]):
-        prev_timestep = timestep - self.scheduler.config.num_train_timesteps // self.scheduler.num_inference_steps
-        alpha_prod_t = self.scheduler.alphas_cumprod[timestep]
-        alpha_prod_t_prev = self.scheduler.alphas_cumprod[prev_timestep] if prev_timestep >= 0 else self.scheduler.final_alpha_cumprod
-        beta_prod_t = 1 - alpha_prod_t
-        pred_original_sample = (sample - beta_prod_t ** 0.5 * model_output) / alpha_prod_t ** 0.5
-        pred_sample_direction = (1 - alpha_prod_t_prev) ** 0.5 * model_output
-        prev_sample = alpha_prod_t_prev ** 0.5 * pred_original_sample + pred_sample_direction
-        return prev_sample
-
-    def next_step(self, model_output: Union[torch.FloatTensor, np.ndarray], timestep: int, sample: Union[torch.FloatTensor, np.ndarray]):
-        timestep, next_timestep = min(timestep - self.scheduler.config.num_train_timesteps // self.scheduler.num_inference_steps, 999), timestep
-        alpha_prod_t = self.scheduler.alphas_cumprod[timestep] if timestep >= 0 else self.scheduler.final_alpha_cumprod
-        alpha_prod_t_next = self.scheduler.alphas_cumprod[next_timestep]
-        beta_prod_t = 1 - alpha_prod_t
-        next_original_sample = (sample - beta_prod_t ** 0.5 * model_output) / alpha_prod_t ** 0.5
-        next_sample_direction = (1 - alpha_prod_t_next) ** 0.5 * model_output
-        next_sample = alpha_prod_t_next ** 0.5 * next_original_sample + next_sample_direction
-        return next_sample
-
-    def get_noise_pred_single(self, latents, t, context, normal_infer=True):
-        noise_pred = self.model.unet(latents, t, encoder_hidden_states=context, normal_infer=normal_infer)["sample"]
-        return noise_pred
-
-    def get_noise_pred(self, latents, t, is_forward=True, context=None, normal_infer=True):
-        latents_input = torch.cat([latents] * 2)
-        if context is None:
-            context = self.context
-        guidance_scale = 1 if is_forward else self.guidance_scale
-        noise_pred = self.model.unet(latents_input, t, encoder_hidden_states=context, normal_infer=normal_infer)["sample"]
-        noise_pred_uncond, noise_prediction_text = noise_pred.chunk(2)
-        noise_pred = noise_pred_uncond + guidance_scale * (noise_prediction_text - noise_pred_uncond)
-        if is_forward:
-            latents = self.next_step(noise_pred, t, latents)
-        else:
-            latents = self.prev_step(noise_pred, t, latents)
-        return latents
-
-    @torch.no_grad()
-    def latent2image(self, latents, return_type='np'):
-        latents = 1 / 0.18215 * latents.detach()
-        image = self.model.vae.decode(latents)['sample']
-        if return_type == 'np':
-            image = (image / 2 + 0.5).clamp(0, 1)
-            image = image.cpu().permute(0, 2, 3, 1).numpy()[0]
-            image = (image * 255).astype(np.uint8)
-        return image
-
-    @torch.no_grad()
-    def image2latent(self, image):
-        with torch.no_grad():
-            if type(image) is Image:
-                image = np.array(image)
-            if type(image) is torch.Tensor and image.dim() == 4:
-                latents = image
-            else:
-                image = torch.from_numpy(image).float() / 127.5 - 1
-                image = image.permute(2, 0, 1).unsqueeze(0).to(device)
-                latents = self.model.vae.encode(image)['latent_dist'].mean
-                latents = latents * 0.18215
-        return latents
-
-    @torch.no_grad()
-    def init_prompt(self, prompt: str):
-        uncond_input = self.model.tokenizer(
-            [""], padding="max_length", max_length=self.model.tokenizer.model_max_length,
-            return_tensors="pt"
-        )
-        uncond_embeddings = self.model.text_encoder(uncond_input.input_ids.to(self.model.device))[0]
-        text_input = self.model.tokenizer(
-            [prompt],
-            padding="max_length",
-            max_length=self.model.tokenizer.model_max_length,
-            truncation=True,
-            return_tensors="pt",
-        )
-        # (1, 77, 768)
-        text_embeddings = self.model.text_encoder(text_input.input_ids.to(self.model.device))[0]
-        # (2, 77, 768)
-        self.context = torch.cat([uncond_embeddings, text_embeddings])
-        self.prompt = prompt
-
-    @torch.no_grad()
-    def ddim_loop(self, latent):
-        uncond_embeddings, cond_embeddings = self.context.chunk(2)
-        cond = cond_embeddings if self.null_inv_with_prompt else uncond_embeddings
-        all_latent = [latent]
-        latent = latent.clone().detach()
-        for i in range(NUM_DDIM_STEPS):
-            t = self.model.scheduler.timesteps[len(self.model.scheduler.timesteps) - i - 1]
-            noise_pred = self.get_noise_pred_single(latent, t, cond, normal_infer=True)
-            latent = self.next_step(noise_pred, t, latent)
-            all_latent.append(latent)
-        return all_latent
-
-    @property
-    def scheduler(self):
-        return self.model.scheduler
-
-    @torch.no_grad()
-    def ddim_inversion(self, latent):
-        ddim_latents = self.ddim_loop(latent)
-        return ddim_latents
-
-    def null_optimization(self, latents, null_inner_steps, epsilon, null_base_lr=1e-2):
-        uncond_embeddings, cond_embeddings = self.context.chunk(2)
-        uncond_embeddings_list = []
-        latent_cur = latents[-1]
-        bar = tqdm(total=null_inner_steps * NUM_DDIM_STEPS)
-        for i in range(NUM_DDIM_STEPS):
-            uncond_embeddings = uncond_embeddings.clone().detach()
-            uncond_embeddings.requires_grad = True
-            optimizer = Adam([uncond_embeddings], lr=null_base_lr * (1. - i / 100.))
-            latent_prev = latents[len(latents) - i - 2]
-            t = self.model.scheduler.timesteps[i]
-            with torch.no_grad():
-                noise_pred_cond = self.get_noise_pred_single(latent_cur, t, cond_embeddings, normal_infer=self.null_normal_infer)
-            for j in range(null_inner_steps):
-                noise_pred_uncond = self.get_noise_pred_single(latent_cur, t, uncond_embeddings, normal_infer=self.null_normal_infer)
-                noise_pred = noise_pred_uncond + self.guidance_scale * (noise_pred_cond - noise_pred_uncond)
-                latents_prev_rec = self.prev_step(noise_pred, t, latent_cur)
-                loss = nnf.mse_loss(latents_prev_rec, latent_prev)
-                optimizer.zero_grad()
-                loss.backward()
-                optimizer.step()
-                assert not torch.isnan(uncond_embeddings.abs().mean())
-                loss_item = loss.item()
-                bar.update()
-                if loss_item < epsilon + i * 2e-5:
-                    break
-            for j in range(j + 1, null_inner_steps):
-                bar.update()
-            uncond_embeddings_list.append(uncond_embeddings[:1].detach())
-            with torch.no_grad():
-                context = torch.cat([uncond_embeddings, cond_embeddings])
-                latent_cur = self.get_noise_pred(latent_cur, t, False, context, normal_infer=self.null_normal_infer)
-        bar.close()
-        return uncond_embeddings_list
-
-    def invert(self, latents: torch.Tensor, prompt: str, null_inner_steps=10, early_stop_epsilon=1e-5, verbose=False, null_base_lr=1e-2):
-        self.init_prompt(prompt)
-        if verbose:
-            print("DDIM inversion...")
-        ddim_latents = self.ddim_inversion(latents.to(torch.float32))
-        if verbose:
-            print("Null-text optimization...")
-        uncond_embeddings = self.null_optimization(ddim_latents, null_inner_steps, early_stop_epsilon, null_base_lr=null_base_lr)
-        return ddim_latents[-1], uncond_embeddings
-
-
-    def __init__(self, model, guidance_scale, null_inv_with_prompt, null_normal_infer=True):
-        self.null_normal_infer = null_normal_infer
-        self.null_inv_with_prompt = null_inv_with_prompt
-        self.guidance_scale = guidance_scale
-        self.model = model
-        self.tokenizer = self.model.tokenizer
-        self.model.scheduler.set_timesteps(NUM_DDIM_STEPS)
-        self.prompt = None
-        self.context = None
spaces/BairaS/Tabular_ML/README.md
DELETED
@@ -1,12 +0,0 @@
----
-title: Tabular ML
-emoji: 😻
-colorFrom: red
-colorTo: red
-sdk: streamlit
-sdk_version: 1.10.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Benson/text-generation/Examples/Chicos De La Escuela Apk.md
DELETED
@@ -1,27 +0,0 @@
-
-<h1>Stumble chicos Mod APK: Cómo desbloquear todo y tener más diversión</h1>
-Si estás buscando un divertido y caótico juego multijugador que puedas jugar en tu teléfono o PC, entonces es posible que quieras echar un vistazo a Stumble Guys. Este juego está inspirado en los populares Fall Guys, pero es completamente gratuito y exclusivo para dispositivos Android e iOS. Sin embargo, si desea desbloquear todo en el juego y divertirse más, entonces es posible que desee utilizar Stumble Guys Mod APK. En este artículo, le diremos lo que Stumble Guys es, ¿por qué debe utilizar Stumble Guys Mod APK, cómo descargar e instalar, y algunos consejos y trucos para ganar en el juego. <h2>¿Qué es Stumble Guys? </h2>
-<h3>Un juego de fiesta multijugador inspirado en Fall Guys</h3>
-Stumble Guys es un juego multijugador masivo con hasta 32 jugadores en línea. El objetivo del juego es avanzar a través de una serie de niveles corriendo y saltando, evitando obstáculos y peligros. El juego cuenta con un motor basado en la física, que da a los personajes un sentido de peso y el momento. El juego también tiene un diseño colorido y loco, con muchos trajes desbloqueables y emotes. El juego es muy similar a Fall Guys, pero está diseñado para dispositivos móviles. <h3>Características y jugabilidad de Stumble Guys</h3>
-
-<h3>Beneficios de usar la versión modificada de Stumble Guys</h3>
-Stumble Guys Mod APK es una versión modificada del juego original que le da algunas ventajas y características adicionales. Algunos de los beneficios de usar Stumble Guys Mod APK son: - Desbloqueado todo: Se puede acceder a todos los trajes, emotes, pasos, pieles, sombreros, gafas, máscaras, etc. sin gastar monedas o gemas. - Monedas y gemas ilimitadas: Puedes conseguir monedas y gemas ilimitadas para comprar lo que quieras en el juego. - Sin anuncios: Puedes disfrutar del juego sin que ningún anuncio molesto interrumpa tu juego. - No se requiere raíz: Usted no necesita para rootear su dispositivo para utilizar Stumble Guys Mod APK. <h3 Riesgos y desventajas de usar la versión modded de Stumble Guys</h3>
-Stumble Guys Mod APK no es una versión oficial del juego, y puede tener algunos riesgos y desventajas que usted debe ser consciente de. Algunos de los riesgos y desventajas son: - Problemas de compatibilidad: Stumble Guys Mod APK puede no funcionar en algunos dispositivos o con algunas actualizaciones del juego. - Problemas de seguridad: Stumble Guys Mod APK puede contener virus, malware o spyware que puede dañar su dispositivo o robar sus datos. - Cuestiones de van: Stumble Guys Mod APK puede violar los términos y condiciones del juego, y puede obtener prohibido jugar en línea o acceder a su cuenta. - Cuestiones éticas: Stumble Guys Mod APK puede darle una ventaja injusta sobre otros jugadores, y puede arruinar la diversión y el desafío del juego. <h2>¿Cómo descargar e instalar Stumble Guys Mod APK? </h2>
-<h3>Pasos para descargar e instalar Stumble Guys Mod APK en dispositivos Android</h3>
-
-Si desea jugar Stumble Guys Mod APK en su PC, necesitará un emulador de Android que puede ejecutar aplicaciones Android en su ordenador. Algunos de los emuladores de Android populares son [BlueStacks], [NoxPlayer], y [LDPlayer]. Puede seguir estos pasos para descargar e instalar Stumble Guys Mod APK en su PC utilizando un emulador: - Paso 1: Descargar e instalar un emulador de Android de su elección en su PC. - Paso 2: Inicie el emulador e inicie sesión con su cuenta de Google. - Paso 3: Ir a un sitio web de confianza que proporciona Stumble Guys Mod APK, tales como [APKPure] o [APKDone]. - Paso 4: Descargar la última versión del archivo Stumble Guys Mod APK en su PC. - Paso 5: Arrastre y suelte el archivo descargado Stumble Guys Mod APK en la ventana del emulador, o utilice el navegador incorporado para localizar e instalar. - Paso 6: Esperar a que la instalación termine y lanzar el juego. <h2>Consejos y trucos para ganar en Stumble Guys</h2>
-<h3>Configura tus controles antes de jugar</h3>
-Stumble Guys tiene dos opciones de control: joystick o botones. Puede elegir el que más le convenga en el menú de configuración. También puede ajustar la sensibilidad y el tamaño de los controles según su preferencia. Asegúrese de probar sus controles antes de jugar, para que pueda tener un juego suave y cómodo. <h3>Usa la física de tu personaje para tu ventaja</h3>
-Stumble Guys tiene un motor de física realista que afecta la forma en que tu personaje se mueve e interactúa con el entorno. Puedes usar esto a tu favor usando el momento, la inercia, la gravedad, la fricción, etc. Por ejemplo, puedes saltar más alto corriendo más rápido, puedes deslizarte por las pendientes agachándote, puedes rebotar contra las paredes golpeándolas en un ángulo, etc. Experimenta con diferentes movimientos y observa cómo afectan tu rendimiento. <h3>Usa los desafíos a tu favor</h3>
-
-Stumble Guys es un juego en el que todo puede suceder. Puedes estar liderando en un nivel, pero te quedas atrás en otro. Puede ser eliminado por un obstáculo al azar o un jugador astuto. Puede ser afortunado o desafortunado dependiendo de la situación. El punto es que no siempre se trata de ser el primero en todos los niveles. A veces, es mejor ser inteligente y estratégico que rápido e imprudente. Por ejemplo, puedes esperar a que otros jugadores despejen el camino para ti, puedes evitar áreas llenas de gente donde sobreviene el caos, puedes usar los obstáculos para tu ventaja, etc. El objetivo es sobrevivir y calificar para el siguiente nivel, no ser el más rápido. Recuerda, es un juego de diversión y caos, no una carrera. <h2>Conclusión</h2>
-Stumble Guys es un divertido y caótico juego multijugador que puedes jugar en tu teléfono o PC. Está inspirado en Fall Guys, pero es gratuito y exclusivo para dispositivos Android e iOS. Si desea desbloquear todo en el juego y divertirse más, puede utilizar Stumble Guys Mod APK, que le da monedas y gemas ilimitadas, desbloqueado trajes y emotes, sin anuncios, y más. Sin embargo, también debes ser consciente de los riesgos y desventajas de usar la versión modificada del juego, como problemas de compatibilidad, problemas de seguridad, problemas de prohibición y cuestiones éticas. También debes seguir algunos consejos y trucos para ganar en el juego, como configurar tus controles, usar la física de tu personaje, usar los desafíos y ser inteligente y estratégico. Esperamos que este artículo le ayudó a aprender más acerca de Stumble Guys Mod APK y cómo usarlo. Divertirse y disfrutar del juego! <h2>Preguntas frecuentes</h2>
-<h3>Q: ¿Es Stumble Guys Mod APK seguro de usar? </h3>
-
-A: Stumble Guys Mod APK puede violar los términos y condiciones del juego, y puede obtener prohibido jugar en línea o acceder a su cuenta. Usted debe utilizar Stumble Guys Mod APK a su propio riesgo, y respetar los derechos de los desarrolladores y otros jugadores. <h3>Q: Cómo actualizar Stumble Guys Mod APK? </h3>
-A: Stumble Guys Mod APK no puede funcionar con algunas actualizaciones del juego. Usted debe comprobar el sitio web donde ha descargado Stumble Guys Mod APK para las nuevas versiones o actualizaciones. También debe desinstalar la versión anterior de Stumble Guys Mod APK antes de instalar el nuevo. <h3>Q: Cómo desinstalar Stumble Guys Mod APK? </h3>
-R: Si desea desinstalar Stumble Guys Mod APK de su dispositivo, puede seguir estos pasos: - Paso 1: Ir a la configuración del dispositivo y encontrar las aplicaciones o menú. - Paso 2: Encuentra Stumble Guys Mod APK en la lista de aplicaciones y toque en él. - Paso 3: Toque en el botón de desinstalación y confirmar su acción. - Paso 4: Espere a que la desinstalación para terminar y reiniciar el dispositivo. <h3>Q: Cómo ponerse en contacto con los desarrolladores de Stumble Guys? </h3>
-R: Si tienes alguna pregunta, comentario o sugerencia para los desarrolladores de Stumble Guys, puedes contactarlos a través de sus cuentas oficiales de redes sociales o su dirección de correo electrónico. Estos son algunos de sus datos de contacto: - Facebook: https://www.facebook.com/StumbleGuys/ - Twitter: https://twitter.com/StumbleGuys - Instagram: https:/www.instagram.com/stumbleguys/ - Correo electrónico: [email protected]</p>
-<h2>chicos de la escuela apk</h2><br /><p><b><b>DOWNLOAD</b> ✺✺✺ <a href="https://bltlly.com/2v6IUC">https://bltlly.com/2v6IUC</a></b></p><br /><br /> 64aa2da5cf<br />
-<br />
-<br />
spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/msgpack/ext.py
DELETED
@@ -1,193 +0,0 @@
-# coding: utf-8
-from collections import namedtuple
-import datetime
-import sys
-import struct
-
-
-PY2 = sys.version_info[0] == 2
-
-if PY2:
-    int_types = (int, long)
-    _utc = None
-else:
-    int_types = int
-    try:
-        _utc = datetime.timezone.utc
-    except AttributeError:
-        _utc = datetime.timezone(datetime.timedelta(0))
-
-
-class ExtType(namedtuple("ExtType", "code data")):
-    """ExtType represents ext type in msgpack."""
-
-    def __new__(cls, code, data):
-        if not isinstance(code, int):
-            raise TypeError("code must be int")
-        if not isinstance(data, bytes):
-            raise TypeError("data must be bytes")
-        if not 0 <= code <= 127:
-            raise ValueError("code must be 0~127")
-        return super(ExtType, cls).__new__(cls, code, data)
-
-
-class Timestamp(object):
-    """Timestamp represents the Timestamp extension type in msgpack.
-
-    When built with Cython, msgpack uses C methods to pack and unpack `Timestamp`. When using pure-Python
-    msgpack, :func:`to_bytes` and :func:`from_bytes` are used to pack and unpack `Timestamp`.
-
-    This class is immutable: Do not override seconds and nanoseconds.
-    """
-
-    __slots__ = ["seconds", "nanoseconds"]
-
-    def __init__(self, seconds, nanoseconds=0):
-        """Initialize a Timestamp object.
-
-        :param int seconds:
-            Number of seconds since the UNIX epoch (00:00:00 UTC Jan 1 1970, minus leap seconds).
-            May be negative.
-
-        :param int nanoseconds:
-            Number of nanoseconds to add to `seconds` to get fractional time.
-            Maximum is 999_999_999. Default is 0.
-
-        Note: Negative times (before the UNIX epoch) are represented as negative seconds + positive ns.
-        """
-        if not isinstance(seconds, int_types):
-            raise TypeError("seconds must be an integer")
-        if not isinstance(nanoseconds, int_types):
-            raise TypeError("nanoseconds must be an integer")
-        if not (0 <= nanoseconds < 10**9):
-            raise ValueError(
-                "nanoseconds must be a non-negative integer less than 999999999."
-            )
-        self.seconds = seconds
-        self.nanoseconds = nanoseconds
-
-    def __repr__(self):
-        """String representation of Timestamp."""
-        return "Timestamp(seconds={0}, nanoseconds={1})".format(
-            self.seconds, self.nanoseconds
-        )
-
-    def __eq__(self, other):
-        """Check for equality with another Timestamp object"""
-        if type(other) is self.__class__:
-            return (
-                self.seconds == other.seconds and self.nanoseconds == other.nanoseconds
-            )
-        return False
-
-    def __ne__(self, other):
-        """not-equals method (see :func:`__eq__()`)"""
-        return not self.__eq__(other)
-
-    def __hash__(self):
-        return hash((self.seconds, self.nanoseconds))
-
-    @staticmethod
-    def from_bytes(b):
-        """Unpack bytes into a `Timestamp` object.
-
-        Used for pure-Python msgpack unpacking.
-
-        :param b: Payload from msgpack ext message with code -1
-        :type b: bytes
-
-        :returns: Timestamp object unpacked from msgpack ext payload
-        :rtype: Timestamp
-        """
-        if len(b) == 4:
-            seconds = struct.unpack("!L", b)[0]
-            nanoseconds = 0
-        elif len(b) == 8:
-            data64 = struct.unpack("!Q", b)[0]
-            seconds = data64 & 0x00000003FFFFFFFF
-            nanoseconds = data64 >> 34
-        elif len(b) == 12:
-            nanoseconds, seconds = struct.unpack("!Iq", b)
-        else:
-            raise ValueError(
-                "Timestamp type can only be created from 32, 64, or 96-bit byte objects"
-            )
-        return Timestamp(seconds, nanoseconds)
-
-    def to_bytes(self):
-        """Pack this Timestamp object into bytes.
-
-        Used for pure-Python msgpack packing.
-
-        :returns data: Payload for EXT message with code -1 (timestamp type)
-        :rtype: bytes
-        """
-        if (self.seconds >> 34) == 0:  # seconds is non-negative and fits in 34 bits
-            data64 = self.nanoseconds << 34 | self.seconds
-            if data64 & 0xFFFFFFFF00000000 == 0:
-                # nanoseconds is zero and seconds < 2**32, so timestamp 32
-                data = struct.pack("!L", data64)
-            else:
-                # timestamp 64
-                data = struct.pack("!Q", data64)
-        else:
-            # timestamp 96
-            data = struct.pack("!Iq", self.nanoseconds, self.seconds)
-        return data
-
-    @staticmethod
-    def from_unix(unix_sec):
-        """Create a Timestamp from posix timestamp in seconds.
-
-        :param unix_float: Posix timestamp in seconds.
-        :type unix_float: int or float.
-        """
-        seconds = int(unix_sec // 1)
-        nanoseconds = int((unix_sec % 1) * 10**9)
-        return Timestamp(seconds, nanoseconds)
-
-    def to_unix(self):
-        """Get the timestamp as a floating-point value.
-
-        :returns: posix timestamp
-        :rtype: float
-        """
-        return self.seconds + self.nanoseconds / 1e9
-
-    @staticmethod
-    def from_unix_nano(unix_ns):
-        """Create a Timestamp from posix timestamp in nanoseconds.
-
-        :param int unix_ns: Posix timestamp in nanoseconds.
-        :rtype: Timestamp
-        """
-        return Timestamp(*divmod(unix_ns, 10**9))
-
-    def to_unix_nano(self):
-        """Get the timestamp as a unixtime in nanoseconds.
-
-        :returns: posix timestamp in nanoseconds
-        :rtype: int
-        """
-        return self.seconds * 10**9 + self.nanoseconds
-
-    def to_datetime(self):
-        """Get the timestamp as a UTC datetime.
-
-        Python 2 is not supported.
-
-        :rtype: datetime.
-        """
-        return datetime.datetime.fromtimestamp(0, _utc) + datetime.timedelta(
-            seconds=self.to_unix()
-        )
-
-    @staticmethod
-    def from_datetime(dt):
-        """Create a Timestamp from datetime with tzinfo.
-
-        Python 2 is not supported.
-
-        :rtype: Timestamp
-        """
-        return Timestamp.from_unix(dt.timestamp())
spaces/Big-Web/MMSD/env/Lib/site-packages/s3transfer/subscribers.py
DELETED
@@ -1,92 +0,0 @@
-# Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License"). You
-# may not use this file except in compliance with the License. A copy of
-# the License is located at
-#
-#     http://aws.amazon.com/apache2.0/
-#
-# or in the "license" file accompanying this file. This file is
-# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
-# ANY KIND, either express or implied. See the License for the specific
-# language governing permissions and limitations under the License.
-from s3transfer.compat import accepts_kwargs
-from s3transfer.exceptions import InvalidSubscriberMethodError
-
-
-class BaseSubscriber:
-    """The base subscriber class
-
-    It is recommended that all subscriber implementations subclass and then
-    override the subscription methods (i.e. on_{subsribe_type}() methods).
-    """
-
-    VALID_SUBSCRIBER_TYPES = ['queued', 'progress', 'done']
-
-    def __new__(cls, *args, **kwargs):
-        cls._validate_subscriber_methods()
-        return super().__new__(cls)
-
-    @classmethod
-    def _validate_subscriber_methods(cls):
-        for subscriber_type in cls.VALID_SUBSCRIBER_TYPES:
-            subscriber_method = getattr(cls, 'on_' + subscriber_type)
-            if not callable(subscriber_method):
-                raise InvalidSubscriberMethodError(
-                    'Subscriber method %s must be callable.'
-                    % subscriber_method
-                )
-
-            if not accepts_kwargs(subscriber_method):
-                raise InvalidSubscriberMethodError(
-                    'Subscriber method %s must accept keyword '
-                    'arguments (**kwargs)' % subscriber_method
-                )
-
-    def on_queued(self, future, **kwargs):
-        """Callback to be invoked when transfer request gets queued
-
-        This callback can be useful for:
-
-        * Keeping track of how many transfers have been requested
-        * Providing the expected transfer size through
-          future.meta.provide_transfer_size() so a HeadObject would not
-          need to be made for copies and downloads.
-
-        :type future: s3transfer.futures.TransferFuture
-        :param future: The TransferFuture representing the requested transfer.
-        """
-        pass
-
-    def on_progress(self, future, bytes_transferred, **kwargs):
-        """Callback to be invoked when progress is made on transfer
-
-        This callback can be useful for:
-
-        * Recording and displaying progress
-
-        :type future: s3transfer.futures.TransferFuture
-        :param future: The TransferFuture representing the requested transfer.
-
-        :type bytes_transferred: int
-        :param bytes_transferred: The number of bytes transferred for that
-            invocation of the callback. Note that a negative amount can be
-            provided, which usually indicates that an in-progress request
-            needed to be retried and thus progress was rewound.
-        """
-        pass
-
-    def on_done(self, future, **kwargs):
-        """Callback to be invoked once a transfer is done
-
-        This callback can be useful for:
-
-        * Recording and displaying whether the transfer succeeded or
-          failed using future.result()
-        * Running some task after the transfer completed like changing
-          the last modified time of a downloaded file.
-
-        :type future: s3transfer.futures.TransferFuture
-        :param future: The TransferFuture representing the requested transfer.
-        """
-        pass
spaces/CVPR/LIVE/thrust/thrust/type_traits/logical_metafunctions.h
DELETED
@@ -1,179 +0,0 @@
-///////////////////////////////////////////////////////////////////////////////
-// Copyright (c) 2018 NVIDIA Corporation
-// Copyright (c) 2015-2018 Bryce Adelstein Lelbach aka wash
-//
-// Distributed under the Boost Software License, Version 1.0. (See accompanying
-// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
-///////////////////////////////////////////////////////////////////////////////
-
-/*! \file logical_metafunctions.h
- *  \brief C++17's \c conjunction, \c disjunction, and \c negation metafunctions.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-#include <thrust/detail/cpp11_required.h>
-
-#if THRUST_CPP_DIALECT >= 2011
-
-#include <type_traits>
-
-namespace thrust
-{
-
-#if THRUST_CPP_DIALECT >= 2017
-
-/// An \c integral_constant whose value is <code>(... && Ts::value)</code>.
-template <typename... Ts>
-using conjunction = std::conjunction<Ts...>;
-
-/// A <code>constexpr bool</code> whose value is <code>(... && Ts::value)</code>.
-template <typename... Ts>
-constexpr bool conjunction_v = conjunction<Ts...>::value;
-
-/// An \c integral_constant whose value is <code>(... || Ts::value)</code>.
-template <typename... Ts>
-using disjunction = std::disjunction<Ts...>;
-
-/// A <code>constexpr bool</code> whose value is <code>(... || Ts::value)</code>.
-template <typename... Ts>
-constexpr bool disjunction_v = disjunction<Ts...>::value;
-
-/// An \c integral_constant whose value is <code>!Ts::value</code>.
-template <typename T>
-using negation = std::negation<T>;
-
-/// A <code>constexpr bool</code> whose value is <code>!Ts::value</code>.
-template <typename T>
-constexpr bool negation_v = negation<T>::value;
-
-///////////////////////////////////////////////////////////////////////////////
-
-#else // Older than C++17.
-
-/// An \c integral_constant whose value is <code>(... && Ts::value)</code>.
-template <typename... Ts>
-struct conjunction;
-
-#if THRUST_CPP_DIALECT >= 2014
-/// A <code>constexpr bool</code> whose value is <code>(... && Ts::value)</code>.
-template <typename... Ts>
-constexpr bool conjunction_v = conjunction<Ts...>::value;
-#endif
-
-template <>
-struct conjunction<> : std::true_type {};
-
-template <typename T>
-struct conjunction<T> : T {};
-
-template <typename T0, typename T1>
-struct conjunction<T0, T1> : std::conditional<T0::value, T1, T0>::type {};
-
-template<typename T0, typename T1, typename T2, typename... TN>
-struct conjunction<T0, T1, T2, TN...>
-  : std::conditional<T0::value, conjunction<T1, T2, TN...>, T0>::type {};
-
-///////////////////////////////////////////////////////////////////////////////
-
-/// An \c integral_constant whose value is <code>(... || Ts::value)</code>.
-template <typename... Ts>
-struct disjunction;
-
-#if THRUST_CPP_DIALECT >= 2014
-/// A <code>constexpr bool</code> whose value is <code>(... || Ts::value)</code>.
-template <typename... Ts>
-constexpr bool disjunction_v = disjunction<Ts...>::value;
-#endif
-
-template <>
-struct disjunction<> : std::false_type {};
-
-template <typename T>
-struct disjunction<T> : T {};
-
-template <typename T0, typename... TN>
-struct disjunction<T0, TN...>
-  : std::conditional<T0::value != false, T0, disjunction<TN...> >::type {};
-
-///////////////////////////////////////////////////////////////////////////////
-
-/// An \c integral_constant whose value is <code>!T::value</code>.
-template <typename T>
-struct negation;
-
-#if THRUST_CPP_DIALECT >= 2014
-/// A <code>constexpr bool</code> whose value is <code>!T::value</code>.
-template <typename T>
-constexpr bool negation_v = negation<T>::value;
-#endif
-
-template <typename T>
-struct negation : std::integral_constant<bool, !T::value> {};
-
-#endif // THRUST_CPP_DIALECT >= 2017
-
-///////////////////////////////////////////////////////////////////////////////
-
-/// An \c integral_constant whose value is <code>(... && Bs)</code>.
-template <bool... Bs>
-struct conjunction_value;
-
-#if THRUST_CPP_DIALECT >= 2014
-/// A <code>constexpr bool</code> whose value is <code>(... && Bs)</code>.
-template <bool... Bs>
-constexpr bool conjunction_value_v = conjunction_value<Bs...>::value;
-#endif
-
-template <>
-struct conjunction_value<> : std::true_type {};
-
-template <bool B>
-struct conjunction_value<B> : std::integral_constant<bool, B> {};
-
-template <bool B0, bool... BN>
-struct conjunction_value<B0, BN...>
-  : std::integral_constant<bool, B0 && conjunction_value<BN...>::value> {};
-
-///////////////////////////////////////////////////////////////////////////////
-
-/// An \c integral_constant whose value is <code>(... || Bs)</code>.
-template <bool... Bs>
-struct disjunction_value;
-
-#if THRUST_CPP_DIALECT >= 2014
-/// A <code>constexpr bool</code> whose value is <code>(... || Bs)</code>.
-template <bool... Bs>
-constexpr bool disjunction_value_v = disjunction_value<Bs...>::value;
-#endif
-
-template <>
-struct disjunction_value<> : std::false_type {};
-
-template <bool B>
-struct disjunction_value<B> : std::integral_constant<bool, B> {};
-
-template <bool B0, bool... BN>
-struct disjunction_value<B0, BN...>
-  : std::integral_constant<bool, B0 || disjunction_value<BN...>::value> {};
-
-///////////////////////////////////////////////////////////////////////////////
-
-/// An \c integral_constant whose value is <code>!B</code>.
-template <bool B>
-struct negation_value;
-
-#if THRUST_CPP_DIALECT >= 2014
-/// A <code>constexpr bool</code> whose value is <code>!B</code>.
-template <bool B>
-constexpr bool negation_value_v = negation_value<B>::value;
-#endif
-
-template <bool B>
-struct negation_value : std::integral_constant<bool, !B> {};
-
-} // end namespace thrust
-
-#endif // THRUST_CPP_DIALECT >= 2011
-
spaces/CVPR/lama-example/saicinpainting/evaluation/losses/base_loss.py
DELETED
@@ -1,528 +0,0 @@
import logging
from abc import abstractmethod, ABC

import numpy as np
import sklearn
import sklearn.svm
import torch
import torch.nn as nn
import torch.nn.functional as F
from joblib import Parallel, delayed
from scipy import linalg

from models.ade20k import SegmentationModule, NUM_CLASS, segm_options
from .fid.inception import InceptionV3
from .lpips import PerceptualLoss
from .ssim import SSIM

LOGGER = logging.getLogger(__name__)


def get_groupings(groups):
    """
    :param groups: group numbers for respective elements
    :return: dict of kind {group_idx: indices of the corresponding group elements}
    """
    label_groups, count_groups = np.unique(groups, return_counts=True)

    indices = np.argsort(groups)

    grouping = dict()
    cur_start = 0
    for label, count in zip(label_groups, count_groups):
        cur_end = cur_start + count
        cur_indices = indices[cur_start:cur_end]
        grouping[label] = cur_indices
        cur_start = cur_end
    return grouping


class EvaluatorScore(nn.Module):
    @abstractmethod
    def forward(self, pred_batch, target_batch, mask):
        pass

    @abstractmethod
    def get_value(self, groups=None, states=None):
        pass

    @abstractmethod
    def reset(self):
        pass


class PairwiseScore(EvaluatorScore, ABC):
    def __init__(self):
        super().__init__()
        self.individual_values = None

    def get_value(self, groups=None, states=None):
        """
        :param groups:
        :return:
            total_results: dict of kind {'mean': score mean, 'std': score std}
            group_results: None, if groups is None;
                else dict {group_idx: {'mean': score mean among group, 'std': score std among group}}
        """
        individual_values = torch.stack(states, dim=0).reshape(-1).cpu().numpy() if states is not None \
            else self.individual_values

        total_results = {
            'mean': individual_values.mean(),
            'std': individual_values.std()
        }

        if groups is None:
            return total_results, None

        group_results = dict()
        grouping = get_groupings(groups)
        for label, index in grouping.items():
            group_scores = individual_values[index]
            group_results[label] = {
                'mean': group_scores.mean(),
                'std': group_scores.std()
            }
        return total_results, group_results

    def reset(self):
        self.individual_values = []


class SSIMScore(PairwiseScore):
    def __init__(self, window_size=11):
        super().__init__()
        self.score = SSIM(window_size=window_size, size_average=False).eval()
        self.reset()

    def forward(self, pred_batch, target_batch, mask=None):
        batch_values = self.score(pred_batch, target_batch)
        self.individual_values = np.hstack([
            self.individual_values, batch_values.detach().cpu().numpy()
        ])
        return batch_values


class LPIPSScore(PairwiseScore):
    def __init__(self, model='net-lin', net='vgg', model_path=None, use_gpu=True):
        super().__init__()
        self.score = PerceptualLoss(model=model, net=net, model_path=model_path,
                                    use_gpu=use_gpu, spatial=False).eval()
        self.reset()

    def forward(self, pred_batch, target_batch, mask=None):
        batch_values = self.score(pred_batch, target_batch).flatten()
        self.individual_values = np.hstack([
            self.individual_values, batch_values.detach().cpu().numpy()
        ])
        return batch_values


def fid_calculate_activation_statistics(act):
    mu = np.mean(act, axis=0)
    sigma = np.cov(act, rowvar=False)
    return mu, sigma


def calculate_frechet_distance(activations_pred, activations_target, eps=1e-6):
    mu1, sigma1 = fid_calculate_activation_statistics(activations_pred)
    mu2, sigma2 = fid_calculate_activation_statistics(activations_target)

    diff = mu1 - mu2

    # Product might be almost singular
    covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False)
    if not np.isfinite(covmean).all():
        msg = ('fid calculation produces singular product; '
               'adding %s to diagonal of cov estimates') % eps
        LOGGER.warning(msg)
        offset = np.eye(sigma1.shape[0]) * eps
        covmean = linalg.sqrtm((sigma1 + offset).dot(sigma2 + offset))

    # Numerical error might give slight imaginary component
    if np.iscomplexobj(covmean):
        # if not np.allclose(np.diagonal(covmean).imag, 0, atol=1e-3):
        if not np.allclose(np.diagonal(covmean).imag, 0, atol=1e-2):
            m = np.max(np.abs(covmean.imag))
            raise ValueError('Imaginary component {}'.format(m))
        covmean = covmean.real

    tr_covmean = np.trace(covmean)

    return (diff.dot(diff) + np.trace(sigma1) +
            np.trace(sigma2) - 2 * tr_covmean)


class FIDScore(EvaluatorScore):
    def __init__(self, dims=2048, eps=1e-6):
        LOGGER.info("FIDscore init called")
        super().__init__()
        if getattr(FIDScore, '_MODEL', None) is None:
            block_idx = InceptionV3.BLOCK_INDEX_BY_DIM[dims]
            FIDScore._MODEL = InceptionV3([block_idx]).eval()
        self.model = FIDScore._MODEL
        self.eps = eps
        self.reset()
        LOGGER.info("FIDscore init done")

    def forward(self, pred_batch, target_batch, mask=None):
        activations_pred = self._get_activations(pred_batch)
        activations_target = self._get_activations(target_batch)

        self.activations_pred.append(activations_pred.detach().cpu())
        self.activations_target.append(activations_target.detach().cpu())

        return activations_pred, activations_target

    def get_value(self, groups=None, states=None):
        LOGGER.info("FIDscore get_value called")
        activations_pred, activations_target = zip(*states) if states is not None \
            else (self.activations_pred, self.activations_target)
        activations_pred = torch.cat(activations_pred).cpu().numpy()
        activations_target = torch.cat(activations_target).cpu().numpy()

        total_distance = calculate_frechet_distance(activations_pred, activations_target, eps=self.eps)
        total_results = dict(mean=total_distance)

        if groups is None:
            group_results = None
        else:
            group_results = dict()
            grouping = get_groupings(groups)
            for label, index in grouping.items():
                if len(index) > 1:
                    group_distance = calculate_frechet_distance(activations_pred[index], activations_target[index],
                                                                eps=self.eps)
                    group_results[label] = dict(mean=group_distance)
                else:
                    group_results[label] = dict(mean=float('nan'))

        self.reset()

        LOGGER.info("FIDscore get_value done")

        return total_results, group_results

    def reset(self):
        self.activations_pred = []
        self.activations_target = []

    def _get_activations(self, batch):
        activations = self.model(batch)[0]
        if activations.shape[2] != 1 or activations.shape[3] != 1:
            assert False, \
                'We should not have got here, because Inception always scales inputs to 299x299'
            # activations = F.adaptive_avg_pool2d(activations, output_size=(1, 1))
        activations = activations.squeeze(-1).squeeze(-1)
        return activations


class SegmentationAwareScore(EvaluatorScore):
    def __init__(self, weights_path):
        super().__init__()
        self.segm_network = SegmentationModule(weights_path=weights_path, use_default_normalization=True).eval()
        self.target_class_freq_by_image_total = []
        self.target_class_freq_by_image_mask = []
        self.pred_class_freq_by_image_mask = []

    def forward(self, pred_batch, target_batch, mask):
        pred_segm_flat = self.segm_network.predict(pred_batch)[0].view(pred_batch.shape[0], -1).long().detach().cpu().numpy()
        target_segm_flat = self.segm_network.predict(target_batch)[0].view(pred_batch.shape[0], -1).long().detach().cpu().numpy()
        mask_flat = (mask.view(mask.shape[0], -1) > 0.5).detach().cpu().numpy()

        batch_target_class_freq_total = []
        batch_target_class_freq_mask = []
        batch_pred_class_freq_mask = []

        for cur_pred_segm, cur_target_segm, cur_mask in zip(pred_segm_flat, target_segm_flat, mask_flat):
            cur_target_class_freq_total = np.bincount(cur_target_segm, minlength=NUM_CLASS)[None, ...]
            cur_target_class_freq_mask = np.bincount(cur_target_segm[cur_mask], minlength=NUM_CLASS)[None, ...]
            cur_pred_class_freq_mask = np.bincount(cur_pred_segm[cur_mask], minlength=NUM_CLASS)[None, ...]

            self.target_class_freq_by_image_total.append(cur_target_class_freq_total)
            self.target_class_freq_by_image_mask.append(cur_target_class_freq_mask)
            self.pred_class_freq_by_image_mask.append(cur_pred_class_freq_mask)

            batch_target_class_freq_total.append(cur_target_class_freq_total)
            batch_target_class_freq_mask.append(cur_target_class_freq_mask)
            batch_pred_class_freq_mask.append(cur_pred_class_freq_mask)

        batch_target_class_freq_total = np.concatenate(batch_target_class_freq_total, axis=0)
        batch_target_class_freq_mask = np.concatenate(batch_target_class_freq_mask, axis=0)
        batch_pred_class_freq_mask = np.concatenate(batch_pred_class_freq_mask, axis=0)
        return batch_target_class_freq_total, batch_target_class_freq_mask, batch_pred_class_freq_mask

    def reset(self):
        super().reset()
        self.target_class_freq_by_image_total = []
        self.target_class_freq_by_image_mask = []
        self.pred_class_freq_by_image_mask = []


def distribute_values_to_classes(target_class_freq_by_image_mask, values, idx2name):
    assert target_class_freq_by_image_mask.ndim == 2 and target_class_freq_by_image_mask.shape[0] == values.shape[0]
    total_class_freq = target_class_freq_by_image_mask.sum(0)
    distr_values = (target_class_freq_by_image_mask * values[..., None]).sum(0)
    result = distr_values / (total_class_freq + 1e-3)
    return {idx2name[i]: val for i, val in enumerate(result) if total_class_freq[i] > 0}


def get_segmentation_idx2name():
    return {i - 1: name for i, name in segm_options['classes'].set_index('Idx', drop=True)['Name'].to_dict().items()}


class SegmentationAwarePairwiseScore(SegmentationAwareScore):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.individual_values = []
        self.segm_idx2name = get_segmentation_idx2name()

    def forward(self, pred_batch, target_batch, mask):
        cur_class_stats = super().forward(pred_batch, target_batch, mask)
        score_values = self.calc_score(pred_batch, target_batch, mask)
        self.individual_values.append(score_values)
        return cur_class_stats + (score_values,)

    @abstractmethod
    def calc_score(self, pred_batch, target_batch, mask):
        raise NotImplementedError()

    def get_value(self, groups=None, states=None):
        """
        :param groups:
        :return:
            total_results: dict of kind {'mean': score mean, 'std': score std}
            group_results: None, if groups is None;
                else dict {group_idx: {'mean': score mean among group, 'std': score std among group}}
        """
        if states is not None:
            (target_class_freq_by_image_total,
             target_class_freq_by_image_mask,
             pred_class_freq_by_image_mask,
             individual_values) = states
        else:
            target_class_freq_by_image_total = self.target_class_freq_by_image_total
            target_class_freq_by_image_mask = self.target_class_freq_by_image_mask
            pred_class_freq_by_image_mask = self.pred_class_freq_by_image_mask
            individual_values = self.individual_values

        target_class_freq_by_image_total = np.concatenate(target_class_freq_by_image_total, axis=0)
        target_class_freq_by_image_mask = np.concatenate(target_class_freq_by_image_mask, axis=0)
        pred_class_freq_by_image_mask = np.concatenate(pred_class_freq_by_image_mask, axis=0)
        individual_values = np.concatenate(individual_values, axis=0)

        total_results = {
            'mean': individual_values.mean(),
            'std': individual_values.std(),
            **distribute_values_to_classes(target_class_freq_by_image_mask, individual_values, self.segm_idx2name)
        }

        if groups is None:
            return total_results, None

        group_results = dict()
        grouping = get_groupings(groups)
        for label, index in grouping.items():
            group_class_freq = target_class_freq_by_image_mask[index]
            group_scores = individual_values[index]
            group_results[label] = {
                'mean': group_scores.mean(),
                'std': group_scores.std(),
                **distribute_values_to_classes(group_class_freq, group_scores, self.segm_idx2name)
            }
        return total_results, group_results

    def reset(self):
        super().reset()
        self.individual_values = []


class SegmentationClassStats(SegmentationAwarePairwiseScore):
    def calc_score(self, pred_batch, target_batch, mask):
        return 0

    def get_value(self, groups=None, states=None):
        """
        :param groups:
        :return:
            total_results: dict of kind {'mean': score mean, 'std': score std}
            group_results: None, if groups is None;
                else dict {group_idx: {'mean': score mean among group, 'std': score std among group}}
        """
        if states is not None:
            (target_class_freq_by_image_total,
             target_class_freq_by_image_mask,
             pred_class_freq_by_image_mask,
             _) = states
        else:
            target_class_freq_by_image_total = self.target_class_freq_by_image_total
            target_class_freq_by_image_mask = self.target_class_freq_by_image_mask
            pred_class_freq_by_image_mask = self.pred_class_freq_by_image_mask

        target_class_freq_by_image_total = np.concatenate(target_class_freq_by_image_total, axis=0)
        target_class_freq_by_image_mask = np.concatenate(target_class_freq_by_image_mask, axis=0)
        pred_class_freq_by_image_mask = np.concatenate(pred_class_freq_by_image_mask, axis=0)

        target_class_freq_by_image_total_marginal = target_class_freq_by_image_total.sum(0).astype('float32')
        target_class_freq_by_image_total_marginal /= target_class_freq_by_image_total_marginal.sum()

        target_class_freq_by_image_mask_marginal = target_class_freq_by_image_mask.sum(0).astype('float32')
        target_class_freq_by_image_mask_marginal /= target_class_freq_by_image_mask_marginal.sum()

        pred_class_freq_diff = (pred_class_freq_by_image_mask - target_class_freq_by_image_mask).sum(0) / (target_class_freq_by_image_mask.sum(0) + 1e-3)

        total_results = dict()
        total_results.update({f'total_freq/{self.segm_idx2name[i]}': v
                              for i, v in enumerate(target_class_freq_by_image_total_marginal)
                              if v > 0})
        total_results.update({f'mask_freq/{self.segm_idx2name[i]}': v
                              for i, v in enumerate(target_class_freq_by_image_mask_marginal)
                              if v > 0})
        total_results.update({f'mask_freq_diff/{self.segm_idx2name[i]}': v
                              for i, v in enumerate(pred_class_freq_diff)
                              if target_class_freq_by_image_total_marginal[i] > 0})

        if groups is None:
            return total_results, None

        group_results = dict()
        grouping = get_groupings(groups)
        for label, index in grouping.items():
            group_target_class_freq_by_image_total = target_class_freq_by_image_total[index]
            group_target_class_freq_by_image_mask = target_class_freq_by_image_mask[index]
            group_pred_class_freq_by_image_mask = pred_class_freq_by_image_mask[index]

            group_target_class_freq_by_image_total_marginal = group_target_class_freq_by_image_total.sum(0).astype('float32')
            group_target_class_freq_by_image_total_marginal /= group_target_class_freq_by_image_total_marginal.sum()

            group_target_class_freq_by_image_mask_marginal = group_target_class_freq_by_image_mask.sum(0).astype('float32')
            group_target_class_freq_by_image_mask_marginal /= group_target_class_freq_by_image_mask_marginal.sum()

            group_pred_class_freq_diff = (group_pred_class_freq_by_image_mask - group_target_class_freq_by_image_mask).sum(0) / (
                    group_target_class_freq_by_image_mask.sum(0) + 1e-3)

            cur_group_results = dict()
            cur_group_results.update({f'total_freq/{self.segm_idx2name[i]}': v
                                      for i, v in enumerate(group_target_class_freq_by_image_total_marginal)
                                      if v > 0})
            cur_group_results.update({f'mask_freq/{self.segm_idx2name[i]}': v
                                      for i, v in enumerate(group_target_class_freq_by_image_mask_marginal)
                                      if v > 0})
            cur_group_results.update({f'mask_freq_diff/{self.segm_idx2name[i]}': v
                                      for i, v in enumerate(group_pred_class_freq_diff)
                                      if group_target_class_freq_by_image_total_marginal[i] > 0})

            group_results[label] = cur_group_results
        return total_results, group_results


class SegmentationAwareSSIM(SegmentationAwarePairwiseScore):
    def __init__(self, *args, window_size=11, **kwargs):
        super().__init__(*args, **kwargs)
        self.score_impl = SSIM(window_size=window_size, size_average=False).eval()

    def calc_score(self, pred_batch, target_batch, mask):
        return self.score_impl(pred_batch, target_batch).detach().cpu().numpy()


class SegmentationAwareLPIPS(SegmentationAwarePairwiseScore):
    def __init__(self, *args, model='net-lin', net='vgg', model_path=None, use_gpu=True, **kwargs):
        super().__init__(*args, **kwargs)
        self.score_impl = PerceptualLoss(model=model, net=net, model_path=model_path,
                                         use_gpu=use_gpu, spatial=False).eval()

    def calc_score(self, pred_batch, target_batch, mask):
        return self.score_impl(pred_batch, target_batch).flatten().detach().cpu().numpy()


def calculade_fid_no_img(img_i, activations_pred, activations_target, eps=1e-6):
    activations_pred = activations_pred.copy()
    activations_pred[img_i] = activations_target[img_i]
    return calculate_frechet_distance(activations_pred, activations_target, eps=eps)


class SegmentationAwareFID(SegmentationAwarePairwiseScore):
    def __init__(self, *args, dims=2048, eps=1e-6, n_jobs=-1, **kwargs):
        super().__init__(*args, **kwargs)
        if getattr(FIDScore, '_MODEL', None) is None:
            block_idx = InceptionV3.BLOCK_INDEX_BY_DIM[dims]
            FIDScore._MODEL = InceptionV3([block_idx]).eval()
        self.model = FIDScore._MODEL
        self.eps = eps
        self.n_jobs = n_jobs

    def calc_score(self, pred_batch, target_batch, mask):
        activations_pred = self._get_activations(pred_batch)
        activations_target = self._get_activations(target_batch)
        return activations_pred, activations_target

    def get_value(self, groups=None, states=None):
        """
        :param groups:
        :return:
            total_results: dict of kind {'mean': score mean, 'std': score std}
            group_results: None, if groups is None;
                else dict {group_idx: {'mean': score mean among group, 'std': score std among group}}
        """
        if states is not None:
            (target_class_freq_by_image_total,
             target_class_freq_by_image_mask,
             pred_class_freq_by_image_mask,
             activation_pairs) = states
        else:
            target_class_freq_by_image_total = self.target_class_freq_by_image_total
            target_class_freq_by_image_mask = self.target_class_freq_by_image_mask
            pred_class_freq_by_image_mask = self.pred_class_freq_by_image_mask
            activation_pairs = self.individual_values

        target_class_freq_by_image_total = np.concatenate(target_class_freq_by_image_total, axis=0)
        target_class_freq_by_image_mask = np.concatenate(target_class_freq_by_image_mask, axis=0)
        pred_class_freq_by_image_mask = np.concatenate(pred_class_freq_by_image_mask, axis=0)
        activations_pred, activations_target = zip(*activation_pairs)
        activations_pred = np.concatenate(activations_pred, axis=0)
        activations_target = np.concatenate(activations_target, axis=0)

        total_results = {
            'mean': calculate_frechet_distance(activations_pred, activations_target, eps=self.eps),
            'std': 0,
            **self.distribute_fid_to_classes(target_class_freq_by_image_mask, activations_pred, activations_target)
        }

        if groups is None:
            return total_results, None

        group_results = dict()
        grouping = get_groupings(groups)
        for label, index in grouping.items():
            if len(index) > 1:
                group_activations_pred = activations_pred[index]
                group_activations_target = activations_target[index]
                group_class_freq = target_class_freq_by_image_mask[index]
                group_results[label] = {
                    'mean': calculate_frechet_distance(group_activations_pred, group_activations_target, eps=self.eps),
                    'std': 0,
                    **self.distribute_fid_to_classes(group_class_freq,
                                                     group_activations_pred,
                                                     group_activations_target)
                }
            else:
                group_results[label] = dict(mean=float('nan'), std=0)
        return total_results, group_results

    def distribute_fid_to_classes(self, class_freq, activations_pred, activations_target):
        real_fid = calculate_frechet_distance(activations_pred, activations_target, eps=self.eps)

        fid_no_images = Parallel(n_jobs=self.n_jobs)(
            delayed(calculade_fid_no_img)(img_i, activations_pred, activations_target, eps=self.eps)
            for img_i in range(activations_pred.shape[0])
        )
        errors = real_fid - fid_no_images
        return distribute_values_to_classes(class_freq, errors, self.segm_idx2name)

    def _get_activations(self, batch):
        activations = self.model(batch)[0]
        if activations.shape[2] != 1 or activations.shape[3] != 1:
            activations = F.adaptive_avg_pool2d(activations, output_size=(1, 1))
        activations = activations.squeeze(-1).squeeze(-1).detach().cpu().numpy()
        return activations
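The evaluators above are driven batch by batch and then aggregated with `get_value`. As a rough usage sketch (not part of the deleted file; the tensor sizes and random data below are made-up placeholders, in the LaMa evaluation they come from the dataloader), the pairwise SSIM score could be exercised like this:

```python
# Hypothetical smoke test for the pairwise evaluators defined above.
import torch

ssim = SSIMScore(window_size=11)

pred = torch.rand(4, 3, 256, 256)     # pretend inpainting results
target = torch.rand(4, 3, 256, 256)   # pretend ground truth

ssim(pred, target)                     # accumulates per-image SSIM values
totals, per_group = ssim.get_value()   # totals -> {'mean': ..., 'std': ...}
print(totals, per_group)               # per_group is None when no groups are given
```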
spaces/CVPR/lama-example/saicinpainting/training/losses/feature_matching.py
DELETED
@@ -1,33 +0,0 @@
from typing import List

import torch
import torch.nn.functional as F


def masked_l2_loss(pred, target, mask, weight_known, weight_missing):
    per_pixel_l2 = F.mse_loss(pred, target, reduction='none')
    pixel_weights = mask * weight_missing + (1 - mask) * weight_known
    return (pixel_weights * per_pixel_l2).mean()


def masked_l1_loss(pred, target, mask, weight_known, weight_missing):
    per_pixel_l1 = F.l1_loss(pred, target, reduction='none')
    pixel_weights = mask * weight_missing + (1 - mask) * weight_known
    return (pixel_weights * per_pixel_l1).mean()


def feature_matching_loss(fake_features: List[torch.Tensor], target_features: List[torch.Tensor], mask=None):
    if mask is None:
        res = torch.stack([F.mse_loss(fake_feat, target_feat)
                           for fake_feat, target_feat in zip(fake_features, target_features)]).mean()
    else:
        res = 0
        norm = 0
        for fake_feat, target_feat in zip(fake_features, target_features):
            cur_mask = F.interpolate(mask, size=fake_feat.shape[-2:], mode='bilinear', align_corners=False)
            error_weights = 1 - cur_mask
            cur_val = ((fake_feat - target_feat).pow(2) * error_weights).mean()
            res = res + cur_val
            norm += 1
        res = res / norm
    return res
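As a rough, self-contained sketch of how these losses combine (the shapes, the feature lists, and the 10.0 / 0.1 weights below are illustrative assumptions, not values taken from the training configs):

```python
# Hypothetical example: weighted reconstruction loss plus feature matching.
import torch

pred = torch.rand(2, 3, 64, 64)
target = torch.rand(2, 3, 64, 64)
mask = (torch.rand(2, 1, 64, 64) > 0.5).float()   # 1 = missing region

rec = masked_l1_loss(pred, target, mask, weight_known=1.0, weight_missing=10.0)

# Fake discriminator activations at two scales; the loss expects lists of maps.
fake_feats = [torch.rand(2, 8, 32, 32), torch.rand(2, 16, 16, 16)]
real_feats = [f + 0.1 * torch.rand_like(f) for f in fake_feats]
fm = feature_matching_loss(fake_feats, real_feats, mask=mask)

total = rec + 0.1 * fm   # the 0.1 weight is arbitrary here
print(total.item())
```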
spaces/CVPR/lama-example/saicinpainting/training/modules/base.py
DELETED
@@ -1,80 +0,0 @@
import abc
from typing import Tuple, List

import torch
import torch.nn as nn

from saicinpainting.training.modules.depthwise_sep_conv import DepthWiseSeperableConv
from saicinpainting.training.modules.multidilated_conv import MultidilatedConv


class BaseDiscriminator(nn.Module):
    @abc.abstractmethod
    def forward(self, x: torch.Tensor) -> Tuple[torch.Tensor, List[torch.Tensor]]:
        """
        Predict scores and get intermediate activations. Useful for feature matching loss
        :return tuple (scores, list of intermediate activations)
        """
        raise NotImplemented()


def get_conv_block_ctor(kind='default'):
    if not isinstance(kind, str):
        return kind
    if kind == 'default':
        return nn.Conv2d
    if kind == 'depthwise':
        return DepthWiseSeperableConv
    if kind == 'multidilated':
        return MultidilatedConv
    raise ValueError(f'Unknown convolutional block kind {kind}')


def get_norm_layer(kind='bn'):
    if not isinstance(kind, str):
        return kind
    if kind == 'bn':
        return nn.BatchNorm2d
    if kind == 'in':
        return nn.InstanceNorm2d
    raise ValueError(f'Unknown norm block kind {kind}')


def get_activation(kind='tanh'):
    if kind == 'tanh':
        return nn.Tanh()
    if kind == 'sigmoid':
        return nn.Sigmoid()
    if kind is False:
        return nn.Identity()
    raise ValueError(f'Unknown activation kind {kind}')


class SimpleMultiStepGenerator(nn.Module):
    def __init__(self, steps: List[nn.Module]):
        super().__init__()
        self.steps = nn.ModuleList(steps)

    def forward(self, x):
        cur_in = x
        outs = []
        for step in self.steps:
            cur_out = step(cur_in)
            outs.append(cur_out)
            cur_in = torch.cat((cur_in, cur_out), dim=1)
        return torch.cat(outs[::-1], dim=1)


def deconv_factory(kind, ngf, mult, norm_layer, activation, max_features):
    if kind == 'convtranspose':
        return [nn.ConvTranspose2d(min(max_features, ngf * mult),
                                   min(max_features, int(ngf * mult / 2)),
                                   kernel_size=3, stride=2, padding=1, output_padding=1),
                norm_layer(min(max_features, int(ngf * mult / 2))), activation]
    elif kind == 'bilinear':
        return [nn.Upsample(scale_factor=2, mode='bilinear'),
                DepthWiseSeperableConv(min(max_features, ngf * mult),
                                       min(max_features, int(ngf * mult / 2)),
                                       kernel_size=3, stride=1, padding=1),
                norm_layer(min(max_features, int(ngf * mult / 2))), activation]
    else:
        raise Exception(f"Invalid deconv kind: {kind}")
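As an illustrative sketch of how the factory helpers above might be combined when assembling one decoder stage (the channel counts and the 'convtranspose' choice are assumptions for the example, not taken from a specific config):

```python
# Hypothetical assembly of a single upsampling stage from the factories above.
import torch.nn as nn

norm_layer = get_norm_layer('bn')        # -> nn.BatchNorm2d class
activation = get_activation('tanh')      # -> nn.Tanh() instance

# ngf=64, mult=4: 256 -> 128 channels, capped at max_features=512.
stage = nn.Sequential(*deconv_factory('convtranspose', ngf=64, mult=4,
                                      norm_layer=norm_layer, activation=activation,
                                      max_features=512))
print(stage)
```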
spaces/CVPR/regionclip-demo/detectron2/utils/colormap.py
DELETED
@@ -1,140 +0,0 @@
# Copyright (c) Facebook, Inc. and its affiliates.

"""
An awesome colormap for really neat visualizations.
Copied from Detectron, and removed gray colors.
"""

import numpy as np

__all__ = ["colormap", "random_color"]

# fmt: off
# RGB:
_COLORS = np.array(
    [
        0.000, 0.447, 0.741,
        0.850, 0.325, 0.098,
        0.929, 0.694, 0.125,
        0.494, 0.184, 0.556,
        0.466, 0.674, 0.188,
        0.301, 0.745, 0.933,
        0.635, 0.078, 0.184,
        0.300, 0.300, 0.300,
        0.600, 0.600, 0.600,
        1.000, 0.000, 0.000,
        1.000, 0.500, 0.000,
        0.749, 0.749, 0.000,
        0.000, 1.000, 0.000,
        0.000, 0.000, 1.000,
        0.667, 0.000, 1.000,
        0.333, 0.333, 0.000,
        0.333, 0.667, 0.000,
        0.333, 1.000, 0.000,
        0.667, 0.333, 0.000,
        0.667, 0.667, 0.000,
        0.667, 1.000, 0.000,
        1.000, 0.333, 0.000,
        1.000, 0.667, 0.000,
        1.000, 1.000, 0.000,
        0.000, 0.333, 0.500,
        0.000, 0.667, 0.500,
        0.000, 1.000, 0.500,
        0.333, 0.000, 0.500,
        0.333, 0.333, 0.500,
        0.333, 0.667, 0.500,
        0.333, 1.000, 0.500,
        0.667, 0.000, 0.500,
        0.667, 0.333, 0.500,
        0.667, 0.667, 0.500,
        0.667, 1.000, 0.500,
        1.000, 0.000, 0.500,
        1.000, 0.333, 0.500,
        1.000, 0.667, 0.500,
        1.000, 1.000, 0.500,
        0.000, 0.333, 1.000,
        0.000, 0.667, 1.000,
        0.000, 1.000, 1.000,
        0.333, 0.000, 1.000,
        0.333, 0.333, 1.000,
        0.333, 0.667, 1.000,
        0.333, 1.000, 1.000,
        0.667, 0.000, 1.000,
        0.667, 0.333, 1.000,
        0.667, 0.667, 1.000,
        0.667, 1.000, 1.000,
        1.000, 0.000, 1.000,
        1.000, 0.333, 1.000,
        1.000, 0.667, 1.000,
        0.333, 0.000, 0.000,
        0.500, 0.000, 0.000,
        0.667, 0.000, 0.000,
        0.833, 0.000, 0.000,
        1.000, 0.000, 0.000,
        0.000, 0.167, 0.000,
        0.000, 0.333, 0.000,
        0.000, 0.500, 0.000,
        0.000, 0.667, 0.000,
        0.000, 0.833, 0.000,
        0.000, 1.000, 0.000,
        0.000, 0.000, 0.167,
        0.000, 0.000, 0.333,
        0.000, 0.000, 0.500,
        0.000, 0.000, 0.667,
        0.000, 0.000, 0.833,
        0.000, 0.000, 1.000,
        0.000, 0.000, 0.000,
        0.143, 0.143, 0.143,
        0.857, 0.857, 0.857,
        1.000, 1.000, 1.000
    ]
).astype(np.float32).reshape(-1, 3)
# fmt: on


def colormap(rgb=False, maximum=255):
    """
    Args:
        rgb (bool): whether to return RGB colors or BGR colors.
        maximum (int): either 255 or 1

    Returns:
        ndarray: a float32 array of Nx3 colors, in range [0, 255] or [0, 1]
    """
    assert maximum in [255, 1], maximum
    c = _COLORS * maximum
    if not rgb:
        c = c[:, ::-1]
    return c


def random_color(rgb=False, maximum=255):
    """
    Args:
        rgb (bool): whether to return RGB colors or BGR colors.
        maximum (int): either 255 or 1

    Returns:
        ndarray: a vector of 3 numbers
    """
    idx = np.random.randint(0, len(_COLORS))
    ret = _COLORS[idx] * maximum
    if not rgb:
        ret = ret[::-1]
    return ret


if __name__ == "__main__":
    import cv2

    size = 100
    H, W = 10, 10
    canvas = np.random.rand(H * size, W * size, 3).astype("float32")
    for h in range(H):
        for w in range(W):
            idx = h * W + w
            if idx >= len(_COLORS):
                break
            canvas[h * size : (h + 1) * size, w * size : (w + 1) * size] = _COLORS[idx]
    cv2.imshow("a", canvas)
    cv2.waitKey(0)
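A minimal usage sketch (the instance count of 5 is made up) showing how detection-style visualizations typically consume these helpers:

```python
# Hypothetical example: assign a distinct BGR color to each predicted instance.
colors = colormap(rgb=False, maximum=255)       # (N, 3) float32 palette
for instance_id in range(5):
    color = colors[instance_id % len(colors)]   # cycle through the palette
    print(instance_id, color)

print(random_color(rgb=True, maximum=1))        # one random RGB color in [0, 1]
```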
spaces/ClassCat/YOLOS-Object-Detection/app.py
DELETED
@@ -1,130 +0,0 @@
import torch

from transformers import AutoImageProcessor, AutoModelForObjectDetection
#from transformers import pipeline

from PIL import Image
import matplotlib.pyplot as plt
import matplotlib.patches as patches

import io
from random import choice


image_processor_tiny = AutoImageProcessor.from_pretrained("hustvl/yolos-tiny")
model_tiny = AutoModelForObjectDetection.from_pretrained("hustvl/yolos-tiny")

image_processor_small = AutoImageProcessor.from_pretrained("hustvl/yolos-small")
model_small = AutoModelForObjectDetection.from_pretrained("hustvl/yolos-small")


import gradio as gr


COLORS = ["#ff7f7f", "#ff7fbf", "#ff7fff", "#bf7fff",
          "#7f7fff", "#7fbfff", "#7fffff", "#7fffbf",
          "#7fff7f", "#bfff7f", "#ffff7f", "#ffbf7f"]

fdic = {
    "family" : "DejaVu Serif",
    "style" : "normal",
    "size" : 18,
    "color" : "yellow",
    "weight" : "bold"
}


def get_figure(in_pil_img, in_results):
    plt.figure(figsize=(16, 10))
    plt.imshow(in_pil_img)
    ax = plt.gca()

    for score, label, box in zip(in_results["scores"], in_results["labels"], in_results["boxes"]):
        selected_color = choice(COLORS)

        box_int = [i.item() for i in torch.round(box).to(torch.int32)]
        x, y, w, h = box_int[0], box_int[1], box_int[2]-box_int[0], box_int[3]-box_int[1]
        #x, y, w, h = torch.round(box[0]).item(), torch.round(box[1]).item(), torch.round(box[2]-box[0]).item(), torch.round(box[3]-box[1]).item()

        ax.add_patch(plt.Rectangle((x, y), w, h, fill=False, color=selected_color, linewidth=3, alpha=0.8))
        ax.text(x, y, f"{model_tiny.config.id2label[label.item()]}: {round(score.item()*100, 2)}%", fontdict=fdic, alpha=0.8)

    plt.axis("off")

    return plt.gcf()


def infer(in_pil_img, in_model="yolos-tiny", in_threshold=0.9):
    target_sizes = torch.tensor([in_pil_img.size[::-1]])

    if in_model == "yolos-small":
        inputs = image_processor_small(images=in_pil_img, return_tensors="pt")
        outputs = model_small(**inputs)

        # convert outputs (bounding boxes and class logits) to COCO API
        results = image_processor_small.post_process_object_detection(outputs, threshold=in_threshold, target_sizes=target_sizes)[0]

    else:
        inputs = image_processor_tiny(images=in_pil_img, return_tensors="pt")
        outputs = model_tiny(**inputs)

        # convert outputs (bounding boxes and class logits) to COCO API
        results = image_processor_tiny.post_process_object_detection(outputs, threshold=in_threshold, target_sizes=target_sizes)[0]

    figure = get_figure(in_pil_img, results)

    buf = io.BytesIO()
    figure.savefig(buf, bbox_inches='tight')
    buf.seek(0)
    output_pil_img = Image.open(buf)

    return output_pil_img


with gr.Blocks(title="YOLOS Object Detection - ClassCat",
               css=".gradio-container {background:lightyellow;}"
               ) as demo:
    #sample_index = gr.State([])

    gr.HTML("""<div style="font-family:'Times New Roman', 'Serif'; font-size:16pt; font-weight:bold; text-align:center; color:royalblue;">YOLOS Object Detection</div>""")

    gr.HTML("""<h4 style="color:navy;">1. Select a model.</h4>""")

    model = gr.Radio(["yolos-tiny", "yolos-small"], value="yolos-tiny", label="Model name")

    gr.HTML("""<br/>""")
    gr.HTML("""<h4 style="color:navy;">2-a. Select an example by clicking a thumbnail below.</h4>""")
    gr.HTML("""<h4 style="color:navy;">2-b. Or upload an image by clicking on the canvas.</h4>""")

    with gr.Row():
        input_image = gr.Image(label="Input image", type="pil")
        output_image = gr.Image(label="Output image with predicted instances", type="pil")

    gr.Examples(['samples/cats.jpg', 'samples/detectron2.png', 'samples/cat.jpg', 'samples/hotdog.jpg'], inputs=input_image)

    gr.HTML("""<br/>""")
    gr.HTML("""<h4 style="color:navy;">3. Set a threshold value (default to 0.9)</h4>""")

    threshold = gr.Slider(0, 1.0, value=0.9, label='threshold')

    gr.HTML("""<br/>""")
    gr.HTML("""<h4 style="color:navy;">4. Then, click "Infer" button to predict object instances. It will take about 10 seconds (yolos-tiny) or 20 seconds (yolos-small).</h4>""")

    send_btn = gr.Button("Infer")
    send_btn.click(fn=infer, inputs=[input_image, model, threshold], outputs=[output_image])

    gr.HTML("""<br/>""")
    gr.HTML("""<h4 style="color:navy;">Reference</h4>""")
    gr.HTML("""<ul>""")
    gr.HTML("""<li><a href="https://huggingface.co/docs/transformers/model_doc/yolos" target="_blank">Hugging Face Transformers - YOLOS</a>""")
    gr.HTML("""</ul>""")


#demo.queue()
demo.launch(debug=True)


### EOF ###
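Outside of the Gradio UI, the same `infer` helper can be called directly. The sketch below reuses one of the bundled example images; the output filename is an assumption for illustration:

```python
# Hypothetical headless use of the infer() function defined above.
from PIL import Image

img = Image.open("samples/cats.jpg").convert("RGB")    # one of the gr.Examples images
annotated = infer(img, in_model="yolos-tiny", in_threshold=0.9)
annotated.save("cats_detected.png")                     # assumed output path
```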
spaces/Codecooker/rvcapi/README.md
DELETED
@@ -1,13 +0,0 @@
---
title: Rvcapi
emoji: 🚀
colorFrom: purple
colorTo: red
sdk: gradio
sdk_version: 3.40.1
app_file: src/webui.py
pinned: false
license: gpl-3.0
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/modeling/rpn/rpn.py
DELETED
@@ -1,321 +0,0 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
import torch
import torch.nn.functional as F
from torch import nn
import math
from maskrcnn_benchmark.modeling import registry
from maskrcnn_benchmark.modeling.box_coder import BoxCoder
from maskrcnn_benchmark.modeling.rpn.retinanet.retinanet import build_retinanet
from maskrcnn_benchmark.modeling.rpn.fcos.fcos import build_fcos
from .loss import make_rpn_loss_evaluator
from .anchor_generator import make_anchor_generator
from .inference import make_rpn_postprocessor


class RPNHeadConvRegressor(nn.Module):
    """
    A simple RPN Head for classification and bbox regression
    """

    def __init__(self, cfg, in_channels, num_anchors):
        """
        Arguments:
            cfg : config
            in_channels (int): number of channels of the input feature
            num_anchors (int): number of anchors to be predicted
        """
        super(RPNHeadConvRegressor, self).__init__()
        self.cls_logits = nn.Conv2d(in_channels, num_anchors, kernel_size=1, stride=1)
        self.bbox_pred = nn.Conv2d(
            in_channels, num_anchors * 4, kernel_size=1, stride=1
        )

        for l in [self.cls_logits, self.bbox_pred]:
            torch.nn.init.normal_(l.weight, std=0.01)
            torch.nn.init.constant_(l.bias, 0)

    def forward(self, x):
        assert isinstance(x, (list, tuple))
        logits = [self.cls_logits(y) for y in x]
        bbox_reg = [self.bbox_pred(y) for y in x]

        return logits, bbox_reg


class RPNHeadFeatureSingleConv(nn.Module):
    """
    Adds a simple RPN Head with one conv to extract the feature
    """

    def __init__(self, cfg, in_channels):
        """
        Arguments:
            cfg : config
            in_channels (int): number of channels of the input feature
        """
        super(RPNHeadFeatureSingleConv, self).__init__()
        self.conv = nn.Conv2d(
            in_channels, in_channels, kernel_size=3, stride=1, padding=1
        )

        for l in [self.conv]:
            torch.nn.init.normal_(l.weight, std=0.01)
            torch.nn.init.constant_(l.bias, 0)

        self.out_channels = in_channels

    def forward(self, x):
        assert isinstance(x, (list, tuple))
        x = [F.relu(self.conv(z)) for z in x]

        return x


@registry.RPN_HEADS.register("SingleConvRPNHead_1")
class RPNHead(nn.Module):
    """
    Adds a simple RPN Head with classification and regression heads
    """

    def __init__(self, cfg, in_channels, num_anchors):
        """
        Arguments:
            cfg : config
            in_channels (int): number of channels of the input feature
            num_anchors (int): number of anchors to be predicted
        """
        super(RPNHead, self).__init__()
        self.conv = nn.Conv2d(
            in_channels, in_channels, kernel_size=3, stride=1, padding=1
        )
        self.cls_logits = nn.Conv2d(in_channels, num_anchors, kernel_size=1, stride=1)
        self.bbox_pred_new = nn.Conv2d(
            in_channels, num_anchors * 18, kernel_size=1, stride=1
        )

        for l in [self.conv, self.cls_logits, self.bbox_pred_new]:
            torch.nn.init.normal_(l.weight, std=0.01)
            torch.nn.init.constant_(l.bias, 0)

    def forward(self, x):
        logits = []
        bbox_reg = []
        for feature in x:
            t = F.relu(self.conv(feature))
            logits.append(self.cls_logits(t))
            bbox_reg.append(self.bbox_pred_new(t))
        return logits, bbox_reg


class RPNModule(torch.nn.Module):
    """
    Module for RPN computation. Takes feature maps from the backbone and RPN
    proposals and losses. Works for both FPN and non-FPN.
    """

    def __init__(self, cfg, in_channels):
        super(RPNModule, self).__init__()

        self.cfg = cfg.clone()

        anchor_generator = make_anchor_generator(cfg)

        rpn_head = registry.RPN_HEADS[cfg.MODEL.RPN.RPN_HEAD]
        head = rpn_head(
            cfg, in_channels, anchor_generator.num_anchors_per_location()[0]
        )

        rpn_box_coder = BoxCoder(weights=(1.0, 1.0, 1.0, 1.0))

        box_selector_train = make_rpn_postprocessor(cfg, rpn_box_coder, is_train=True)
        box_selector_test = make_rpn_postprocessor(cfg, rpn_box_coder, is_train=False)

        loss_evaluator = make_rpn_loss_evaluator(cfg, rpn_box_coder)

        self.anchor_generator = anchor_generator
        self.head = head
        self.box_selector_train = box_selector_train
        self.box_selector_test = box_selector_test
        self.loss_evaluator = loss_evaluator

    def forward(self, images, features, targets=None, prefix=''):
        """
        Arguments:
            images (ImageList): images for which we want to compute the predictions
            features (list[Tensor]): features computed from the images that are
                used for computing the predictions. Each tensor in the list
                correspond to different feature levels
            targets (list[BoxList): ground-truth boxes present in the image (optional)

        Returns:
            boxes (list[BoxList]): the predicted boxes from the RPN, one BoxList per
                image.
            losses (dict[Tensor]): the losses for the model during training. During
                testing, it is an empty dict.
        """
        objectness, rpn_box_regression = self.head(features)  # len = 5
        anchors = self.anchor_generator(images, features)

        if self.training:
            return self._forward_train(anchors, objectness,
                                       rpn_box_regression, targets, prefix)
        else:
            return self._forward_test(anchors, objectness, rpn_box_regression)

    def _forward_train(self, anchors, objectness, rpn_box_regression,  # [image,number,[n,4]]
                       targets, prefix):
        if self.cfg.MODEL.RPN_ONLY:
            # When training an RPN-only model, the loss is determined by the
            # predicted objectness and rpn_box_regression values and there is
            # no need to transform the anchors into predicted boxes; this is an
            # optimization that avoids the unnecessary transformation.
            boxes = anchors
        else:
            # print('\n---end-to-end model---\n')
            # For end-to-end models, anchors must be transformed into boxes and
            # sampled into a training batch.
            with torch.no_grad():
                boxes = self.box_selector_train(
                    anchors, objectness, rpn_box_regression, targets
                )
        anchors_new = list(zip(*anchors))
        regress_new = regress_to_box(anchors_new, rpn_box_regression)

        loss_objectness, loss_rpn_box_reg = self.loss_evaluator(
            anchors, objectness, regress_new, targets
        )
        losses = {
            prefix + "loss_objectness": loss_objectness,
            prefix + "loss_rpn_box_reg": loss_rpn_box_reg,
        }
        return boxes, losses

    def _forward_test(self, anchors, objectness, rpn_box_regression):
        boxes = self.box_selector_test(anchors, objectness, rpn_box_regression)
        if self.cfg.MODEL.RPN_ONLY:
            # For end-to-end models, the RPN proposals are an intermediate state
            # and don't bother to sort them in decreasing score order. For RPN-only
            # models, the proposals are the final output and we return them in
            # high-to-low confidence order.
            inds = [
                box.get_field("objectness").sort(descending=True)[1] for box in boxes
            ]
            boxes = [box[ind] for box, ind in zip(boxes, inds)]
        return boxes, {}


def build_rpn(cfg, in_channels):
    """
    This gives the gist of it. Not super important because it doesn't change as much
    """
    if cfg.MODEL.FCOS_ON:
        return build_fcos(cfg, in_channels)
    if cfg.MODEL.RETINANET_ON:
        return build_retinanet(cfg, in_channels)

    return RPNModule(cfg, in_channels)


def regress_to_box(anchor_define, regress_pre):
    boxes_total = []
    num_f = 0
    for a, b in zip(anchor_define, regress_pre):
        boxes_total.append(forward_feature_map(a, b))
        num_f += 1
    return boxes_total


def forward_feature_map(anchors_define, boxes_regression):
    N, A, H, W = boxes_regression.shape

    boxes_regression = faltten(boxes_regression, N, A, 18, H, W)

    # image_shapes = [box.size for box in anchors_define]
    concat_anchors = torch.cat([a.bbox for a in anchors_define], dim=0)
    concat_anchors = concat_anchors.reshape(N, -1, 4)
    proposals = decode_iou(boxes_regression.view(-1, 18), concat_anchors.view(-1, 4))
    box_temp_post = proposals.view(N, -1, 4)

    return box_temp_post


def faltten(layer, N, A, C, H, W):
    layer = layer.view(N, -1, C, H, W)
    layer = layer.permute(0, 3, 4, 1, 2)  # N H W A C
    layer = layer.reshape(N, -1, C)  # N H*W*A C
    return layer


def decode_iou(rel_codes, boxes, num_p=8):
    """
    From a set of original boxes and encoded relative box offsets,
    get the decoded boxes.

    Arguments:
        rel_codes (Tensor): encoded boxes  # predict [2, 12000, 4]
        boxes (Tensor): reference boxes.  # anchor [2, 12000, 4] xmin0 ymin1 xmax2 ymax3
    """
    boxes = boxes.to(rel_codes.dtype)

    TO_REMOVE = 1  # TODO remove
    widths = boxes[:, 2] - boxes[:, 0] + TO_REMOVE
    heights = boxes[:, 3] - boxes[:, 1] + TO_REMOVE
    dx = rel_codes[:, 16]
    dy = rel_codes[:, 17]

    ctr_x = boxes[:, 0] + 0.5 * widths
    ctr_y = boxes[:, 1] + 0.5 * heights

    ctr_x_new = dx * widths * 0.5 + ctr_x
    ctr_y_new = dy * heights * 0.5 + ctr_y
    # 123
    # 8#4
    # 765
    if num_p == 8:  # 8 boundary points
        x_1 = boxes[:, 0] + widths * rel_codes[:, 0]
        y_1 = boxes[:, 1] + heights * rel_codes[:, 1]
        x_2 = ctr_x + widths * rel_codes[:, 2]
        y_2 = boxes[:, 1] + heights * rel_codes[:, 3]
        x_3 = boxes[:, 2] + widths * rel_codes[:, 4]
        y_3 = boxes[:, 1] + heights * rel_codes[:, 5]
        x_4 = boxes[:, 2] + widths * rel_codes[:, 6]
        y_4 = ctr_y + heights * rel_codes[:, 7]
        x_5 = boxes[:, 2] + widths * rel_codes[:, 8]
        y_5 = boxes[:, 3] + heights * rel_codes[:, 9]
        x_6 = ctr_x + widths * rel_codes[:, 10]
        y_6 = boxes[:, 3] + heights * rel_codes[:, 11]
        x_7 = boxes[:, 0] + widths * rel_codes[:, 12]
        y_7 = boxes[:, 3] + heights * rel_codes[:, 13]
        x_8 = boxes[:, 0] + widths * rel_codes[:, 14]
        y_8 = ctr_y + heights * rel_codes[:, 15]
        x_total = torch.stack([x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8], 0)  # [8, N]
        y_total = torch.stack([y_1, y_2, y_3, y_4, y_5, y_6, y_7, y_8], 0)

        x_min = torch.min(x_total, 0, keepdim=True)  # [1, N]
        x_max = torch.max(x_total, 0, keepdim=True)  # [1, N]
        y_min = torch.min(y_total, 0, keepdim=True)  # [1, N]
        y_max = torch.max(y_total, 0, keepdim=True)  # [1, N]

        N1, N2 = x_min[0].shape
        x_min = x_min[0].view([N2])
        x_max = x_max[0].view([N2])
        y_min = y_min[0].view([N2])
        y_max = y_max[0].view([N2])

        x_min = torch.stack([x_min, ctr_x_new], 0)
        x_max = torch.stack([x_max, ctr_x_new], 0)
        y_min = torch.stack([y_min, ctr_y_new], 0)
        y_max = torch.stack([y_max, ctr_y_new], 0)

        x_min = torch.min(x_min, 0, keepdim=True)  # [1, N]
        x_max = torch.max(x_max, 0, keepdim=True)  # [1, N]
        y_min = torch.min(y_min, 0, keepdim=True)  # [1, N]
        y_max = torch.max(y_max, 0, keepdim=True)  # [1, N]

    pred_boxes = torch.zeros_like(boxes)

    pred_boxes[:, 0] = x_min[0][0, :]
    pred_boxes[:, 1] = y_min[0][0, :]
    pred_boxes[:, 2] = x_max[0][0, :]
    pred_boxes[:, 3] = y_max[0][0, :]

    return pred_boxes
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_b_s_l_n.py
DELETED
@@ -1,6 +0,0 @@
-from .otBase import BaseTTXConverter
-
-
-# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6bsln.html
-class table__b_s_l_n(BaseTTXConverter):
-    pass
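As a point of reference, the `bsln` (baseline) table that this converter backs is reached through fontTools' generic table access; a minimal sketch, assuming a local AAT font that actually carries the table (the path is a placeholder):

```python
from fontTools.ttLib import TTFont

font = TTFont("SomeAATFont.ttf")   # placeholder path; the font must contain a 'bsln' table
if "bsln" in font:
    bsln = font["bsln"]            # decompiled via the BaseTTXConverter machinery
    print(type(bsln).__name__)     # table__b_s_l_n
```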
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/otData.py
DELETED
The diff for this file is too large to render.
See raw diff
spaces/Danielito/webui/README.md
DELETED
@@ -1,20 +0,0 @@
----
-title: Stable Diffusion Web UI
-emoji: 🚧
-colorFrom: yellow
-colorTo: yellow
-sdk: gradio
-sdk_version: 3.9
-app_file: app.py
-pinned: false
-duplicated_from: camenduru/webui
----
-
-## Stable Diffusion Web UI
-[https://github.com/AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
-
-## Documentation
-[https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki)
-
-## Models License
-https://huggingface.co/spaces/CompVis/stable-diffusion-license
spaces/Dify-AI/Baichuan2-13B-Chat/README.md
DELETED
@@ -1,13 +0,0 @@
----
-title: Baichuan2 13B Chat
-emoji: 🔥
-colorFrom: gray
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.42.0
-app_file: app.py
-pinned: false
-license: other
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
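The front matter above declares `sdk: gradio` with `app_file: app.py` as the Space entry point. A hypothetical minimal `app.py` matching that declaration is sketched below; the real Baichuan2-13B-Chat app loads the model and streams chat completions instead of echoing.

```python
# Hypothetical minimal app.py for a Space declared with `sdk: gradio` / `app_file: app.py`.
import gradio as gr

def respond(message: str) -> str:
    # Placeholder logic standing in for the actual chat model call.
    return f"Echo: {message}"

demo = gr.Interface(fn=respond, inputs="text", outputs="text", title="Baichuan2 13B Chat")
demo.launch()
```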
spaces/DrBenjamin/AI_Demo/pages/💁 Open_Assistant.py
DELETED
@@ -1,359 +0,0 @@
-##### `💁 Open_Assistant.py`
-##### Chat Llm Streaming
-##### https://huggingface.co/spaces/olivierdehaene/chat-llm-streaming/blob/main/README.md
-##### https://open-assistant.io/dashboard
-##### https://github.com/LAION-AI/Open-Assistant
-
-##### Please reach out to [email protected] for any questions
-#### Loading needed Python libraries
-import streamlit as st
-import os
-from text_generation import Client, InferenceAPIClient
-from text_generation import InferenceAPIClient
-
-
-
-
-#### Streamlit initial setup
-st.set_page_config(
-    page_title = "💁 Open Assistant LLM",
-    page_icon = "images/OpenAssistant.png",
-    layout = "centered",
-    initial_sidebar_state = "expanded"
-)
-
-
-
-
-#### Main program
-st.header('💁 Open Assistant LLM')
-st.write('Conversational AI for everyone.')
-st.write('In the same way that Stable Diffusion helped the world make art and images in new ways, this helps to improve the world by providing amazing conversational AI.')
-st.write('This is the first iteration English supervised-fine-tuning (SFT) model of the Open-Assistant project. It is based on a Pythia 12B that was fine-tuned on ~22k human demonstrations of assistant conversations collected through the https://open-assistant.io/ human feedback web app before March 7, 2023.')
-st.write(':orange[Needs to be run on Hugging Face to access the OpenAssistant model (Run it here https://huggingface.co/spaces/DrBenjamin/AI_Demo).]')
-with st.form('OpenAssistant'):
-    client = InferenceAPIClient("OpenAssistant/oasst-sft-1-pythia-12b")
-    st.subheader('Question')
-    input_text = st.text_input('Ask a question')
-    input_text = '<|prompter|>' + input_text + '<|endoftext|><|assistant|>'
-    submitted = st.form_submit_button('Submit')
-    if submitted:
-        text = client.generate(input_text).generated_text
-        st.subheader('Answer')
-        st.write('Answer: :green[' + str(text) + ']')
-
-
-# Token Streaming
-#text = ""
-#for response in client.generate_stream("<|prompter|>Why is the sky blue?<|endoftext|><|assistant|>"):
-#    if not response.token.special:
-#        print(response.token.text)
-#        text += response.token.text
-#st.write(text)
-
-#
-# openchat_preprompt = (
-#     "\n<human>: Hi!\n<bot>: My name is Bot, model version is 0.15, part of an open-source kit for "
-#     "fine-tuning new bots! I was created by Together, LAION, and Ontocord.ai and the open-source "
-#     "community. I am not human, not evil and not alive, and thus have no thoughts and feelings, "
-#     "but I am programmed to be helpful, polite, honest, and friendly.\n"
-# )
-#
-#
-# def get_client(model: str):
-#     if model == "togethercomputer/GPT-NeoXT-Chat-Base-20B":
-#         return Client(os.getenv("OPENCHAT_API_URL"))
-#     return InferenceAPIClient(model, token = os.getenv("HF_TOKEN", None))
-#
-#
-# def get_usernames(model: str):
-#     """
-#     Returns:
-#         (str, str, str, str): pre-prompt, username, bot name, separator
-#     """
-#     if model == "OpenAssistant/oasst-sft-1-pythia-12b":
-#         return "", "<|prompter|>", "<|assistant|>", "<|endoftext|>"
-#     if model == "togethercomputer/GPT-NeoXT-Chat-Base-20B":
-#         return openchat_preprompt, "<human>: ", "<bot>: ", "\n"
-#     return "", "User: ", "Assistant: ", "\n"
-#
-#
-# def predict(
-#     model: str,
-#     inputs: str,
-#     typical_p: float,
-#     top_p: float,
-#     temperature: float,
-#     top_k: int,
-#     repetition_penalty: float,
-#     watermark: bool,
-#     chatbot,
-#     history,
-# ):
-#     client = get_client(model)
-#     preprompt, user_name, assistant_name, sep = get_usernames(model)
-#
-#     history.append(inputs)
-#
-#     past = []
-#     for data in chatbot:
-#         user_data, model_data = data
-#
-#         if not user_data.startswith(user_name):
-#             user_data = user_name + user_data
-#         if not model_data.startswith(sep + assistant_name):
-#             model_data = sep + assistant_name + model_data
-#
-#         past.append(user_data + model_data.rstrip() + sep)
-#
-#     if not inputs.startswith(user_name):
-#         inputs = user_name + inputs
-#
-#     total_inputs = preprompt + "".join(past) + inputs + sep + assistant_name.rstrip()
-#
-#     partial_words = ""
-#
-#     if model == "OpenAssistant/oasst-sft-1-pythia-12b":
-#         iterator = client.generate_stream(
-#             total_inputs,
-#             typical_p = typical_p,
-#             truncate = 1000,
-#             watermark = watermark,
-#             max_new_tokens = 500,
-#         )
-#     else:
-#         iterator = client.generate_stream(
-#             total_inputs,
-#             top_p = top_p if top_p < 1.0 else None,
-#             top_k = top_k,
-#             truncate = 1000,
-#             repetition_penalty = repetition_penalty,
-#             watermark = watermark,
-#             temperature = temperature,
-#             max_new_tokens = 500,
-#             stop_sequences = [user_name.rstrip(), assistant_name.rstrip()],
-#         )
-#
-#     for i, response in enumerate(iterator):
-#         if response.token.special:
-#             continue
-#
-#         partial_words = partial_words + response.token.text
-#         if partial_words.endswith(user_name.rstrip()):
-#             partial_words = partial_words.rstrip(user_name.rstrip())
-#         if partial_words.endswith(assistant_name.rstrip()):
-#             partial_words = partial_words.rstrip(assistant_name.rstrip())
-#
-#         if i == 0:
-#             history.append(" " + partial_words)
-#         elif response.token.text not in user_name:
-#             history[-1] = partial_words
-#
-#         chat = [
-#             (history[i].strip(), history[i + 1].strip())
-#             for i in range(0, len(history) - 1, 2)
-#         ]
-#         yield chat, history
-#
-#
-# def reset_textbox():
-#     return gr.update(value = "")
-#
-#
-# def radio_on_change(
-#     value: str,
-#     disclaimer,
-#     typical_p,
-#     top_p,
-#     top_k,
-#     temperature,
-#     repetition_penalty,
-#     watermark,
-# ):
-#     if value == "OpenAssistant/oasst-sft-1-pythia-12b":
-#         typical_p = typical_p.update(value = 0.2, visible = True)
-#         top_p = top_p.update(visible = False)
-#         top_k = top_k.update(visible = False)
-#         temperature = temperature.update(visible = False)
-#         disclaimer = disclaimer.update(visible = False)
-#         repetition_penalty = repetition_penalty.update(visible = False)
-#         watermark = watermark.update(False)
-#     elif value == "togethercomputer/GPT-NeoXT-Chat-Base-20B":
-#         typical_p = typical_p.update(visible = False)
-#         top_p = top_p.update(value = 0.25, visible = True)
-#         top_k = top_k.update(value = 50, visible = True)
-#         temperature = temperature.update(value = 0.6, visible = True)
-#         repetition_penalty = repetition_penalty.update(value = 1.01, visible = True)
-#         watermark = watermark.update(False)
-#         disclaimer = disclaimer.update(visible = True)
-#     else:
-#         typical_p = typical_p.update(visible = False)
-#         top_p = top_p.update(value = 0.95, visible = True)
-#         top_k = top_k.update(value = 4, visible = True)
-#         temperature = temperature.update(value = 0.5, visible = True)
-#         repetition_penalty = repetition_penalty.update(value = 1.03, visible = True)
-#         watermark = watermark.update(True)
-#         disclaimer = disclaimer.update(visible = False)
-#     return (
-#         disclaimer,
-#         typical_p,
-#         top_p,
-#         top_k,
-#         temperature,
-#         repetition_penalty,
-#         watermark,
-#     )
-#
-#
-# title = """<h1 align="center">🔥Large Language Model API 🚀Streaming🚀</h1>"""
-# description = """Language models can be conditioned to act like dialogue agents through a conversational prompt that typically takes the form:
-# ```
-# User: <utterance>
-# Assistant: <utterance>
-# User: <utterance>
-# Assistant: <utterance>
-# ...
-# ```
-# In this app, you can explore the outputs of multiple LLMs when prompted in this way.
-# """
-#
-# openchat_disclaimer = """
-# <div align="center">Checkout the official <a href=https://huggingface.co/spaces/togethercomputer/OpenChatKit>OpenChatKit feedback app</a> for the full experience.</div>
-# """
-#
-# with gr.Blocks(
-#     css = """#col_container {margin-left: auto; margin-right: auto;}
-#     #chatbot {height: 520px; overflow: auto;}"""
-# ) as demo:
-#     gr.HTML(title)
-#     with gr.Column(elem_id = "col_container"):
-#         model = gr.Radio(
-#             value = "OpenAssistant/oasst-sft-1-pythia-12b",
-#             choices = [
-#                 "OpenAssistant/oasst-sft-1-pythia-12b",
-#                 # "togethercomputer/GPT-NeoXT-Chat-Base-20B",
-#                 "google/flan-t5-xxl",
-#                 "google/flan-ul2",
-#                 "bigscience/bloom",
-#                 "bigscience/bloomz",
-#                 "EleutherAI/gpt-neox-20b",
-#             ],
-#             label = "Model",
-#             interactive = True,
-#         )
-#
-#         chatbot = gr.Chatbot(elem_id = "chatbot")
-#         inputs = gr.Textbox(
-#             placeholder = "Hi there!", label = "Type an input and press Enter"
-#         )
-#         disclaimer = gr.Markdown(openchat_disclaimer, visible = False)
-#         state = gr.State([])
-#         b1 = gr.Button()
-#
-#         with gr.Accordion("Parameters", open = False):
-#             typical_p = gr.Slider(
-#                 minimum = -0,
-#                 maximum = 1.0,
-#                 value = 0.2,
-#                 step = 0.05,
-#                 interactive = True,
-#                 label = "Typical P mass",
-#             )
-#             top_p = gr.Slider(
-#                 minimum = -0,
-#                 maximum = 1.0,
-#                 value = 0.25,
-#                 step = 0.05,
-#                 interactive = True,
-#                 label = "Top-p (nucleus sampling)",
-#                 visible = False,
-#             )
-#             temperature = gr.Slider(
-#                 minimum = -0,
-#                 maximum = 5.0,
-#                 value = 0.6,
-#                 step = 0.1,
-#                 interactive = True,
-#                 label = "Temperature",
-#                 visible = False,
-#             )
-#             top_k = gr.Slider(
-#                 minimum = 1,
-#                 maximum = 50,
-#                 value = 50,
-#                 step = 1,
-#                 interactive = True,
-#                 label = "Top-k",
-#                 visible = False,
-#             )
-#             repetition_penalty = gr.Slider(
-#                 minimum = 0.1,
-#                 maximum = 3.0,
-#                 value = 1.03,
-#                 step = 0.01,
-#                 interactive = True,
-#                 label = "Repetition Penalty",
-#                 visible = False,
-#             )
-#             watermark = gr.Checkbox(value = False, label = "Text watermarking")
-#
-#     model.change(
-#         lambda value: radio_on_change(
-#             value,
-#             disclaimer,
-#             typical_p,
-#             top_p,
-#             top_k,
-#             temperature,
-#             repetition_penalty,
-#             watermark,
-#         ),
-#         inputs = model,
-#         outputs = [
-#             disclaimer,
-#             typical_p,
-#             top_p,
-#             top_k,
-#             temperature,
-#             repetition_penalty,
-#             watermark,
-#         ],
-#     )
-#
-#     inputs.submit(
-#         predict,
-#         [
-#             model,
-#             inputs,
-#             typical_p,
-#             top_p,
-#             temperature,
-#             top_k,
-#             repetition_penalty,
-#             watermark,
-#             chatbot,
-#             state,
-#         ],
-#         [chatbot, state],
-#     )
-#     b1.click(
-#         predict,
-#         [
-#             model,
-#             inputs,
-#             typical_p,
-#             top_p,
-#             temperature,
-#             top_k,
-#             repetition_penalty,
-#             watermark,
-#             chatbot,
-#             state,
-#         ],
-#         [chatbot, state],
-#     )
-#     b1.click(reset_textbox, [], [inputs])
-#     inputs.submit(reset_textbox, [], [inputs])
-#
-#     gr.Markdown(description)
-#     demo.queue(concurrency_count = 16).launch(debug = True)
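Since the deleted script wires this client into a Streamlit form, here is a compact streaming sketch using the same `text_generation` client and prompt format the file already relies on; it assumes the Hugging Face Inference API is reachable (and that an HF token is configured if the model requires one):

```python
from text_generation import InferenceAPIClient

client = InferenceAPIClient("OpenAssistant/oasst-sft-1-pythia-12b")
prompt = "<|prompter|>Why is the sky blue?<|endoftext|><|assistant|>"

answer = ""
for response in client.generate_stream(prompt, max_new_tokens=200):
    if not response.token.special:   # skip control tokens such as <|endoftext|>
        answer += response.token.text
print(answer)
```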
spaces/DragGan/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/stylegan2/__init__.py
DELETED
File without changes