Commit b1478bd
Parent(s): b2d7283
Update parquet files (step 45 of 296)
This view is limited to 50 files because it contains too many changes. See raw diff for the full change set.
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Photoshop Lightroom Classic CC 2018 7.2.0.10 (x64) Patch ((TOP)) Keygen.md +0 -32
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Extract Justdial Data What You Need to Know and How to Do It.md +0 -30
- spaces/1phancelerku/anime-remove-background/Descarga My Talking Tom APK con Dinero y Diamantes Infinitos Gratis.md +0 -138
- spaces/1phancelerku/anime-remove-background/Facebook Lite The Best Way to Download Facebook for Android.md +0 -173
- spaces/2ndelement/voicevox/test/test_user_dict_model.py +0 -108
- spaces/801artistry/RVC801/demucs/pretrained.py +0 -107
- spaces/801artistry/RVC801/infer/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py +0 -87
- spaces/801artistry/RVC801/lib/uvr5_pack/lib_v5/layers_537227KB.py +0 -126
- spaces/AIGC-Audio/Make_An_Audio/ldm/modules/diffusionmodules/openaimodel.py +0 -963
- spaces/Abubakari/Sepsis-prediction-streamlit-app/README.md +0 -12
- spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/login/callback/$types.d.ts +0 -22
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/cover-plugin.js +0 -23
- spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/ops/conv2d_resample.py +0 -176
- spaces/An-619/FastSAM/README.md +0 -46
- spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/model/styleRF.py +0 -95
- spaces/Andy1621/uniformer_image_detection/configs/paa/paa_r101_fpn_mstrain_3x_coco.py +0 -2
- spaces/Andy1621/uniformer_image_detection/mmdet/models/utils/positional_encoding.py +0 -150
- spaces/Anni123/AuRoRA/llm_utils.py +0 -78
- spaces/AnthonyTruchetPoC/persistent-docker/src/athai/data_utils.py +0 -32
- spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/backbone/__init__.py +0 -1
- spaces/Averyng/averyng/README.md +0 -12
- spaces/Awiny/Image2Paragraph/models/grit_src/grit/modeling/roi_heads/grit_fast_rcnn.py +0 -126
- spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/samplers/distributed_sampler.py +0 -278
- spaces/Bakuman/Real-CUGAN/app.py +0 -62
- spaces/Bakuman/Real-CUGAN/upcunet_v3.py +0 -714
- spaces/Benson/text-generation/Examples/Aficionado A Las Descargas.md +0 -101
- spaces/Benson/text-generation/Examples/Descargar Cambio De Forma Mod.md +0 -82
- spaces/Benson/text-generation/Examples/Descargar Ganador Eleven 2019 Apk.md +0 -74
- spaces/BernardoOlisan/vqganclip/taming-transformers/taming/data/base.py +0 -70
- spaces/CVPR/LIVE/thrust/thrust/detail/tuple_algorithms.h +0 -111
- spaces/CVPR/LIVE/thrust/thrust/detail/type_traits/has_trivial_assign.h +0 -54
- spaces/CVPR/Leaderboard/README.md +0 -12
- spaces/CVPR/WALT/mmdet/datasets/samplers/group_sampler.py +0 -148
- spaces/Cloudy1225/stackoverflow-sentiment-analysis/sentiment_analyser.py +0 -84
- spaces/CofAI/chat/client/css/typing.css +0 -15
- spaces/CorvaeOboro/gen_ability_icon/torch_utils/ops/upfirdn2d.py +0 -384
- spaces/CosmicSage/Linaqruf-anything-v3.0pruned/app.py +0 -3
- spaces/Cyril666/ContourNet-ABI/modules/__init__.py +0 -0
- spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_compat.py +0 -43
- spaces/DVLH/consciousAI-question-answering-roberta-vsgshshshsbase-s-v2/app.py +0 -3
- spaces/Dao3/MBTI_Test/README.md +0 -14
- spaces/Datasculptor/MusicGen/CHANGELOG.md +0 -23
- spaces/DaweiZ/toy-gpt/Dockerfile +0 -28
- spaces/Deevyankar/Deep-AD/README.md +0 -13
- spaces/EdBianchi/Social_Toximeter/app.py +0 -156
- spaces/Elbhnasy/Eye-Tracking-Diagnosis/app.py +0 -78
- spaces/EleutherAI/VQGAN_CLIP/CLIP/tests/test_consistency.py +0 -25
- spaces/EleutherAI/VQGAN_CLIP/taming-transformers/setup.py +0 -13
- spaces/Enderfga/mtCNN_sysu/utils/tool.py +0 -117
- spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/nets_123812KB.py +0 -122
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe Photoshop Lightroom Classic CC 2018 7.2.0.10 (x64) Patch ((TOP)) Keygen.md
DELETED
@@ -1,32 +0,0 @@
-<br />
-<h1>How to Download and Install Adobe Photoshop Lightroom Classic CC 2018 7.2.0.10 (x64) Patch Keygen</h1>
-<p>Adobe Photoshop Lightroom Classic CC 2018 is a powerful and versatile software for editing and organizing your photos. It allows you to import, develop, and export your images in various formats, as well as create slideshows, web galleries, and photo books. With Lightroom Classic CC 2018, you can also sync your photos across multiple devices using the Creative Cloud service.</p>
-<h2>Adobe Photoshop Lightroom Classic CC 2018 7.2.0.10 (x64) Patch keygen</h2><br /><p><b><b>Download File</b> ☑ <a href="https://byltly.com/2uKzxs">https://byltly.com/2uKzxs</a></b></p><br /><br />
-<p>If you want to download and install Adobe Photoshop Lightroom Classic CC 2018 7.2.0.10 (x64) Patch Keygen, you should be aware of the risks and drawbacks of using a cracked version of the software. In this article, we will explain why you should avoid using a Lightroom crack, and how you can get a legal and safe version of Lightroom Classic CC 2018 for free.</p>
-<h2>Why You Should Avoid Using a Lightroom Crack</h2>
-<p>A Lightroom crack is a modified version of the original software that bypasses the activation process and makes it appear as if it has been licensed with a valid key. However, using a Lightroom crack is illegal and unethical, as it violates the terms of use and the intellectual property rights of Adobe. Moreover, using a Lightroom crack can expose you to various problems, such as:</p>
-<ul>
-<li><b>Malware infection:</b> Many Lightroom cracks are infected with viruses, trojans, worms, or ransomware that can damage your computer and compromise your personal data. These malicious programs can also disable your antivirus software and spread to other devices on your network.</li>
-<li><b>System instability:</b> A Lightroom crack can also cause performance and compatibility issues with your operating system and other software. The cracked version may not work properly with other Adobe products, such as Photoshop or Bridge. It may also crash, freeze, or show errors at unexpected times.</li>
-<li><b>No updates:</b> A Lightroom crack does not receive any updates from Adobe, which means you will miss out on new features, bug fixes, and security patches. This can affect the quality and functionality of your software, as well as make it vulnerable to hackers and cyberattacks.</li>
-<li><b>Legal consequences:</b> Using a Lightroom crack is considered software piracy, which is a serious crime in many countries. If you are caught using a cracked version of Lightroom, you could face legal action from Adobe or the authorities. You could also be fined or even jailed for violating the copyright laws.</li>
-</ul>
-<h2>How to Get Adobe Photoshop Lightroom Classic CC 2018 7.2.0.10 (x64) for Free</h2>
-<p>If you want to use Adobe Photoshop Lightroom Classic CC 2018 7.2.0.10 (x64) legally and safely, you have two options:</p>
-<p></p>
-<ol>
-<li><b>Download the free trial version:</b> You can download the free trial version of Lightroom Classic CC 2018 from the official Adobe website[^2^]. The trial version gives you access to all the features and functions of the software for 7 days. After that, you will need to purchase a subscription plan to continue using it.</li>
-<li><b>Download the free Creative Cloud version:</b> You can also download the free Creative Cloud version of Lightroom Classic CC 2018 from the official Adobe website[^2^]. The Creative Cloud version is similar to the trial version, but it does not expire after 7 days. However, it has some limitations, such as:</li>
-<ul>
-<li>You can only import up to 2 GB of photos per month.</li>
-<li>You can only sync up to 20 GB of photos across your devices.</li>
-<li>You can only use a limited number of presets and profiles.</li>
-<li>You cannot use some advanced features, such as HDR merge, panorama merge, or tethered capture.</li>
-</ul>
-</ol>
-<p>To download either version of Lightroom Classic CC 2018, you will need to create an Adobe account and install the Creative Cloud desktop app on your computer. Then, you can follow these steps:</p>
-<ol>
-<li>Launch the Creative Cloud desktop app and sign in with your Adobe account.</li>
-<li>Select "Apps" from</p> 81aa517590<br />
-<br />
-<br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Extract Justdial Data What You Need to Know and How to Do It.md
DELETED
@@ -1,30 +0,0 @@
-<br />
-<h1>How to Extract Justdial Data for Your Business Needs</h1>
-<p>Justdial is an Indian internet technology company that provides local search for different services in India over the phone, website and mobile apps. It has a huge database of businesses, products, services, reviews, ratings, deals and more across various categories and locations. If you are looking for a way to extract Justdial data for your business needs, such as lead generation, market research, competitor analysis, or data mining, you have come to the right place. In this article, we will show you how to extract Justdial data using a web scraping tool and what to consider before and after the extraction process.</p>
-<h2>What is Web Scraping?</h2>
-<p>Web scraping is a technique of extracting data from websites using a software program or a script. It can automate the process of collecting large amounts of data from various web pages and save them in a structured format, such as CSV, Excel, JSON, XML, etc. Web scraping can help you get the data you need from any website without having to manually copy and paste it.</p>
-<h2>extract justdial data</h2><br /><p><b><b>Download Zip</b> » <a href="https://byltly.com/2uKw27">https://byltly.com/2uKw27</a></b></p><br /><br />
-<h2>How to Choose the Best Web Scraping Tool to Extract Justdial Data?</h2>
-<p>There are many web scraping tools available on the internet, but not all of them are equally effective or reliable. Some may have limited features, poor performance, or even legal issues. Therefore, you need to be careful when choosing the best web scraping tool to extract Justdial data. Here are some factors to consider:</p>
-<ul>
-<li><b>Compatibility:</b> Make sure the tool is compatible with your operating system and your target website. For example, if you want to extract Justdial data from its mobile app, you need a tool that can support app scraping.</li>
-<li><b>Scalability:</b> Check how many web pages or records the tool can scrape at a time and how fast it can do it. You may need a tool that can handle large-scale scraping projects without compromising on speed or quality.</li>
-<li><b>Accuracy:</b> Look for a tool that can extract Justdial data accurately and completely. Some tools may miss some data fields or extract incorrect or incomplete data due to dynamic or complex web pages.</li>
-<li><b>User-Friendliness:</b> Choose a tool that is easy to use and has a clear and intuitive interface. You may not want to spend too much time or effort on learning how to use the tool or writing complex codes.</li>
-<li><b>Customer Support:</b> Choose a tool that has good customer support and positive reviews from other users. This can help you solve any problems or questions that may arise during the extraction process.</li>
-</ul>
-<h2>How to Use Web Scraping Tool to Extract Justdial Data?</h2>
-<p>Once you have selected the best web scraping tool to extract Justdial data, you can follow these steps to use it:</p>
-<ol>
-<li><b>Download and Install the Tool:</b> Go to the official website of the tool and download the latest version. Then, install it on your computer following the instructions.</li>
-<li><b>Create a Project and Select Your Target Website:</b> Open the tool and create a new project. Then, enter the URL of your target website or app. For example, if you want to extract Justdial data from its website, you can enter https://www.justdial.com/.</li>
-<li><b>Select Your Data Fields and Configure Your Settings:</b> Choose what data fields you want to extract from Justdial, such as business name, address, phone number, email, website, rating, review, category, location, etc. You can also configure your settings according to your preferences, such as pagination, proxy, delay, captcha, etc.</li>
-<li><b>Start Scraping and Save Your Data:</b> Click on "Start" or "Run" to begin the scraping process. You will see the extracted data on the interface. You can also monitor the progress and status of the scraping task. Once the scraping is completed, you can save your data in your desired format.</li>
-</ol>
-<h2>Tips for Successful Data Extraction</h2>
-<p>To increase your chances of extracting Justdial data successfully, here are some tips to follow:</p>
-<ul>
-<li><b>Follow the</p>
-<p></p> ddb901b051<br />
-<br />
-<br />
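The deleted article above walks through the scraping workflow only at the level of a GUI tool. The same idea reduces to a few lines of Python; the sketch below is purely illustrative, with a placeholder URL and placeholder CSS selectors (Justdial's real markup is rendered dynamically and is not part of this commit), and any real use should respect the target site's terms of service.

# Minimal scraping sketch (illustrative only): fetch one listings page
# and extract structured fields. The URL and the selectors are
# placeholders, not Justdial's real markup; a dynamic site may need a
# browser driver instead of plain requests.
import csv

import requests
from bs4 import BeautifulSoup

url = "https://www.example.com/listings"  # placeholder target page
resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
rows = []
for card in soup.select(".listing-card"):  # placeholder selector
    name = card.select_one(".name")
    phone = card.select_one(".phone")
    rows.append({
        "name": name.get_text(strip=True) if name else "",
        "phone": phone.get_text(strip=True) if phone else "",
    })

# Save the scraped records in a structured format, as the article suggests.
with open("listings.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "phone"])
    writer.writeheader()
    writer.writerows(rows)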
spaces/1phancelerku/anime-remove-background/Descarga My Talking Tom APK con Dinero y Diamantes Infinitos Gratis.md
DELETED
@@ -1,138 +0,0 @@
-
-<h1>My Talking Tom Dinero Infinito APK: How to Get Unlimited Coins and Diamonds in My Talking Tom Game</h1>
-<p>Do you love playing with your virtual pet, Tom, in the popular game My Talking Tom? Do you want to have more coins and diamonds to buy him new outfits, accessories, and toys? If yes, then you might be interested in downloading the My Talking Tom Dinero Infinito APK, which is a modified version of the game that gives you unlimited resources. In this article, we will tell you what is My Talking Tom, what is My Talking Tom Dinero Infinito APK, how to download and install it, and how to use it safely and effectively. Let's get started!</p>
-<h2>my talking tom dinero infinito apk</h2><br /><p><b><b>Download Zip</b> ===> <a href="https://jinyurl.com/2uNUCR">https://jinyurl.com/2uNUCR</a></b></p><br /><br />
-<h2>What is My Talking Tom?</h2>
-<p>My Talking Tom is a fun and interactive game that lets you adopt and take care of a cute and funny cat named Tom. You can feed him, play with him, dress him up, and watch him grow from a baby kitten to a full-grown cat. You can also interact with him by talking to him, petting him, poking him, and making him happy or angry. He will respond to your actions and voice with his own expressions and sounds.</p>
-<h3>Features of My Talking Tom</h3>
-<p>Some of the features of My Talking Tom are:</p>
-<ul>
-<li>You can customize your Tom by choosing from hundreds of different outfits, hats, glasses, shoes, and more.</li>
-<li>You can decorate your Tom's home by buying furniture, wallpapers, carpets, and other items.</li>
-<li>You can travel with your Tom to different locations around the world and meet other Toms.</li>
-<li>You can play mini-games with your Tom and earn coins and diamonds.</li>
-<li>You can collect stickers and unlock new items and rewards.</li>
-<li>You can record videos of your Tom and share them with your friends.</li>
-</ul>
-<h3>How to play My Talking Tom</h3>
-<p>To play My Talking Tom, you need to download the game from the Google Play Store or the App Store for free. Then, you need to create your account and choose a name for your Tom. After that, you can start playing with your Tom by tapping on the icons on the screen. You can feed him by dragging food items to his mouth, play with him by tapping on his toys, dress him up by tapping on his wardrobe, and so on. You can also swipe the screen to move around his home or travel to other places. You can earn coins and diamonds by playing mini-games or watching ads. You can use these resources to buy more items for your Tom or unlock new features.</p>
-<p>my talking tom mod apk unlimited money and diamonds<br />
-descargar my talking tom hackeado para android<br />
-my talking tom apk full mega<br />
-como tener dinero infinito en my talking tom<br />
-my talking tom monedas y diamantes infinitos<br />
-my talking tom mod apk latest version download<br />
-trucos para my talking tom android<br />
-my talking tom apk mod todo desbloqueado<br />
-my talking tom hack apk mediafıre<br />
-my talking tom apk sin publicidad<br />
-my talking tom mod apk revdl<br />
-descargar my talking tom con dinero ilimitado<br />
-my talking tom apk mod 2023<br />
-como hackear my talking tom con lucky patcher<br />
-my talking tom mod apk android 1<br />
-descargar my talking tom ultima version hackeado<br />
-my talking tom apk premium<br />
-como conseguir diamantes gratis en my talking tom<br />
-my talking tom mod apk offline<br />
-descargar my talking tom para pc con dinero infinito<br />
-my talking tom hack apk 2023<br />
-como descargar my talking tom hackeado 2023<br />
-my talking tom mod apk unlimited coins and stars<br />
-descargar my talking tom gratis para android<br />
-my talking tom apk mod menu<br />
-como actualizar my talking tom hackeado<br />
-my talking tom mod apk unlimited everything<br />
-descargar my talking tom por mega<br />
-my talking tom hack apk sin root<br />
-como jugar a my talking tom con dinero infinito<br />
-my talking tom mod apk no ads<br />
-descargar my talking tom 2 con dinero infinito<br />
-my talking tom apk full español<br />
-como tener todos los trajes en my talking tom<br />
-my talking tom mod apk happymod<br />
-descargar my talking tom original para android<br />
-my talking tom hack apk 2022<br />
-como descargar e instalar my talking tom hackeado<br />
-my talking tom mod apk unlimited food and potions<br />
-descargar juego de my talking tom con dinero infinito<br />
-my talking tom apk pro<br />
-como ganar mas dinero en my talking tom<br />
-my talking tom mod apk rexdl<br />
-descargar e instalar my talking tom para android gratis <br />
-my talking tom hack apk online <br />
-como tener mas niveles en my talking tom <br />
-my talking tom mod apk unlimited gems <br />
-descargar juego de mi gato que habla con dinero infinito</p>
-<h2>What is My Talking Tom Dinero Infinito APK?</h2>
-<p>My Talking Tom Dinero Infinito APK is a modified version of the original game that gives you unlimited coins and diamonds. This means that you can buy anything you want for your Tom without worrying about running out of money. You can also access all the features and locations in the game without any restrictions. You can enjoy playing with your Tom without any ads or interruptions.</p>
-<h3>Benefits of My Talking Tom Dinero Infinito APK</h3>
-<p>Some of the benefits of My Talking Tom Dinero Infinito APK are:</p>
-<ul>
-<li>You can have more fun and creativity with your Tom by dressing him up in different styles and outfits.</li>
-<li>You can make your Tom's home more cozy and beautiful by decorating it with various items.</li>
-<li>You can explore more places and meet more Toms by traveling around the world.</li> <li>You can play more mini-games and earn more rewards and stickers.</li>
-<li>You can record and share more videos of your Tom with your friends.</li>
-</ul>
-<h3>Risks of My Talking Tom Dinero Infinito APK</h3>
-<p>However, My Talking Tom Dinero Infinito APK is not an official version of the game and it comes with some risks. Some of the risks of My Talking Tom Dinero Infinito APK are:</p>
-<ul>
-<li>You might download a fake or malicious APK file that can harm your device or steal your personal information.</li>
-<li>You might violate the terms and conditions of the original game and get banned or suspended from playing it.</li>
-<li>You might lose your progress or data if the APK file is not compatible with the latest version of the game or your device.</li>
-<li>You might miss out on the updates and new features that the developers of the original game release regularly.</li>
-</ul>
-<h2>How to download and install My Talking Tom Dinero Infinito APK?</h2>
-<p>If you still want to try My Talking Tom Dinero Infinito APK, you need to follow some steps to download and install it on your device. Here are the steps:</p>
-<h3>Steps to download and install My Talking Tom Dinero Infinito APK</h3>
-<ol>
-<li>First, you need to find a reliable and trustworthy source that provides the APK file. You can search online for some reviews and ratings of different websites that offer the APK file.</li>
-<li>Second, you need to enable the installation of apps from unknown sources on your device. You can do this by going to your device settings, security, and allowing unknown sources.</li>
-<li>Third, you need to download the APK file from the source you have chosen. You can use a browser or a downloader app to do this.</li>
-<li>Fourth, you need to locate the downloaded APK file on your device and tap on it to start the installation process. You might need to grant some permissions and accept some terms and conditions before proceeding.</li>
-<li>Fifth, you need to wait for the installation to finish and then launch the game from your app drawer or home screen.</li>
-</ol>
-<h3>Tips to use My Talking Tom Dinero Infinito APK safely and effectively</h3>
-<p>To use My Talking Tom Dinero Infinito APK safely and effectively, you should follow some tips:</p>
-<ul>
-<li>You should scan the APK file with an antivirus or malware scanner before installing it on your device.</li>
-<li>You should backup your original game data before installing the APK file in case something goes wrong.</li>
-<li>You should not use your real name, email, or social media accounts to sign in to the game as it might compromise your privacy and security.</li>
-<li>You should not use the APK file to cheat or harass other players as it might get you reported or banned.</li>
-<li>You should check for updates and new features of the original game regularly and switch back to it if you want to enjoy them.</li>
-</ul>
-<h2>Conclusion</h2>
-<p>My Talking Tom is a fun and interactive game that lets you adopt and take care of a virtual pet, Tom. You can customize him, play with him, travel with him, and more. However, if you want to have unlimited coins and diamonds in the game, you might be tempted to download the My Talking Tom Dinero Infinito APK, which is a modified version of the game that gives you unlimited resources. However, this comes with some risks such as downloading a fake or malicious file, violating the terms and conditions of the original game, losing your progress or data, or missing out on the updates and new features. Therefore, you should be careful and follow some steps and tips to download and install the APK file safely and effectively. Alternatively, you can stick to the original game and enjoy it as it is.</p>
-<h3>Summary of the main points</h3>
-<p>In this article, we have discussed:</p>
-<ul>
-<li>What is My Talking Tom?</li>
-<li>What is My Talking Tom Dinero Infinito APK?</li>
-<li>How to download and install My Talking Tom Dinero Infinito APK?</li>
-<li>How to use My Talking Tom Dinero Infinito APK safely and effectively?</li>
-</ul>
-<h3>Call to action</h3>
-<p>If you found this article helpful, please share it with your friends who love playing My Talking Tom. Also, let us know in the comments below what do you think about My Talking Tom Dinero Infinito APK. Do you use it or not? Why or why not? We would love to hear from you!</p>
-<h4>Frequently Asked Questions</h4>
-<p>Here are some frequently asked questions about My Talking Tom Dinero Infinito APK:</p>
-<ol>
-<li>What is the difference between My Talking Tom and My Talking Tom 2?</li>
-<p>My Talking Tom 2 is the sequel to My Talking Tom, which was released in 2018. It has more features and options than the first game, such as new mini-games, new outfits, new pets, new locations, and more. However, the basic gameplay and concept are the same as the first game.</p>
-<li>Is My Talking Tom Dinero Infinito APK legal?</li>
-<p>No, My Talking Tom Dinero Infinito APK is not legal. It is a modified version of the original game that violates the intellectual property rights of the developers and the terms and conditions of the game. It is also considered as cheating and unfair to other players who play the game legitimately.</p>
-<li>Is My Talking Tom Dinero Infinito APK safe?</li>
-<p>Not necessarily. My Talking Tom Dinero Infinito APK can be unsafe if you download it from an untrusted or malicious source. It can contain viruses, malware, spyware, or other harmful software that can damage your device or steal your personal information. It can also cause your device to malfunction or crash.</p>
-<li>How can I get coins and diamonds in My Talking Tom without using My Talking Tom Dinero Infinito APK?</li>
-<p>You can get coins and diamonds in My Talking Tom without using My Talking Tom Dinero Infinito APK by playing the game normally and following some tips. Some of the tips are:</p>
-<ul>
-<li>Play mini-games regularly and earn coins and diamonds as rewards.</li>
-<li>Watch ads and videos in the game and earn coins and diamonds as rewards.</li>
-<li>Complete daily tasks and challenges and earn coins and diamonds as rewards.</li>
-<li>Spin the wheel of fortune and win coins and diamonds as prizes.</li>
-<li>Invite your friends to play the game and earn coins and diamonds as bonuses.</li>
-</ul>
-<li>How can I contact the developers of My Talking Tom if I have any questions or feedback?</li>
-<p>You can contact the developers of My Talking Tom by visiting their official website, Facebook page, Twitter account, Instagram account, YouTube channel, or email address. You can also leave a review or rating on the Google Play Store or the App Store.</p>
-</ol></p> 401be4b1e0<br />
-<br />
-<br />
spaces/1phancelerku/anime-remove-background/Facebook Lite The Best Way to Download Facebook for Android.md
DELETED
@@ -1,173 +0,0 @@
-
-<h1>Facebook Download Free APK: How to Get the Best Version of Facebook for Your Android Device</h1>
-<h2>Introduction</h2>
-<p>Facebook is one of the most popular social media platforms in the world, with over 2.8 billion monthly active users as of 2021. It allows you to connect with your friends and family, share photos and videos, join groups and pages, play games, watch live streams, and much more.</p>
-<h2>facebook download free apk</h2><br /><p><b><b>DOWNLOAD</b> ✏ ✏ ✏ <a href="https://jinyurl.com/2uNMOC">https://jinyurl.com/2uNMOC</a></b></p><br /><br />
-<p>But what if you want to use Facebook on your Android device without any limitations or restrictions? What if you want to enjoy the latest features and updates before anyone else? What if you want to save data and storage space while using Facebook?</p>
-<p>The answer is simple: you need to download Facebook APK for Android. An APK (Android Package Kit) is a file format that contains all the components of an Android app, such as the code, resources, assets, and manifest. By downloading and installing an APK file, you can bypass the Google Play Store and get access to any app you want, even if it is not available in your region or compatible with your device.</p>
-<p>In this article, we will show you how to download Facebook APK for Android, how to update it, and how to troubleshoot it. We will also share some benefits of downloading Facebook APK for Android, as well as some tips and tricks to optimize your Facebook experience. Let's get started!</p>
-<h2>How to Download Facebook APK for Android</h2>
-<h3>Step 1: Find a reliable source for the APK file</h3>
-<p>The first step is to find a trustworthy website that offers the Facebook APK file for download. There are many websites that claim to provide free APK files, but some of them may contain malware or viruses that can harm your device or steal your personal information. Therefore, you should always do some research before downloading any APK file from an unknown source.</p>
-<p>One of the best websites that we recommend for downloading Facebook APK for Android is [APKCombo](^2^). This website has a huge collection of APK files for various apps and games, including Facebook. It also provides detailed information about each app, such as the version number, size, developer, rating, screenshots, and description. You can also read user reviews and feedback about each app before downloading it.</p>
-<p>facebook lite apk free download<br />
-facebook app apk download for android<br />
-facebook messenger apk free download<br />
-facebook mod apk download free<br />
-facebook video downloader apk free<br />
-facebook apk download latest version<br />
-facebook dark mode apk free download<br />
-facebook auto liker apk free download<br />
-facebook hack apk free download<br />
-facebook old version apk free download<br />
-facebook transparent apk free download<br />
-facebook password cracker apk free download<br />
-facebook story saver apk free download<br />
-facebook pro apk free download<br />
-facebook page manager apk free download<br />
-facebook gameroom apk free download<br />
-facebook creator studio apk free download<br />
-facebook business suite apk free download<br />
-facebook dating apk free download<br />
-facebook watch apk free download<br />
-facebook ads manager apk free download<br />
-facebook analytics apk free download<br />
-facebook avatar maker apk free download<br />
-facebook beta tester apk free download<br />
-facebook chat heads apk free download<br />
-facebook clone apk free download<br />
-facebook code generator apk free download<br />
-facebook cover photo maker apk free download<br />
-facebook desktop mode apk free download<br />
-facebook downloader app apk free<br />
-facebook emoji keyboard apk free download<br />
-facebook events app apk free download<br />
-facebook fast delete messages apk free download<br />
-facebook followers app apk free download<br />
-facebook friends mapper apk free download<br />
-facebook groups app apk free download<br />
-facebook hacker app apk free download<br />
-facebook home launcher apk free download<br />
-facebook image downloader apk free <br />
-facebook instant games apk free download <br />
-facebook jailbreak app apk free <br />
-facebook keyboard app apk free <br />
-facebook live stream app apk free <br />
-facebook login app apk free <br />
-facebook marketplace app apk free <br />
-facebook messenger lite apk free <br />
-facebook notification sound changer app <br />
-facebook photo editor app </p>
-<h3>Step 2: Enable unknown sources on your device</h3>
-<p>The next step is to enable unknown sources on your device. This is a security setting that prevents you from installing apps from sources other than the Google Play Store. However, since we are going to install an APK file from a third-party website, we need to enable this option temporarily.</p>
-<p>To enable unknown sources on your device, follow these steps:</p>
-<ul>
-<li>Go to Settings > Security > Unknown Sources (or Settings > Apps > Special Access > Install Unknown Apps).</li>
-<li>Toggle on the switch or check the box next to Unknown Sources (or Allow from this source).</li>
-<li>A warning message will pop up. Tap OK or Allow to confirm.</li>
-</ul>
-<p>You can disable this option again after installing the Facebook APK file.</p>
-<h3>Step 3: Download and install the APK file</h3>
-<p>The third step is to download and install the Facebook APK file on your device. To do this, follow these steps:</p>
-<ul>
-<li>Open your browser and go to the APKCombo website and search for Facebook APK. Alternatively, you can use this link: .</li>
-<li>Scroll down and find the Facebook APK file that matches your device's specifications. Tap on the Download button next to it.</li>
-<li>A download prompt will appear. Tap on Download again to confirm.</li>
-<li>Wait for the download to finish. You can check the progress in the notification bar or the Downloads folder.</li>
-<li>Once the download is complete, tap on the APK file to open it. A installation prompt will appear. Tap on Install to start the installation process.</li>
-<li>Wait for the installation to finish. You can check the progress in the notification bar or the Apps folder.</li>
-<li>Once the installation is complete, tap on Open to launch Facebook on your device. Alternatively, you can find the Facebook icon on your home screen or app drawer and tap on it.</li>
-</ul>
-<h3>Step 4: Launch and enjoy Facebook on your device</h3>
-<p>The final step is to launch and enjoy Facebook on your device. You can log in with your existing Facebook account or create a new one. You can also customize your settings and preferences according to your needs. You can now access all the features and functions of Facebook, such as:</p>
-<ul>
-<li>Chatting with your friends and family via Messenger or WhatsApp.</li>
-<li>Sharing photos and videos via Stories or News Feed.</li>
-<li>Joining groups and pages that interest you.</li>
-<li>Playing games with your friends or other users.</li>
-<li>Watching live streams or videos from your favorite creators or celebrities.</li>
-<li>Shopping online via Marketplace or Shops.</li>
-<li>And much more!</li>
-</ul>
-<h2>How to Update Facebook APK for Android</h2>
-<p>One of the advantages of downloading Facebook APK for Android is that you can get the latest updates and features before they are released on the Google Play Store. However, you need to update the APK file manually whenever a new version is available. There are two ways to do this:</p>
-<h3>Option 1: Use the built-in update feature</h3>
-<p>The easiest way to update Facebook APK for Android is to use the built-in update feature within the app. To do this, follow these steps:</p>
-<ul>
-<li>Open Facebook on your device and tap on the menu icon (three horizontal lines) at the top right corner of the screen.</li>
-<li>Scroll down and tap on Settings & Privacy > App Updates.</li>
-<li>If there is a new version available, you will see a notification saying "Update Available". Tap on Update Now to start the update process.</li>
-<li>Wait for the update to finish. You can check the progress in the notification bar or the App Updates screen.</li>
-<li>Once the update is complete, tap on Restart Now to relaunch Facebook on your device.</li>
-</ul>
-<h3>Option 2: Download and install the latest version manually</h3>
-<p>The other way to update Facebook APK for Android is to download and install the latest version manually from a third-party website, such as APKCombo. To do this, follow these steps:</p>
-<ul>
-<li>Open your browser and go to the APKCombo website and search for the latest version of Facebook APK. Alternatively, you can use this link: .</li>
-<li>Scroll down and find the Facebook APK file that matches your device's specifications. Tap on the Download button next to it.</li>
-<li>A download prompt will appear. Tap on Download again to confirm.</li>
-<li>Wait for the download to finish. You can check the progress in the notification bar or the Downloads folder.</li>
-<li>Once the download is complete, tap on the APK file to open it. A installation prompt will appear. Tap on Install to start the installation process.</li>
-<li>Wait for the installation to finish. You can check the progress in the notification bar or the Apps folder.</li>
-<li>Once the installation is complete, tap on Open to launch Facebook on your device. Alternatively, you can find the Facebook icon on your home screen or app drawer and tap on it.</li>
-</ul>
-<p>Note: You may need to uninstall the previous version of Facebook APK before installing the new one, depending on the compatibility and security of the app.</p>
-<h2>How to Troubleshoot Facebook APK for Android</h2>
-<h3>Common issues and solutions</h3>
-<p>Sometimes, you may encounter some issues or errors while using Facebook APK for Android. Here are some of the most common ones and how to fix them:</p>
-<table>
-<tr>
-<th>Issue</th>
-<th>Solution</th>
-</tr>
-<tr>
-<td>The app crashes or freezes frequently.</td>
-<td>Clear the app cache and data by going to Settings > Apps > Facebook > Storage > Clear Cache and Clear Data. Restart your device and try again.</td>
-</tr>
-<tr>
-<td>The app does not load or display properly.</td>
-<td>Check your internet connection and make sure it is stable and fast. Try using a different network or a VPN if possible. Update your device's software and browser if needed.</td>
-</tr>
-<tr>
-<td>The app does not show notifications or messages.</td>
-<td>Enable notifications and permissions for Facebook by going to Settings > Apps > Facebook > Notifications and Settings > Apps > Facebook > Permissions. Make sure you are logged in with the correct account and sync your data if needed.</td>
-</tr>
-<tr>
-<td>The app does not support some features or functions.</td>
-<td>Make sure you have downloaded the latest version of Facebook APK for Android from a reliable source. Check if your device meets the minimum requirements for running the app. Try using a different device or emulator if possible.</td>
-</tr>
-</table>
-<h3>Tips and tricks to optimize your Facebook experience</h3>
-<p>Besides troubleshooting, there are also some tips and tricks that you can use to optimize your Facebook experience while using Facebook APK for Android. Here are some of them:</p>
-<ul>
-<li>Use Facebook Lite APK for Android if you want to save data and storage space while using Facebook. This is a lighter and faster version of Facebook that consumes less resources and works well on low-end devices.</li>
-<li>Use Facebook Dark Mode APK for Android if you want to switch to a dark theme while using Facebook. This is a feature that changes the background color of the app to black, which reduces eye strain and battery consumption.</li>
-<li>Use Facebook Mod APK for Android if you want to unlock some premium features and functions while using Facebook. This is a modified version of Facebook that offers some extra benefits, such as removing ads, enabling video downloads, enhancing privacy, and more.</li>
-<li>Use Facebook Messenger APK for Android if you want to chat with your friends and family without opening the main Facebook app. This is a standalone app that allows you to send and receive messages, stickers, emojis, gifs, voice notes, video calls, and more.</li>
-<li>Use Facebook Watch APK for Android if you want to watch videos from your favorite creators and celebrities without opening the main Facebook app. This is a standalone app that allows you to browse and stream videos from various categories, such as news, sports, entertainment, gaming, and more.</li>
-</ul>
-<h2>Conclusion</h2>
-<p>In conclusion, downloading Facebook APK for Android is a great way to get the best version of Facebook for your Android device. You can enjoy all the features and functions of Facebook without any limitations or restrictions. You can also get access to the latest updates and features before anyone else. All you need to do is follow the steps we have provided in this article and you will be able to download, install, update, and troubleshoot Facebook APK for Android with ease.</p>
-<p>We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you!</p>
-<h2>Frequently Asked Questions</h2>
-<p>Here are some of the most frequently asked questions about downloading Facebook APK for Android. We hope they will answer some of your queries and doubts.</p>
-<ol>
-<li>Is downloading Facebook APK for Android safe and legal?</li>
-<p>Yes, downloading Facebook APK for Android is safe and legal, as long as you download it from a reliable and reputable source, such as APKCombo. You should also scan the APK file with an antivirus or malware scanner before installing it on your device. However, you should be aware that downloading Facebook APK for Android may violate the terms and conditions of Facebook and Google Play Store, and you may lose some features or support from the official app.</p>
-<li>What are the minimum requirements for running Facebook APK for Android?</li>
-<p>The minimum requirements for running Facebook APK for Android may vary depending on the version and the device you are using. However, generally speaking, you will need an Android device with at least 1 GB of RAM, 100 MB of free storage space, and a stable internet connection. You will also need to enable unknown sources on your device to install the APK file.</p>
-<li>How can I delete Facebook APK for Android from my device?</li>
-<p>If you want to delete Facebook APK for Android from your device, you can follow these steps:</p>
-<ul>
-<li>Go to Settings > Apps > Facebook > Uninstall.</li>
-<li>Tap on OK or Uninstall to confirm.</li>
-<li>Wait for the uninstallation to finish.</li>
-<li>You can also delete the APK file from your Downloads folder or any other location where you saved it.</li>
-</ul>
-<li>How can I contact Facebook support if I have any issues or problems with Facebook APK for Android?</li>
-<p>If you have any issues or problems with Facebook APK for Android, you can try contacting Facebook support by going to Settings > Help & Support > Report a Problem within the app. You can also visit the official Facebook Help Center website or the Facebook Community Forum website for more information and assistance.</p>
-<li>Can I use Facebook APK for Android on other devices or platforms, such as iOS, Windows, or Mac?</li>
-<p>No, you cannot use Facebook APK for Android on other devices or platforms, such as iOS, Windows, or Mac. The APK file is only compatible with Android devices and emulators. If you want to use Facebook on other devices or platforms, you will need to download the official app from the respective app store or use the web version of Facebook on your browser.</p>
-</ol></p> 197e85843d<br />
-<br />
-<br />
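Steps 2 and 3 of the deleted article above describe sideloading entirely on the device. When a computer is available, the same install can be driven over adb; the minimal sketch below assumes Android platform-tools (adb) are installed, USB debugging is enabled on the device, and the APK file name is a placeholder.

# Hedged sketch: install a locally downloaded APK over adb, as an
# alternative to the on-device flow described above. Assumes adb is on
# PATH and the device is connected with USB debugging enabled;
# "facebook.apk" is a placeholder file name.
import subprocess

apk = "facebook.apk"  # placeholder: the APK fetched from a trusted mirror
subprocess.run(["adb", "install", "-r", apk], check=True)  # -r replaces an existing install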
spaces/2ndelement/voicevox/test/test_user_dict_model.py
DELETED
@@ -1,108 +0,0 @@
-from copy import deepcopy
-from unittest import TestCase
-
-from pydantic import ValidationError
-
-from voicevox_engine.kana_parser import parse_kana
-from voicevox_engine.model import UserDictWord
-
-
-class TestUserDictWords(TestCase):
-    def setUp(self):
-        self.test_model = {
-            "surface": "テスト",
-            "priority": 0,
-            "part_of_speech": "名詞",
-            "part_of_speech_detail_1": "固有名詞",
-            "part_of_speech_detail_2": "一般",
-            "part_of_speech_detail_3": "*",
-            "inflectional_type": "*",
-            "inflectional_form": "*",
-            "stem": "*",
-            "yomi": "テスト",
-            "pronunciation": "テスト",
-            "accent_type": 0,
-            "accent_associative_rule": "*",
-        }
-
-    def test_valid_word(self):
-        test_value = deepcopy(self.test_model)
-        try:
-            UserDictWord(**test_value)
-        except ValidationError as e:
-            self.fail(f"Unexpected Validation Error\n{str(e)}")
-
-    def test_convert_to_zenkaku(self):
-        test_value = deepcopy(self.test_model)
-        test_value["surface"] = "test"
-        self.assertEqual(UserDictWord(**test_value).surface, "ｔｅｓｔ")
-
-    def test_count_mora(self):
-        test_value = deepcopy(self.test_model)
-        self.assertEqual(UserDictWord(**test_value).mora_count, 3)
-
-    def test_count_mora_x(self):
-        test_value = deepcopy(self.test_model)
-        for s in [chr(i) for i in range(12449, 12533)]:
-            if s in ["ァ", "ィ", "ゥ", "ェ", "ォ", "ッ", "ャ", "ュ", "ョ", "ヮ"]:
-                continue
-            for x in "ァィゥェォャュョ":
-                expected_count = 0
-                test_value["pronunciation"] = s + x
-                for accent_phrase in parse_kana(
-                    test_value["pronunciation"] + "'",
-                ):
-                    expected_count += len(accent_phrase.moras)
-                with self.subTest(s=s, x=x):
-                    self.assertEqual(
-                        UserDictWord(**test_value).mora_count,
-                        expected_count,
-                    )
-
-    def test_count_mora_xwa(self):
-        test_value = deepcopy(self.test_model)
-        test_value["pronunciation"] = "クヮンセイ"
-        expected_count = 0
-        for accent_phrase in parse_kana(
-            test_value["pronunciation"] + "'",
-        ):
-            expected_count += len(accent_phrase.moras)
-        self.assertEqual(
-            UserDictWord(**test_value).mora_count,
-            expected_count,
-        )
-
-    def test_invalid_pronunciation_not_katakana(self):
-        test_value = deepcopy(self.test_model)
-        test_value["pronunciation"] = "ぼいぼ"
-        with self.assertRaises(ValidationError):
-            UserDictWord(**test_value)
-
-    def test_invalid_pronunciation_invalid_sutegana(self):
-        test_value = deepcopy(self.test_model)
-        test_value["pronunciation"] = "アィウェォ"
-        with self.assertRaises(ValidationError):
-            UserDictWord(**test_value)
-
-    def test_invalid_pronunciation_invalid_xwa(self):
-        test_value = deepcopy(self.test_model)
-        test_value["pronunciation"] = "アヮ"
-        with self.assertRaises(ValidationError):
-            UserDictWord(**test_value)
-
-    def test_count_mora_voiced_sound(self):
-        test_value = deepcopy(self.test_model)
-        test_value["pronunciation"] = "ボイボ"
-        self.assertEqual(UserDictWord(**test_value).mora_count, 3)
-
-    def test_invalid_accent_type(self):
-        test_value = deepcopy(self.test_model)
-        test_value["accent_type"] = 4
-        with self.assertRaises(ValidationError):
-            UserDictWord(**test_value)
-
-    def test_invalid_accent_type_2(self):
-        test_value = deepcopy(self.test_model)
-        test_value["accent_type"] = -1
-        with self.assertRaises(ValidationError):
-            UserDictWord(**test_value)
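The deleted tests above center on two behaviors of UserDictWord: field validation (katakana-only pronunciation, accent_type within the mora range) and deriving mora_count from the pronunciation. A minimal sketch of what they exercise, assuming the voicevox_engine package from this space is importable:

# Minimal sketch of the behavior the deleted tests exercise, assuming
# voicevox_engine (from the deleted space) is importable. UserDictWord
# validates the fields and derives mora_count from the katakana
# pronunciation; an invalid pronunciation raises ValidationError.
from pydantic import ValidationError

from voicevox_engine.model import UserDictWord

entry = dict(
    surface="テスト",
    priority=0,
    part_of_speech="名詞",
    part_of_speech_detail_1="固有名詞",
    part_of_speech_detail_2="一般",
    part_of_speech_detail_3="*",
    inflectional_type="*",
    inflectional_form="*",
    stem="*",
    yomi="テスト",
    pronunciation="テスト",
    accent_type=0,
    accent_associative_rule="*",
)

word = UserDictWord(**entry)
print(word.mora_count)  # 3: テ, ス, ト

try:
    UserDictWord(**{**entry, "pronunciation": "ぼいぼ"})  # hiragana, not katakana
except ValidationError:
    print("non-katakana pronunciation rejected")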
spaces/801artistry/RVC801/demucs/pretrained.py
DELETED
@@ -1,107 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-# author: adefossez
-
-import logging
-
-from diffq import DiffQuantizer
-import torch.hub
-
-from .model import Demucs
-from .tasnet import ConvTasNet
-from .utils import set_state
-
-logger = logging.getLogger(__name__)
-ROOT = "https://dl.fbaipublicfiles.com/demucs/v3.0/"
-
-PRETRAINED_MODELS = {
-    'demucs': 'e07c671f',
-    'demucs48_hq': '28a1282c',
-    'demucs_extra': '3646af93',
-    'demucs_quantized': '07afea75',
-    'tasnet': 'beb46fac',
-    'tasnet_extra': 'df3777b2',
-    'demucs_unittest': '09ebc15f',
-}
-
-SOURCES = ["drums", "bass", "other", "vocals"]
-
-
-def get_url(name):
-    sig = PRETRAINED_MODELS[name]
-    return ROOT + name + "-" + sig[:8] + ".th"
-
-
-def is_pretrained(name):
-    return name in PRETRAINED_MODELS
-
-
-def load_pretrained(name):
-    if name == "demucs":
-        return demucs(pretrained=True)
-    elif name == "demucs48_hq":
-        return demucs(pretrained=True, hq=True, channels=48)
-    elif name == "demucs_extra":
-        return demucs(pretrained=True, extra=True)
-    elif name == "demucs_quantized":
-        return demucs(pretrained=True, quantized=True)
-    elif name == "demucs_unittest":
-        return demucs_unittest(pretrained=True)
-    elif name == "tasnet":
-        return tasnet(pretrained=True)
-    elif name == "tasnet_extra":
-        return tasnet(pretrained=True, extra=True)
-    else:
-        raise ValueError(f"Invalid pretrained name {name}")
-
-
-def _load_state(name, model, quantizer=None):
-    url = get_url(name)
-    state = torch.hub.load_state_dict_from_url(url, map_location='cpu', check_hash=True)
-    set_state(model, quantizer, state)
-    if quantizer:
-        quantizer.detach()
-
-
-def demucs_unittest(pretrained=True):
-    model = Demucs(channels=4, sources=SOURCES)
-    if pretrained:
-        _load_state('demucs_unittest', model)
-    return model
-
-
-def demucs(pretrained=True, extra=False, quantized=False, hq=False, channels=64):
-    if not pretrained and (extra or quantized or hq):
-        raise ValueError("if extra or quantized is True, pretrained must be True.")
-    model = Demucs(sources=SOURCES, channels=channels)
-    if pretrained:
-        name = 'demucs'
-        if channels != 64:
-            name += str(channels)
-        quantizer = None
-        if sum([extra, quantized, hq]) > 1:
-            raise ValueError("Only one of extra, quantized, hq, can be True.")
-        if quantized:
-            quantizer = DiffQuantizer(model, group_size=8, min_size=1)
-            name += '_quantized'
-        if extra:
-            name += '_extra'
-        if hq:
-            name += '_hq'
-        _load_state(name, model, quantizer)
-    return model
-
-
-def tasnet(pretrained=True, extra=False):
-    if not pretrained and extra:
-        raise ValueError("if extra is True, pretrained must be True.")
-    model = ConvTasNet(X=10, sources=SOURCES)
-    if pretrained:
-        name = 'tasnet'
-        if extra:
-            name = 'tasnet_extra'
-        _load_state(name, model)
-    return model
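The deleted module above maps a model name to a checkpoint URL under ROOT and returns a hydrated Demucs or ConvTasNet that separates audio into the four SOURCES. A minimal usage sketch, assuming the demucs package from this space is importable and the checkpoint host is reachable:

# Minimal usage sketch for the deleted module, assuming the demucs
# package from this space is on the import path. load_pretrained()
# builds the model and downloads the checkpoint named in
# PRETRAINED_MODELS from dl.fbaipublicfiles.com.
from demucs.pretrained import SOURCES, is_pretrained, load_pretrained

assert is_pretrained("demucs48_hq")
model = load_pretrained("demucs48_hq")  # Demucs(channels=48) with the '_hq' weights
model.eval()

n_params = sum(p.numel() for p in model.parameters())
print(f"{len(SOURCES)} sources {SOURCES}, {n_params / 1e6:.1f}M parameters")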
spaces/801artistry/RVC801/infer/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py DELETED
@@ -1,87 +0,0 @@
-import numpy as np
-import pyworld
-
-from infer.lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
-
-
-class HarvestF0Predictor(F0Predictor):
-    def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
-        self.hop_length = hop_length
-        self.f0_min = f0_min
-        self.f0_max = f0_max
-        self.sampling_rate = sampling_rate
-
-    def interpolate_f0(self, f0):
-        """
-        Interpolate the F0 values over unvoiced frames
-        """
-
-        data = np.reshape(f0, (f0.size, 1))
-
-        vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
-        vuv_vector[data > 0.0] = 1.0
-        vuv_vector[data <= 0.0] = 0.0
-
-        ip_data = data
-
-        frame_number = data.size
-        last_value = 0.0
-        for i in range(frame_number):
-            if data[i] <= 0.0:
-                j = i + 1
-                for j in range(i + 1, frame_number):
-                    if data[j] > 0.0:
-                        break
-                if j < frame_number - 1:
-                    if last_value > 0.0:
-                        step = (data[j] - data[i - 1]) / float(j - i)
-                        for k in range(i, j):
-                            ip_data[k] = data[i - 1] + step * (k - i + 1)
-                    else:
-                        for k in range(i, j):
-                            ip_data[k] = data[j]
-                else:
-                    for k in range(i, frame_number):
-                        ip_data[k] = last_value
-            else:
-                ip_data[i] = data[i]  # this copy may be unnecessary
-                last_value = data[i]
-
-        return ip_data[:, 0], vuv_vector[:, 0]
-
-    def resize_f0(self, x, target_len):
-        source = np.array(x)
-        source[source < 0.001] = np.nan
-        target = np.interp(
-            np.arange(0, len(source) * target_len, len(source)) / target_len,
-            np.arange(0, len(source)),
-            source,
-        )
-        res = np.nan_to_num(target)
-        return res
-
-    def compute_f0(self, wav, p_len=None):
-        if p_len is None:
-            p_len = wav.shape[0] // self.hop_length
-        f0, t = pyworld.harvest(
-            wav.astype(np.double),
-            fs=self.sampling_rate,
-            f0_ceil=self.f0_max,
-            f0_floor=self.f0_min,
-            frame_period=1000 * self.hop_length / self.sampling_rate,
-        )
-        f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
-        return self.interpolate_f0(self.resize_f0(f0, p_len))[0]
-
-    def compute_f0_uv(self, wav, p_len=None):
-        if p_len is None:
-            p_len = wav.shape[0] // self.hop_length
-        f0, t = pyworld.harvest(
-            wav.astype(np.double),
-            fs=self.sampling_rate,
-            f0_floor=self.f0_min,
-            f0_ceil=self.f0_max,
-            frame_period=1000 * self.hop_length / self.sampling_rate,
-        )
-        f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate)
-        return self.interpolate_f0(self.resize_f0(f0, p_len))
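The class above is a thin wrapper around pyworld's two-stage pitch pipeline: a coarse Harvest pass refined by StoneMask. A self-contained sketch of that pipeline, with an illustrative sample rate and hop size:

import numpy as np
import pyworld

sr, hop = 16000, 160                         # illustrative sample rate and hop size
wav = np.random.randn(sr).astype(np.double)  # one second of noise as a stand-in signal
f0, t = pyworld.harvest(wav, fs=sr, f0_floor=50.0, f0_ceil=1100.0,
                        frame_period=1000 * hop / sr)
f0 = pyworld.stonemask(wav, f0, t, sr)       # refine the coarse Harvest contour
print(f0.shape)                              # roughly one frame per hop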
spaces/801artistry/RVC801/lib/uvr5_pack/lib_v5/layers_537227KB.py DELETED
@@ -1,126 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from . import spec_utils
-
-
-class Conv2DBNActiv(nn.Module):
-    def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
-        super(Conv2DBNActiv, self).__init__()
-        self.conv = nn.Sequential(
-            nn.Conv2d(
-                nin,
-                nout,
-                kernel_size=ksize,
-                stride=stride,
-                padding=pad,
-                dilation=dilation,
-                bias=False,
-            ),
-            nn.BatchNorm2d(nout),
-            activ(),
-        )
-
-    def __call__(self, x):
-        return self.conv(x)
-
-
-class SeperableConv2DBNActiv(nn.Module):
-    def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU):
-        super(SeperableConv2DBNActiv, self).__init__()
-        self.conv = nn.Sequential(
-            nn.Conv2d(
-                nin,
-                nin,
-                kernel_size=ksize,
-                stride=stride,
-                padding=pad,
-                dilation=dilation,
-                groups=nin,
-                bias=False,
-            ),
-            nn.Conv2d(nin, nout, kernel_size=1, bias=False),
-            nn.BatchNorm2d(nout),
-            activ(),
-        )
-
-    def __call__(self, x):
-        return self.conv(x)
-
-
-class Encoder(nn.Module):
-    def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU):
-        super(Encoder, self).__init__()
-        self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
-        self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ)
-
-    def __call__(self, x):
-        skip = self.conv1(x)
-        h = self.conv2(skip)
-
-        return h, skip
-
-
-class Decoder(nn.Module):
-    def __init__(
-        self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False
-    ):
-        super(Decoder, self).__init__()
-        self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ)
-        self.dropout = nn.Dropout2d(0.1) if dropout else None
-
-    def __call__(self, x, skip=None):
-        x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)
-        if skip is not None:
-            skip = spec_utils.crop_center(skip, x)
-            x = torch.cat([x, skip], dim=1)
-        h = self.conv(x)
-
-        if self.dropout is not None:
-            h = self.dropout(h)
-
-        return h
-
-
-class ASPPModule(nn.Module):
-    def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU):
-        super(ASPPModule, self).__init__()
-        self.conv1 = nn.Sequential(
-            nn.AdaptiveAvgPool2d((1, None)),
-            Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ),
-        )
-        self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ)
-        self.conv3 = SeperableConv2DBNActiv(
-            nin, nin, 3, 1, dilations[0], dilations[0], activ=activ
-        )
-        self.conv4 = SeperableConv2DBNActiv(
-            nin, nin, 3, 1, dilations[1], dilations[1], activ=activ
-        )
-        self.conv5 = SeperableConv2DBNActiv(
-            nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
-        )
-        self.conv6 = SeperableConv2DBNActiv(
-            nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
-        )
-        self.conv7 = SeperableConv2DBNActiv(
-            nin, nin, 3, 1, dilations[2], dilations[2], activ=activ
-        )
-        self.bottleneck = nn.Sequential(
-            Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1)
-        )
-
-    def forward(self, x):
-        _, _, h, w = x.size()
-        feat1 = F.interpolate(
-            self.conv1(x), size=(h, w), mode="bilinear", align_corners=True
-        )
-        feat2 = self.conv2(x)
-        feat3 = self.conv3(x)
-        feat4 = self.conv4(x)
-        feat5 = self.conv5(x)
-        feat6 = self.conv6(x)
-        feat7 = self.conv7(x)
-        out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1)
-        bottle = self.bottleneck(out)
-        return bottle
spaces/AIGC-Audio/Make_An_Audio/ldm/modules/diffusionmodules/openaimodel.py DELETED
@@ -1,963 +0,0 @@
-from abc import abstractmethod
-from functools import partial
-import math
-from typing import Iterable
-
-import numpy as np
-import torch as th
-import torch.nn as nn
-import torch.nn.functional as F
-
-from ldm.modules.diffusionmodules.util import (
-    checkpoint,
-    conv_nd,
-    linear,
-    avg_pool_nd,
-    zero_module,
-    normalization,
-    timestep_embedding,
-)
-from ldm.modules.attention import SpatialTransformer
-
-
-# dummy replace
-def convert_module_to_f16(x):
-    pass
-
-def convert_module_to_f32(x):
-    pass
-
-
-## go
-class AttentionPool2d(nn.Module):
-    """
-    Adapted from CLIP: https://github.com/openai/CLIP/blob/main/clip/model.py
-    """
-
-    def __init__(
-        self,
-        spacial_dim: int,
-        embed_dim: int,
-        num_heads_channels: int,
-        output_dim: int = None,
-    ):
-        super().__init__()
-        self.positional_embedding = nn.Parameter(th.randn(embed_dim, spacial_dim ** 2 + 1) / embed_dim ** 0.5)
-        self.qkv_proj = conv_nd(1, embed_dim, 3 * embed_dim, 1)
-        self.c_proj = conv_nd(1, embed_dim, output_dim or embed_dim, 1)
-        self.num_heads = embed_dim // num_heads_channels
-        self.attention = QKVAttention(self.num_heads)
-
-    def forward(self, x):
-        b, c, *_spatial = x.shape
-        x = x.reshape(b, c, -1)  # NC(HW)
-        x = th.cat([x.mean(dim=-1, keepdim=True), x], dim=-1)  # NC(HW+1)
-        x = x + self.positional_embedding[None, :, :].to(x.dtype)  # NC(HW+1)
-        x = self.qkv_proj(x)
-        x = self.attention(x)
-        x = self.c_proj(x)
-        return x[:, :, 0]
-
-
-class TimestepBlock(nn.Module):
-    """
-    Any module where forward() takes timestep embeddings as a second argument.
-    """
-
-    @abstractmethod
-    def forward(self, x, emb):
-        """
-        Apply the module to `x` given `emb` timestep embeddings.
-        """
-
-
-class TimestepEmbedSequential(nn.Sequential, TimestepBlock):
-    """
-    A sequential module that passes timestep embeddings to the children that
-    support it as an extra input.
-    """
-
-    def forward(self, x, emb, context=None):
-        for layer in self:
-            if isinstance(layer, TimestepBlock):
-                x = layer(x, emb)
-            elif isinstance(layer, SpatialTransformer):
-                x = layer(x, context)
-            else:
-                x = layer(x)
-        return x
-
-
-class Upsample(nn.Module):
-    """
-    An upsampling layer with an optional convolution.
-    :param channels: channels in the inputs and outputs.
-    :param use_conv: a bool determining if a convolution is applied.
-    :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then
-        upsampling occurs in the inner-two dimensions.
-    """
-
-    def __init__(self, channels, use_conv, dims=2, out_channels=None, padding=1):
-        super().__init__()
-        self.channels = channels
-        self.out_channels = out_channels or channels
-        self.use_conv = use_conv
-        self.dims = dims
-        if use_conv:
-            self.conv = conv_nd(dims, self.channels, self.out_channels, 3, padding=padding)
-
-    def forward(self, x):
-        assert x.shape[1] == self.channels
-        if self.dims == 3:
-            x = F.interpolate(
-                x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode="nearest"
-            )
-        else:
-            x = F.interpolate(x, scale_factor=2, mode="nearest")
-        if self.use_conv:
-            x = self.conv(x)
-        return x
-
-class TransposedUpsample(nn.Module):
-    'Learned 2x upsampling without padding'
-    def __init__(self, channels, out_channels=None, ks=5):
-        super().__init__()
-        self.channels = channels
-        self.out_channels = out_channels or channels
-
-        self.up = nn.ConvTranspose2d(self.channels, self.out_channels, kernel_size=ks, stride=2)
-
-    def forward(self, x):
-        return self.up(x)
-
-
-class Downsample(nn.Module):
-    """
-    A downsampling layer with an optional convolution.
-    :param channels: channels in the inputs and outputs.
-    :param use_conv: a bool determining if a convolution is applied.
-    :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then
-        downsampling occurs in the inner-two dimensions.
-    """
-
-    def __init__(self, channels, use_conv, dims=2, out_channels=None, padding=1):
-        super().__init__()
-        self.channels = channels
-        self.out_channels = out_channels or channels
-        self.use_conv = use_conv
-        self.dims = dims
-        stride = 2 if dims != 3 else (1, 2, 2)
-        if use_conv:
-            self.op = conv_nd(
-                dims, self.channels, self.out_channels, 3, stride=stride, padding=padding
-            )
-        else:
-            assert self.channels == self.out_channels
-            self.op = avg_pool_nd(dims, kernel_size=stride, stride=stride)
-
-    def forward(self, x):
-        assert x.shape[1] == self.channels
-        return self.op(x)
-
-
-class ResBlock(TimestepBlock):
-    """
-    A residual block that can optionally change the number of channels.
-    :param channels: the number of input channels.
-    :param emb_channels: the number of timestep embedding channels.
-    :param dropout: the rate of dropout.
-    :param out_channels: if specified, the number of out channels.
-    :param use_conv: if True and out_channels is specified, use a spatial
-        convolution instead of a smaller 1x1 convolution to change the
-        channels in the skip connection.
-    :param dims: determines if the signal is 1D, 2D, or 3D.
-    :param use_checkpoint: if True, use gradient checkpointing on this module.
-    :param up: if True, use this block for upsampling.
-    :param down: if True, use this block for downsampling.
-    """
-
-    def __init__(
-        self,
-        channels,
-        emb_channels,
-        dropout,
-        out_channels=None,
-        use_conv=False,
-        use_scale_shift_norm=False,
-        dims=2,
-        use_checkpoint=False,
-        up=False,
-        down=False,
-    ):
-        super().__init__()
-        self.channels = channels
-        self.emb_channels = emb_channels
-        self.dropout = dropout
-        self.out_channels = out_channels or channels
-        self.use_conv = use_conv
-        self.use_checkpoint = use_checkpoint
-        self.use_scale_shift_norm = use_scale_shift_norm
-
-        self.in_layers = nn.Sequential(
-            normalization(channels),
-            nn.SiLU(),
-            conv_nd(dims, channels, self.out_channels, 3, padding=1),
-        )
-
-        self.updown = up or down
-
-        if up:
-            self.h_upd = Upsample(channels, False, dims)
-            self.x_upd = Upsample(channels, False, dims)
-        elif down:
-            self.h_upd = Downsample(channels, False, dims)
-            self.x_upd = Downsample(channels, False, dims)
-        else:
-            self.h_upd = self.x_upd = nn.Identity()
-
-        self.emb_layers = nn.Sequential(
-            nn.SiLU(),
-            linear(
-                emb_channels,
-                2 * self.out_channels if use_scale_shift_norm else self.out_channels,
-            ),
-        )
-        self.out_layers = nn.Sequential(
-            normalization(self.out_channels),
-            nn.SiLU(),
-            nn.Dropout(p=dropout),
-            zero_module(
-                conv_nd(dims, self.out_channels, self.out_channels, 3, padding=1)
-            ),
-        )
-
-        if self.out_channels == channels:
-            self.skip_connection = nn.Identity()
-        elif use_conv:
-            self.skip_connection = conv_nd(
-                dims, channels, self.out_channels, 3, padding=1
-            )
-        else:
-            self.skip_connection = conv_nd(dims, channels, self.out_channels, 1)
-
-    def forward(self, x, emb):
-        """
-        Apply the block to a Tensor, conditioned on a timestep embedding.
-        :param x: an [N x C x ...] Tensor of features.
-        :param emb: an [N x emb_channels] Tensor of timestep embeddings.
-        :return: an [N x C x ...] Tensor of outputs.
-        """
-        return checkpoint(
-            self._forward, (x, emb), self.parameters(), self.use_checkpoint
-        )
-
-    def _forward(self, x, emb):
-        if self.updown:
-            in_rest, in_conv = self.in_layers[:-1], self.in_layers[-1]
-            h = in_rest(x)
-            h = self.h_upd(h)
-            x = self.x_upd(x)
-            h = in_conv(h)
-        else:
-            h = self.in_layers(x)
-        emb_out = self.emb_layers(emb).type(h.dtype)
-        while len(emb_out.shape) < len(h.shape):
-            emb_out = emb_out[..., None]
-        if self.use_scale_shift_norm:
-            out_norm, out_rest = self.out_layers[0], self.out_layers[1:]
-            scale, shift = th.chunk(emb_out, 2, dim=1)
-            h = out_norm(h) * (1 + scale) + shift
-            h = out_rest(h)
-        else:
-            h = h + emb_out
-            h = self.out_layers(h)
-        return self.skip_connection(x) + h
-
-
-class AttentionBlock(nn.Module):
-    """
-    An attention block that allows spatial positions to attend to each other.
-    Originally ported from here, but adapted to the N-d case.
-    https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66.
-    """
-
-    def __init__(
-        self,
-        channels,
-        num_heads=1,
-        num_head_channels=-1,
-        use_checkpoint=False,
-        use_new_attention_order=False,
-    ):
-        super().__init__()
-        self.channels = channels
-        if num_head_channels == -1:
-            self.num_heads = num_heads
-        else:
-            assert (
-                channels % num_head_channels == 0
-            ), f"q,k,v channels {channels} is not divisible by num_head_channels {num_head_channels}"
-            self.num_heads = channels // num_head_channels
-        self.use_checkpoint = use_checkpoint
-        self.norm = normalization(channels)
-        self.qkv = conv_nd(1, channels, channels * 3, 1)
-        if use_new_attention_order:
-            # split qkv before split heads
-            self.attention = QKVAttention(self.num_heads)
-        else:
-            # split heads before split qkv
-            self.attention = QKVAttentionLegacy(self.num_heads)
-
-        self.proj_out = zero_module(conv_nd(1, channels, channels, 1))
-
-    def forward(self, x):
-        return checkpoint(self._forward, (x,), self.parameters(), True)  # TODO: check checkpoint usage, is True # TODO: fix the .half call!!!
-        #return pt_checkpoint(self._forward, x)  # pytorch
-
-    def _forward(self, x):
-        b, c, *spatial = x.shape
-        x = x.reshape(b, c, -1)
-        qkv = self.qkv(self.norm(x))
-        h = self.attention(qkv)
-        h = self.proj_out(h)
-        return (x + h).reshape(b, c, *spatial)
-
-
-def count_flops_attn(model, _x, y):
-    """
-    A counter for the `thop` package to count the operations in an
-    attention operation.
-    Meant to be used like:
-        macs, params = thop.profile(
-            model,
-            inputs=(inputs, timestamps),
-            custom_ops={QKVAttention: QKVAttention.count_flops},
-        )
-    """
-    b, c, *spatial = y[0].shape
-    num_spatial = int(np.prod(spatial))
-    # We perform two matmuls with the same number of ops.
-    # The first computes the weight matrix, the second computes
-    # the combination of the value vectors.
-    matmul_ops = 2 * b * (num_spatial ** 2) * c
-    model.total_ops += th.DoubleTensor([matmul_ops])
-
-
-class QKVAttentionLegacy(nn.Module):
-    """
-    A module which performs QKV attention. Matches legacy QKVAttention + input/output heads shaping
-    """
-
-    def __init__(self, n_heads):
-        super().__init__()
-        self.n_heads = n_heads
-
-    def forward(self, qkv):
-        """
-        Apply QKV attention.
-        :param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs.
-        :return: an [N x (H * C) x T] tensor after attention.
-        """
-        bs, width, length = qkv.shape
-        assert width % (3 * self.n_heads) == 0
-        ch = width // (3 * self.n_heads)
-        q, k, v = qkv.reshape(bs * self.n_heads, ch * 3, length).split(ch, dim=1)
-        scale = 1 / math.sqrt(math.sqrt(ch))
-        weight = th.einsum(
-            "bct,bcs->bts", q * scale, k * scale
-        )  # More stable with f16 than dividing afterwards
-        weight = th.softmax(weight.float(), dim=-1).type(weight.dtype)
-        a = th.einsum("bts,bcs->bct", weight, v)
-        return a.reshape(bs, -1, length)
-
-    @staticmethod
-    def count_flops(model, _x, y):
-        return count_flops_attn(model, _x, y)
-
-
-class QKVAttention(nn.Module):
-    """
-    A module which performs QKV attention and splits in a different order.
-    """
-
-    def __init__(self, n_heads):
-        super().__init__()
-        self.n_heads = n_heads
-
-    def forward(self, qkv):
-        """
-        Apply QKV attention.
-        :param qkv: an [N x (3 * H * C) x T] tensor of Qs, Ks, and Vs.
-        :return: an [N x (H * C) x T] tensor after attention.
-        """
-        bs, width, length = qkv.shape
-        assert width % (3 * self.n_heads) == 0
-        ch = width // (3 * self.n_heads)
-        q, k, v = qkv.chunk(3, dim=1)
-        scale = 1 / math.sqrt(math.sqrt(ch))
-        weight = th.einsum(
-            "bct,bcs->bts",
-            (q * scale).view(bs * self.n_heads, ch, length),
-            (k * scale).view(bs * self.n_heads, ch, length),
-        )  # More stable with f16 than dividing afterwards
-        weight = th.softmax(weight.float(), dim=-1).type(weight.dtype)
-        a = th.einsum("bts,bcs->bct", weight, v.reshape(bs * self.n_heads, ch, length))
-        return a.reshape(bs, -1, length)
-
-    @staticmethod
-    def count_flops(model, _x, y):
-        return count_flops_attn(model, _x, y)
-
-
-class UNetModel(nn.Module):
-    """
-    The full UNet model with attention and timestep embedding.
-    :param in_channels: channels in the input Tensor.
-    :param model_channels: base channel count for the model.
-    :param out_channels: channels in the output Tensor.
-    :param num_res_blocks: number of residual blocks per downsample.
-    :param attention_resolutions: a collection of downsample rates at which
-        attention will take place. May be a set, list, or tuple.
-        For example, if this contains 4, then at 4x downsampling, attention
-        will be used.
-    :param dropout: the dropout probability.
-    :param channel_mult: channel multiplier for each level of the UNet.
-    :param conv_resample: if True, use learned convolutions for upsampling and
-        downsampling.
-    :param dims: determines if the signal is 1D, 2D, or 3D.
-    :param num_classes: if specified (as an int), then this model will be
-        class-conditional with `num_classes` classes.
-    :param use_checkpoint: use gradient checkpointing to reduce memory usage.
-    :param num_heads: the number of attention heads in each attention layer.
-    :param num_heads_channels: if specified, ignore num_heads and instead use
-        a fixed channel width per attention head.
-    :param num_heads_upsample: works with num_heads to set a different number
-        of heads for upsampling. Deprecated.
-    :param use_scale_shift_norm: use a FiLM-like conditioning mechanism.
-    :param resblock_updown: use residual blocks for up/downsampling.
-    :param use_new_attention_order: use a different attention pattern for potentially
-        increased efficiency.
-    """
-
-    def __init__(
-        self,
-        image_size,
-        in_channels,
-        model_channels,
-        out_channels,
-        num_res_blocks,
-        attention_resolutions,
-        dropout=0,
-        channel_mult=(1, 2, 4, 8),
-        conv_resample=True,
-        dims=2,
-        num_classes=None,
-        use_checkpoint=False,
-        use_fp16=False,
-        num_heads=-1,
-        num_head_channels=-1,
-        num_heads_upsample=-1,
-        use_scale_shift_norm=False,
-        resblock_updown=False,
-        use_new_attention_order=False,
-        use_spatial_transformer=False,  # custom transformer support
-        transformer_depth=1,  # custom transformer support
-        context_dim=None,  # custom transformer support
-        n_embed=None,  # custom support for prediction of discrete ids into codebook of first stage vq model
-        legacy=True,
-    ):
-        super().__init__()
-        if use_spatial_transformer:
-            assert context_dim is not None, 'Fool!! You forgot to include the dimension of your cross-attention conditioning...'
-
-        if context_dim is not None:
-            assert use_spatial_transformer, 'Fool!! You forgot to use the spatial transformer for your cross-attention conditioning...'
-            from omegaconf.listconfig import ListConfig
-            if type(context_dim) == ListConfig:
-                context_dim = list(context_dim)
-
-        if num_heads_upsample == -1:
-            num_heads_upsample = num_heads
-
-        if num_heads == -1:
-            assert num_head_channels != -1, 'Either num_heads or num_head_channels has to be set'
-
-        if num_head_channels == -1:
-            assert num_heads != -1, 'Either num_heads or num_head_channels has to be set'
-
-        self.image_size = image_size
-        self.in_channels = in_channels
-        self.model_channels = model_channels
-        self.out_channels = out_channels
-        self.num_res_blocks = num_res_blocks
-        self.attention_resolutions = attention_resolutions
-        self.dropout = dropout
-        self.channel_mult = channel_mult
-        self.conv_resample = conv_resample
-        self.num_classes = num_classes
-        self.use_checkpoint = use_checkpoint
-        self.dtype = th.float16 if use_fp16 else th.float32
-        self.num_heads = num_heads
-        self.num_head_channels = num_head_channels
-        self.num_heads_upsample = num_heads_upsample
-        self.predict_codebook_ids = n_embed is not None
-
-        time_embed_dim = model_channels * 4
-        self.time_embed = nn.Sequential(
-            linear(model_channels, time_embed_dim),
-            nn.SiLU(),
-            linear(time_embed_dim, time_embed_dim),
-        )
-
-        if self.num_classes is not None:
-            self.label_emb = nn.Embedding(num_classes, time_embed_dim)
-
-        self.input_blocks = nn.ModuleList(
-            [
-                TimestepEmbedSequential(
-                    conv_nd(dims, in_channels, model_channels, 3, padding=1)  # conv2d for txt2img/audio
-                )
-            ]
-        )
-        self._feature_size = model_channels
-        input_block_chans = [model_channels]
-        ch = model_channels
-        ds = 1
-        # downsample blocks
-        for level, mult in enumerate(channel_mult):
-            for _ in range(num_res_blocks):
-                layers = [
-                    ResBlock(
-                        ch,
-                        time_embed_dim,
-                        dropout,
-                        out_channels=mult * model_channels,
-                        dims=dims,
-                        use_checkpoint=use_checkpoint,
-                        use_scale_shift_norm=use_scale_shift_norm,
-                    )
-                ]
-                ch = mult * model_channels
-                if ds in attention_resolutions:
-                    if num_head_channels == -1:
-                        dim_head = ch // num_heads
-                    else:
-                        num_heads = ch // num_head_channels
-                        dim_head = num_head_channels
-                    if legacy:
-                        #num_heads = 1
-                        dim_head = ch // num_heads if use_spatial_transformer else num_head_channels
-                    layers.append(
-                        AttentionBlock(
-                            ch,
-                            use_checkpoint=use_checkpoint,
-                            num_heads=num_heads,
-                            num_head_channels=dim_head,
-                            use_new_attention_order=use_new_attention_order,
-                        ) if not use_spatial_transformer else SpatialTransformer(  # transformer_depth is 1
-                            ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim
-                        )
-                    )
-                self.input_blocks.append(TimestepEmbedSequential(*layers))
-                self._feature_size += ch
-                input_block_chans.append(ch)
-            if level != len(channel_mult) - 1:
-                out_ch = ch
-                self.input_blocks.append(
-                    TimestepEmbedSequential(
-                        ResBlock(
-                            ch,
-                            time_embed_dim,
-                            dropout,
-                            out_channels=out_ch,
-                            dims=dims,
-                            use_checkpoint=use_checkpoint,
-                            use_scale_shift_norm=use_scale_shift_norm,
-                            down=True,
-                        )
-                        if resblock_updown
-                        else Downsample(
-                            ch, conv_resample, dims=dims, out_channels=out_ch
-                        )
-                    )
-                )
-                ch = out_ch
-                input_block_chans.append(ch)
-                ds *= 2
-                self._feature_size += ch
-
-        if num_head_channels == -1:
-            dim_head = ch // num_heads
-        else:
-            num_heads = ch // num_head_channels
-            dim_head = num_head_channels
-        if legacy:
-            #num_heads = 1
-            dim_head = ch // num_heads if use_spatial_transformer else num_head_channels
-        self.middle_block = TimestepEmbedSequential(
-            ResBlock(
-                ch,
-                time_embed_dim,
-                dropout,
-                dims=dims,
-                use_checkpoint=use_checkpoint,
-                use_scale_shift_norm=use_scale_shift_norm,
-            ),
-            AttentionBlock(
-                ch,
-                use_checkpoint=use_checkpoint,
-                num_heads=num_heads,
-                num_head_channels=dim_head,
-                use_new_attention_order=use_new_attention_order,
-            ) if not use_spatial_transformer else SpatialTransformer(
-                ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim
-            ),
-            ResBlock(
-                ch,
-                time_embed_dim,
-                dropout,
-                dims=dims,
-                use_checkpoint=use_checkpoint,
-                use_scale_shift_norm=use_scale_shift_norm,
-            ),
-        )
-        self._feature_size += ch
-        # upsample blocks
-        self.output_blocks = nn.ModuleList([])
-        for level, mult in list(enumerate(channel_mult))[::-1]:
-            for i in range(num_res_blocks + 1):
-                ich = input_block_chans.pop()
-                layers = [
-                    ResBlock(
-                        ch + ich,
-                        time_embed_dim,
-                        dropout,
-                        out_channels=model_channels * mult,
-                        dims=dims,
-                        use_checkpoint=use_checkpoint,
-                        use_scale_shift_norm=use_scale_shift_norm,
-                    )
-                ]
-                ch = model_channels * mult
-                if ds in attention_resolutions:
-                    if num_head_channels == -1:
-                        dim_head = ch // num_heads
-                    else:
-                        num_heads = ch // num_head_channels
-                        dim_head = num_head_channels
-                    if legacy:
-                        #num_heads = 1
-                        dim_head = ch // num_heads if use_spatial_transformer else num_head_channels
-                    layers.append(
-                        AttentionBlock(
-                            ch,
-                            use_checkpoint=use_checkpoint,
-                            num_heads=num_heads_upsample,
-                            num_head_channels=dim_head,
-                            use_new_attention_order=use_new_attention_order,
-                        ) if not use_spatial_transformer else SpatialTransformer(
-                            ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim
-                        )
-                    )
-                if level and i == num_res_blocks:
-                    out_ch = ch
-                    layers.append(
-                        ResBlock(
-                            ch,
-                            time_embed_dim,
-                            dropout,
-                            out_channels=out_ch,
-                            dims=dims,
-                            use_checkpoint=use_checkpoint,
-                            use_scale_shift_norm=use_scale_shift_norm,
-                            up=True,
-                        )
-                        if resblock_updown
-                        else Upsample(ch, conv_resample, dims=dims, out_channels=out_ch)
-                    )
-                    ds //= 2
-                self.output_blocks.append(TimestepEmbedSequential(*layers))
-                self._feature_size += ch
-
-        self.out = nn.Sequential(
-            normalization(ch),
-            nn.SiLU(),
-            zero_module(conv_nd(dims, model_channels, out_channels, 3, padding=1)),
-        )
-        if self.predict_codebook_ids:
-            self.id_predictor = nn.Sequential(
-                normalization(ch),
-                conv_nd(dims, model_channels, n_embed, 1),
-                #nn.LogSoftmax(dim=1)  # change to cross_entropy and produce non-normalized logits
-            )
-
-    def convert_to_fp16(self):
-        """
-        Convert the torso of the model to float16.
-        """
-        self.input_blocks.apply(convert_module_to_f16)
-        self.middle_block.apply(convert_module_to_f16)
-        self.output_blocks.apply(convert_module_to_f16)
-
-    def convert_to_fp32(self):
-        """
-        Convert the torso of the model to float32.
-        """
-        self.input_blocks.apply(convert_module_to_f32)
-        self.middle_block.apply(convert_module_to_f32)
-        self.output_blocks.apply(convert_module_to_f32)
-
-    def forward(self, x, timesteps=None, context=None, y=None, **kwargs):
-        """
-        Apply the model to an input batch.
-        :param x: an [N x C x ...] Tensor of inputs.
-        :param timesteps: a 1-D batch of timesteps, shape [N]
-        :param context: conditioning plugged in via crossattn. for txt2img shape is [N,77,context_dim]
-        :param y: an [N] Tensor of labels, if class-conditional.
-        :return: an [N x C x ...] Tensor of outputs.
-        """
-        # print(f"in unet {x.shape}")
-        assert (y is not None) == (
-            self.num_classes is not None
-        ), "must specify y if and only if the model is class-conditional"
-        hs = []
-        t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False)  # shape [N,self.model_channels]
-        emb = self.time_embed(t_emb)  # shape [N,context_dim]
-
-        if self.num_classes is not None:  # only for class label
-            assert y.shape == (x.shape[0],)
-            emb = emb + self.label_emb(y)
-
-        h = x.type(self.dtype)  # [N,C,10,106]
-        for module in self.input_blocks:
-            h = module(h, emb, context)  # 0:[N,self.model_channels,10,106], 1:[N,self.model_channels,10,106], 2:[N,self.model_channels,10,106], 3:[N,self.model_channels,5,53], 4:[N,self.model_channels,5,53], 5:[N,self.model_channels*2,5,53]
-            hs.append(h)
-        h = self.middle_block(h, emb, context)  # no shape change
-        for module in self.output_blocks:
-            h = th.cat([h, hs.pop()], dim=1)  # the channel dim doubles here (or grows by self.model_channels); other dims unchanged
-            h = module(h, emb, context)  # the channel dim shrinks back to its previous size; h, w unchanged or doubled
-        h = h.type(x.dtype)  # at this point h has the same shape as the input x
-        if self.predict_codebook_ids:
-            return self.id_predictor(h)
-        else:
-            return self.out(h)
-
-
-class EncoderUNetModel(nn.Module):
-    """
-    The half UNet model with attention and timestep embedding.
-    For usage, see UNet.
-    """
-
-    def __init__(
-        self,
-        image_size,
-        in_channels,
-        model_channels,
-        out_channels,
-        num_res_blocks,
-        attention_resolutions,
-        dropout=0,
-        channel_mult=(1, 2, 4, 8),
-        conv_resample=True,
-        dims=2,
-        use_checkpoint=False,
-        use_fp16=False,
-        num_heads=1,
-        num_head_channels=-1,
-        num_heads_upsample=-1,
-        use_scale_shift_norm=False,
-        resblock_updown=False,
-        use_new_attention_order=False,
-        pool="adaptive",
-        *args,
-        **kwargs
-    ):
-        super().__init__()
-
-        if num_heads_upsample == -1:
-            num_heads_upsample = num_heads
-
-        self.in_channels = in_channels
-        self.model_channels = model_channels
-        self.out_channels = out_channels
-        self.num_res_blocks = num_res_blocks
-        self.attention_resolutions = attention_resolutions
-        self.dropout = dropout
-        self.channel_mult = channel_mult
-        self.conv_resample = conv_resample
-        self.use_checkpoint = use_checkpoint
-        self.dtype = th.float16 if use_fp16 else th.float32
-        self.num_heads = num_heads
-        self.num_head_channels = num_head_channels
-        self.num_heads_upsample = num_heads_upsample
-
-        time_embed_dim = model_channels * 4
-        self.time_embed = nn.Sequential(
-            linear(model_channels, time_embed_dim),
-            nn.SiLU(),
-            linear(time_embed_dim, time_embed_dim),
-        )
-
-        self.input_blocks = nn.ModuleList(
-            [
-                TimestepEmbedSequential(
-                    conv_nd(dims, in_channels, model_channels, 3, padding=1)
-                )
-            ]
-        )
-        self._feature_size = model_channels
-        input_block_chans = [model_channels]
-        ch = model_channels
-        ds = 1
-        for level, mult in enumerate(channel_mult):
-            for _ in range(num_res_blocks):
-                layers = [
-                    ResBlock(
-                        ch,
-                        time_embed_dim,
-                        dropout,
-                        out_channels=mult * model_channels,
-                        dims=dims,
-                        use_checkpoint=use_checkpoint,
-                        use_scale_shift_norm=use_scale_shift_norm,
-                    )
-                ]
-                ch = mult * model_channels
-                if ds in attention_resolutions:
-                    layers.append(
-                        AttentionBlock(
-                            ch,
-                            use_checkpoint=use_checkpoint,
-                            num_heads=num_heads,
-                            num_head_channels=num_head_channels,
-                            use_new_attention_order=use_new_attention_order,
-                        )
-                    )
-                self.input_blocks.append(TimestepEmbedSequential(*layers))
-                self._feature_size += ch
-                input_block_chans.append(ch)
-            if level != len(channel_mult) - 1:
-                out_ch = ch
-                self.input_blocks.append(
-                    TimestepEmbedSequential(
-                        ResBlock(
-                            ch,
-                            time_embed_dim,
-                            dropout,
-                            out_channels=out_ch,
-                            dims=dims,
-                            use_checkpoint=use_checkpoint,
-                            use_scale_shift_norm=use_scale_shift_norm,
-                            down=True,
-                        )
-                        if resblock_updown
-                        else Downsample(
-                            ch, conv_resample, dims=dims, out_channels=out_ch
-                        )
-                    )
-                )
-                ch = out_ch
-                input_block_chans.append(ch)
-                ds *= 2
-                self._feature_size += ch
-
-        self.middle_block = TimestepEmbedSequential(
-            ResBlock(
-                ch,
-                time_embed_dim,
-                dropout,
-                dims=dims,
-                use_checkpoint=use_checkpoint,
-                use_scale_shift_norm=use_scale_shift_norm,
-            ),
-            AttentionBlock(
-                ch,
-                use_checkpoint=use_checkpoint,
-                num_heads=num_heads,
-                num_head_channels=num_head_channels,
-                use_new_attention_order=use_new_attention_order,
-            ),
-            ResBlock(
-                ch,
-                time_embed_dim,
-                dropout,
-                dims=dims,
-                use_checkpoint=use_checkpoint,
-                use_scale_shift_norm=use_scale_shift_norm,
-            ),
-        )
-        self._feature_size += ch
-        self.pool = pool
-        if pool == "adaptive":
-            self.out = nn.Sequential(
-                normalization(ch),
-                nn.SiLU(),
-                nn.AdaptiveAvgPool2d((1, 1)),
-                zero_module(conv_nd(dims, ch, out_channels, 1)),
-                nn.Flatten(),
-            )
-        elif pool == "attention":
-            assert num_head_channels != -1
-            self.out = nn.Sequential(
-                normalization(ch),
-                nn.SiLU(),
-                AttentionPool2d(
-                    (image_size // ds), ch, num_head_channels, out_channels
-                ),
-            )
-        elif pool == "spatial":
-            self.out = nn.Sequential(
-                nn.Linear(self._feature_size, 2048),
-                nn.ReLU(),
-                nn.Linear(2048, self.out_channels),
-            )
-        elif pool == "spatial_v2":
-            self.out = nn.Sequential(
-                nn.Linear(self._feature_size, 2048),
-                normalization(2048),
-                nn.SiLU(),
-                nn.Linear(2048, self.out_channels),
-            )
-        else:
-            raise NotImplementedError(f"Unexpected {pool} pooling")
-
-    def convert_to_fp16(self):
-        """
-        Convert the torso of the model to float16.
-        """
-        self.input_blocks.apply(convert_module_to_f16)
-        self.middle_block.apply(convert_module_to_f16)
-
-    def convert_to_fp32(self):
-        """
-        Convert the torso of the model to float32.
-        """
-        self.input_blocks.apply(convert_module_to_f32)
-        self.middle_block.apply(convert_module_to_f32)
-
-    def forward(self, x, timesteps):
-        """
-        Apply the model to an input batch.
-        :param x: an [N x C x ...] Tensor of inputs.
-        :param timesteps: a 1-D batch of timesteps.
-        :return: an [N x K] Tensor of outputs.
-        """
-        emb = self.time_embed(timestep_embedding(timesteps, self.model_channels))
-
-        results = []
-        h = x.type(self.dtype)
-        for module in self.input_blocks:
-            h = module(h, emb)
-            if self.pool.startswith("spatial"):
-                results.append(h.type(x.dtype).mean(dim=(2, 3)))
-        h = self.middle_block(h, emb)
-        if self.pool.startswith("spatial"):
-            results.append(h.type(x.dtype).mean(dim=(2, 3)))
-            h = th.cat(results, axis=-1)
-            return self.out(h)
-        else:
-            h = h.type(x.dtype)
-            return self.out(h)
-
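The shape contract of QKVAttention above, replayed as a standalone check (batch size, head count, channel width, and sequence length are illustrative):

import math
import torch as th

bs, n_heads, ch, length = 2, 4, 8, 10
qkv = th.randn(bs, 3 * n_heads * ch, length)        # [N x (3*H*C) x T]
q, k, v = qkv.chunk(3, dim=1)
scale = 1 / math.sqrt(math.sqrt(ch))                # applied to both q and k for f16 stability
weight = th.einsum("bct,bcs->bts",
                   (q * scale).view(bs * n_heads, ch, length),
                   (k * scale).view(bs * n_heads, ch, length))
weight = th.softmax(weight.float(), dim=-1).type(weight.dtype)
a = th.einsum("bts,bcs->bct", weight, v.reshape(bs * n_heads, ch, length))
assert a.reshape(bs, -1, length).shape == (bs, n_heads * ch, length)  # [N x (H*C) x T]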
spaces/Abubakari/Sepsis-prediction-streamlit-app/README.md DELETED
@@ -1,12 +0,0 @@
----
-title: Sepsis Prediction Streamlit App
-emoji: 😻
-colorFrom: pink
-colorTo: green
-sdk: streamlit
-sdk_version: 1.21.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AchyuthGamer/OpenGPT-Chat-UI/.svelte-kit/types/src/routes/login/callback/$types.d.ts DELETED
@@ -1,22 +0,0 @@
-import type * as Kit from '@sveltejs/kit';
-
-type Expand<T> = T extends infer O ? { [K in keyof O]: O[K] } : never;
-type RouteParams = {  }
-type RouteId = '/login/callback';
-type MaybeWithVoid<T> = {} extends T ? T | void : T;
-export type RequiredKeys<T> = { [K in keyof T]-?: {} extends { [P in K]: T[K] } ? never : K; }[keyof T];
-type OutputDataShape<T> = MaybeWithVoid<Omit<App.PageData, RequiredKeys<T>> & Partial<Pick<App.PageData, keyof T & keyof App.PageData>> & Record<string, any>>
-type EnsureDefined<T> = T extends null | undefined ? {} : T;
-type OptionalUnion<U extends Record<string, any>, A extends keyof U = U extends U ? keyof U : never> = U extends unknown ? { [P in Exclude<A, keyof U>]?: never } & U : never;
-export type Snapshot<T = any> = Kit.Snapshot<T>;
-type PageServerParentData = EnsureDefined<import('../../$types.js').LayoutServerData>;
-type PageParentData = EnsureDefined<import('../../$types.js').LayoutData>;
-
-export type PageServerLoad<OutputData extends OutputDataShape<PageServerParentData> = OutputDataShape<PageServerParentData>> = Kit.ServerLoad<RouteParams, PageServerParentData, OutputData, RouteId>;
-export type PageServerLoadEvent = Parameters<PageServerLoad>[0];
-export type ActionData = unknown;
-export type PageServerData = Expand<OptionalUnion<EnsureDefined<Kit.AwaitedProperties<Awaited<ReturnType<typeof import('../../../../../../src/routes/login/callback/+page.server.js').load>>>>>>;
-export type PageData = Expand<Omit<PageParentData, keyof PageServerData> & EnsureDefined<PageServerData>>;
-export type Action<OutputData extends Record<string, any> | void = Record<string, any> | void> = Kit.Action<RouteParams, OutputData, RouteId>
-export type Actions<OutputData extends Record<string, any> | void = Record<string, any> | void> = Kit.Actions<RouteParams, OutputData, RouteId>
-export type RequestEvent = Kit.RequestEvent<RouteParams, RouteId>;
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/cover-plugin.js DELETED
@@ -1,23 +0,0 @@
-import Factory from './gameobjects/shape/cover/Factory.js';
-import Creator from './gameobjects/shape/cover/Creator.js';
-import Cover from './gameobjects/shape/cover/Cover.js';
-import SetValue from './utils/object/SetValue.js';
-
-class CoverPlugin extends Phaser.Plugins.BasePlugin {
-
-    constructor(pluginManager) {
-        super(pluginManager);
-
-        // Register our new Game Object type
-        pluginManager.registerGameObject('rexCover', Factory, Creator);
-    }
-
-    start() {
-        var eventEmitter = this.game.events;
-        eventEmitter.on('destroy', this.destroy, this);
-    }
-}
-
-SetValue(window, 'RexPlugins.GameObjects.Cover', Cover);
-
-export default CoverPlugin;
spaces/Amrrs/DragGan-Inversion/stylegan_human/torch_utils/ops/conv2d_resample.py
DELETED
@@ -1,176 +0,0 @@
|
|
1 |
-
# Copyright (c) SenseTime Research. All rights reserved.
|
2 |
-
|
3 |
-
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
|
4 |
-
#
|
5 |
-
# NVIDIA CORPORATION and its licensors retain all intellectual property
|
6 |
-
# and proprietary rights in and to this software, related documentation
|
7 |
-
# and any modifications thereto. Any use, reproduction, disclosure or
|
8 |
-
# distribution of this software and related documentation without an express
|
9 |
-
# license agreement from NVIDIA CORPORATION is strictly prohibited.
|
10 |
-
|
11 |
-
"""2D convolution with optional up/downsampling."""
|
12 |
-
|
13 |
-
import torch
|
14 |
-
|
15 |
-
from .. import misc
|
16 |
-
from . import conv2d_gradfix
|
17 |
-
from . import upfirdn2d
|
18 |
-
from .upfirdn2d import _parse_padding
|
19 |
-
from .upfirdn2d import _get_filter_size
|
20 |
-
|
21 |
-
# ----------------------------------------------------------------------------
|
22 |
-
|
23 |
-
|
24 |
-
def _get_weight_shape(w):
|
25 |
-
with misc.suppress_tracer_warnings(): # this value will be treated as a constant
|
26 |
-
shape = [int(sz) for sz in w.shape]
|
27 |
-
misc.assert_shape(w, shape)
|
28 |
-
return shape
|
29 |
-
|
30 |
-
# ----------------------------------------------------------------------------
|
31 |
-
|
32 |
-
|
33 |
-
def _conv2d_wrapper(x, w, stride=1, padding=0, groups=1, transpose=False, flip_weight=True):
|
34 |
-
"""Wrapper for the underlying `conv2d()` and `conv_transpose2d()` implementations.
|
35 |
-
"""
|
36 |
-
out_channels, in_channels_per_group, kh, kw = _get_weight_shape(w)
|
37 |
-
|
38 |
-
# Flip weight if requested.
|
39 |
-
# conv2d() actually performs correlation (flip_weight=True) not convolution (flip_weight=False).
|
40 |
-
if not flip_weight:
|
41 |
-
w = w.flip([2, 3])
|
42 |
-
|
43 |
-
# Workaround performance pitfall in cuDNN 8.0.5, triggered when using
|
44 |
-
# 1x1 kernel + memory_format=channels_last + less than 64 channels.
|
45 |
-
if kw == 1 and kh == 1 and stride == 1 and padding in [0, [0, 0], (0, 0)] and not transpose:
|
46 |
-
if x.stride()[1] == 1 and min(out_channels, in_channels_per_group) < 64:
|
47 |
-
if out_channels <= 4 and groups == 1:
|
48 |
-
in_shape = x.shape
|
49 |
-
x = w.squeeze(3).squeeze(
|
50 |
-
2) @ x.reshape([in_shape[0], in_channels_per_group, -1])
|
51 |
-
x = x.reshape([in_shape[0], out_channels,
|
52 |
-
in_shape[2], in_shape[3]])
|
53 |
-
else:
|
54 |
-
x = x.to(memory_format=torch.contiguous_format)
|
55 |
-
w = w.to(memory_format=torch.contiguous_format)
|
56 |
-
x = conv2d_gradfix.conv2d(x, w, groups=groups)
|
57 |
-
return x.to(memory_format=torch.channels_last)
|
58 |
-
|
59 |
-
# Otherwise => execute using conv2d_gradfix.
|
60 |
-
op = conv2d_gradfix.conv_transpose2d if transpose else conv2d_gradfix.conv2d
|
61 |
-
return op(x, w, stride=stride, padding=padding, groups=groups)
|
62 |
-
|
63 |
-
# ----------------------------------------------------------------------------
|
64 |
-
|
65 |
-
|
66 |
-
@misc.profiled_function
|
67 |
-
def conv2d_resample(x, w, f=None, up=1, down=1, padding=0, groups=1, flip_weight=True, flip_filter=False):
|
68 |
-
r"""2D convolution with optional up/downsampling.
|
69 |
-
|
70 |
-
Padding is performed only once at the beginning, not between the operations.
|
71 |
-
|
72 |
-
Args:
|
73 |
-
x: Input tensor of shape
|
74 |
-
`[batch_size, in_channels, in_height, in_width]`.
|
75 |
-
w: Weight tensor of shape
|
76 |
-
`[out_channels, in_channels//groups, kernel_height, kernel_width]`.
|
77 |
-
f: Low-pass filter for up/downsampling. Must be prepared beforehand by
|
78 |
-
calling upfirdn2d.setup_filter(). None = identity (default).
|
79 |
-
up: Integer upsampling factor (default: 1).
|
80 |
-
down: Integer downsampling factor (default: 1).
|
81 |
-
padding: Padding with respect to the upsampled image. Can be a single number
|
82 |
-
or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
|
83 |
-
(default: 0).
|
84 |
-
groups: Split input channels into N groups (default: 1).
|
85 |
-
flip_weight: False = convolution, True = correlation (default: True).
|
86 |
-
flip_filter: False = convolution, True = correlation (default: False).
|
87 |
-
|
88 |
-
Returns:
|
89 |
-
-        Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
-    """
-    # Validate arguments.
-    assert isinstance(x, torch.Tensor) and (x.ndim == 4)
-    assert isinstance(w, torch.Tensor) and (
-        w.ndim == 4) and (w.dtype == x.dtype)
-    assert f is None or (isinstance(f, torch.Tensor) and f.ndim in [
-        1, 2] and f.dtype == torch.float32)
-    assert isinstance(up, int) and (up >= 1)
-    assert isinstance(down, int) and (down >= 1)
-    assert isinstance(groups, int) and (groups >= 1)
-    out_channels, in_channels_per_group, kh, kw = _get_weight_shape(w)
-    fw, fh = _get_filter_size(f)
-    px0, px1, py0, py1 = _parse_padding(padding)
-
-    # Adjust padding to account for up/downsampling.
-    if up > 1:
-        px0 += (fw + up - 1) // 2
-        px1 += (fw - up) // 2
-        py0 += (fh + up - 1) // 2
-        py1 += (fh - up) // 2
-    if down > 1:
-        px0 += (fw - down + 1) // 2
-        px1 += (fw - down) // 2
-        py0 += (fh - down + 1) // 2
-        py1 += (fh - down) // 2
-
-    # Fast path: 1x1 convolution with downsampling only => downsample first, then convolve.
-    if kw == 1 and kh == 1 and (down > 1 and up == 1):
-        x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, padding=[
-                                px0, px1, py0, py1], flip_filter=flip_filter)
-        x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight)
-        return x
-
-    # Fast path: 1x1 convolution with upsampling only => convolve first, then upsample.
-    if kw == 1 and kh == 1 and (up > 1 and down == 1):
-        x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight)
-        x = upfirdn2d.upfirdn2d(x=x, f=f, up=up, padding=[
-                                px0, px1, py0, py1], gain=up**2, flip_filter=flip_filter)
-        return x
-
-    # Fast path: downsampling only => use strided convolution.
-    if down > 1 and up == 1:
-        x = upfirdn2d.upfirdn2d(
-            x=x, f=f, padding=[px0, px1, py0, py1], flip_filter=flip_filter)
-        x = _conv2d_wrapper(x=x, w=w, stride=down,
-                            groups=groups, flip_weight=flip_weight)
-        return x
-
-    # Fast path: upsampling with optional downsampling => use transpose strided convolution.
-    if up > 1:
-        if groups == 1:
-            w = w.transpose(0, 1)
-        else:
-            w = w.reshape(groups, out_channels // groups,
-                          in_channels_per_group, kh, kw)
-            w = w.transpose(1, 2)
-            w = w.reshape(groups * in_channels_per_group,
-                          out_channels // groups, kh, kw)
-        px0 -= kw - 1
-        px1 -= kw - up
-        py0 -= kh - 1
-        py1 -= kh - up
-        pxt = max(min(-px0, -px1), 0)
-        pyt = max(min(-py0, -py1), 0)
-        x = _conv2d_wrapper(x=x, w=w, stride=up, padding=[
-                            pyt, pxt], groups=groups, transpose=True, flip_weight=(not flip_weight))
-        x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[
-                                px0+pxt, px1+pxt, py0+pyt, py1+pyt], gain=up**2, flip_filter=flip_filter)
-        if down > 1:
-            x = upfirdn2d.upfirdn2d(
-                x=x, f=f, down=down, flip_filter=flip_filter)
-        return x
-
-    # Fast path: no up/downsampling, padding supported by the underlying implementation => use plain conv2d.
-    if up == 1 and down == 1:
-        if px0 == px1 and py0 == py1 and px0 >= 0 and py0 >= 0:
-            return _conv2d_wrapper(x=x, w=w, padding=[py0, px0], groups=groups, flip_weight=flip_weight)
-
-    # Fallback: Generic reference implementation.
-    x = upfirdn2d.upfirdn2d(x=x, f=(f if up > 1 else None), up=up, padding=[
-                            px0, px1, py0, py1], gain=up**2, flip_filter=flip_filter)
-    x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight)
-    if down > 1:
-        x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter)
-    return x
-
-# ----------------------------------------------------------------------------
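The fast paths above exploit the fact that some operations commute: with a 1×1 kernel, pointwise convolution and spatial subsampling can be applied in either order, which is why the first branch downsamples before convolving. A minimal self-contained sketch of that equivalence (plain PyTorch, ignoring the FIR filter `f` for brevity; the tensor shapes are made-up examples):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 8, 16, 16)   # [batch, channels, height, width]
w = torch.randn(4, 8, 1, 1)     # 1x1 kernel mapping 8 -> 4 channels
down = 2

# Convolve first, then subsample ...
slow = F.conv2d(x, w)[:, :, ::down, ::down]
# ... or subsample first, then convolve: identical for a 1x1 kernel,
# but the second order touches 4x fewer pixels.
fast = F.conv2d(x[:, :, ::down, ::down], w)

assert torch.allclose(slow, fast, atol=1e-6)
```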
spaces/An-619/FastSAM/README.md
DELETED
@@ -1,46 +0,0 @@
----
-title: FastSAM
-emoji: 🐠
-colorFrom: pink
-colorTo: indigo
-sdk: gradio
-sdk_version: 3.45.1
-app_file: app_gradio.py
-pinned: false
-license: apache-2.0
----
-
-# Fast Segment Anything
-
-Official PyTorch Implementation of <a href="https://github.com/CASIA-IVA-Lab/FastSAM">Fast Segment Anything</a>.
-
-The **Fast Segment Anything Model (FastSAM)** is a CNN Segment Anything Model trained on only 2% of the SA-1B dataset published by the SAM authors. FastSAM achieves comparable performance to
-the SAM method at **50× higher run-time speed**.
-
-
-## License
-
-The model is licensed under the [Apache 2.0 license](LICENSE).
-
-
-## Acknowledgement
-
-- [Segment Anything](https://segment-anything.com/) provides the SA-1B dataset and the base codes.
-- [YOLOv8](https://github.com/ultralytics/ultralytics) provides codes and pre-trained models.
-- [YOLACT](https://arxiv.org/abs/2112.10003) provides a powerful instance segmentation method.
-- [Grounded-Segment-Anything](https://huggingface.co/spaces/yizhangliu/Grounded-Segment-Anything) provides a useful web demo template.
-
-## Citing FastSAM
-
-If you find this project useful for your research, please consider citing the following BibTeX entry.
-
-```
-@misc{zhao2023fast,
-      title={Fast Segment Anything},
-      author={Xu Zhao and Wenchao Ding and Yongqi An and Yinglong Du and Tao Yu and Min Li and Ming Tang and Jinqiao Wang},
-      year={2023},
-      eprint={2306.12156},
-      archivePrefix={arXiv},
-      primaryClass={cs.CV}
-}
-```
spaces/AnTo2209/3D_Zeroshot_Neural_Style_Transfer/src/model/styleRF.py
DELETED
@@ -1,95 +0,0 @@
-import os
-from datetime import datetime
-
-import pytorch_lightning as pl
-# from torch.utils.tensorboard import SummaryWriter
-import torch
-
-from src.decoder import DECODER_REGISTRY
-from src.utils.loss import TVLoss
-
-class StyleRF(pl.LightningModule):
-    def __init__(self, cfg):
-        super().__init__()
-        self.cfg = cfg
-        self.init_model()
-
-    def init_model(self):
-        logfolder = f'{self.cfg["global"]["base_dir"]}/{self.cfg["global"]["expname"]}{datetime.now().strftime("-%Y%m%d-%H%M%S")}'
-        device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-        # init log file
-        os.makedirs(logfolder, exist_ok=True)
-        # summary_writer = SummaryWriter(logfolder)
-
-        # init parameters
-        assert self.cfg["model"]["tensorf"]["ckpt"] is not None, 'Have to be pre-trained to get density fielded!'
-        type = self.cfg["model"]["type"]
-        ckpt = torch.load(self.cfg["model"]["tensorf"]["ckpt"], map_location=device)
-        kwargs = ckpt['kwargs']
-        kwargs.update({'device': device})
-        self.ndc_ray = self.cfg["model"]["tensorf"]["ndc_ray"]
-
-        if type == "feature":
-            self.tensorf = DECODER_REGISTRY.get(self.cfg["model"]["tensorf"]["model_name"])(**kwargs)
-            self.tensorf.load(ckpt)
-            self.tensorf.change_to_feature_mod(self.cfg["model"]["tensorf"]["lamb_sh"], device)
-        elif type == "style":
-            self.tensorf = DECODER_REGISTRY.get(self.cfg["model"]["tensorf"]["model_name"])(**kwargs)
-            self.tensorf.change_to_feature_mod(self.cfg["model"]["tensorf"]["lamb_sh"], device)
-            self.tensorf.load(ckpt)
-            self.tensorf.change_to_style_mod(device)
-
-        self.tensorf.rayMarch_weight_thres = self.cfg["model"]["tensorf"]["rm_weight_mask_thre"]
-
-        TV_weight_feature = self.cfg["model"]["tensorf"]["TV_weight_feature"]
-        self.tvreg = TVLoss()
-        print(f"initial TV_weight_feature: {TV_weight_feature}")
-
-    def forward(self, batch):
-        pass
-
-    def training_step(self, batch, batch_idx):
-        pass
-        # # 2. Calculate loss
-        # loss = self.compute_loss(forwarded_batch=forwarded_batch, input_batch=batch)
-        # # 3. Update monitor
-        # self.log("train_loss", loss, on_step=True, on_epoch=True, prog_bar=True)
-        # return {"loss": loss}
-
-    # def validation_step(self, batch, batch_idx):
-    #     # 1. Get embeddings from model
-    #     forwarded_batch = self.forward(batch)
-    #     # 2. Calculate loss
-    #     loss = self.compute_loss(forwarded_batch=forwarded_batch, input_batch=batch)
-    #     # 3. Update metric for each batch
-    #     self.log("val_loss", loss, on_step=True, on_epoch=True, prog_bar=True)
-    #     self.metric_evaluator.append(
-    #         g_emb=forwarded_batch["pc_embedding_feats"].float().clone().detach(),
-    #         q_emb=forwarded_batch["query_embedding_feats"].float().clone().detach(),
-    #         query_ids=batch["query_ids"],
-    #         gallery_ids=batch["point_cloud_ids"],
-    #         target_ids=batch["point_cloud_ids"],
-    #     )
-    #
-    #     return {"loss": loss}
-    #
-    # def validation_epoch_end(self, outputs) -> None:
-    #     """
-    #     Callback at validation epoch end to do additional works
-    #     with output of validation step, note that this is called
-    #     before `training_epoch_end()`
-    #     Args:
-    #         outputs: output of validation step
-    #     """
-    #     self.log_dict(
-    #         self.metric_evaluator.evaluate(),
-    #         prog_bar=True,
-    #         on_step=False,
-    #         on_epoch=True,
-    #     )
-    #     self.metric_evaluator.reset()
-
-
-
-
-
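The keys read by `init_model` imply the shape of the expected config. A hypothetical minimal `cfg` dict, inferred only from the accesses above (all values are placeholders, not the project's real defaults):

```python
cfg = {
    "global": {"base_dir": "./logs", "expname": "stylerf_demo"},
    "model": {
        "type": "style",  # or "feature"
        "tensorf": {
            "model_name": "TensorVMSplit",  # assumed registry key
            "ckpt": "./ckpts/tensorf.th",   # required: a pre-trained density field
            "ndc_ray": 0,
            "lamb_sh": 0.0,
            "rm_weight_mask_thre": 0.0001,
            "TV_weight_feature": 0.0,
        },
    },
}
model = StyleRF(cfg)  # loads the checkpoint and switches to style mode
```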
spaces/Andy1621/uniformer_image_detection/configs/paa/paa_r101_fpn_mstrain_3x_coco.py
DELETED
@@ -1,2 +0,0 @@
-_base_ = './paa_r50_fpn_mstrain_3x_coco.py'
-model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101))
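This two-line file leans on mmdet's config inheritance: everything comes from the `_base_` R-50 config, with only the backbone swapped. A hedged sketch of how such a config resolves (assuming the mmcv 1.x-era `Config` this repo uses, and running from the repo root):

```python
from mmcv import Config

cfg = Config.fromfile('configs/paa/paa_r101_fpn_mstrain_3x_coco.py')
print(cfg.model.backbone.depth)  # 101 -- the override
print(cfg.model.pretrained)      # 'torchvision://resnet101'
# every other key (datasets, schedules, FPN, ...) is inherited unchanged
```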
spaces/Andy1621/uniformer_image_detection/mmdet/models/utils/positional_encoding.py
DELETED
@@ -1,150 +0,0 @@
-import math
-
-import torch
-import torch.nn as nn
-from mmcv.cnn import uniform_init
-
-from .builder import POSITIONAL_ENCODING
-
-
-@POSITIONAL_ENCODING.register_module()
-class SinePositionalEncoding(nn.Module):
-    """Position encoding with sine and cosine functions.
-
-    See `End-to-End Object Detection with Transformers
-    <https://arxiv.org/pdf/2005.12872>`_ for details.
-
-    Args:
-        num_feats (int): The feature dimension for each position
-            along x-axis or y-axis. Note the final returned dimension
-            for each position is 2 times of this value.
-        temperature (int, optional): The temperature used for scaling
-            the position embedding. Default 10000.
-        normalize (bool, optional): Whether to normalize the position
-            embedding. Default False.
-        scale (float, optional): A scale factor that scales the position
-            embedding. The scale will be used only when `normalize` is True.
-            Default 2*pi.
-        eps (float, optional): A value added to the denominator for
-            numerical stability. Default 1e-6.
-    """
-
-    def __init__(self,
-                 num_feats,
-                 temperature=10000,
-                 normalize=False,
-                 scale=2 * math.pi,
-                 eps=1e-6):
-        super(SinePositionalEncoding, self).__init__()
-        if normalize:
-            assert isinstance(scale, (float, int)), 'when normalize is set,' \
-                'scale should be provided and in float or int type, ' \
-                f'found {type(scale)}'
-        self.num_feats = num_feats
-        self.temperature = temperature
-        self.normalize = normalize
-        self.scale = scale
-        self.eps = eps
-
-    def forward(self, mask):
-        """Forward function for `SinePositionalEncoding`.
-
-        Args:
-            mask (Tensor): ByteTensor mask. Non-zero values representing
-                ignored positions, while zero values means valid positions
-                for this image. Shape [bs, h, w].
-
-        Returns:
-            pos (Tensor): Returned position embedding with shape
-                [bs, num_feats*2, h, w].
-        """
-        not_mask = ~mask
-        y_embed = not_mask.cumsum(1, dtype=torch.float32)
-        x_embed = not_mask.cumsum(2, dtype=torch.float32)
-        if self.normalize:
-            y_embed = y_embed / (y_embed[:, -1:, :] + self.eps) * self.scale
-            x_embed = x_embed / (x_embed[:, :, -1:] + self.eps) * self.scale
-        dim_t = torch.arange(
-            self.num_feats, dtype=torch.float32, device=mask.device)
-        dim_t = self.temperature**(2 * (dim_t // 2) / self.num_feats)
-        pos_x = x_embed[:, :, :, None] / dim_t
-        pos_y = y_embed[:, :, :, None] / dim_t
-        pos_x = torch.stack(
-            (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()),
-            dim=4).flatten(3)
-        pos_y = torch.stack(
-            (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()),
-            dim=4).flatten(3)
-        pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
-        return pos
-
-    def __repr__(self):
-        """str: a string that describes the module"""
-        repr_str = self.__class__.__name__
-        repr_str += f'(num_feats={self.num_feats}, '
-        repr_str += f'temperature={self.temperature}, '
-        repr_str += f'normalize={self.normalize}, '
-        repr_str += f'scale={self.scale}, '
-        repr_str += f'eps={self.eps})'
-        return repr_str
-
-
-@POSITIONAL_ENCODING.register_module()
-class LearnedPositionalEncoding(nn.Module):
-    """Position embedding with learnable embedding weights.
-
-    Args:
-        num_feats (int): The feature dimension for each position
-            along x-axis or y-axis. The final returned dimension for
-            each position is 2 times of this value.
-        row_num_embed (int, optional): The dictionary size of row embeddings.
-            Default 50.
-        col_num_embed (int, optional): The dictionary size of col embeddings.
-            Default 50.
-    """
-
-    def __init__(self, num_feats, row_num_embed=50, col_num_embed=50):
-        super(LearnedPositionalEncoding, self).__init__()
-        self.row_embed = nn.Embedding(row_num_embed, num_feats)
-        self.col_embed = nn.Embedding(col_num_embed, num_feats)
-        self.num_feats = num_feats
-        self.row_num_embed = row_num_embed
-        self.col_num_embed = col_num_embed
-        self.init_weights()
-
-    def init_weights(self):
-        """Initialize the learnable weights."""
-        uniform_init(self.row_embed)
-        uniform_init(self.col_embed)
-
-    def forward(self, mask):
-        """Forward function for `LearnedPositionalEncoding`.
-
-        Args:
-            mask (Tensor): ByteTensor mask. Non-zero values representing
-                ignored positions, while zero values means valid positions
-                for this image. Shape [bs, h, w].
-
-        Returns:
-            pos (Tensor): Returned position embedding with shape
-                [bs, num_feats*2, h, w].
-        """
-        h, w = mask.shape[-2:]
-        x = torch.arange(w, device=mask.device)
-        y = torch.arange(h, device=mask.device)
-        x_embed = self.col_embed(x)
-        y_embed = self.row_embed(y)
-        pos = torch.cat(
-            (x_embed.unsqueeze(0).repeat(h, 1, 1), y_embed.unsqueeze(1).repeat(
-                1, w, 1)),
-            dim=-1).permute(2, 0,
-                            1).unsqueeze(0).repeat(mask.shape[0], 1, 1, 1)
-        return pos
-
-    def __repr__(self):
-        """str: a string that describes the module"""
-        repr_str = self.__class__.__name__
-        repr_str += f'(num_feats={self.num_feats}, '
-        repr_str += f'row_num_embed={self.row_num_embed}, '
-        repr_str += f'col_num_embed={self.col_num_embed})'
-        return repr_str
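The sine encoding above is the standard DETR formulation. A self-contained sketch (plain PyTorch, mirroring the `forward` math on a tiny all-valid mask) shows the output shape it produces:

```python
import torch

bs, h, w, num_feats = 1, 4, 4, 8
mask = torch.zeros(bs, h, w, dtype=torch.bool)  # all positions valid

not_mask = ~mask
y_embed = not_mask.cumsum(1, dtype=torch.float32)  # row index, 1-based
x_embed = not_mask.cumsum(2, dtype=torch.float32)  # column index, 1-based
dim_t = torch.arange(num_feats, dtype=torch.float32)
dim_t = 10000 ** (2 * (dim_t // 2) / num_feats)    # per-channel frequency
pos_x = x_embed[:, :, :, None] / dim_t
pos_y = y_embed[:, :, :, None] / dim_t
pos_x = torch.stack((pos_x[..., 0::2].sin(), pos_x[..., 1::2].cos()), dim=4).flatten(3)
pos_y = torch.stack((pos_y[..., 0::2].sin(), pos_y[..., 1::2].cos()), dim=4).flatten(3)
pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2)
print(pos.shape)  # torch.Size([1, 16, 4, 4]) -- i.e. [bs, num_feats*2, h, w]
```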
spaces/Anni123/AuRoRA/llm_utils.py
DELETED
@@ -1,78 +0,0 @@
-import time
-import openai
-
-#openai.api_key = "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"
-
-
-# Sentence Generator (Decoder) for GPT-3 ...
-def decoder_for_gpt3(input, max_length, temperature=0, engine="text-davinci-003"):
-    # The GPT-3 API allows each user at most 60 requests per minute ...
-    if engine == "gpt-3.5-turbo":
-        time.sleep(1)
-        response = openai.ChatCompletion.create(
-            model=engine,
-            messages=[
-                #{"role": "system", "content": "You need to answer commonsense questions."},
-                {"role": "user", "content": input}
-            ],
-            max_tokens=max_length,
-            temperature=temperature,
-            stop=None
-        )
-        response = response["choices"][0]["message"]["content"]
-
-    else:
-        time.sleep(1)
-        response = openai.Completion.create(
-            model=engine,
-            prompt=input,
-            max_tokens=max_length,
-            stop=None,
-            temperature=temperature
-        )
-        response = response["choices"][0]["text"]
-    return response
-
-def decoder_for_gpt3_consistency(input, max_length, temp=0.7, n=5, engine="text-davinci-003"):
-    # The GPT-3 API allows each user at most 60 requests per minute ...
-    if engine == "gpt-3.5-turbo":
-        time.sleep(1)
-        responses = openai.ChatCompletion.create(
-            model=engine,
-            messages=[
-                {"role": "user", "content": input}
-            ],
-            max_tokens=max_length,
-            temperature=temp,
-            top_p=1,
-            n=5,
-            stop=["\n"],
-        )
-        responses = [responses["choices"][i]["message"]["content"] for i in range(n)]
-    else:
-        time.sleep(1)
-        responses = openai.Completion.create(
-            model=engine,
-            prompt=input,
-            max_tokens=max_length,
-            temperature=temp,
-            stop=["\n"],
-            n=5,
-            logprobs=5,
-            top_p=1,
-        )
-        responses = [responses["choices"][i]["text"] for i in range(n)]
-
-    return responses
-
-def zero_shot(question):
-    input = question + " " + "Among A through E, the answer is"
-    response = openai.ChatCompletion.create(
-        model="gpt-3.5-turbo",
-        messages=[
-            {"role": "system", "content": "You are a helpful assistant that answers commonsense questions."},
-            {"role": "user", "content": input}
-        ]
-    )
-    response = response["choices"][0]["message"]["content"]
-    return response
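A hedged usage sketch for the helpers above (the key, model name, and question are placeholders; these functions target the legacy pre-1.0 `openai` Python API, so `openai.api_key` must be set first):

```python
import openai
openai.api_key = "YOUR_KEY_HERE"  # placeholder -- never commit real keys

question = "Q: Where would you find a jellyfish? Answer Choices: (A) desk (B) ocean ..."
answer = decoder_for_gpt3(question + " The answer is", max_length=32,
                          temperature=0, engine="gpt-3.5-turbo")

# Self-consistency: draw 5 samples at temperature 0.7, then majority-vote.
samples = decoder_for_gpt3_consistency(question, max_length=32, temp=0.7, n=5,
                                       engine="gpt-3.5-turbo")
```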
spaces/AnthonyTruchetPoC/persistent-docker/src/athai/data_utils.py
DELETED
@@ -1,32 +0,0 @@
-import logging
-import shelve
-from pathlib import Path
-
-import pandas as pd
-
-
-logging.basicConfig(level=logging.INFO)
-
-
-def cached_download_csv(data_path: Path, url: str, **kwargs) -> pd.DataFrame:
-    shelf_filename = data_path.joinpath("shelf.dat").as_posix()
-    try:
-        s = shelve.open(shelf_filename, writeback=True, flag="c")
-    except Exception as exn:
-        logging.exception(
-            "Unexpected exception, while trying to access the shelf"
-        )
-        return pd.read_csv(url, **kwargs)
-    with s as shelf:
-        logging.info(f"Opened or created the shelf file '{shelf_filename}'")
-        maybe_cached = shelf.get(url)
-        if maybe_cached is None:
-            logging.info(f"Downloading URL '{url}'")
-            df = pd.read_csv(url, **kwargs)
-            shelf[url] = df
-            return df
-        else:
-            logging.info(f"Re-using cached URL '{url}'")
-            assert isinstance(maybe_cached, pd.DataFrame)
-            return maybe_cached
-    # The context manager 'shelf' ensures set values are written back on return
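A hedged usage sketch (the URL is a placeholder; extra `kwargs` are forwarded to `pandas.read_csv`): the first call downloads and shelves the frame, later calls with the same URL are served from the cache.

```python
from pathlib import Path

cache_dir = Path("/tmp/athai-cache")
cache_dir.mkdir(parents=True, exist_ok=True)

url = "https://example.com/some-dataset.csv"        # placeholder URL
df = cached_download_csv(cache_dir, url, sep=",")   # downloads + caches
df2 = cached_download_csv(cache_dir, url, sep=",")  # served from the shelf
```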
spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/groundingdino/models/GroundingDINO/backbone/__init__.py
DELETED
@@ -1 +0,0 @@
-from .backbone import build_backbone
spaces/Averyng/averyng/README.md
DELETED
@@ -1,12 +0,0 @@
----
-title: Averyng
-emoji: 🐠
-colorFrom: blue
-colorTo: green
-sdk: gradio
-sdk_version: 3.19.1
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Awiny/Image2Paragraph/models/grit_src/grit/modeling/roi_heads/grit_fast_rcnn.py
DELETED
@@ -1,126 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-# Modified by Jialian Wu from https://github.com/facebookresearch/Detic/blob/main/detic/modeling/roi_heads/detic_fast_rcnn.py
-import torch
-from fvcore.nn import giou_loss, smooth_l1_loss
-from torch import nn
-from torch.nn import functional as F
-import fvcore.nn.weight_init as weight_init
-from detectron2.config import configurable
-from detectron2.layers import ShapeSpec, batched_nms, cat, cross_entropy, nonzero_tuple
-from detectron2.modeling.roi_heads.fast_rcnn import FastRCNNOutputLayers
-from detectron2.modeling.roi_heads.fast_rcnn import _log_classification_stats
-
-
-__all__ = ["GRiTFastRCNNOutputLayers"]
-
-
-class GRiTFastRCNNOutputLayers(FastRCNNOutputLayers):
-    @configurable
-    def __init__(
-        self,
-        input_shape: ShapeSpec,
-        **kwargs,
-    ):
-        super().__init__(
-            input_shape=input_shape,
-            **kwargs,
-        )
-
-        input_size = input_shape.channels * \
-            (input_shape.width or 1) * (input_shape.height or 1)
-
-        self.bbox_pred = nn.Sequential(
-            nn.Linear(input_size, input_size),
-            nn.ReLU(inplace=True),
-            nn.Linear(input_size, 4)
-        )
-        weight_init.c2_xavier_fill(self.bbox_pred[0])
-        nn.init.normal_(self.bbox_pred[-1].weight, std=0.001)
-        nn.init.constant_(self.bbox_pred[-1].bias, 0)
-
-    @classmethod
-    def from_config(cls, cfg, input_shape):
-        ret = super().from_config(cfg, input_shape)
-        return ret
-
-    def losses(self, predictions, proposals):
-        scores, proposal_deltas = predictions
-        gt_classes = (
-            cat([p.gt_classes for p in proposals], dim=0) if len(proposals) else torch.empty(0)
-        )
-        num_classes = self.num_classes
-        _log_classification_stats(scores, gt_classes)
-
-        if len(proposals):
-            proposal_boxes = cat([p.proposal_boxes.tensor for p in proposals], dim=0)  # Nx4
-            assert not proposal_boxes.requires_grad, "Proposals should not require gradients!"
-            gt_boxes = cat(
-                [(p.gt_boxes if p.has("gt_boxes") else p.proposal_boxes).tensor for p in proposals],
-                dim=0,
-            )
-        else:
-            proposal_boxes = gt_boxes = torch.empty((0, 4), device=proposal_deltas.device)
-
-        loss_cls = self.softmax_cross_entropy_loss(scores, gt_classes)
-        return {
-            "loss_cls": loss_cls,
-            "loss_box_reg": self.box_reg_loss(
-                proposal_boxes, gt_boxes, proposal_deltas, gt_classes,
-                num_classes=num_classes)
-        }
-
-    def softmax_cross_entropy_loss(self, pred_class_logits, gt_classes):
-        if pred_class_logits.numel() == 0:
-            return pred_class_logits.new_zeros([1])[0]
-
-        loss = F.cross_entropy(
-            pred_class_logits, gt_classes, reduction="mean")
-        return loss
-
-    def box_reg_loss(
-            self, proposal_boxes, gt_boxes, pred_deltas, gt_classes,
-            num_classes=-1):
-        num_classes = num_classes if num_classes > 0 else self.num_classes
-        box_dim = proposal_boxes.shape[1]
-        fg_inds = nonzero_tuple((gt_classes >= 0) & (gt_classes < num_classes))[0]
-        if pred_deltas.shape[1] == box_dim:
-            fg_pred_deltas = pred_deltas[fg_inds]
-        else:
-            fg_pred_deltas = pred_deltas.view(-1, self.num_classes, box_dim)[
-                fg_inds, gt_classes[fg_inds]
-            ]
-
-        if self.box_reg_loss_type == "smooth_l1":
-            gt_pred_deltas = self.box2box_transform.get_deltas(
-                proposal_boxes[fg_inds],
-                gt_boxes[fg_inds],
-            )
-            loss_box_reg = smooth_l1_loss(
-                fg_pred_deltas, gt_pred_deltas, self.smooth_l1_beta, reduction="sum"
-            )
-        elif self.box_reg_loss_type == "giou":
-            fg_pred_boxes = self.box2box_transform.apply_deltas(
-                fg_pred_deltas, proposal_boxes[fg_inds]
-            )
-            loss_box_reg = giou_loss(fg_pred_boxes, gt_boxes[fg_inds], reduction="sum")
-        else:
-            raise ValueError(f"Invalid bbox reg loss type '{self.box_reg_loss_type}'")
-        return loss_box_reg / max(gt_classes.numel(), 1.0)
-
-    def predict_probs(self, predictions, proposals):
-        scores = predictions[0]
-        num_inst_per_image = [len(p) for p in proposals]
-        probs = F.softmax(scores, dim=-1)
-        return probs.split(num_inst_per_image, dim=0)
-
-    def forward(self, x):
-        if x.dim() > 2:
-            x = torch.flatten(x, start_dim=1)
-        scores = []
-
-        cls_scores = self.cls_score(x)
-        scores.append(cls_scores)
-        scores = torch.cat(scores, dim=1)
-
-        proposal_deltas = self.bbox_pred(x)
-        return scores, proposal_deltas
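`box_reg_loss` switches between two regression objectives and normalizes by the total number of proposals. A hedged sketch with made-up boxes, using the same `fvcore` functions imported at the top of the file:

```python
import torch
from fvcore.nn import giou_loss, smooth_l1_loss

pred_boxes = torch.tensor([[10.0, 10.0, 50.0, 50.0]])
gt_boxes = torch.tensor([[12.0, 8.0, 48.0, 52.0]])

# In the real head, smooth_l1 is applied to box *deltas*; raw boxes here
# only illustrate the call signatures.
l1 = smooth_l1_loss(pred_boxes, gt_boxes, beta=1.0, reduction="sum")
gi = giou_loss(pred_boxes, gt_boxes, reduction="sum")  # operates on boxes directly
print(l1.item(), gi.item())
```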
spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/samplers/distributed_sampler.py
DELETED
@@ -1,278 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import itertools
-import logging
-import math
-from collections import defaultdict
-from typing import Optional
-import torch
-from torch.utils.data.sampler import Sampler
-
-from detectron2.utils import comm
-
-logger = logging.getLogger(__name__)
-
-
-class TrainingSampler(Sampler):
-    """
-    In training, we only care about the "infinite stream" of training data.
-    So this sampler produces an infinite stream of indices and
-    all workers cooperate to correctly shuffle the indices and sample different indices.
-
-    The samplers in each worker effectively produce `indices[worker_id::num_workers]`
-    where `indices` is an infinite stream of indices consisting of
-    `shuffle(range(size)) + shuffle(range(size)) + ...` (if shuffle is True)
-    or `range(size) + range(size) + ...` (if shuffle is False)
-
-    Note that this sampler does not shard based on pytorch DataLoader worker id.
-    A sampler passed to pytorch DataLoader is used only with map-style dataset
-    and will not be executed inside workers.
-    But if this sampler is used in a way that it gets executed inside a dataloader
-    worker, then extra work needs to be done to shard its outputs based on worker id.
-    This is required so that workers don't produce identical data.
-    :class:`ToIterableDataset` implements this logic.
-    This note is true for all samplers in detectron2.
-    """
-
-    def __init__(self, size: int, shuffle: bool = True, seed: Optional[int] = None):
-        """
-        Args:
-            size (int): the total number of data of the underlying dataset to sample from
-            shuffle (bool): whether to shuffle the indices or not
-            seed (int): the initial seed of the shuffle. Must be the same
-                across all workers. If None, will use a random seed shared
-                among workers (require synchronization among all workers).
-        """
-        if not isinstance(size, int):
-            raise TypeError(f"TrainingSampler(size=) expects an int. Got type {type(size)}.")
-        if size <= 0:
-            raise ValueError(f"TrainingSampler(size=) expects a positive int. Got {size}.")
-        self._size = size
-        self._shuffle = shuffle
-        if seed is None:
-            seed = comm.shared_random_seed()
-        self._seed = int(seed)
-
-        self._rank = comm.get_rank()
-        self._world_size = comm.get_world_size()
-
-    def __iter__(self):
-        start = self._rank
-        yield from itertools.islice(self._infinite_indices(), start, None, self._world_size)
-
-    def _infinite_indices(self):
-        g = torch.Generator()
-        g.manual_seed(self._seed)
-        while True:
-            if self._shuffle:
-                yield from torch.randperm(self._size, generator=g).tolist()
-            else:
-                yield from torch.arange(self._size).tolist()
-
-
-class RandomSubsetTrainingSampler(TrainingSampler):
-    """
-    Similar to TrainingSampler, but only sample a random subset of indices.
-    This is useful when you want to estimate the accuracy vs data-number curves by
-    training the model with different subset_ratio.
-    """
-
-    def __init__(
-        self,
-        size: int,
-        subset_ratio: float,
-        shuffle: bool = True,
-        seed_shuffle: Optional[int] = None,
-        seed_subset: Optional[int] = None,
-    ):
-        """
-        Args:
-            size (int): the total number of data of the underlying dataset to sample from
-            subset_ratio (float): the ratio of subset data to sample from the underlying dataset
-            shuffle (bool): whether to shuffle the indices or not
-            seed_shuffle (int): the initial seed of the shuffle. Must be the same
-                across all workers. If None, will use a random seed shared
-                among workers (require synchronization among all workers).
-            seed_subset (int): the seed to randomize the subset to be sampled.
-                Must be the same across all workers. If None, will use a random seed shared
-                among workers (require synchronization among all workers).
-        """
-        super().__init__(size=size, shuffle=shuffle, seed=seed_shuffle)
-
-        assert 0.0 < subset_ratio <= 1.0
-        self._size_subset = int(size * subset_ratio)
-        assert self._size_subset > 0
-        if seed_subset is None:
-            seed_subset = comm.shared_random_seed()
-        self._seed_subset = int(seed_subset)
-
-        # randomly generate the subset indexes to be sampled from
-        g = torch.Generator()
-        g.manual_seed(self._seed_subset)
-        indexes_randperm = torch.randperm(self._size, generator=g)
-        self._indexes_subset = indexes_randperm[: self._size_subset]
-
-        logger.info("Using RandomSubsetTrainingSampler......")
-        logger.info(f"Randomly sample {self._size_subset} data from the original {self._size} data")
-
-    def _infinite_indices(self):
-        g = torch.Generator()
-        g.manual_seed(self._seed)  # self._seed equals seed_shuffle from __init__()
-        while True:
-            if self._shuffle:
-                # generate a random permutation to shuffle self._indexes_subset
-                randperm = torch.randperm(self._size_subset, generator=g)
-                yield from self._indexes_subset[randperm].tolist()
-            else:
-                yield from self._indexes_subset.tolist()
-
-
-class RepeatFactorTrainingSampler(Sampler):
-    """
-    Similar to TrainingSampler, but a sample may appear more times than others based
-    on its "repeat factor". This is suitable for training on class imbalanced datasets like LVIS.
-    """
-
-    def __init__(self, repeat_factors, *, shuffle=True, seed=None):
-        """
-        Args:
-            repeat_factors (Tensor): a float vector, the repeat factor for each indice. When it's
-                full of ones, it is equivalent to ``TrainingSampler(len(repeat_factors), ...)``.
-            shuffle (bool): whether to shuffle the indices or not
-            seed (int): the initial seed of the shuffle. Must be the same
-                across all workers. If None, will use a random seed shared
-                among workers (require synchronization among all workers).
-        """
-        self._shuffle = shuffle
-        if seed is None:
-            seed = comm.shared_random_seed()
-        self._seed = int(seed)
-
-        self._rank = comm.get_rank()
-        self._world_size = comm.get_world_size()
-
-        # Split into whole number (_int_part) and fractional (_frac_part) parts.
-        self._int_part = torch.trunc(repeat_factors)
-        self._frac_part = repeat_factors - self._int_part
-
-    @staticmethod
-    def repeat_factors_from_category_frequency(dataset_dicts, repeat_thresh):
-        """
-        Compute (fractional) per-image repeat factors based on category frequency.
-        The repeat factor for an image is a function of the frequency of the rarest
-        category labeled in that image. The "frequency of category c" in [0, 1] is defined
-        as the fraction of images in the training set (without repeats) in which category c
-        appears.
-        See :paper:`lvis` (>= v2) Appendix B.2.
-
-        Args:
-            dataset_dicts (list[dict]): annotations in Detectron2 dataset format.
-            repeat_thresh (float): frequency threshold below which data is repeated.
-                If the frequency is half of `repeat_thresh`, the image will be
-                repeated twice.
-
-        Returns:
-            torch.Tensor:
-                the i-th element is the repeat factor for the dataset image at index i.
-        """
-        # 1. For each category c, compute the fraction of images that contain it: f(c)
-        category_freq = defaultdict(int)
-        for dataset_dict in dataset_dicts:  # For each image (without repeats)
-            cat_ids = {ann["category_id"] for ann in dataset_dict["annotations"]}
-            for cat_id in cat_ids:
-                category_freq[cat_id] += 1
-        num_images = len(dataset_dicts)
-        for k, v in category_freq.items():
-            category_freq[k] = v / num_images
-
-        # 2. For each category c, compute the category-level repeat factor:
-        #    r(c) = max(1, sqrt(t / f(c)))
-        category_rep = {
-            cat_id: max(1.0, math.sqrt(repeat_thresh / cat_freq))
-            for cat_id, cat_freq in category_freq.items()
-        }
-
-        # 3. For each image I, compute the image-level repeat factor:
-        #    r(I) = max_{c in I} r(c)
-        rep_factors = []
-        for dataset_dict in dataset_dicts:
-            cat_ids = {ann["category_id"] for ann in dataset_dict["annotations"]}
-            rep_factor = max({category_rep[cat_id] for cat_id in cat_ids}, default=1.0)
-            rep_factors.append(rep_factor)
-
-        return torch.tensor(rep_factors, dtype=torch.float32)
-
-    def _get_epoch_indices(self, generator):
-        """
-        Create a list of dataset indices (with repeats) to use for one epoch.
-
-        Args:
-            generator (torch.Generator): pseudo random number generator used for
-                stochastic rounding.
-
-        Returns:
-            torch.Tensor: list of dataset indices to use in one epoch. Each index
-                is repeated based on its calculated repeat factor.
-        """
-        # Since repeat factors are fractional, we use stochastic rounding so
-        # that the target repeat factor is achieved in expectation over the
-        # course of training
-        rands = torch.rand(len(self._frac_part), generator=generator)
-        rep_factors = self._int_part + (rands < self._frac_part).float()
-        # Construct a list of indices in which we repeat images as specified
-        indices = []
-        for dataset_index, rep_factor in enumerate(rep_factors):
-            indices.extend([dataset_index] * int(rep_factor.item()))
-        return torch.tensor(indices, dtype=torch.int64)
-
-    def __iter__(self):
-        start = self._rank
-        yield from itertools.islice(self._infinite_indices(), start, None, self._world_size)
-
-    def _infinite_indices(self):
-        g = torch.Generator()
-        g.manual_seed(self._seed)
-        while True:
-            # Sample indices with repeats determined by stochastic rounding; each
-            # "epoch" may have a slightly different size due to the rounding.
-            indices = self._get_epoch_indices(g)
-            if self._shuffle:
-                randperm = torch.randperm(len(indices), generator=g)
-                yield from indices[randperm].tolist()
-            else:
-                yield from indices.tolist()
-
-
-class InferenceSampler(Sampler):
-    """
-    Produce indices for inference across all workers.
-    Inference needs to run on the __exact__ set of samples,
-    therefore when the total number of samples is not divisible by the number of workers,
-    this sampler produces different numbers of samples on different workers.
-    """
-
-    def __init__(self, size: int):
-        """
-        Args:
-            size (int): the total number of data of the underlying dataset to sample from
-        """
-        self._size = size
-        assert size > 0
-        self._rank = comm.get_rank()
-        self._world_size = comm.get_world_size()
-        self._local_indices = self._get_local_indices(size, self._world_size, self._rank)
-
-    @staticmethod
-    def _get_local_indices(total_size, world_size, rank):
-        shard_size = total_size // world_size
-        left = total_size % world_size
-        shard_sizes = [shard_size + int(r < left) for r in range(world_size)]
-
-        begin = sum(shard_sizes[:rank])
-        end = min(sum(shard_sizes[: rank + 1]), total_size)
-        return range(begin, end)
-
-    def __iter__(self):
-        yield from self._local_indices
-
-    def __len__(self):
-        return len(self._local_indices)
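A worked example of the category-level rule r(c) = max(1, sqrt(t / f(c))) documented above, with a hypothetical threshold t = 0.01:

```python
import math

t = 0.01  # hypothetical repeat_thresh
for freq in (0.5, 0.01, 0.0025):
    r = max(1.0, math.sqrt(t / freq))
    print(f"f(c)={freq:<7} -> r(c)={r}")
# f(c)=0.5     -> r(c)=1.0   common category: never repeated
# f(c)=0.01    -> r(c)=1.0   exactly at the threshold
# f(c)=0.0025  -> r(c)=2.0   rare category: its images repeat ~2x per epoch
```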
spaces/Bakuman/Real-CUGAN/app.py
DELETED
@@ -1,62 +0,0 @@
-from upcunet_v3 import RealWaifuUpScaler
-import gradio as gr
-import time
-import logging
-import os
-from PIL import ImageOps
-import numpy as np
-import math
-
-
-def greet(input_img, input_model_name, input_tile_mode):
-    # if input_img.size[0] * input_img.size[1] > 256 * 256:
-    #     y = int(math.sqrt(256*256/input_img.size[0]*input_img.size[1]))
-    #     x = int(input_img.size[0]/input_img.size[1]*y)
-    #     input_img = ImageOps.fit(input_img, (x, y))
-    input_img = np.array(input_img)
-    if input_model_name not in model_cache:
-        t1 = time.time()
-        upscaler = RealWaifuUpScaler(input_model_name[2], ModelPath + input_model_name, half=False, device="cpu")
-        t2 = time.time()
-        logger.info(f'load model time, {t2 - t1}')
-        model_cache[input_model_name] = upscaler
-    else:
-        upscaler = model_cache[input_model_name]
-        logger.info(f'load model from cache')
-
-    start = time.time()
-    result = upscaler(input_img, tile_mode=input_tile_mode)
-    end = time.time()
-    logger.info(f'input_model_name, {input_model_name}')
-    logger.info(f'input_tile_mode, {input_tile_mode}')
-    logger.info(f'input shape, {input_img.shape}')
-    logger.info(f'output shape, {result.shape}')
-    logger.info(f'speed time, {end - start}')
-    return result
-
-
-if __name__ == '__main__':
-    logging.basicConfig(level=logging.INFO, format="[%(asctime)s] [%(process)d] [%(levelname)s] %(message)s")
-    logger = logging.getLogger()
-
-    ModelPath = "weights_v3/"
-    model_cache = {}
-
-    input_model_name = gr.inputs.Dropdown(os.listdir(ModelPath), default="up2x-latest-denoise2x.pth", label='select model')
-    input_tile_mode = gr.inputs.Dropdown([0, 1, 2, 3, 4], default=2, label='select tile_mode')
-    input_img = gr.inputs.Image(label='image', type='pil')
-
-    inputs = [input_img, input_model_name, input_tile_mode]
-    outputs = "image"
-    iface = gr.Interface(fn=greet,
-                         inputs=inputs,
-                         outputs=outputs,
-                         allow_screenshot=False,
-                         allow_flagging='never',
-                         examples=[['test-img.jpg', "up2x-latest-denoise2x.pth", 2]],
-                         article='[https://github.com/bilibili/ailab/tree/main/Real-CUGAN](https://github.com/bilibili/ailab/tree/main/Real-CUGAN)<br>'
-                                 'Thanks to the project open-sourced by bilibili. Overly large images run out of memory, so I crop images down; to try the effect on large images, please use the link above.<br>'
-                                 'modified bbb'
-                                 'The large image will lead to memory limit exceeded. So I crop and resize image. '
-                                 'If you want to experience the large image, please go to the link above.')
-    iface.launch()
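One easy-to-miss detail above: `input_model_name[2]` is just the third character of the selected weight filename, which encodes the upscale factor under Real-CUGAN's `up{N}x-...` naming convention (the extra filenames below are illustrative, not verified weight names):

```python
for name in ("up2x-latest-denoise2x.pth",
             "up3x-latest-no-denoise.pth",    # assumed sibling weights
             "up4x-latest-conservative.pth"):
    print(name, "-> scale", name[2])  # prints 2, 3, 4
```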
spaces/Bakuman/Real-CUGAN/upcunet_v3.py
DELETED
@@ -1,714 +0,0 @@
|
|
1 |
-
import torch
|
2 |
-
from torch import nn as nn
|
3 |
-
from torch.nn import functional as F
|
4 |
-
import os, sys
|
5 |
-
import numpy as np
|
6 |
-
|
7 |
-
root_path = os.path.abspath('.')
|
8 |
-
sys.path.append(root_path)
|
9 |
-
|
10 |
-
|
11 |
-
class SEBlock(nn.Module):
|
12 |
-
def __init__(self, in_channels, reduction=8, bias=False):
|
13 |
-
super(SEBlock, self).__init__()
|
14 |
-
self.conv1 = nn.Conv2d(in_channels, in_channels // reduction, 1, 1, 0, bias=bias)
|
15 |
-
self.conv2 = nn.Conv2d(in_channels // reduction, in_channels, 1, 1, 0, bias=bias)
|
16 |
-
|
17 |
-
def forward(self, x):
|
18 |
-
if ("Half" in x.type()): # torch.HalfTensor/torch.cuda.HalfTensor
|
19 |
-
x0 = torch.mean(x.float(), dim=(2, 3), keepdim=True).half()
|
20 |
-
else:
|
21 |
-
x0 = torch.mean(x, dim=(2, 3), keepdim=True)
|
22 |
-
x0 = self.conv1(x0)
|
23 |
-
x0 = F.relu(x0, inplace=True)
|
24 |
-
x0 = self.conv2(x0)
|
25 |
-
x0 = torch.sigmoid(x0)
|
26 |
-
x = torch.mul(x, x0)
|
27 |
-
return x
|
28 |
-
|
29 |
-
def forward_mean(self, x, x0):
|
30 |
-
x0 = self.conv1(x0)
|
31 |
-
x0 = F.relu(x0, inplace=True)
|
32 |
-
x0 = self.conv2(x0)
|
33 |
-
x0 = torch.sigmoid(x0)
|
34 |
-
x = torch.mul(x, x0)
|
35 |
-
return x
|
36 |
-
|
37 |
-
|
38 |
-
class UNetConv(nn.Module):
|
39 |
-
def __init__(self, in_channels, mid_channels, out_channels, se):
|
40 |
-
super(UNetConv, self).__init__()
|
41 |
-
self.conv = nn.Sequential(
|
42 |
-
nn.Conv2d(in_channels, mid_channels, 3, 1, 0),
|
43 |
-
nn.LeakyReLU(0.1, inplace=True),
|
44 |
-
nn.Conv2d(mid_channels, out_channels, 3, 1, 0),
|
45 |
-
nn.LeakyReLU(0.1, inplace=True),
|
46 |
-
)
|
47 |
-
if se:
|
48 |
-
self.seblock = SEBlock(out_channels, reduction=8, bias=True)
|
49 |
-
else:
|
50 |
-
self.seblock = None
|
51 |
-
|
52 |
-
def forward(self, x):
|
53 |
-
z = self.conv(x)
|
54 |
-
if self.seblock is not None:
|
55 |
-
z = self.seblock(z)
|
56 |
-
return z
|
57 |
-
|
58 |
-
|
59 |
-
class UNet1(nn.Module):
|
60 |
-
def __init__(self, in_channels, out_channels, deconv):
|
61 |
-
super(UNet1, self).__init__()
|
62 |
-
self.conv1 = UNetConv(in_channels, 32, 64, se=False)
|
63 |
-
self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0)
|
64 |
-
self.conv2 = UNetConv(64, 128, 64, se=True)
|
65 |
-
self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0)
|
66 |
-
self.conv3 = nn.Conv2d(64, 64, 3, 1, 0)
|
67 |
-
|
68 |
-
if deconv:
|
69 |
-
self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3)
|
70 |
-
else:
|
71 |
-
self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0)
|
72 |
-
|
73 |
-
for m in self.modules():
|
74 |
-
if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
|
75 |
-
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
|
76 |
-
elif isinstance(m, nn.Linear):
|
77 |
-
nn.init.normal_(m.weight, 0, 0.01)
|
78 |
-
if m.bias is not None:
|
79 |
-
nn.init.constant_(m.bias, 0)
|
80 |
-
|
81 |
-
def forward(self, x):
|
82 |
-
x1 = self.conv1(x)
|
83 |
-
x2 = self.conv1_down(x1)
|
84 |
-
x2 = F.leaky_relu(x2, 0.1, inplace=True)
|
85 |
-
x2 = self.conv2(x2)
|
86 |
-
x2 = self.conv2_up(x2)
|
87 |
-
x2 = F.leaky_relu(x2, 0.1, inplace=True)
|
88 |
-
|
89 |
-
x1 = F.pad(x1, (-4, -4, -4, -4))
|
90 |
-
x3 = self.conv3(x1 + x2)
|
91 |
-
x3 = F.leaky_relu(x3, 0.1, inplace=True)
|
92 |
-
z = self.conv_bottom(x3)
|
93 |
-
return z
|
94 |
-
|
95 |
-
def forward_a(self, x):
|
96 |
-
x1 = self.conv1(x)
|
97 |
-
x2 = self.conv1_down(x1)
|
98 |
-
x2 = F.leaky_relu(x2, 0.1, inplace=True)
|
99 |
-
x2 = self.conv2.conv(x2)
|
100 |
-
return x1, x2
|
101 |
-
|
102 |
-
def forward_b(self, x1, x2):
|
103 |
-
x2 = self.conv2_up(x2)
|
104 |
-
x2 = F.leaky_relu(x2, 0.1, inplace=True)
|
105 |
-
|
106 |
-
x1 = F.pad(x1, (-4, -4, -4, -4))
|
107 |
-
x3 = self.conv3(x1 + x2)
|
108 |
-
x3 = F.leaky_relu(x3, 0.1, inplace=True)
|
109 |
-
z = self.conv_bottom(x3)
|
110 |
-
return z
|
111 |
-
|
112 |
-
|
113 |
-
class UNet1x3(nn.Module):
|
114 |
-
def __init__(self, in_channels, out_channels, deconv):
|
115 |
-
super(UNet1x3, self).__init__()
|
116 |
-
self.conv1 = UNetConv(in_channels, 32, 64, se=False)
|
117 |
-
self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0)
|
118 |
-
self.conv2 = UNetConv(64, 128, 64, se=True)
|
119 |
-
self.conv2_up = nn.ConvTranspose2d(64, 64, 2, 2, 0)
|
120 |
-
self.conv3 = nn.Conv2d(64, 64, 3, 1, 0)
|
121 |
-
|
122 |
-
if deconv:
|
123 |
-
self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 5, 3, 2)
|
124 |
-
else:
|
125 |
-
self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0)
|
126 |
-
|
127 |
-
for m in self.modules():
|
128 |
-
if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
|
129 |
-
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
|
130 |
-
elif isinstance(m, nn.Linear):
|
131 |
-
nn.init.normal_(m.weight, 0, 0.01)
|
132 |
-
if m.bias is not None:
|
133 |
-
nn.init.constant_(m.bias, 0)
|
134 |
-
|
135 |
-
def forward(self, x):
|
136 |
-
x1 = self.conv1(x)
|
137 |
-
x2 = self.conv1_down(x1)
|
138 |
-
x2 = F.leaky_relu(x2, 0.1, inplace=True)
|
139 |
-
x2 = self.conv2(x2)
|
140 |
-
x2 = self.conv2_up(x2)
|
141 |
-
x2 = F.leaky_relu(x2, 0.1, inplace=True)
|
142 |
-
|
143 |
-
x1 = F.pad(x1, (-4, -4, -4, -4))
|
144 |
-
x3 = self.conv3(x1 + x2)
|
145 |
-
x3 = F.leaky_relu(x3, 0.1, inplace=True)
|
146 |
-
z = self.conv_bottom(x3)
|
147 |
-
return z
|
148 |
-
|
149 |
-
def forward_a(self, x):
|
150 |
-
x1 = self.conv1(x)
|
151 |
-
x2 = self.conv1_down(x1)
|
152 |
-
x2 = F.leaky_relu(x2, 0.1, inplace=True)
|
153 |
-
x2 = self.conv2.conv(x2)
|
154 |
-
return x1, x2
|
155 |
-
|
156 |
-
def forward_b(self, x1, x2):
|
157 |
-
x2 = self.conv2_up(x2)
|
158 |
-
x2 = F.leaky_relu(x2, 0.1, inplace=True)
|
159 |
-
|
160 |
-
x1 = F.pad(x1, (-4, -4, -4, -4))
|
161 |
-
x3 = self.conv3(x1 + x2)
|
162 |
-
x3 = F.leaky_relu(x3, 0.1, inplace=True)
|
163 |
-
z = self.conv_bottom(x3)
|
164 |
-
return z
|
165 |
-
|
166 |
-
|
167 |
-
class UNet2(nn.Module):
|
168 |
-
def __init__(self, in_channels, out_channels, deconv):
|
169 |
-
super(UNet2, self).__init__()
|
170 |
-
|
171 |
-
self.conv1 = UNetConv(in_channels, 32, 64, se=False)
|
172 |
-
self.conv1_down = nn.Conv2d(64, 64, 2, 2, 0)
|
173 |
-
self.conv2 = UNetConv(64, 64, 128, se=True)
|
174 |
-
self.conv2_down = nn.Conv2d(128, 128, 2, 2, 0)
|
175 |
-
self.conv3 = UNetConv(128, 256, 128, se=True)
|
176 |
-
self.conv3_up = nn.ConvTranspose2d(128, 128, 2, 2, 0)
|
177 |
-
self.conv4 = UNetConv(128, 64, 64, se=True)
|
178 |
-
self.conv4_up = nn.ConvTranspose2d(64, 64, 2, 2, 0)
|
179 |
-
self.conv5 = nn.Conv2d(64, 64, 3, 1, 0)
|
180 |
-
|
181 |
-
if deconv:
|
182 |
-
self.conv_bottom = nn.ConvTranspose2d(64, out_channels, 4, 2, 3)
|
183 |
-
else:
|
184 |
-
self.conv_bottom = nn.Conv2d(64, out_channels, 3, 1, 0)
|
185 |
-
|
186 |
-
for m in self.modules():
|
187 |
-
if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
|
188 |
-
nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
|
189 |
-
elif isinstance(m, nn.Linear):
|
190 |
-
nn.init.normal_(m.weight, 0, 0.01)
|
191 |
-
if m.bias is not None:
|
192 |
-
nn.init.constant_(m.bias, 0)
|
193 |
-
|
194 |
-
def forward(self, x):
|
195 |
-
x1 = self.conv1(x)
|
196 |
-
x2 = self.conv1_down(x1)
|
197 |
-
x2 = F.leaky_relu(x2, 0.1, inplace=True)
|
198 |
-
x2 = self.conv2(x2)
|
199 |
-
|
200 |
-
x3 = self.conv2_down(x2)
|
201 |
-
x3 = F.leaky_relu(x3, 0.1, inplace=True)
|
202 |
-
x3 = self.conv3(x3)
|
203 |
-
x3 = self.conv3_up(x3)
|
204 |
-
x3 = F.leaky_relu(x3, 0.1, inplace=True)
|
205 |
-
|
206 |
-
x2 = F.pad(x2, (-4, -4, -4, -4))
|
207 |
-
x4 = self.conv4(x2 + x3)
|
208 |
-
x4 = self.conv4_up(x4)
|
209 |
-
x4 = F.leaky_relu(x4, 0.1, inplace=True)
|
210 |
-
|
211 |
-
x1 = F.pad(x1, (-16, -16, -16, -16))
|
212 |
-
x5 = self.conv5(x1 + x4)
|
213 |
-
x5 = F.leaky_relu(x5, 0.1, inplace=True)
|
214 |
-
|
215 |
-
z = self.conv_bottom(x5)
|
216 |
-
return z
|
217 |
-
|
218 |
-
def forward_a(self, x): # conv234结尾有se
|
219 |
-
x1 = self.conv1(x)
|
220 |
-
x2 = self.conv1_down(x1)
|
221 |
-
x2 = F.leaky_relu(x2, 0.1, inplace=True)
|
222 |
-
x2 = self.conv2.conv(x2)
|
223 |
-
return x1, x2
|
224 |
-
|
225 |
-
def forward_b(self, x2): # conv234结尾有se
|
226 |
-
x3 = self.conv2_down(x2)
|
227 |
-
x3 = F.leaky_relu(x3, 0.1, inplace=True)
|
228 |
-
x3 = self.conv3.conv(x3)
|
229 |
-
return x3
|
230 |
-
|
231 |
-
def forward_c(self, x2, x3): # conv234结尾有se
|
232 |
-
x3 = self.conv3_up(x3)
|
233 |
-
x3 = F.leaky_relu(x3, 0.1, inplace=True)
|
234 |
-
|
235 |
-
x2 = F.pad(x2, (-4, -4, -4, -4))
|
236 |
-
x4 = self.conv4.conv(x2 + x3)
|
237 |
-
return x4
|
238 |
-
|
239 |
-
def forward_d(self, x1, x4): # conv234结尾有se
|
240 |
-
x4 = self.conv4_up(x4)
|
241 |
-
x4 = F.leaky_relu(x4, 0.1, inplace=True)
|
242 |
-
|
243 |
-
x1 = F.pad(x1, (-16, -16, -16, -16))
|
244 |
-
x5 = self.conv5(x1 + x4)
|
245 |
-
x5 = F.leaky_relu(x5, 0.1, inplace=True)
|
246 |
-
|
247 |
-
z = self.conv_bottom(x5)
|
248 |
-
return z
|
249 |
-
|
250 |
-
|
251 |
-
class UpCunet2x(nn.Module): # 完美tile,全程无损
|
252 |
-
def __init__(self, in_channels=3, out_channels=3):
|
253 |
-
super(UpCunet2x, self).__init__()
|
254 |
-
self.unet1 = UNet1(in_channels, out_channels, deconv=True)
|
255 |
-
self.unet2 = UNet2(in_channels, out_channels, deconv=False)
|
256 |
-
|
257 |
-
def forward(self, x, tile_mode): # 1.7G
|
258 |
-
n, c, h0, w0 = x.shape
|
259 |
-
if (tile_mode == 0): # 不tile
|
260 |
-
ph = ((h0 - 1) // 2 + 1) * 2
|
261 |
-
pw = ((w0 - 1) // 2 + 1) * 2
|
262 |
-
x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect') # 需要保证被2整除
|
263 |
-
x = self.unet1.forward(x)
|
264 |
-
x0 = self.unet2.forward(x)
|
265 |
-
x1 = F.pad(x, (-20, -20, -20, -20))
|
266 |
-
x = torch.add(x0, x1)
|
267 |
-
if (w0 != pw or h0 != ph): x = x[:, :, :h0 * 2, :w0 * 2]
|
268 |
-
return x
|
269 |
-
elif (tile_mode == 1): # 对长边减半
|
270 |
-
if (w0 >= h0):
|
271 |
-
crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除
|
272 |
-
crop_size_h = (h0 - 1) // 2 * 2 + 2 # 能被2整除
|
273 |
-
else:
|
274 |
-
crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2 # 减半后能被2整除,所以要先被4整除
|
275 |
-
crop_size_w = (w0 - 1) // 2 * 2 + 2 # 能被2整除
|
276 |
-
crop_size = (crop_size_h, crop_size_w) # 6.6G
|
277 |
-
elif (tile_mode == 2): # hw都减半
|
278 |
-
crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2) # 5.6G
|
279 |
-
elif (tile_mode == 3): # hw都三分之一
|
280 |
-
crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3) # 4.2G
|
281 |
-
elif (tile_mode == 4): # hw都四分���一
|
282 |
-
crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4) # 3.7G
|
283 |
-
ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0]
|
284 |
-
pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1]
|
285 |
-
x = F.pad(x, (18, 18 + pw - w0, 18, 18 + ph - h0), 'reflect')
|
286 |
-
n, c, h, w = x.shape
|
287 |
-
se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device)
|
288 |
-
if ("Half" in x.type()):
|
289 |
-
se_mean0 = se_mean0.half()
|
290 |
-
n_patch = 0
|
291 |
-
tmp_dict = {}
|
292 |
-
opt_res_dict = {}
|
293 |
-
for i in range(0, h - 36, crop_size[0]):
|
294 |
-
tmp_dict[i] = {}
|
295 |
-
for j in range(0, w - 36, crop_size[1]):
|
296 |
-
                x_crop = x[:, :, i:i + crop_size[0] + 36, j:j + crop_size[1] + 36]
                n, c1, h1, w1 = x_crop.shape
                tmp0, x_crop = self.unet1.forward_a(x_crop)
                if ("Half" in x.type()):  # torch.HalfTensor/torch.cuda.HalfTensor
                    tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half()
                else:
                    tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True)
                se_mean0 += tmp_se_mean
                n_patch += 1
                tmp_dict[i][j] = (tmp0, x_crop)
        se_mean0 /= n_patch
        se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device)  # 64#128#128#64
        if ("Half" in x.type()):
            se_mean1 = se_mean1.half()
        for i in range(0, h - 36, crop_size[0]):
            for j in range(0, w - 36, crop_size[1]):
                tmp0, x_crop = tmp_dict[i][j]
                x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0)
                opt_unet1 = self.unet1.forward_b(tmp0, x_crop)
                tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1)
                if ("Half" in x.type()):  # torch.HalfTensor/torch.cuda.HalfTensor
                    tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half()
                else:
                    tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True)
                se_mean1 += tmp_se_mean
                tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2)
        se_mean1 /= n_patch
        se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device)  # 64#128#128#64
        if ("Half" in x.type()):
            se_mean0 = se_mean0.half()
        for i in range(0, h - 36, crop_size[0]):
            for j in range(0, w - 36, crop_size[1]):
                opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j]
                tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1)
                tmp_x3 = self.unet2.forward_b(tmp_x2)
                if ("Half" in x.type()):  # torch.HalfTensor/torch.cuda.HalfTensor
                    tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half()
                else:
                    tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True)
                se_mean0 += tmp_se_mean
                tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3)
        se_mean0 /= n_patch
        se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device)  # 64#128#128#64
        if ("Half" in x.type()):
            se_mean1 = se_mean1.half()
        for i in range(0, h - 36, crop_size[0]):
            for j in range(0, w - 36, crop_size[1]):
                opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j]
                tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0)
                tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3)
                if ("Half" in x.type()):  # torch.HalfTensor/torch.cuda.HalfTensor
                    tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half()
                else:
                    tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True)
                se_mean1 += tmp_se_mean
                tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4)
        se_mean1 /= n_patch
        for i in range(0, h - 36, crop_size[0]):
            opt_res_dict[i] = {}
            for j in range(0, w - 36, crop_size[1]):
                opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j]
                tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1)
                x0 = self.unet2.forward_d(tmp_x1, tmp_x4)
                x1 = F.pad(opt_unet1, (-20, -20, -20, -20))
                x_crop = torch.add(x0, x1)  # x0 is the final output of unet2
                opt_res_dict[i][j] = x_crop
        del tmp_dict
        torch.cuda.empty_cache()
        res = torch.zeros((n, c, h * 2 - 72, w * 2 - 72)).to(x.device)
        if ("Half" in x.type()):
            res = res.half()
        for i in range(0, h - 36, crop_size[0]):
            for j in range(0, w - 36, crop_size[1]):
                res[:, :, i * 2:i * 2 + h1 * 2 - 72, j * 2:j * 2 + w1 * 2 - 72] = opt_res_dict[i][j]
        del opt_res_dict
        torch.cuda.empty_cache()
        if (w0 != pw or h0 != ph):
            res = res[:, :, :h0 * 2, :w0 * 2]
        return res

class UpCunet3x(nn.Module):  # seamless tiling, lossless end to end
    def __init__(self, in_channels=3, out_channels=3):
        super(UpCunet3x, self).__init__()
        self.unet1 = UNet1x3(in_channels, out_channels, deconv=True)
        self.unet2 = UNet2(in_channels, out_channels, deconv=False)

    def forward(self, x, tile_mode):  # 1.7G
        n, c, h0, w0 = x.shape
        if (tile_mode == 0):  # no tiling
            ph = ((h0 - 1) // 4 + 1) * 4
            pw = ((w0 - 1) // 4 + 1) * 4
            x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect')  # sizes must be evenly divisible
            x = self.unet1.forward(x)
            x0 = self.unet2.forward(x)
            x1 = F.pad(x, (-20, -20, -20, -20))
            x = torch.add(x0, x1)
            if (w0 != pw or h0 != ph):
                x = x[:, :, :h0 * 3, :w0 * 3]
            return x
        elif (tile_mode == 1):  # halve the longer edge
            if (w0 >= h0):
                crop_size_w = ((w0 - 1) // 8 * 8 + 8) // 2  # must be divisible by 4 after halving, hence by 8 first
                crop_size_h = (h0 - 1) // 4 * 4 + 4  # divisible by 4
            else:
                crop_size_h = ((h0 - 1) // 8 * 8 + 8) // 2  # must be divisible by 4 after halving, hence by 8 first
                crop_size_w = (w0 - 1) // 4 * 4 + 4  # divisible by 4
            crop_size = (crop_size_h, crop_size_w)  # 6.6G
        elif (tile_mode == 2):  # halve both h and w
            crop_size = (((h0 - 1) // 8 * 8 + 8) // 2, ((w0 - 1) // 8 * 8 + 8) // 2)  # 5.6G
        elif (tile_mode == 3):  # one third of h and w
            crop_size = (((h0 - 1) // 12 * 12 + 12) // 3, ((w0 - 1) // 12 * 12 + 12) // 3)  # 4.2G
        elif (tile_mode == 4):  # one quarter of h and w
            crop_size = (((h0 - 1) // 16 * 16 + 16) // 4, ((w0 - 1) // 16 * 16 + 16) // 4)  # 3.7G
        ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0]
        pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1]
        x = F.pad(x, (14, 14 + pw - w0, 14, 14 + ph - h0), 'reflect')
        n, c, h, w = x.shape
        se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device)
        if ("Half" in x.type()):
            se_mean0 = se_mean0.half()
        n_patch = 0
        tmp_dict = {}
        opt_res_dict = {}
        for i in range(0, h - 28, crop_size[0]):
            tmp_dict[i] = {}
            for j in range(0, w - 28, crop_size[1]):
                x_crop = x[:, :, i:i + crop_size[0] + 28, j:j + crop_size[1] + 28]
                n, c1, h1, w1 = x_crop.shape
                tmp0, x_crop = self.unet1.forward_a(x_crop)
                if ("Half" in x.type()):  # torch.HalfTensor/torch.cuda.HalfTensor
                    tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half()
                else:
                    tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True)
                se_mean0 += tmp_se_mean
                n_patch += 1
                tmp_dict[i][j] = (tmp0, x_crop)
        se_mean0 /= n_patch
        se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device)  # 64#128#128#64
        if ("Half" in x.type()):
            se_mean1 = se_mean1.half()
        for i in range(0, h - 28, crop_size[0]):
            for j in range(0, w - 28, crop_size[1]):
                tmp0, x_crop = tmp_dict[i][j]
                x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0)
                opt_unet1 = self.unet1.forward_b(tmp0, x_crop)
                tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1)
                if ("Half" in x.type()):  # torch.HalfTensor/torch.cuda.HalfTensor
                    tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half()
                else:
                    tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True)
                se_mean1 += tmp_se_mean
                tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2)
        se_mean1 /= n_patch
        se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device)  # 64#128#128#64
        if ("Half" in x.type()):
            se_mean0 = se_mean0.half()
        for i in range(0, h - 28, crop_size[0]):
            for j in range(0, w - 28, crop_size[1]):
                opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j]
                tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1)
                tmp_x3 = self.unet2.forward_b(tmp_x2)
                if ("Half" in x.type()):  # torch.HalfTensor/torch.cuda.HalfTensor
                    tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half()
                else:
                    tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True)
                se_mean0 += tmp_se_mean
                tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3)
        se_mean0 /= n_patch
        se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device)  # 64#128#128#64
        if ("Half" in x.type()):
            se_mean1 = se_mean1.half()
        for i in range(0, h - 28, crop_size[0]):
            for j in range(0, w - 28, crop_size[1]):
                opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j]
                tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0)
                tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3)
                if ("Half" in x.type()):  # torch.HalfTensor/torch.cuda.HalfTensor
                    tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half()
                else:
                    tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True)
                se_mean1 += tmp_se_mean
                tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4)
        se_mean1 /= n_patch
        for i in range(0, h - 28, crop_size[0]):
            opt_res_dict[i] = {}
            for j in range(0, w - 28, crop_size[1]):
                opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j]
                tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1)
                x0 = self.unet2.forward_d(tmp_x1, tmp_x4)
                x1 = F.pad(opt_unet1, (-20, -20, -20, -20))
                x_crop = torch.add(x0, x1)  # x0 is the final output of unet2
                opt_res_dict[i][j] = x_crop
        del tmp_dict
        torch.cuda.empty_cache()
        res = torch.zeros((n, c, h * 3 - 84, w * 3 - 84)).to(x.device)
        if ("Half" in x.type()):
            res = res.half()
        for i in range(0, h - 28, crop_size[0]):
            for j in range(0, w - 28, crop_size[1]):
                res[:, :, i * 3:i * 3 + h1 * 3 - 84, j * 3:j * 3 + w1 * 3 - 84] = opt_res_dict[i][j]
        del opt_res_dict
        torch.cuda.empty_cache()
        if (w0 != pw or h0 != ph):
            res = res[:, :, :h0 * 3, :w0 * 3]
        return res

class UpCunet4x(nn.Module):  # seamless tiling, lossless end to end
    def __init__(self, in_channels=3, out_channels=3):
        super(UpCunet4x, self).__init__()
        self.unet1 = UNet1(in_channels, 64, deconv=True)
        self.unet2 = UNet2(64, 64, deconv=False)
        self.ps = nn.PixelShuffle(2)
        self.conv_final = nn.Conv2d(64, 12, 3, 1, padding=0, bias=True)

    def forward(self, x, tile_mode):
        n, c, h0, w0 = x.shape
        x00 = x
        if (tile_mode == 0):  # no tiling
            ph = ((h0 - 1) // 2 + 1) * 2
            pw = ((w0 - 1) // 2 + 1) * 2
            x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect')  # sizes must be divisible by 2
            x = self.unet1.forward(x)
            x0 = self.unet2.forward(x)
            x1 = F.pad(x, (-20, -20, -20, -20))
            x = torch.add(x0, x1)
            x = self.conv_final(x)
            x = F.pad(x, (-1, -1, -1, -1))
            x = self.ps(x)
            if (w0 != pw or h0 != ph):
                x = x[:, :, :h0 * 4, :w0 * 4]
            x += F.interpolate(x00, scale_factor=4, mode='nearest')
            return x
        elif (tile_mode == 1):  # halve the longer edge
            if (w0 >= h0):
                crop_size_w = ((w0 - 1) // 4 * 4 + 4) // 2  # must be divisible by 2 after halving, hence by 4 first
                crop_size_h = (h0 - 1) // 2 * 2 + 2  # divisible by 2
            else:
                crop_size_h = ((h0 - 1) // 4 * 4 + 4) // 2  # must be divisible by 2 after halving, hence by 4 first
                crop_size_w = (w0 - 1) // 2 * 2 + 2  # divisible by 2
            crop_size = (crop_size_h, crop_size_w)  # 6.6G
        elif (tile_mode == 2):  # halve both h and w
            crop_size = (((h0 - 1) // 4 * 4 + 4) // 2, ((w0 - 1) // 4 * 4 + 4) // 2)  # 5.6G
        elif (tile_mode == 3):  # one third of h and w
            crop_size = (((h0 - 1) // 6 * 6 + 6) // 3, ((w0 - 1) // 6 * 6 + 6) // 3)  # 4.1G
        elif (tile_mode == 4):  # one quarter of h and w
            crop_size = (((h0 - 1) // 8 * 8 + 8) // 4, ((w0 - 1) // 8 * 8 + 8) // 4)  # 3.7G
        ph = ((h0 - 1) // crop_size[0] + 1) * crop_size[0]
        pw = ((w0 - 1) // crop_size[1] + 1) * crop_size[1]
        x = F.pad(x, (19, 19 + pw - w0, 19, 19 + ph - h0), 'reflect')
        n, c, h, w = x.shape
        se_mean0 = torch.zeros((n, 64, 1, 1)).to(x.device)
        if ("Half" in x.type()):
            se_mean0 = se_mean0.half()
        n_patch = 0
        tmp_dict = {}
        opt_res_dict = {}
        for i in range(0, h - 38, crop_size[0]):
            tmp_dict[i] = {}
            for j in range(0, w - 38, crop_size[1]):
                x_crop = x[:, :, i:i + crop_size[0] + 38, j:j + crop_size[1] + 38]
                n, c1, h1, w1 = x_crop.shape
                tmp0, x_crop = self.unet1.forward_a(x_crop)
                if ("Half" in x.type()):  # torch.HalfTensor/torch.cuda.HalfTensor
                    tmp_se_mean = torch.mean(x_crop.float(), dim=(2, 3), keepdim=True).half()
                else:
                    tmp_se_mean = torch.mean(x_crop, dim=(2, 3), keepdim=True)
                se_mean0 += tmp_se_mean
                n_patch += 1
                tmp_dict[i][j] = (tmp0, x_crop)
        se_mean0 /= n_patch
        se_mean1 = torch.zeros((n, 128, 1, 1)).to(x.device)  # 64#128#128#64
        if ("Half" in x.type()):
            se_mean1 = se_mean1.half()
        for i in range(0, h - 38, crop_size[0]):
            for j in range(0, w - 38, crop_size[1]):
                tmp0, x_crop = tmp_dict[i][j]
                x_crop = self.unet1.conv2.seblock.forward_mean(x_crop, se_mean0)
                opt_unet1 = self.unet1.forward_b(tmp0, x_crop)
                tmp_x1, tmp_x2 = self.unet2.forward_a(opt_unet1)
                if ("Half" in x.type()):  # torch.HalfTensor/torch.cuda.HalfTensor
                    tmp_se_mean = torch.mean(tmp_x2.float(), dim=(2, 3), keepdim=True).half()
                else:
                    tmp_se_mean = torch.mean(tmp_x2, dim=(2, 3), keepdim=True)
                se_mean1 += tmp_se_mean
                tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2)
        se_mean1 /= n_patch
        se_mean0 = torch.zeros((n, 128, 1, 1)).to(x.device)  # 64#128#128#64
        if ("Half" in x.type()):
            se_mean0 = se_mean0.half()
        for i in range(0, h - 38, crop_size[0]):
            for j in range(0, w - 38, crop_size[1]):
                opt_unet1, tmp_x1, tmp_x2 = tmp_dict[i][j]
                tmp_x2 = self.unet2.conv2.seblock.forward_mean(tmp_x2, se_mean1)
                tmp_x3 = self.unet2.forward_b(tmp_x2)
                if ("Half" in x.type()):  # torch.HalfTensor/torch.cuda.HalfTensor
                    tmp_se_mean = torch.mean(tmp_x3.float(), dim=(2, 3), keepdim=True).half()
                else:
                    tmp_se_mean = torch.mean(tmp_x3, dim=(2, 3), keepdim=True)
                se_mean0 += tmp_se_mean
                tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x2, tmp_x3)
        se_mean0 /= n_patch
        se_mean1 = torch.zeros((n, 64, 1, 1)).to(x.device)  # 64#128#128#64
        if ("Half" in x.type()):
            se_mean1 = se_mean1.half()
        for i in range(0, h - 38, crop_size[0]):
            for j in range(0, w - 38, crop_size[1]):
                opt_unet1, tmp_x1, tmp_x2, tmp_x3 = tmp_dict[i][j]
                tmp_x3 = self.unet2.conv3.seblock.forward_mean(tmp_x3, se_mean0)
                tmp_x4 = self.unet2.forward_c(tmp_x2, tmp_x3)
                if ("Half" in x.type()):  # torch.HalfTensor/torch.cuda.HalfTensor
                    tmp_se_mean = torch.mean(tmp_x4.float(), dim=(2, 3), keepdim=True).half()
                else:
                    tmp_se_mean = torch.mean(tmp_x4, dim=(2, 3), keepdim=True)
                se_mean1 += tmp_se_mean
                tmp_dict[i][j] = (opt_unet1, tmp_x1, tmp_x4)
        se_mean1 /= n_patch
        for i in range(0, h - 38, crop_size[0]):
            opt_res_dict[i] = {}
            for j in range(0, w - 38, crop_size[1]):
                opt_unet1, tmp_x1, tmp_x4 = tmp_dict[i][j]
                tmp_x4 = self.unet2.conv4.seblock.forward_mean(tmp_x4, se_mean1)
                x0 = self.unet2.forward_d(tmp_x1, tmp_x4)
                x1 = F.pad(opt_unet1, (-20, -20, -20, -20))
                x_crop = torch.add(x0, x1)  # x0 is the final output of unet2
                x_crop = self.conv_final(x_crop)
                x_crop = F.pad(x_crop, (-1, -1, -1, -1))
                x_crop = self.ps(x_crop)
                opt_res_dict[i][j] = x_crop
        del tmp_dict
        torch.cuda.empty_cache()
        res = torch.zeros((n, c, h * 4 - 152, w * 4 - 152)).to(x.device)
        if ("Half" in x.type()):
            res = res.half()
        for i in range(0, h - 38, crop_size[0]):
            for j in range(0, w - 38, crop_size[1]):
                # print(opt_res_dict[i][j].shape, res[:, :, i * 4:i * 4 + h1 * 4 - 144, j * 4:j * 4 + w1 * 4 - 144].shape)
                res[:, :, i * 4:i * 4 + h1 * 4 - 152, j * 4:j * 4 + w1 * 4 - 152] = opt_res_dict[i][j]
        del opt_res_dict
        torch.cuda.empty_cache()
        if (w0 != pw or h0 != ph):
            res = res[:, :, :h0 * 4, :w0 * 4]
        res += F.interpolate(x00, scale_factor=4, mode='nearest')
        return res

class RealWaifuUpScaler(object):
    def __init__(self, scale, weight_path, half, device):
        weight = torch.load(weight_path, map_location="cpu")
        self.model = eval("UpCunet%sx" % scale)()
        if (half == True):
            self.model = self.model.half().to(device)
        else:
            self.model = self.model.to(device)
        self.model.load_state_dict(weight, strict=True)
        self.model.eval()
        self.half = half
        self.device = device

    def np2tensor(self, np_frame):
        if (self.half == False):
            return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).float() / 255
        else:
            return torch.from_numpy(np.transpose(np_frame, (2, 0, 1))).unsqueeze(0).to(self.device).half() / 255

    def tensor2np(self, tensor):
        if (self.half == False):
            return (
                np.transpose((tensor.data.squeeze() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(), (1, 2, 0)))
        else:
            return (np.transpose((tensor.data.squeeze().float() * 255.0).round().clamp_(0, 255).byte().cpu().numpy(),
                                 (1, 2, 0)))

    def __call__(self, frame, tile_mode):
        with torch.no_grad():
            tensor = self.np2tensor(frame)
            result = self.tensor2np(self.model(tensor, tile_mode))
        return result

if __name__ == "__main__":
    ###########inference_img
    import time, cv2, sys
    from time import time as ttime

    for weight_path, scale in [("weights_v3/up2x-latest-denoise3x.pth", 2), ("weights_v3/up3x-latest-denoise3x.pth", 3),
                               ("weights_v3/up4x-latest-denoise3x.pth", 4)]:
        for tile_mode in [0, 1, 2, 3, 4]:
            upscaler2x = RealWaifuUpScaler(scale, weight_path, half=True, device="cuda:0")
            input_dir = "%s/input_dir1" % root_path
            output_dir = "%s/opt-dir-all-test" % root_path
            os.makedirs(output_dir, exist_ok=True)
            for name in os.listdir(input_dir):
                print(name)
                tmp = name.split(".")
                inp_path = os.path.join(input_dir, name)
                suffix = tmp[-1]
                prefix = ".".join(tmp[:-1])
                tmp_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix))
                print(inp_path, tmp_path)
                # supports non-ASCII (e.g. Chinese) paths
                # os.link(inp_path, tmp_path)  # use a hard link on Windows
                os.symlink(inp_path, tmp_path)  # use a symlink on Linux
                frame = cv2.imread(tmp_path)[:, :, [2, 1, 0]]
                t0 = ttime()
                result = upscaler2x(frame, tile_mode=tile_mode)[:, :, ::-1]
                t1 = ttime()
                print(prefix, "done", t1 - t0)
                tmp_opt_path = os.path.join(root_path, "tmp", "%s.%s" % (int(time.time() * 1000000), suffix))
                cv2.imwrite(tmp_opt_path, result)
                n = 0
                while (1):
                    if (n == 0):
                        suffix = "_%sx_tile%s.png" % (scale, tile_mode)
                    else:
                        suffix = "_%sx_tile%s_%s.png" % (scale, tile_mode, n)
                    if (os.path.exists(os.path.join(output_dir, prefix + suffix)) == False):
                        break
                    else:
                        n += 1
                final_opt_path = os.path.join(output_dir, prefix + suffix)
                os.rename(tmp_opt_path, final_opt_path)
                os.remove(tmp_path)
spaces/Benson/text-generation/Examples/Aficionado A Las Descargas.md
DELETED
@@ -1,101 +0,0 @@
<h1>Download Buff: The Ultimate Gamer Reward Program</h1>
<p>If you are a gamer who loves playing popular titles such as Valorant, LOL, Fortnite, Apex Legends, COD, PUBG, Rocket League and more, you may be wondering how to get the most out of your gaming experience. How about being rewarded for playing the games you love? Sounds too good to be true, right? Well, not anymore. Meet <strong>Buff</strong>, the gamer's ideal reward program, where you play to earn IRL items.</p>
<h2>download enthusiast</h2><br /><p><b><b>Download</b> >> <a href="https://bltlly.com/2v6L4E">https://bltlly.com/2v6L4E</a></b></p><br /><br />
<p>Buff is an app that runs in the background while you play your favorite games. It tracks your gameplay and achievements and rewards you with <strong>buffs</strong>, points you can redeem for real-life items such as gift cards, gaming gear, Steam keys and many more. The better you play, the more buffs you earn. It is that simple.</p>
<p>But which games can you play with Buff? And which items can you redeem with your buffs? Let's find out.</p>
<h2>Buff features: Why you should download Buff today</h2>
<p>Buff is not just a reward program. It is also a platform that enhances your gaming experience with some amazing features. Here are some of them:</p>
<ul>
<li><strong>Highlights:</strong> Aced the enemy team? Don't worry, we've got it. Buff detects your in-game highlights and captures them for you. You can then share them with your friends or on social media. You can also watch other players' most epic plays on <a href="( 1 )">Buff TV</a>.</li>
<li><strong>Buff stats:</strong> Want to know how well you are doing in your games? Buff gives you detailed statistics and insights about your performance and achievements. You can also compare yourself with other players and see how you rank on the leaderboards.</li>

</ul>
<h2>Buff benefits: How to level up everywhere with Buff</h2>
<p>As mentioned above, the main benefit of using Buff is that you can earn buffs while you play and redeem them for IRL items. But that is not all. There are other ways to level up everywhere with Buff. Here are some of them:</p>
<ul>
<li><strong>Buff challenges:</strong> If you want to earn more buffs, you can complete fun challenges while you play. These are tasks that test your skills and knowledge of the game. For example, you may have to get a certain number of kills or assists, win a match, or answer a trivia question. The more challenges you complete, the more buffs you get.</li>
<li><strong>Buff Pass:</strong> If you want to unlock exclusive rewards and perks, you can buy a <a href="( 5 )">Buff Pass</a>. A Buff Pass is a subscription that gives you access to premium features and rewards, such as double buffs, exclusive items, special challenges and more. You can choose between different plans and durations to suit your budget and preferences.</li>
<li><strong>Buff raffles:</strong> If you are feeling lucky, you can enter the Buff raffles and win amazing prizes. These are weekly or monthly draws that give you the chance to win big-ticket items such as laptops, consoles, headsets, keyboards, mice and more. All you need to do is buy raffle tickets with your buffs and hope for the best.</li>
</ul>
<h2>Buff reviews: What gamers say about Buff</h2>
<p>Still not convinced that Buff is the best reward program for gamers? Don't take our word for it. Hear what other gamers have to say about Buff. Here are some of the positive reviews Buff has received from satisfied users:</p>
<blockquote>

</blockquote>
<blockquote>
<p>"I have tried other apps that claim to give you rewards for playing, but none compares to Buff. Buff has the best selection of games, items and features. The challenges are fun and the raffles exciting. I have won a gaming mouse and a headset from the raffles so far. Buff is legit!" - Lisa M., YouTube</p>
</blockquote>
<blockquote>
<p>"Buff is the ultimate gamer reward program. It has everything you need to enhance your gaming experience and earn IRL items. The app is easy to use, safe and reliable. The support team is also very helpful and responsive. If you are a gamer, you need to download Buff today!" - Mark S., Gaming Reviews</p>
</blockquote>
<p>But don't just take their word for it either. Try it yourself and see the difference. To give you an idea of how Buff compares with other similar apps, here is a table showing some key features and benefits of Buff and its competitors:</p>
<table>
<tr>
<th>App</th>
<th>Supported games</th>
<th>Reward options</th>
<th>Additional features</th>
<th>Pros</th>
<th>Cons</th>
</tr>
<tr>
<td><strong>Buff</strong></td>
<td>Valorant, LOL, Fortnite, Apex Legends, COD, PUBG, Rocket League, and more.</td>
<td>Gift cards, gaming gear, Steam keys, raffle tickets, etc.</td>
<td>Highlights, stats, challenges, pass, raffles, etc.</td>
<td>- Wide range of games and items<br>- Easy to use and safe<br>- Fun and engaging features<br>- Premium rewards and perks<br>- Positive reviews and ratings</td>
<td>- Requires the Overwolf app<br>- The Buff Pass costs money<br>- Raffle tickets are limited</td>
</tr>
<tr>
<td><strong>Mistplay</strong></td>
<td>Various mobile games across different genres and categories.</td>
<td>Gift cards, prepaid Visa cards, Steam credits, etc.</td>
<td>Chat with other players, join gaming communities, earn badges and achievements, etc.</td>

<td>- Only available for Android devices<br>- Rewards take time to process<br>- Points vary depending on the game and region</td>
</tr>
<tr>
<td><strong>Luckmon</strong></td>
<td>Popular mobile games such as Roblox, Among Us, Candy Crush, etc.</td>
<td>Gift cards, gaming gear, raffle tickets, etc.</td>
<td>Highlights, quests, wheel, raffles, etc.</td>
<td>- Easy-to-use and safe app<br>- Fun and engaging features<br>- Exclusive in-game items<br>- Positive reviews and ratings</td>
<td>- Requires the Overwolf app<br>- Limited raffle tickets<br>- Rewards may run out</td>
</tr>
<tr>
<td><strong>PlaySpot</strong></td>
<td>Casual mobile games such as Solitaire, Bingo, Slots, etc.</td>
<td>PayPal cash, Amazon gift cards, etc.</td>
<td>Invite friends, watch videos, complete offers, etc.</td>
<td>- Simple and easy-to-use app<br>- Variety of games and rewards<br>- Multiple ways to earn points<br>- Low payout threshold</td>
<td>- Only available for Android devices<br>- Rewards take time to process<br>- Points vary depending on the game and region</td>
</tr>
</table>
<h2>Buff alternatives: Are there better options?</h2>
<p>As you can see from the table above, Buff is not the only app that offers rewards for gaming. There are other apps you can try if you want to earn more points or play different games. But are they better than Buff? Let's take a brief look at some of them:</p>
<ul>
<li><strong>Mistplay:</strong> This is one of the most popular and trusted apps that pay you to play. You can choose from various mobile games across different genres and categories. You can also chat with other players, join gaming communities, earn badges and achievements, and more. You can redeem your points for gift cards, prepaid Visa cards, Steam credits and more. However, Mistplay is only available for Android devices. It also takes time to process your rewards. In addition, the points you earn may vary depending on the game and your region.</li>

<li><strong>PlaySpot:</strong> This is a simple and easy-to-use app that pays you to play. You can play casual mobile games such as Solitaire, Bingo, Slots and more. You can also invite friends, watch videos, complete offers and more. You can redeem your points for PayPal cash, Amazon gift cards and more. However, PlaySpot is only available for Android devices. It also takes time to process your rewards. In addition, the points you earn may vary depending on the game and your region.</li>
</ul>
<h2>Conclusion: Is Buff worth downloading?</h2>
<p>In conclusion, downloading Buff is worth it if you are a gamer who wants to be rewarded for playing the games you love. Buff is the ultimate gamer reward program, offering a wide range of games, items and features. You can earn buffs while you play and redeem them for real-life items such as gift cards, gaming gear, Steam keys and more. You can also capture and share your gameplay highlights, track your progress and achievements, complete fun challenges, unlock exclusive rewards and perks, and enter for a chance to win amazing prizes. Buff is easy to use, safe and reliable. It has positive reviews and ratings from satisfied users. It is also compatible with most devices and platforms.</p>
<p>If you want to level up everywhere with Buff, all you need to do is download the app for free from the <a href="">official website</a> or the <a href="">Google Play Store</a>. You also need to install the Overwolf app, which is the best in-game creation platform. Then you can start playing your favorite games and earning buffs. It is that simple.</p>
<p>So what are you waiting for? Download Buff today and start getting rewarded for gaming!</p>
<h2>Frequently asked questions</h2>
<p>Here are some of the most frequently asked questions about downloading Buff:</p>
<ol>
<li><strong>What is Buff?</strong><br>

<li><strong>How does Buff work?</strong><br>
Buff works by running in the background while you play your favorite games. It tracks your gameplay and achievements and rewards you with buffs, points you can redeem for IRL items. The better you play, the more buffs you earn.</li>
<li><strong>Which games can I play with Buff?</strong><br>
You can play popular titles such as Valorant, LOL, Fortnite, Apex Legends, COD, PUBG, Rocket League and more with Buff. You can also choose from different genres and categories of games.</li>
<li><strong>Which items can I redeem with Buff?</strong><br>
You can redeem your buffs for a variety of items such as gift cards, gaming gear, Steam keys, raffle tickets and more. You can also unlock exclusive rewards and perks with a Buff Pass.</li>
<li><strong>Is Buff safe to use?</strong><br>
Yes, Buff is safe to use. It is an app authorized by Overwolf, the world's best in-game creation platform. It is malware-free, ban-safe, and does not affect your game's performance or FPS. You won't even feel the Buff app running in the background.</li>
</ol>
spaces/Benson/text-generation/Examples/Descargar Cambio De Forma Mod.md
DELETED
@@ -1,82 +0,0 @@
<h1>Download Shape Shifting Mod: How to Become Any Mob in Minecraft</h1>
<p>Have you ever wondered what it would be like to be a zombie, a skeleton, a spider or any other mob in Minecraft? Have you ever wanted to use their abilities, such as flying, climbing, exploding or shooting arrows? If you answered yes to any of these questions, then you might be interested in downloading and installing the shape shifting mod for Minecraft.</p>
<h2>Introduction</h2>
<h3>What is the shape shifting mod?</h3>
<p>The shape shifting mod is a Minecraft mod that lets you change your player model by pressing the 'U' key in game and selecting a mob. You also get some benefits and drawbacks of the chosen mob, such as speed, health, damage and special abilities. For example, if you morph into a creeper, you can explode and damage nearby blocks and entities, but you will also have low health and be vulnerable to fire. The shape shifting mod is based on the Changeling mod by asanetargoss, which is a fork of the Metamorph mod by McHorse. The shape shifting mod is compatible with most vanilla mobs and some modded mobs as well.</p>
<h2>download shape shifting mod</h2><br /><p><b><b>DOWNLOAD</b> ○○○ <a href="https://bltlly.com/2v6Knq">https://bltlly.com/2v6Knq</a></b></p><br /><br />
<h3>Why would you want to use the shape shifting mod?</h3>
<p>There are many reasons why you might want to use the shape shifting mod in your Minecraft world. Here are some of them:</p>
<ul>
<li>You can have fun exploring the world from different perspectives and using different abilities.</li>
<li>You can play as different characters or creatures.</li>
<li>You can get an edge in combat or survival situations by using the right mob for the right occasion.</li>
<li>You can prank your friends or enemies by disguising yourself as a hostile or friendly mob.</li>
<li>You can challenge yourself by playing with different limitations or difficulties.</li>
</ul>
<h2>How to download and install the shape shifting mod</h2>
<h3>Requirements</h3>
<p>To download and install the shape shifting mod, you will need the following:</p>
<ul>

<li>A compatible version of Minecraft (currently 1.16.5).</li>
<li>A compatible version of Forge (currently 36.2.0).</li>
<li>The shape shifting mod file (currently 1.0.0).</li>
<li>The DominionLib file (currently 1.0.4).</li>
</ul>
<h3>Steps</h3>
<p>Here are the steps to download and install the shape shifting mod (a scripted variant of step 2 is sketched after this list):</p>
<ol>
<li>Download Forge from [here] and run the installer. Make sure to select "Install client" and choose the correct version of Minecraft.</li>
<li>Download the shape shifting mod from [here]( 1 ) and DominionLib from [here] and save them to your mods folder (usually located at C:\Users\YourName\AppData\Roaming\.minecraft\mods).</li>
<li>Launch Minecraft with the Forge profile and enjoy!</li>
</ol>
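<p>As promised above, here is a small sketch that scripts the file placement from step 2. This is a hypothetical helper, not part of the mod: the jar file names and the Downloads location are assumptions, and the mods path shown is the Windows default.</p>
<pre><code># Hypothetical sketch: stage the downloaded mod files into the Minecraft mods folder.
import os
import shutil

downloads = os.path.expanduser("~/Downloads")
mods_dir = os.path.expandvars(r"%APPDATA%\.minecraft\mods")  # default location on Windows
os.makedirs(mods_dir, exist_ok=True)
for name in ["shapeshiftingmod-1.0.0.jar", "dominionlib-1.0.4.jar"]:  # assumed file names
    shutil.copy(os.path.join(downloads, name), mods_dir)
</code></pre>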
<h2>How to use the shape shifting mod</h2>
<h3>Controls</h3>
<p>To use the shape shifting mod, you will need to know the following controls:</p>
<ul>
<li>Press 'U' to open the morph menu, where you can select a mob to morph into. You can also use the mouse wheel to scroll through your available morphs.</li>
<li>Press '[' and ']' to switch between your favorite morphs. You can add or remove morphs from your favorites by pressing 'P' while in the morph menu.</li>
<li>Press 'R' to return to your original form.</li>
<li>Press 'H' to toggle the morph HUD, which shows your current morph and its abilities.</li>
</ul>
<h3>Abilities</h3>
<p>Depending on the mob you morph into, you get some abilities unique to that mob. Here are some examples:</p>
<ul>
<li>Fly: Lets you fly through the air by double-tapping the space bar. Examples of mobs with this ability are bats, bees and ghasts.</li>
<li>Climb: Lets you climb walls and ceilings by holding the shift key. Examples of mobs with this ability are spiders, silverfish and endermites.</li>

<li>Fire immunity: Makes you immune to fire and lava damage. Examples of mobs with this ability are blazes, magma cubes and striders.</li>
<li>Swim: Lets you swim faster and breathe underwater. Examples of mobs with this ability are dolphins, squid and turtles.</li>
</ul>
<h3>Advantages and disadvantages</h3>
<p>While using the shape shifting mod can be fun and useful, it also comes with some trade-offs. Here are some advantages and disadvantages of using the shape shifting mod:</p>
<table>
<tr><th>Advantages</th><th>Disadvantages</th></tr>
<tr><td>You can access areas that are normally inaccessible or hard to reach by using the right mob.</td><td>You may lose some of your inventory space or equipment slots by morphing into a smaller or differently shaped mob.</td></tr>
<tr><td>You can blend in with other mobs and avoid detection or aggression from hostile or neutral mobs.</td><td>You may also attract unwanted attention or attacks from friendly or passive mobs.</td></tr>
<tr><td>You can deal more damage or have more health by morphing into a stronger or bigger mob.</td><td>You may also take more damage or have less health by morphing into a weaker or smaller mob.</td></tr>
<tr><td>You can have more fun and variety by trying different mobs and their abilities.</td><td>You may also run into more bugs or glitches by using a mod that alters the game mechanics.</td></tr>
</table>
<h2>Examples of the shape shifting mod in action</h2>
<h3>Becoming a creeper and blowing things up</h3>

<h3>Becoming a bat and flying</h3>
<p>Another fun mob to morph into is the bat, one of the few mobs that can fly in Minecraft. By morphing into a bat, you can explore the world from above, reach high places or escape from danger. You can also fit into small spaces and hide in dark corners. To become a bat, you need to kill one and then select it from the morph menu. To fly, double-tap the space bar and then use the normal movement keys. You can also use the mouse wheel to adjust your altitude.</p>
<h3>Becoming a spider and climbing walls</h3>
<p>A third example of a cool mob to morph into is the spider, known for its ability to climb walls and ceilings. By morphing into a spider, you can access areas that are normally unreachable or hard to navigate. You can also surprise your enemies or friends by dropping down from above or crawling up from below. To become a spider, you need to kill one and then select it from the morph menu. To climb, hold the shift key while facing a wall or a ceiling. You can also use the space bar to jump.</p>
<h2>Conclusion</h2>
<h3>Summary</h3>
<p>In this article, we have covered how to download and install the shape shifting mod for Minecraft, how to use it, what abilities it gives you, what advantages and disadvantages it has, and some examples of it in action. The shape shifting mod is a fun and versatile mod that lets you become any mob in Minecraft and use its abilities, such as flying, climbing, exploding and more. It is a great way to add more fun and variety to your Minecraft experience.</p>
<h3>Frequently asked questions</h3>
<p>Here are some frequently asked questions about the shape shifting mod:</p>
<ul>
<li>Q: How do I get more morphs?<br>A: You can get more morphs by killing more mobs, either by yourself or with the help of other players or entities. You can also use commands or cheats to get morphs without killing mobs.</li>

<li>Q: How do I customize the mod settings?<br>A: You can customize the mod settings by pressing the 'O' key while in the morph menu. You can also edit the config file located at C:\Users\YourName\AppData\Roaming\.minecraft\config\shapeshiftingmod-common.toml.</li>
<li>Q: How do I update the mod?<br>A: You can update the mod by downloading the latest version from [here] and replacing the old file in your mods folder. You may also need to update Forge and DominionLib if they are not compatible with the new version.</li>
<li>Q: How do I report bugs or issues?<br>A: You can report bugs or issues by creating an issue on the mod's GitHub page [here]. You can also contact the mod author via Discord or Twitter.</li>
</ul>
spaces/Benson/text-generation/Examples/Descargar Ganador Eleven 2019 Apk.md
DELETED
@@ -1,74 +0,0 @@
<h1>Download Winning Eleven 2019 Apk: A Guide for Football Fans</h1>
<p>If you are a fan of football games, you may have heard of Winning Eleven 2019, also known as WE 19. It is a popular football simulation game developed by Konami, the same company that created the famous Pro Evolution Soccer (PES) series. Winning Eleven 2019 is an unofficial mod of PES 2019, which means it has some features and improvements that are not available in the original game. In this article, we will show you how to download and install the Winning Eleven 2019 apk on your Android device, along with some tips and tricks for playing it.</p>
<h2>download winning eleven 2019 apk</h2><br /><p><b><b>Download Zip</b> ····· <a href="https://bltlly.com/2v6LPm">https://bltlly.com/2v6LPm</a></b></p><br /><br />
<h2>Features of the Winning Eleven 2019 Apk</h2>
<p>The Winning Eleven 2019 apk has many features that make it one of the best football games for Android. Here are some of them:</p>
<ul>
<li><b>Realistic graphics and animations</b>: The game has stunning graphics that make the players, stadiums and crowds look lifelike. The animations are also smooth and realistic, showing every movement and gesture of the players.</li>
<li><b>Updated rosters and teams</b>: The game has updated rosters and teams that reflect the latest transfers and changes in the football world. You can play with your favorite players and teams from various leagues and competitions, such as Ronaldo, Messi, Neymar, Mbappe, Salah, Barcelona, Real Madrid, Liverpool, Juventus, PSG, Manchester City, Bayern Munich, etc.</li>
<li><b>Various game modes and challenges</b>: The game has different game modes and challenges to suit your preferences and skills. You can play in exhibition mode, league mode, cup mode, Master League mode or online mode. You can also customize your own tournaments and matches with various options and settings.</li>
</ul>
<h2>How to download and install the Winning Eleven 2019 Apk on Android</h2>
<p>To download and install the Winning Eleven 2019 apk on your Android device, you need to follow these steps (the extraction step can also be scripted; see the sketch after this list):</p>
<ol>

<li><b>Download the apk and data files from a trusted source</b>: You need to download two files to play the Winning Eleven 2019 apk: the apk file and the data file. The apk file is the application file that contains the game itself, while the data file contains the game data, such as graphics, sounds and rosters. You can download both files from a trusted source, such as <a href="">this website</a>. Make sure you have enough storage space on your device before downloading the files.</li>
<li><b>Extract the data file and copy it to the obb folder</b>: After downloading the files, you need to extract the data file using a file manager app, such as <a href="">ZArchiver</a>. You can download it from the Google Play Store for free. To extract the data file, open ZArchiver and locate the file named <code>WE19.zip</code>. Tap on it and select <code>Extract here</code>. You will get a folder named <code>jp.konami.we2019</code>. Copy this folder and paste it into the obb folder in your device's internal storage. The obb folder is usually located at <code>Android > obb</code>. If you do not have an obb folder, you can create one.</li>
<li><b>Install the apk file and launch the game</b>: After copying the data file to the obb folder, you can install the apk file by tapping on it and following the instructions. Once the installation is complete, you can launch the game from the app drawer or the home screen. Enjoy playing the Winning Eleven 2019 apk on your Android device!</li>
</ol>
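<p>As mentioned in the list above, the extraction step can also be scripted. The sketch below is hypothetical: it assumes WE19.zip is in the working directory of a Python environment that can see the device's internal storage, and the obb path shown is the usual Android location.</p>
<pre><code># Hypothetical sketch: extract WE19.zip and place the data folder under Android/obb.
import shutil
import zipfile

with zipfile.ZipFile("WE19.zip") as archive:
    archive.extractall("extracted")  # yields extracted/jp.konami.we2019
shutil.copytree("extracted/jp.konami.we2019",
                "/storage/emulated/0/Android/obb/jp.konami.we2019",  # device internal storage
                dirs_exist_ok=True)  # requires Python 3.8+
</code></pre>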
<h2>Tips and tricks for playing the Winning Eleven 2019 Apk</h2>
<p>The Winning Eleven 2019 apk is a fun and challenging game that requires skill and strategy. Here are some tips and tricks that can help you improve your game and win more matches:</p>
<ul>

<li><b>Master the skills and tactics</b>: The game has various skills and tactics you can use to outplay your opponents and score more goals. You can perform dribbles, passes, shots, headers, tackles, feints, etc. using different combinations of buttons and gestures. You can also change your formation, strategy and player roles before or during a match. To learn more about the skills and tactics, you can use the training mode and the tutorials available in the game.</li>
<li><b>Use the training mode and tutorials</b>: The game has a training mode and tutorials that can help you practice your skills and learn new ones. The training mode lets you play through different scenarios, such as free kicks, penalties, corners, etc. The tutorials explain how to perform various actions, such as passing, shooting, defending, etc. You can access the training mode and tutorials from the main menu.</li>
</ul>
<h2>Pros and cons of the Winning Eleven 2019 Apk</h2>
<p>The Winning Eleven 2019 apk is a great game for football fans, but it also has some pros and cons you should be aware of before downloading it. Here are some of them:</p>
<table>
<tr>
<th>Pros</th>
<th>Cons</th>
</tr>
<tr>
<td><ul>
<li><b>High-quality gameplay</b>: The game has realistic graphics, animations, physics and sounds that make it immersive and enjoyable.</li>
<li><b>Offline mode</b>: The game does not require an internet connection to play, which means you can play it anytime, anywhere.</li>
<li><b>Free download</b>: The game is free to download from third-party sources, which means you do not have to pay anything to play it.</li>
</ul></td>
<td><ul>
<li><b>Large file size</b>: The game has a large file size of about 1.5 GB, which means it takes up a lot of storage space on your device.</li>

<li><b>May not be compatible with some devices</b>: The game may not work properly on some devices due to hardware or software limitations.</li>
</ul></td>
</tr>
</table>
<h2>Conclusion</h2>
<p>The Winning Eleven 2019 apk is a game every football fan should try. It has amazing features, such as realistic graphics, updated rosters, various game modes and an offline mode. It is also free to download from third-party sources. However, it also has some drawbacks, such as the large file size, the data extraction step and compatibility issues. If you want to download and install the Winning Eleven 2019 apk on your Android device, you need to follow the steps provided in this article. You also need to master the skills and tactics to play the game well. We hope you enjoy playing the Winning Eleven 2019 apk and have fun!</p>
<h2>Frequently asked questions</h2>
<p>Here are some frequently asked questions about the Winning Eleven 2019 apk:</p>
<ol>
<li><b>What is the difference between Winning Eleven 2019 and PES 2019?</b><br>
Winning Eleven 2019 is an unofficial mod of PES 2019, which means it has some features and improvements that are not available in the original game. For example, Winning Eleven 2019 has more up-to-date rosters and teams, more realistic graphics and animations, and more game modes and challenges.</li>
<li><b>How do I update the game to the latest version?</b><br>
To update the game to the latest version, you need to download the new apk and data files from a trusted source and repeat the installation process. You should also delete the old data file from the obb folder before copying the new one.</li>
<li><b>How can I play online with other players?</b><br>

<li><b>How do I fix the game if it crashes or freezes?</b><br>
If the game crashes or freezes, you can try some of these solutions: <ul>
<li>Restart your device and launch the game again.</li>
<li>Clear the game's cache and data from your device settings.</li>
<li>Reinstall the game and copy the data file again.</li>
<li>Check your device's compatibility and performance.</li>
</ul></li>
<li><b>Where can I find more information and support for Winning Eleven 2019?</b><br>
You can find more information and support for Winning Eleven 2019 from these sources: <ul>
<li>Konami's official website: <a href="">https://www.konami.com/</a></li>
<li>The official PES Facebook page: <a href="">https://www.facebook.com/PES/</a></li>
<li>The official PES Twitter account: <a href="">https://twitter.com/officialpes</a></li>
<li>The official PES YouTube channel: <a href="">https://www.youtube.com/user/officialpes</a></li>
</ul></li>
</ol>
spaces/BernardoOlisan/vqganclip/taming-transformers/taming/data/base.py
DELETED
@@ -1,70 +0,0 @@
import bisect
import numpy as np
import albumentations
from PIL import Image
from torch.utils.data import Dataset, ConcatDataset


class ConcatDatasetWithIndex(ConcatDataset):
    """Modified from original pytorch code to return dataset idx"""
    def __getitem__(self, idx):
        if idx < 0:
            if -idx > len(self):
                raise ValueError("absolute value of index should not exceed dataset length")
            idx = len(self) + idx
        dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx)
        if dataset_idx == 0:
            sample_idx = idx
        else:
            sample_idx = idx - self.cumulative_sizes[dataset_idx - 1]
        return self.datasets[dataset_idx][sample_idx], dataset_idx


class ImagePaths(Dataset):
    def __init__(self, paths, size=None, random_crop=False, labels=None):
        self.size = size
        self.random_crop = random_crop

        self.labels = dict() if labels is None else labels
        self.labels["file_path_"] = paths
        self._length = len(paths)

        if self.size is not None and self.size > 0:
            self.rescaler = albumentations.SmallestMaxSize(max_size=self.size)
            if not self.random_crop:
                self.cropper = albumentations.CenterCrop(height=self.size, width=self.size)
            else:
                self.cropper = albumentations.RandomCrop(height=self.size, width=self.size)
            self.preprocessor = albumentations.Compose([self.rescaler, self.cropper])
        else:
            self.preprocessor = lambda **kwargs: kwargs

    def __len__(self):
        return self._length

    def preprocess_image(self, image_path):
        image = Image.open(image_path)
        if not image.mode == "RGB":
            image = image.convert("RGB")
        image = np.array(image).astype(np.uint8)
        image = self.preprocessor(image=image)["image"]
        image = (image / 127.5 - 1.0).astype(np.float32)
        return image

    def __getitem__(self, i):
        example = dict()
        example["image"] = self.preprocess_image(self.labels["file_path_"][i])
        for k in self.labels:
            example[k] = self.labels[k][i]
        return example


class NumpyPaths(ImagePaths):
    def preprocess_image(self, image_path):
        image = np.load(image_path).squeeze(0)  # 3 x 1024 x 1024
        image = np.transpose(image, (1, 2, 0))
        image = Image.fromarray(image, mode="RGB")
        image = np.array(image).astype(np.uint8)
        image = self.preprocessor(image=image)["image"]
        image = (image / 127.5 - 1.0).astype(np.float32)
        return image
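
A minimal sketch of how these dataset helpers are typically combined (hedged: the image paths below are placeholders, not files shipped with the repository):

# Hypothetical usage sketch for the classes above; the paths are placeholders.
from torch.utils.data import DataLoader

train = ImagePaths(["imgs/a.png", "imgs/b.png"], size=256, random_crop=True)
extra = ImagePaths(["imgs/c.png"], size=256, random_crop=False)
dataset = ConcatDatasetWithIndex([train, extra])
example, dataset_idx = dataset[2]  # example dict from `extra`, dataset_idx == 1
loader = DataLoader(dataset, batch_size=1, shuffle=True)  # batches (example, dataset_idx) pairs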
spaces/CVPR/LIVE/thrust/thrust/detail/tuple_algorithms.h
DELETED
@@ -1,111 +0,0 @@
/*
 *  Copyright 2008-2013 NVIDIA Corporation
 *
 *  Licensed under the Apache License, Version 2.0 (the "License");
 *  you may not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS,
 *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 */

#pragma once

#include <thrust/detail/config.h>
#include <thrust/detail/cpp11_required.h>

#if THRUST_CPP_DIALECT >= 2011

#include <thrust/detail/type_deduction.h>
#include <thrust/type_traits/integer_sequence.h>

#include <tuple>

namespace thrust
{

template <typename Tuple, std::size_t... Is>
auto tuple_subset(Tuple&& t, index_sequence<Is...>)
THRUST_DECLTYPE_RETURNS(std::make_tuple(std::get<Is>(THRUST_FWD(t))...));

namespace detail
{

template <typename Tuple, typename F, std::size_t... Is>
void tuple_for_each_impl(Tuple&& t, F&& f, index_sequence<Is...>)
{
  auto l = { (f(std::get<Is>(t)), 0)... };
  THRUST_UNUSED_VAR(l);
}

template <typename Tuple, typename F, std::size_t... Is>
auto tuple_transform_impl(Tuple&& t, F&& f, index_sequence<Is...>)
THRUST_DECLTYPE_RETURNS(std::make_tuple(f(std::get<Is>(t))...));

} // namespace detail

template <typename... Ts, typename F>
auto tuple_for_each(std::tuple<Ts...>& t, F&& f)
THRUST_DECLTYPE_RETURNS(
  detail::tuple_for_each_impl(
    t
  , THRUST_FWD(f)
  , make_index_sequence<sizeof...(Ts)>{}
  )
);
template <typename... Ts, typename F>
auto tuple_for_each(std::tuple<Ts...> const& t, F&& f)
THRUST_DECLTYPE_RETURNS(
  detail::tuple_for_each_impl(
    t
  , THRUST_FWD(f)
  , make_index_sequence<sizeof...(Ts)>{}
  )
);
template <typename... Ts, typename F>
auto tuple_for_each(std::tuple<Ts...>&& t, F&& f)
THRUST_DECLTYPE_RETURNS(
  detail::tuple_for_each_impl(
    std::move(t)
  , THRUST_FWD(f)
  , make_index_sequence<sizeof...(Ts)>{}
  )
);

template <typename... Ts, typename F>
auto tuple_transform(std::tuple<Ts...>& t, F&& f)
THRUST_DECLTYPE_RETURNS(
  detail::tuple_transform_impl(
    t
  , THRUST_FWD(f)
  , make_index_sequence<sizeof...(Ts)>{}
  )
);
template <typename... Ts, typename F>
auto tuple_transform(std::tuple<Ts...> const& t, F&& f)
THRUST_DECLTYPE_RETURNS(
  detail::tuple_transform_impl(
    t
  , THRUST_FWD(f)
  , make_index_sequence<sizeof...(Ts)>{}
  )
);
template <typename... Ts, typename F>
auto tuple_transform(std::tuple<Ts...>&& t, F&& f)
THRUST_DECLTYPE_RETURNS(
  detail::tuple_transform_impl(
    std::move(t)
  , THRUST_FWD(f)
  , make_index_sequence<sizeof...(Ts)>{}
  )
);

} // end namespace thrust

#endif // THRUST_CPP_DIALECT >= 2011
|
111 |
-
|
spaces/CVPR/LIVE/thrust/thrust/detail/type_traits/has_trivial_assign.h
DELETED
@@ -1,54 +0,0 @@
/*
 *  Copyright 2008-2013 NVIDIA Corporation
 *
 *  Licensed under the Apache License, Version 2.0 (the "License");
 *  you may not use this file except in compliance with the License.
 *  You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 *  Unless required by applicable law or agreed to in writing, software
 *  distributed under the License is distributed on an "AS IS" BASIS,
 *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 *  See the License for the specific language governing permissions and
 *  limitations under the License.
 */


/*! \file type_traits.h
 *  \brief Temporarily define some type traits
 *         until nvcc can compile tr1::type_traits.
 */

#pragma once

#include <thrust/detail/config.h>
#include <thrust/detail/type_traits.h>

namespace thrust
{

namespace detail
{

template<typename T> struct has_trivial_assign
  : public integral_constant<
      bool,
      (is_pod<T>::value && !is_const<T>::value)
#if THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_MSVC
      || __has_trivial_assign(T)
#elif THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_GCC
// only use the intrinsic for >= 4.3
#if (__GNUC__ >= 4) && (__GNUC_MINOR__ >= 3)
      || __has_trivial_assign(T)
#endif // GCC VERSION
#elif THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_CLANG
      || __has_trivial_assign(T)
#endif // THRUST_HOST_COMPILER
    >
{};

} // end detail

} // end thrust
spaces/CVPR/Leaderboard/README.md
DELETED
@@ -1,12 +0,0 @@
---
title: Leaderboard
emoji: 📈
colorFrom: indigo
colorTo: gray
sdk: gradio
sdk_version: 3.0.15
app_file: app.py
pinned: false
---

This demo is the TokenCut demo; the original is from https://huggingface.co/spaces/akhaliq/TokenCut. Thanks to Ahsen Khaliq for his nice contribution.
spaces/CVPR/WALT/mmdet/datasets/samplers/group_sampler.py
DELETED
@@ -1,148 +0,0 @@
from __future__ import division
import math

import numpy as np
import torch
from mmcv.runner import get_dist_info
from torch.utils.data import Sampler


class GroupSampler(Sampler):

    def __init__(self, dataset, samples_per_gpu=1):
        assert hasattr(dataset, 'flag')
        self.dataset = dataset
        self.samples_per_gpu = samples_per_gpu
        self.flag = dataset.flag.astype(np.int64)
        self.group_sizes = np.bincount(self.flag)
        self.num_samples = 0
        for i, size in enumerate(self.group_sizes):
            self.num_samples += int(np.ceil(
                size / self.samples_per_gpu)) * self.samples_per_gpu

    def __iter__(self):
        indices = []
        for i, size in enumerate(self.group_sizes):
            if size == 0:
                continue
            indice = np.where(self.flag == i)[0]
            assert len(indice) == size
            np.random.shuffle(indice)
            num_extra = int(np.ceil(size / self.samples_per_gpu)
                            ) * self.samples_per_gpu - len(indice)
            indice = np.concatenate(
                [indice, np.random.choice(indice, num_extra)])
            indices.append(indice)
        indices = np.concatenate(indices)
        indices = [
            indices[i * self.samples_per_gpu:(i + 1) * self.samples_per_gpu]
            for i in np.random.permutation(
                range(len(indices) // self.samples_per_gpu))
        ]
        indices = np.concatenate(indices)
        indices = indices.astype(np.int64).tolist()
        assert len(indices) == self.num_samples
        return iter(indices)

    def __len__(self):
        return self.num_samples


class DistributedGroupSampler(Sampler):
    """Sampler that restricts data loading to a subset of the dataset.

    It is especially useful in conjunction with
    :class:`torch.nn.parallel.DistributedDataParallel`. In such case, each
    process can pass a DistributedSampler instance as a DataLoader sampler,
    and load a subset of the original dataset that is exclusive to it.

    .. note::
        Dataset is assumed to be of constant size.

    Arguments:
        dataset: Dataset used for sampling.
        num_replicas (optional): Number of processes participating in
            distributed training.
        rank (optional): Rank of the current process within num_replicas.
        seed (int, optional): random seed used to shuffle the sampler if
            ``shuffle=True``. This number should be identical across all
            processes in the distributed group. Default: 0.
    """

    def __init__(self,
                 dataset,
                 samples_per_gpu=1,
                 num_replicas=None,
                 rank=None,
                 seed=0):
        _rank, _num_replicas = get_dist_info()
        if num_replicas is None:
            num_replicas = _num_replicas
        if rank is None:
            rank = _rank
        self.dataset = dataset
        self.samples_per_gpu = samples_per_gpu
        self.num_replicas = num_replicas
        self.rank = rank
        self.epoch = 0
        self.seed = seed if seed is not None else 0

        assert hasattr(self.dataset, 'flag')
        self.flag = self.dataset.flag
        self.group_sizes = np.bincount(self.flag)

        self.num_samples = 0
        for i, j in enumerate(self.group_sizes):
            self.num_samples += int(
                math.ceil(self.group_sizes[i] * 1.0 / self.samples_per_gpu /
                          self.num_replicas)) * self.samples_per_gpu
        self.total_size = self.num_samples * self.num_replicas

    def __iter__(self):
        # deterministically shuffle based on epoch
        g = torch.Generator()
        g.manual_seed(self.epoch + self.seed)

        indices = []
        for i, size in enumerate(self.group_sizes):
            if size > 0:
                indice = np.where(self.flag == i)[0]
                assert len(indice) == size
                # add .numpy() to avoid bug when selecting indice in parrots.
                # TODO: check whether torch.randperm() can be replaced by
                # numpy.random.permutation().
                indice = indice[list(
                    torch.randperm(int(size), generator=g).numpy())].tolist()
                extra = int(
                    math.ceil(
                        size * 1.0 / self.samples_per_gpu / self.num_replicas)
                ) * self.samples_per_gpu * self.num_replicas - len(indice)
                # pad indice
                tmp = indice.copy()
                for _ in range(extra // size):
                    indice.extend(tmp)
                indice.extend(tmp[:extra % size])
                indices.extend(indice)

        assert len(indices) == self.total_size

        indices = [
            indices[j] for i in list(
                torch.randperm(
                    len(indices) // self.samples_per_gpu, generator=g))
            for j in range(i * self.samples_per_gpu, (i + 1) *
                           self.samples_per_gpu)
        ]

        # subsample
        offset = self.num_samples * self.rank
        indices = indices[offset:offset + self.num_samples]
        assert len(indices) == self.num_samples

        return iter(indices)

    def __len__(self):
        return self.num_samples

    def set_epoch(self, epoch):
        self.epoch = epoch
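A sketch of how GroupSampler might be driven, assuming a toy dataset that exposes the `flag` array the sampler asserts on; the dataset class and sizes below are illustrative, not from the repository:

import numpy as np
from torch.utils.data import DataLoader, Dataset

class ToyDataset(Dataset):
    """Hypothetical dataset exposing the `flag` array GroupSampler requires."""
    def __init__(self, n=10):
        # two aspect-ratio groups, e.g. landscape (0) and portrait (1)
        self.flag = np.array([i % 2 for i in range(n)])
    def __len__(self):
        return len(self.flag)
    def __getitem__(self, idx):
        return idx

ds = ToyDataset()
sampler = GroupSampler(ds, samples_per_gpu=4)
loader = DataLoader(ds, batch_size=4, sampler=sampler)
for batch in loader:
    print(batch)  # each batch holds indices drawn from a single group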
spaces/Cloudy1225/stackoverflow-sentiment-analysis/sentiment_analyser.py
DELETED
@@ -1,84 +0,0 @@
import time
import openai
import random
from transformers import pipeline


class RandomAnalyser:
    def __init__(self):
        self.LABELS = ['negative', 'neutral', 'positive']

    def predict(self, X: list):
        return [random.choice(self.LABELS) for x in X]


class RoBERTaAnalyser:
    def __init__(self):
        self.analyser = pipeline(task="sentiment-analysis", model="Cloudy1225/stackoverflow-roberta-base-sentiment")

    def predict(self, X: list):
        sentiments = []
        for x in X:
            x = RoBERTaAnalyser.preprocess(x)
            prediction = self.analyser(x)
            sentiments.append(prediction[0]['label'])
        return sentiments

    @staticmethod
    def preprocess(text):
        """Preprocess text (username and link placeholders, remove line breaks)"""
        new_text = []
        for t in text.split(' '):
            t = '@user' if t.startswith('@') and len(t) > 1 else t
            t = 'http' if t.startswith('http') else t
            new_text.append(t)
        return ' '.join(new_text).strip()


class ChatGPTAnalyser:
    def __init__(self):
        # import os
        # os.environ["http_proxy"] = "http://127.0.0.1:10080"
        # os.environ["https_proxy"] = "http://127.0.0.1:10080"
        self.MODEL = "gpt-3.5-turbo"
        self.KEYs = [
            "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",
            "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",
            "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",
            "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",
            "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",
            "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"
        ]
        self.TASK_NAME = 'Sentiment Classification'
        self.TASK_DEFINITION = 'Given the sentence, assign a sentiment label from [negative, neutral, positive].'
        self.OUT_FORMAT = 'Return label only without any other text.'
        self.PROMPT_PREFIX = f"Please perform {self.TASK_NAME} task.{self.TASK_DEFINITION}{self.OUT_FORMAT}\nSentence:\n{{}}\nLabel:"

    def predict(self, X: list):
        sentiments = []
        for i in range(len(X)):
            prompt = self.PROMPT_PREFIX.format(X[i])
            messages = [{"role": "user", "content": prompt}]
            # openai.api_key = self.KEYs[i % len(self.KEYs)]
            openai.api_key = random.choice(self.KEYs)
            while True:
                try:
                    response = openai.ChatCompletion.create(
                        model=self.MODEL,
                        messages=messages,
                        temperature=0,
                        n=1,
                        stop=None
                    )
                    sentiment = response.choices[0].message.content
                    sentiments.append(sentiment.strip().lower())
                    break
                except openai.error.RateLimitError:
                    sleep_snds = 60
                    time.sleep(sleep_snds)
                    continue
                except openai.error.APIError:
                    sleep_snds = 60
                    time.sleep(sleep_snds)
                    continue
        return sentiments
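A usage sketch for the RoBERTa-based analyser above, assuming the transformers package and the named checkpoint are reachable; the sample sentences are made up:

analyser = RoBERTaAnalyser()
comments = [
    "This answer saved me hours, thank you!",
    "@someone why would anyone write code this terrible?",
]
print(analyser.predict(comments))  # e.g. ['positive', 'negative']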
spaces/CofAI/chat/client/css/typing.css
DELETED
@@ -1,15 +0,0 @@
.typing {
    position: absolute;
    top: -25px;
    left: 0;
    font-size: 14px;
    animation: show_popup 0.4s;
}

.typing-hiding {
    animation: hide_popup 0.4s;
}

.typing-hidden {
    display: none;
}
spaces/CorvaeOboro/gen_ability_icon/torch_utils/ops/upfirdn2d.py
DELETED
@@ -1,384 +0,0 @@
# Copyright (c) 2021, NVIDIA CORPORATION.  All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.

"""Custom PyTorch ops for efficient resampling of 2D images."""

import os
import warnings
import numpy as np
import torch
import traceback

from .. import custom_ops
from .. import misc
from . import conv2d_gradfix

#----------------------------------------------------------------------------

_inited = False
_plugin = None

def _init():
    global _inited, _plugin
    if not _inited:
        sources = ['upfirdn2d.cpp', 'upfirdn2d.cu']
        sources = [os.path.join(os.path.dirname(__file__), s) for s in sources]
        try:
            _plugin = custom_ops.get_plugin('upfirdn2d_plugin', sources=sources, extra_cuda_cflags=['--use_fast_math'])
        except:
            warnings.warn('Failed to build CUDA kernels for upfirdn2d. Falling back to slow reference implementation. Details:\n\n' + traceback.format_exc())
    return _plugin is not None

def _parse_scaling(scaling):
    if isinstance(scaling, int):
        scaling = [scaling, scaling]
    assert isinstance(scaling, (list, tuple))
    assert all(isinstance(x, int) for x in scaling)
    sx, sy = scaling
    assert sx >= 1 and sy >= 1
    return sx, sy

def _parse_padding(padding):
    if isinstance(padding, int):
        padding = [padding, padding]
    assert isinstance(padding, (list, tuple))
    assert all(isinstance(x, int) for x in padding)
    if len(padding) == 2:
        padx, pady = padding
        padding = [padx, padx, pady, pady]
    padx0, padx1, pady0, pady1 = padding
    return padx0, padx1, pady0, pady1

def _get_filter_size(f):
    if f is None:
        return 1, 1
    assert isinstance(f, torch.Tensor) and f.ndim in [1, 2]
    fw = f.shape[-1]
    fh = f.shape[0]
    with misc.suppress_tracer_warnings():
        fw = int(fw)
        fh = int(fh)
    misc.assert_shape(f, [fh, fw][:f.ndim])
    assert fw >= 1 and fh >= 1
    return fw, fh

#----------------------------------------------------------------------------

def setup_filter(f, device=torch.device('cpu'), normalize=True, flip_filter=False, gain=1, separable=None):
    r"""Convenience function to setup 2D FIR filter for `upfirdn2d()`.

    Args:
        f:           Torch tensor, numpy array, or python list of the shape
                     `[filter_height, filter_width]` (non-separable),
                     `[filter_taps]` (separable),
                     `[]` (impulse), or
                     `None` (identity).
        device:      Result device (default: cpu).
        normalize:   Normalize the filter so that it retains the magnitude
                     for constant input signal (DC)? (default: True).
        flip_filter: Flip the filter? (default: False).
        gain:        Overall scaling factor for signal magnitude (default: 1).
        separable:   Return a separable filter? (default: select automatically).

    Returns:
        Float32 tensor of the shape
        `[filter_height, filter_width]` (non-separable) or
        `[filter_taps]` (separable).
    """
    # Validate.
    if f is None:
        f = 1
    f = torch.as_tensor(f, dtype=torch.float32)
    assert f.ndim in [0, 1, 2]
    assert f.numel() > 0
    if f.ndim == 0:
        f = f[np.newaxis]

    # Separable?
    if separable is None:
        separable = (f.ndim == 1 and f.numel() >= 8)
    if f.ndim == 1 and not separable:
        f = f.ger(f)
    assert f.ndim == (1 if separable else 2)

    # Apply normalize, flip, gain, and device.
    if normalize:
        f /= f.sum()
    if flip_filter:
        f = f.flip(list(range(f.ndim)))
    f = f * (gain ** (f.ndim / 2))
    f = f.to(device=device)
    return f

#----------------------------------------------------------------------------

def upfirdn2d(x, f, up=1, down=1, padding=0, flip_filter=False, gain=1, impl='cuda'):
    r"""Pad, upsample, filter, and downsample a batch of 2D images.

    Performs the following sequence of operations for each channel:

    1. Upsample the image by inserting N-1 zeros after each pixel (`up`).

    2. Pad the image with the specified number of zeros on each side (`padding`).
       Negative padding corresponds to cropping the image.

    3. Convolve the image with the specified 2D FIR filter (`f`), shrinking it
       so that the footprint of all output pixels lies within the input image.

    4. Downsample the image by keeping every Nth pixel (`down`).

    This sequence of operations bears close resemblance to scipy.signal.upfirdn().
    The fused op is considerably more efficient than performing the same calculation
    using standard PyTorch ops. It supports gradients of arbitrary order.

    Args:
        x:           Float32/float64/float16 input tensor of the shape
                     `[batch_size, num_channels, in_height, in_width]`.
        f:           Float32 FIR filter of the shape
                     `[filter_height, filter_width]` (non-separable),
                     `[filter_taps]` (separable), or
                     `None` (identity).
        up:          Integer upsampling factor. Can be a single int or a list/tuple
                     `[x, y]` (default: 1).
        down:        Integer downsampling factor. Can be a single int or a list/tuple
                     `[x, y]` (default: 1).
        padding:     Padding with respect to the upsampled image. Can be a single number
                     or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
                     (default: 0).
        flip_filter: False = convolution, True = correlation (default: False).
        gain:        Overall scaling factor for signal magnitude (default: 1).
        impl:        Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`).

    Returns:
        Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
    """
    assert isinstance(x, torch.Tensor)
    assert impl in ['ref', 'cuda']
    if impl == 'cuda' and x.device.type == 'cuda' and _init():
        return _upfirdn2d_cuda(up=up, down=down, padding=padding, flip_filter=flip_filter, gain=gain).apply(x, f)
    return _upfirdn2d_ref(x, f, up=up, down=down, padding=padding, flip_filter=flip_filter, gain=gain)

#----------------------------------------------------------------------------

@misc.profiled_function
def _upfirdn2d_ref(x, f, up=1, down=1, padding=0, flip_filter=False, gain=1):
    """Slow reference implementation of `upfirdn2d()` using standard PyTorch ops.
    """
    # Validate arguments.
    assert isinstance(x, torch.Tensor) and x.ndim == 4
    if f is None:
        f = torch.ones([1, 1], dtype=torch.float32, device=x.device)
    assert isinstance(f, torch.Tensor) and f.ndim in [1, 2]
    assert f.dtype == torch.float32 and not f.requires_grad
    batch_size, num_channels, in_height, in_width = x.shape
    upx, upy = _parse_scaling(up)
    downx, downy = _parse_scaling(down)
    padx0, padx1, pady0, pady1 = _parse_padding(padding)

    # Upsample by inserting zeros.
    x = x.reshape([batch_size, num_channels, in_height, 1, in_width, 1])
    x = torch.nn.functional.pad(x, [0, upx - 1, 0, 0, 0, upy - 1])
    x = x.reshape([batch_size, num_channels, in_height * upy, in_width * upx])

    # Pad or crop.
    x = torch.nn.functional.pad(x, [max(padx0, 0), max(padx1, 0), max(pady0, 0), max(pady1, 0)])
    x = x[:, :, max(-pady0, 0) : x.shape[2] - max(-pady1, 0), max(-padx0, 0) : x.shape[3] - max(-padx1, 0)]

    # Setup filter.
    f = f * (gain ** (f.ndim / 2))
    f = f.to(x.dtype)
    if not flip_filter:
        f = f.flip(list(range(f.ndim)))

    # Convolve with the filter.
    f = f[np.newaxis, np.newaxis].repeat([num_channels, 1] + [1] * f.ndim)
    if f.ndim == 4:
        x = conv2d_gradfix.conv2d(input=x, weight=f, groups=num_channels)
    else:
        x = conv2d_gradfix.conv2d(input=x, weight=f.unsqueeze(2), groups=num_channels)
        x = conv2d_gradfix.conv2d(input=x, weight=f.unsqueeze(3), groups=num_channels)

    # Downsample by throwing away pixels.
    x = x[:, :, ::downy, ::downx]
    return x

#----------------------------------------------------------------------------

_upfirdn2d_cuda_cache = dict()

def _upfirdn2d_cuda(up=1, down=1, padding=0, flip_filter=False, gain=1):
    """Fast CUDA implementation of `upfirdn2d()` using custom ops.
    """
    # Parse arguments.
    upx, upy = _parse_scaling(up)
    downx, downy = _parse_scaling(down)
    padx0, padx1, pady0, pady1 = _parse_padding(padding)

    # Lookup from cache.
    key = (upx, upy, downx, downy, padx0, padx1, pady0, pady1, flip_filter, gain)
    if key in _upfirdn2d_cuda_cache:
        return _upfirdn2d_cuda_cache[key]

    # Forward op.
    class Upfirdn2dCuda(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, f): # pylint: disable=arguments-differ
            assert isinstance(x, torch.Tensor) and x.ndim == 4
            if f is None:
                f = torch.ones([1, 1], dtype=torch.float32, device=x.device)
            assert isinstance(f, torch.Tensor) and f.ndim in [1, 2]
            y = x
            if f.ndim == 2:
                y = _plugin.upfirdn2d(y, f, upx, upy, downx, downy, padx0, padx1, pady0, pady1, flip_filter, gain)
            else:
                y = _plugin.upfirdn2d(y, f.unsqueeze(0), upx, 1, downx, 1, padx0, padx1, 0, 0, flip_filter, np.sqrt(gain))
                y = _plugin.upfirdn2d(y, f.unsqueeze(1), 1, upy, 1, downy, 0, 0, pady0, pady1, flip_filter, np.sqrt(gain))
            ctx.save_for_backward(f)
            ctx.x_shape = x.shape
            return y

        @staticmethod
        def backward(ctx, dy): # pylint: disable=arguments-differ
            f, = ctx.saved_tensors
            _, _, ih, iw = ctx.x_shape
            _, _, oh, ow = dy.shape
            fw, fh = _get_filter_size(f)
            p = [
                fw - padx0 - 1,
                iw * upx - ow * downx + padx0 - upx + 1,
                fh - pady0 - 1,
                ih * upy - oh * downy + pady0 - upy + 1,
            ]
            dx = None
            df = None

            if ctx.needs_input_grad[0]:
                dx = _upfirdn2d_cuda(up=down, down=up, padding=p, flip_filter=(not flip_filter), gain=gain).apply(dy, f)

            assert not ctx.needs_input_grad[1]
            return dx, df

    # Add to cache.
    _upfirdn2d_cuda_cache[key] = Upfirdn2dCuda
    return Upfirdn2dCuda

#----------------------------------------------------------------------------

def filter2d(x, f, padding=0, flip_filter=False, gain=1, impl='cuda'):
    r"""Filter a batch of 2D images using the given 2D FIR filter.

    By default, the result is padded so that its shape matches the input.
    User-specified padding is applied on top of that, with negative values
    indicating cropping. Pixels outside the image are assumed to be zero.

    Args:
        x:           Float32/float64/float16 input tensor of the shape
                     `[batch_size, num_channels, in_height, in_width]`.
        f:           Float32 FIR filter of the shape
                     `[filter_height, filter_width]` (non-separable),
                     `[filter_taps]` (separable), or
                     `None` (identity).
        padding:     Padding with respect to the output. Can be a single number or a
                     list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
                     (default: 0).
        flip_filter: False = convolution, True = correlation (default: False).
        gain:        Overall scaling factor for signal magnitude (default: 1).
        impl:        Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`).

    Returns:
        Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
    """
    padx0, padx1, pady0, pady1 = _parse_padding(padding)
    fw, fh = _get_filter_size(f)
    p = [
        padx0 + fw // 2,
        padx1 + (fw - 1) // 2,
        pady0 + fh // 2,
        pady1 + (fh - 1) // 2,
    ]
    return upfirdn2d(x, f, padding=p, flip_filter=flip_filter, gain=gain, impl=impl)

#----------------------------------------------------------------------------

def upsample2d(x, f, up=2, padding=0, flip_filter=False, gain=1, impl='cuda'):
    r"""Upsample a batch of 2D images using the given 2D FIR filter.

    By default, the result is padded so that its shape is a multiple of the input.
    User-specified padding is applied on top of that, with negative values
    indicating cropping. Pixels outside the image are assumed to be zero.

    Args:
        x:           Float32/float64/float16 input tensor of the shape
                     `[batch_size, num_channels, in_height, in_width]`.
        f:           Float32 FIR filter of the shape
                     `[filter_height, filter_width]` (non-separable),
                     `[filter_taps]` (separable), or
                     `None` (identity).
        up:          Integer upsampling factor. Can be a single int or a list/tuple
                     `[x, y]` (default: 1).
        padding:     Padding with respect to the output. Can be a single number or a
                     list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
                     (default: 0).
        flip_filter: False = convolution, True = correlation (default: False).
        gain:        Overall scaling factor for signal magnitude (default: 1).
        impl:        Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`).

    Returns:
        Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
    """
    upx, upy = _parse_scaling(up)
    padx0, padx1, pady0, pady1 = _parse_padding(padding)
    fw, fh = _get_filter_size(f)
    p = [
        padx0 + (fw + upx - 1) // 2,
        padx1 + (fw - upx) // 2,
        pady0 + (fh + upy - 1) // 2,
        pady1 + (fh - upy) // 2,
    ]
    return upfirdn2d(x, f, up=up, padding=p, flip_filter=flip_filter, gain=gain*upx*upy, impl=impl)

#----------------------------------------------------------------------------

def downsample2d(x, f, down=2, padding=0, flip_filter=False, gain=1, impl='cuda'):
    r"""Downsample a batch of 2D images using the given 2D FIR filter.

    By default, the result is padded so that its shape is a fraction of the input.
    User-specified padding is applied on top of that, with negative values
    indicating cropping. Pixels outside the image are assumed to be zero.

    Args:
        x:           Float32/float64/float16 input tensor of the shape
                     `[batch_size, num_channels, in_height, in_width]`.
        f:           Float32 FIR filter of the shape
                     `[filter_height, filter_width]` (non-separable),
                     `[filter_taps]` (separable), or
                     `None` (identity).
        down:        Integer downsampling factor. Can be a single int or a list/tuple
                     `[x, y]` (default: 1).
        padding:     Padding with respect to the input. Can be a single number or a
                     list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]`
                     (default: 0).
        flip_filter: False = convolution, True = correlation (default: False).
        gain:        Overall scaling factor for signal magnitude (default: 1).
        impl:        Implementation to use. Can be `'ref'` or `'cuda'` (default: `'cuda'`).

    Returns:
        Tensor of the shape `[batch_size, num_channels, out_height, out_width]`.
    """
    downx, downy = _parse_scaling(down)
    padx0, padx1, pady0, pady1 = _parse_padding(padding)
    fw, fh = _get_filter_size(f)
    p = [
        padx0 + (fw - downx + 1) // 2,
        padx1 + (fw - downx) // 2,
        pady0 + (fh - downy + 1) // 2,
        pady1 + (fh - downy) // 2,
    ]
    return upfirdn2d(x, f, down=down, padding=p, flip_filter=flip_filter, gain=gain, impl=impl)

#----------------------------------------------------------------------------
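A small usage sketch for the resampling ops above, assuming the module's functions are importable; impl='ref' forces the slow PyTorch path so no CUDA plugin is needed, and the filter taps are illustrative:

import torch

x = torch.randn(1, 3, 64, 64)            # NCHW input batch
f = setup_filter([1, 3, 3, 1])           # 4-tap binomial filter, expanded to 2D by setup_filter
y = upsample2d(x, f, up=2, impl='ref')   # 'ref' avoids building the CUDA plugin
print(y.shape)                           # torch.Size([1, 3, 128, 128])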
spaces/CosmicSage/Linaqruf-anything-v3.0pruned/app.py
DELETED
@@ -1,3 +0,0 @@
import gradio as gr

gr.Interface.load("models/Linaqruf/anything-v3.0").launch()
spaces/Cyril666/ContourNet-ABI/modules/__init__.py
DELETED
File without changes
spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_compat.py
DELETED
@@ -1,43 +0,0 @@
"""
The _compat module is used for code which requires branching between different
Python environments. It is excluded from the code coverage checks.
"""
import ssl
import sys

# Brotli support is optional
# The C bindings in `brotli` are recommended for CPython.
# The CFFI bindings in `brotlicffi` are recommended for PyPy and everything else.
try:
    import brotlicffi as brotli
except ImportError:  # pragma: no cover
    try:
        import brotli
    except ImportError:
        brotli = None

if sys.version_info >= (3, 10) or (
    sys.version_info >= (3, 7) and ssl.OPENSSL_VERSION_INFO >= (1, 1, 0, 7)
):

    def set_minimum_tls_version_1_2(context: ssl.SSLContext) -> None:
        # The OP_NO_SSL* and OP_NO_TLS* become deprecated in favor of
        # 'SSLContext.minimum_version' from Python 3.7 onwards, however
        # this attribute is not available unless the ssl module is compiled
        # with OpenSSL 1.1.0g or newer.
        # https://docs.python.org/3.10/library/ssl.html#ssl.SSLContext.minimum_version
        # https://docs.python.org/3.7/library/ssl.html#ssl.SSLContext.minimum_version
        context.minimum_version = ssl.TLSVersion.TLSv1_2

else:

    def set_minimum_tls_version_1_2(context: ssl.SSLContext) -> None:
        # If 'minimum_version' isn't available, we configure these options with
        # the older deprecated variants.
        context.options |= ssl.OP_NO_SSLv2
        context.options |= ssl.OP_NO_SSLv3
        context.options |= ssl.OP_NO_TLSv1
        context.options |= ssl.OP_NO_TLSv1_1


__all__ = ["brotli", "set_minimum_tls_version_1_2"]
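A quick sketch of how this helper is typically applied, using only the standard-library ssl module that _compat itself imports:

import ssl

context = ssl.create_default_context()
set_minimum_tls_version_1_2(context)  # refuse anything older than TLS 1.2
print(context.minimum_version)        # TLSVersion.TLSv1_2 on modern Python builds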
spaces/DVLH/consciousAI-question-answering-roberta-vsgshshshsbase-s-v2/app.py
DELETED
@@ -1,3 +0,0 @@
import gradio as gr

gr.Interface.load("models/consciousAI/question-answering-roberta-base-s-v2").launch()
spaces/Dao3/MBTI_Test/README.md
DELETED
@@ -1,14 +0,0 @@
---
title: MBTI_Test
emoji: 🐨
colorFrom: red
colorTo: gray
sdk: gradio
sdk_version: 3.20.1
app_file: app.py
pinned: false
license: cc-by-4.0
duplicated_from: Dao3/XiaoZhaoZhao
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/Datasculptor/MusicGen/CHANGELOG.md
DELETED
@@ -1,23 +0,0 @@
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).

## [0.0.2a] - TBD

Improved demo, fixed top p (thanks @jnordberg).

Compressor tanh on output to avoid clipping with some style (especially piano).
Now repeating the conditioning periodically if it is too short.

More options when launching Gradio app locally (thanks @ashleykleynhans).

Testing out PyTorch 2.0 memory efficient attention.

Added extended generation (infinite length) by slowly moving the windows.
Note that other implementations exist: https://github.com/camenduru/MusicGen-colab.

## [0.0.1] - 2023-06-09

Initial release, with model evaluation only.
spaces/DaweiZ/toy-gpt/Dockerfile
DELETED
@@ -1,28 +0,0 @@
# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker
# you will also find guides on how best to write your Dockerfile

FROM python:3.11

# Set up a new user named "user" with user ID 1000
RUN useradd -m -u 1000 user

# Switch to the "user" user
USER user

# Set home to the user's home directory
ENV HOME=/home/user \
    PATH=/home/user/.local/bin:$PATH

# Set the working directory to the user's home directory
WORKDIR $HOME/app

# Try and run pip command after setting the user with `USER user` to avoid permission issues with Python
RUN pip install --no-cache-dir --upgrade pip

# Copy the current directory contents into the container at $HOME/app setting the owner to the user
COPY --chown=user . $HOME/app

RUN pip install --no-cache-dir --upgrade -r $HOME/app/requirements.txt


CMD ["chainlit", "run", "app.py", "--host", "0.0.0.0", "--port", "7860"]
spaces/Deevyankar/Deep-AD/README.md
DELETED
@@ -1,13 +0,0 @@
---
title: Deep-AD
emoji: 🧠
colorFrom: red
colorTo: blue
sdk: streamlit
sdk_version: 1.10.0
app_file: app.py
python_version: 3.8.15
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/EdBianchi/Social_Toximeter/app.py
DELETED
@@ -1,156 +0,0 @@
from tkinter.ttk import setup_master
from turtle import color
from sklearn.model_selection import PredefinedSplit
import streamlit as st
# UTILITY
import joblib
import pickle
from joblib import load
# NLP
import re
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk import SnowballStemmer
from gensim.models.doc2vec import Doc2Vec
import numpy as np

comment = ""
tresh1 = 0.500
tresh2 = 0.937
tresh3 = 0.999


# set page setting
st.set_page_config(page_title='Toxic Comments')

# set history var
if 'history' not in st.session_state:
    st.session_state.history = []

# import similarity (to be cached)
@st.cache(persist=True)
def importModel(filename):
    model = load(filename)
    return model

@st.cache(persist=True, allow_output_mutation=True)
def importD2V(filename):
    model = Doc2Vec.load(filename)
    return model

@st.cache(persist=True)
def loadPickle(filename):
    file = pickle.load(open(filename, 'rb'))
    return file

@st.cache(persist=True)
def loadPunkt():
    nltk.download('punkt')

with st.spinner('Loading the models, this could take some time...'):
    loadPunkt()
    normalizer = importModel("normalizerD2V.joblib")
    classifier = importModel("toxicCommModel.joblib")
    model_d2v = importD2V("d2v_comments.model")


# REGEX
def apply_regex(corpus):
    corpus = re.sub("\S*\d\S*", " ", corpus)
    corpus = re.sub("\S*@\S*\s?", " ", corpus)
    corpus = re.sub("\S*#\S*\s?", " ", corpus)
    corpus = re.sub(r'http\S+', ' ', corpus)
    corpus = re.sub(r'[^a-zA-Z0-9 ]', ' ', corpus)
    corpus = corpus.replace(u'\ufffd', '8')
    corpus = re.sub(' +', ' ', corpus)
    return corpus

# TOKENIZE TEXT - we use the Spacy library stopwords
stop_words = loadPickle("stop_words.pkl")

# TOKENIZE TEXT and STOP WORDS REMOVAL - execution (also removes words shorter than 2 and longer than 15 chars)
def tokenize(doc):
    tokens_1 = word_tokenize(str(doc))
    return [word.lower() for word in tokens_1 if len(word) > 1 and len(word) < 15 and word not in stop_words and not word.isdigit()]

# STEMMING
stemmer = SnowballStemmer(language="english")
def applyStemming(listOfTokens):
    return [stemmer.stem(token) for token in listOfTokens]

# PROBS TO CLASS
def probs_to_prediction(probs, threshold):
    pred = []
    for x in probs[:, 1]:
        if x >= threshold:
            pred = 1
        else:
            pred = 0
    return pred


# PROCESSING
def compute(comment, tresh):
    global preds
    global probs
    global stems
    stems = ""
    preds = []
    comment = apply_regex(comment)
    comment = tokenize(comment)
    comment = applyStemming(comment)
    stems = comment

    vectorizedComment = model_d2v.infer_vector(comment, epochs=70)
    normComment = normalizer.transform([vectorizedComment])
    probs = classifier.predict_proba(normComment)
    for t in tresh:
        preds.append(probs_to_prediction(probs, t))

    with st.container():
        col1, col2, col6 = st.columns(3)
        #col1.metric("Toxic", round(preds[0][1], 4))
        #col2.metric("Non Toxic", round(1-preds[0][1], 4))
        col1.metric("Toxic", round(probs[0][1], 4))
        col2.metric("", "")
        col6.metric("Non Toxic", round(probs[0][0], 4))
        st.markdown("""---""")
        display()
    return None

def display():
    with st.container():
        st.write("#### Different classification outputs at different threshold values:")
        col3, col4, col5 = st.columns(3)
        col3.metric("", "TOXIC" if preds[0] == 1 else "NON TOXIC", delta=0.500)
        col4.metric("", "TOXIC" if preds[1] == 1 else "NON TOXIC", delta=0.937)
        col5.metric("", "TOXIC" if preds[2] == 1 else "NON TOXIC", delta=0.999)
        st.markdown("""---""")
    with st.container():
        st.write("#### Result of the NLP Pipeline:")
        st.write(stems)
    return None

# TITLE
st.write("# ☢️ Toxic Comments Classification ☢️")
st.write("#### Drop a comment and wait for toxicity.")

# INPUT TEXTBOX
comment = st.text_area('', "Drop your comment here! 😎")

# INPUT THRESHOLD
#tresh = st.slider('Set the Threshold, default 0.5', 0.000, 1.000, step=0.0001, value=0.500)
compute(comment, [tresh1, tresh2, tresh3])

# sidebar
st.sidebar.write("""
This is a Toxic Comment Classifier that uses tokenization, stemming, Doc2Vec encoding and a tuned logistic regression model.
It has been trained on a large corpus of comments.

A threshold is used to convert the predicted probability of toxicity into a categorical class [toxic, non toxic].

The value of the threshold can be chosen according to the final application of the classifier.

Three sample thresholds are presented here to show the differences in the final output.
""")
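A tiny illustration of how probs_to_prediction above maps a predicted toxicity probability to a class at the three sample thresholds; the probability values are invented:

import numpy as np

probs = np.array([[0.05, 0.95]])  # [P(non-toxic), P(toxic)] for one comment
for t in (0.500, 0.937, 0.999):
    label = probs_to_prediction(probs, t)
    print(t, "->", "TOXIC" if label == 1 else "NON TOXIC")
# 0.5 -> TOXIC, 0.937 -> TOXIC, 0.999 -> NON TOXIC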
spaces/Elbhnasy/Eye-Tracking-Diagnosis/app.py
DELETED
@@ -1,78 +0,0 @@
### 1. Imports and class names setup ###
import gradio as gr
import os
import torch

from model import create_ResNetb34_model
from timeit import default_timer as timer
from typing import Tuple, Dict

# Setup class names
class_names = ["Autistic", "Non_Autistic"]

### 2. Model and transforms preparation ###


resnet34, resnet34_transforms = create_ResNetb34_model(num_classes=len(class_names))

# Load saved weights
resnet34.load_state_dict(torch.load(f="eyetracking_model.pth",
                                    map_location=torch.device("cpu"),))
### 3. Predict function ###
# Create predict function
def predict(img) -> Tuple[Dict, float]:
    """
    Transforms and performs a prediction on img.
    :param img: target image.
    :return: prediction and time taken.
    """
    # Start the timer
    start_time = timer()
    # Transform the target image and add a batch dimension
    img = img.convert('RGB')
    img = resnet34_transforms(img).unsqueeze(0)
    # Put the model into evaluation mode and turn on inference mode
    resnet34.eval()
    with torch.inference_mode():

        # Pass the transformed image through the model and turn the prediction logits into prediction probabilities
        pred_probs = torch.softmax(resnet34(img), dim=1)
    # Create a prediction label and prediction probability dictionary for each prediction class (this is the required format for Gradio's output parameter)

    pred_labels_and_probs = {class_names[i]: float(pred_probs[0][i]) for i in range(len(class_names))}
    # Calculate the prediction time
    pred_time = round(timer() - start_time, 5)

    # Return the prediction dictionary and prediction time
    return pred_labels_and_probs, pred_time

### 4. Gradio app ###
example_list = [["examples/" + example] for example in os.listdir("examples")]

# Create title, description and article strings
title = "Eye Tracking diagnosis"
description = """A feature-extractor computer vision model for identifying autism in children from visualizations of eye-tracking records, using deep neural networks."""


article = """Eye-tracking is the process of capturing, tracking and measuring eye movements or the absolute point of gaze (POG), which refers to the point where the eye gaze is focused in the visual scene.
**Visualization of Eye-tracking Scanpaths** represents the sequence of consecutive fixations and saccades as a trace through time and space that may overlap itself.
We used the pre-trained CNN model ResNet34 as a feature extractor and a DNN model as a binary classifier to identify autism in children accurately.
We used a publicly available dataset to train the suggested models, which consisted of visualizations of eye-tracking scanpaths of children diagnosed with autism and controls, classed as autistic and non-autistic. The ResNet34
model outperformed the others, with an accuracy of 95.41%."""

# Create the Gradio demo
demo = gr.Interface(fn=predict,  # mapping function from input to output
                    inputs=gr.Image(type="pil", source="upload"),  # what are the inputs?
                    outputs=[gr.Label(num_top_classes=2, label="Predictions"),  # what are the outputs?
                             gr.Number(label="Prediction time (s)")],  # our fn has two outputs, therefore we have two outputs
                    examples=example_list,
                    title=title,
                    description=description,
                    article=article)


# Launch the demo!
demo.launch()
spaces/EleutherAI/VQGAN_CLIP/CLIP/tests/test_consistency.py
DELETED
@@ -1,25 +0,0 @@
import numpy as np
import pytest
import torch
from PIL import Image

import clip


@pytest.mark.parametrize('model_name', clip.available_models())
def test_consistency(model_name):
    device = "cpu"
    jit_model, transform = clip.load(model_name, device=device, jit=True)
    py_model, _ = clip.load(model_name, device=device, jit=False)

    image = transform(Image.open("CLIP.png")).unsqueeze(0).to(device)
    text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)

    with torch.no_grad():
        logits_per_image, _ = jit_model(image, text)
        jit_probs = logits_per_image.softmax(dim=-1).cpu().numpy()

        logits_per_image, _ = py_model(image, text)
        py_probs = logits_per_image.softmax(dim=-1).cpu().numpy()

    assert np.allclose(jit_probs, py_probs, atol=0.01, rtol=0.1)
spaces/EleutherAI/VQGAN_CLIP/taming-transformers/setup.py
DELETED
@@ -1,13 +0,0 @@
from setuptools import setup, find_packages

setup(
    name='taming-transformers',
    version='0.0.1',
    description='Taming Transformers for High-Resolution Image Synthesis',
    packages=find_packages(),
    install_requires=[
        'torch',
        'numpy',
        'tqdm',
    ],
)
spaces/Enderfga/mtCNN_sysu/utils/tool.py
DELETED
@@ -1,117 +0,0 @@
-import numpy as np
-
-
-def IoU(box, boxes):
-    """Compute IoU between a detection box and ground-truth boxes
-
-    Parameters:
-    ----------
-    box: numpy array, shape (5,): x1, y1, x2, y2, score
-        input box
-    boxes: numpy array, shape (n, 4): x1, y1, x2, y2
-        input ground-truth boxes
-
-    Returns:
-    -------
-    ovr: numpy array, shape (n,)
-        IoU of `box` against each row of `boxes`
-    """
-    box_area = (box[2] - box[0] + 1) * (box[3] - box[1] + 1)
-    area = (boxes[:, 2] - boxes[:, 0] + 1) * (boxes[:, 3] - boxes[:, 1] + 1)
-    # corners of the intersection rectangles
-    xx1 = np.maximum(box[0], boxes[:, 0])
-    yy1 = np.maximum(box[1], boxes[:, 1])
-    xx2 = np.minimum(box[2], boxes[:, 2])
-    yy2 = np.minimum(box[3], boxes[:, 3])
-
-    # width and height of the intersection (0 when the boxes are disjoint)
-    w = np.maximum(0, xx2 - xx1 + 1)
-    h = np.maximum(0, yy2 - yy1 + 1)
-
-    inter = w * h
-    ovr = np.true_divide(inter, box_area + area - inter)
-    return ovr
-
-
-def convert_to_square(bbox):
-    """Convert bboxes to squares centered on the original boxes
-
-    Parameters:
-    ----------
-    bbox: numpy array, shape n x 5
-        input bbox
-
-    Returns:
-    -------
-    square bbox
-    """
-    square_bbox = bbox.copy()
-
-    h = bbox[:, 3] - bbox[:, 1] + 1
-    w = bbox[:, 2] - bbox[:, 0] + 1
-    max_side = np.maximum(h, w)
-    square_bbox[:, 0] = bbox[:, 0] + w * 0.5 - max_side * 0.5
-    square_bbox[:, 1] = bbox[:, 1] + h * 0.5 - max_side * 0.5
-    square_bbox[:, 2] = square_bbox[:, 0] + max_side - 1
-    square_bbox[:, 3] = square_bbox[:, 1] + max_side - 1
-    return square_bbox
-
-
-# non-maximum suppression: eliminates boxes that overlap heavily with the
-# highest-scoring box
-def nms(dets, thresh, mode="Union"):
-    """
-    Greedily select boxes with high confidence: keep boxes whose overlap
-    with an already-kept box is <= thresh, rule out the rest.
-
-    :param dets: numpy array of [x1, y1, x2, y2, score] rows
-    :param thresh: retain overlap <= thresh
-    :param mode: "Union" for IoU, "Minimum" for intersection over min area
-    :return: indexes of the boxes to keep
-    """
-    x1 = dets[:, 0]
-    y1 = dets[:, 1]
-    x2 = dets[:, 2]
-    y2 = dets[:, 3]
-    scores = dets[:, 4]
-
-    areas = (x2 - x1 + 1) * (y2 - y1 + 1)
-    # argsort is ascending; [::-1] reverses it into descending score order
-    order = scores.argsort()[::-1]
-
-    # keep the box with the largest remaining score, drop boxes that overlap
-    # it heavily, then repeat with whatever is left
-    keep = []
-    while order.size > 0:
-        i = order[0]
-        keep.append(i)
-        xx1 = np.maximum(x1[i], x1[order[1:]])
-        yy1 = np.maximum(y1[i], y1[order[1:]])
-        xx2 = np.minimum(x2[i], x2[order[1:]])
-        yy2 = np.minimum(y2[i], y2[order[1:]])
-
-        w = np.maximum(0.0, xx2 - xx1 + 1)
-        h = np.maximum(0.0, yy2 - yy1 + 1)
-        inter = w * h
-
-        # overlap between the highest-scoring box and the remaining boxes
-        if mode == "Union":
-            ovr = inter / (areas[i] + areas[order[1:]] - inter)
-        elif mode == "Minimum":
-            ovr = inter / np.minimum(areas[i], areas[order[1:]])
-        else:
-            raise ValueError("mode must be 'Union' or 'Minimum'")
-
-        inds = np.where(ovr <= thresh)[0]
-        order = order[inds + 1]  # +1 skips the box just kept
-
-    return keep
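A minimal usage sketch for the utilities above (my addition; the `tool` import path is illustrative, matching the deleted file's name):

```python
import numpy as np

from tool import IoU, nms  # illustrative import path for the module above

dets = np.array([
    [10.0, 10.0, 50.0, 50.0, 0.95],      # highest score
    [12.0, 12.0, 52.0, 52.0, 0.90],      # heavy overlap with the first -> suppressed
    [100.0, 100.0, 150.0, 160.0, 0.80],  # disjoint box -> kept
])

print(IoU(dets[0], dets[1:, :4]))           # approx [0.83, 0.0]
print(nms(dets, thresh=0.5, mode="Union"))  # [0, 2]
```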
spaces/EronSamez/RVC_HFmeu/lib/uvr5_pack/lib_v5/nets_123812KB.py
DELETED
@@ -1,122 +0,0 @@
-import torch
-from torch import nn
-import torch.nn.functional as F
-
-from . import layers_123821KB as layers
-
-
-class BaseASPPNet(nn.Module):
-    """U-Net-style encoder/decoder with an ASPP bottleneck."""
-
-    def __init__(self, nin, ch, dilations=(4, 8, 16)):
-        super(BaseASPPNet, self).__init__()
-        self.enc1 = layers.Encoder(nin, ch, 3, 2, 1)
-        self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1)
-        self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1)
-        self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1)
-
-        self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations)
-
-        self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1)
-        self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1)
-        self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1)
-        self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1)
-
-    def __call__(self, x):
-        h, e1 = self.enc1(x)
-        h, e2 = self.enc2(h)
-        h, e3 = self.enc3(h)
-        h, e4 = self.enc4(h)
-
-        h = self.aspp(h)
-
-        # decode with skip connections from the matching encoder stages
-        h = self.dec4(h, e4)
-        h = self.dec3(h, e3)
-        h = self.dec2(h, e2)
-        h = self.dec1(h, e1)
-
-        return h
-
-
-class CascadedASPPNet(nn.Module):
-    """Three-stage cascade: separate low/high-band nets, then two full-band refinements."""
-
-    def __init__(self, n_fft):
-        super(CascadedASPPNet, self).__init__()
-        self.stg1_low_band_net = BaseASPPNet(2, 32)
-        self.stg1_high_band_net = BaseASPPNet(2, 32)
-
-        self.stg2_bridge = layers.Conv2DBNActiv(34, 16, 1, 1, 0)
-        self.stg2_full_band_net = BaseASPPNet(16, 32)
-
-        self.stg3_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0)
-        self.stg3_full_band_net = BaseASPPNet(32, 64)
-
-        self.out = nn.Conv2d(64, 2, 1, bias=False)
-        self.aux1_out = nn.Conv2d(32, 2, 1, bias=False)
-        self.aux2_out = nn.Conv2d(32, 2, 1, bias=False)
-
-        self.max_bin = n_fft // 2
-        self.output_bin = n_fft // 2 + 1
-
-        # number of time frames trimmed from each side in predict()
-        self.offset = 128
-
-    def forward(self, x, aggressiveness=None):
-        mix = x.detach()
-        x = x.clone()
-
-        x = x[:, :, : self.max_bin]
-
-        # stage 1: process the low and high frequency bands separately
-        bandw = x.size()[2] // 2
-        aux1 = torch.cat(
-            [
-                self.stg1_low_band_net(x[:, :, :bandw]),
-                self.stg1_high_band_net(x[:, :, bandw:]),
-            ],
-            dim=2,
-        )
-
-        h = torch.cat([x, aux1], dim=1)
-        aux2 = self.stg2_full_band_net(self.stg2_bridge(h))
-
-        h = torch.cat([x, aux1, aux2], dim=1)
-        h = self.stg3_full_band_net(self.stg3_bridge(h))
-
-        mask = torch.sigmoid(self.out(h))
-        mask = F.pad(
-            input=mask,
-            pad=(0, 0, 0, self.output_bin - mask.size()[2]),
-            mode="replicate",
-        )
-
-        if self.training:
-            # auxiliary outputs supervise the intermediate stages during training
-            aux1 = torch.sigmoid(self.aux1_out(aux1))
-            aux1 = F.pad(
-                input=aux1,
-                pad=(0, 0, 0, self.output_bin - aux1.size()[2]),
-                mode="replicate",
-            )
-            aux2 = torch.sigmoid(self.aux2_out(aux2))
-            aux2 = F.pad(
-                input=aux2,
-                pad=(0, 0, 0, self.output_bin - aux2.size()[2]),
-                mode="replicate",
-            )
-            return mask * mix, aux1 * mix, aux2 * mix
-        else:
-            if aggressiveness:
-                # sharpen the mask; the effect is stronger above split_bin
-                mask[:, :, : aggressiveness["split_bin"]] = torch.pow(
-                    mask[:, :, : aggressiveness["split_bin"]],
-                    1 + aggressiveness["value"] / 3,
-                )
-                mask[:, :, aggressiveness["split_bin"] :] = torch.pow(
-                    mask[:, :, aggressiveness["split_bin"] :],
-                    1 + aggressiveness["value"],
-                )
-
-            return mask * mix
-
-    def predict(self, x_mag, aggressiveness=None):
-        h = self.forward(x_mag, aggressiveness)
-
-        if self.offset > 0:
-            h = h[:, :, :, self.offset : -self.offset]
-            assert h.size()[3] > 0
-
-        return h
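For orientation, a minimal inference sketch for the network above (my addition; it assumes the sibling `layers_123821KB` module is importable and that the input is a 2-channel magnitude-spectrogram batch):

```python
import torch

model = CascadedASPPNet(n_fft=2048)  # requires the sibling layers_123821KB module
model.eval()

# (batch, channels, frequency bins, time frames); the net crops frequency to
# n_fft // 2 internally and pads the mask back to n_fft // 2 + 1 bins
x_mag = torch.randn(1, 2, 1025, 512)
with torch.no_grad():
    out = model.predict(x_mag)

print(out.shape)  # expected torch.Size([1, 2, 1025, 256]): `offset` trims 128 frames per side
```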