Commit · 0e65e41
Parent(s): 9cc3222
Update parquet files (step 98 of 121)
This view is limited to 50 files because it contains too many changes.
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Archicad 22 The Ultimate Guide for Architectural Design.md +0 -53
- spaces/1acneusushi/gradio-2dmoleculeeditor/data/Best Ways to Download GTA 5 in Myanmar Without Any Hassle.md +0 -37
- spaces/1line/AutoGPT/tests/local_cache_test.py +0 -67
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Convert Your RPGMVP Images to JPG with This Online Tool.md +0 -145
- spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download APK Bar Bar Everything You Need to Know About Live Streaming Apps.md +0 -88
- spaces/1phancelerku/anime-remove-background/ 2 2 .md +0 -99
- spaces/1phancelerku/anime-remove-background/Download Eureka Season 1 and Discover the Hidden Secrets of a Rustic Town of Genius.md +0 -140
- spaces/1phancelerku/anime-remove-background/Final Thoughts The Punjabi Song by AP Dhillon Shinda Kahlon Gminxr that Everyone is Talking About.md +0 -101
- spaces/4Taps/SadTalker/src/face3d/models/__init__.py +0 -67
- spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/vocoder_infer/hifigan.py +0 -31
- spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/commons/dataset_utils.py +0 -247
- spaces/AIGC-Audio/Make_An_Audio/ldm/modules/losses_audio/__init__.py +0 -7
- spaces/AIatUIUC/CodeLATS/README.md +0 -28
- spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/FastGpt.py +0 -87
- spaces/Adapter/CoAdapter/configs/mm/hrnet_w48_coco_256x192.py +0 -169
- spaces/Admin08077/Record/app.py +0 -26
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/Maker.d.ts +0 -35
- spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/AddChildMethods.js +0 -102
- spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/dnnlib/tflib/autosummary.py +0 -193
- spaces/Amrrs/DragGan-Inversion/torch_utils/ops/bias_act.py +0 -220
- spaces/AnandSoni2001/StockMarket/app.py +0 -656
- spaces/Andres99/Tune-A-Video-Training-UI/trainer.py +0 -166
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting.py +0 -1059
- spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_ddpm_flax.py +0 -299
- spaces/Andy1621/uniformer_image_detection/configs/rpn/rpn_r50_caffe_c4_1x_coco.py +0 -38
- spaces/Andy1621/uniformer_image_detection/configs/sabl/sabl_retinanet_r101_fpn_gn_1x_coco.py +0 -54
- spaces/Andy1621/uniformer_image_segmentation/configs/point_rend/README.md +0 -31
- spaces/Andy1621/uniformer_image_segmentation/configs/psanet/README.md +0 -48
- spaces/AngoHF/ANGO-Leaderboard/components/data.py +0 -77
- spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/quarto-html/popper.min.js +0 -6
- spaces/Apex-X/ROOPOK/roop/utilities.py +0 -149
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/wait.py +0 -228
- spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/helpers.py +0 -1088
- spaces/AzumaSeren100/XuanShen-Bert-VITS2/text/chinese_bert.py +0 -59
- spaces/Banbri/zcvzcv/next.config.js +0 -11
- spaces/Bart92/RVC_HF/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py +0 -97
- spaces/Bart92/RVC_HF/mdx_processing_script.py +0 -146
- spaces/Benson/text-generation/Examples/3d Modelo 3d Descargar.md +0 -84
- spaces/Benson/text-generation/Examples/Appking Io.md +0 -89
- spaces/Benson/text-generation/Examples/Ciudad Helada Hile Apk Day.md +0 -39
- spaces/Benson/text-generation/Examples/Coche Deriva Carreras Carretera Mod Apk.md +0 -58
- spaces/Benson/text-generation/Examples/Descargar Amor Nikki Mod Apk.md +0 -74
- spaces/Benson/text-generation/Examples/Descargar Dls 2020 Apk Obb ltima Versin (7.42).md +0 -97
- spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/resources/__init__.py +0 -0
- spaces/BramVanroy/mai-simplification-nl-2023-demo/README.md +0 -19
- spaces/BraydenMoore/MARCI-NFL-Betting/get_record.py +0 -177
- spaces/CVH-vn1210/make_hair/minigpt4/common/registry.py +0 -329
- spaces/CVPR/GFPGAN-example/scripts/parse_landmark.py +0 -85
- spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/merge.h +0 -23
- spaces/CVPR/regionclip-demo/detectron2/evaluation/lvis_evaluation.py +0 -358
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Archicad 22 The Ultimate Guide for Architectural Design.md
DELETED
@@ -1,53 +0,0 @@
-
-<h1>How to Use Archicad 22 for Architectural Design</h1>
-<p>Archicad 22 is a powerful software for architectural design and documentation. It allows you to create 3D models, 2D drawings, and BIM data for your projects. Archicad 22 also has many features and tools to help you optimize your workflow and enhance your creativity.</p>
-<h2>download archicad 22 crack</h2><br /><p><b><b>Download Zip</b> ———>>> <a href="https://byltly.com/2uKyA2">https://byltly.com/2uKyA2</a></b></p><br /><br />
-<p>In this article, we will show you some of the basics of using Archicad 22 for architectural design. We will cover how to:</p>
-<ul>
-<li>Set up a new project and customize the settings</li>
-<li>Create walls, slabs, roofs, and other building elements</li>
-<li>Add doors, windows, furniture, and other objects</li>
-<li>Apply materials, textures, and colors to your model</li>
-<li>Generate views, sections, elevations, and layouts</li>
-<li>Export and print your documents</li>
-</ul>
-<p>By the end of this article, you should have a good understanding of how to use Archicad 22 for architectural design. Let's get started!</p>
-
-<h2>Setting Up a New Project</h2>
-<p>The first step in using Archicad 22 is to set up a new project. To do this, you need to:</p>
-<p></p>
-<ol>
-<li>Open Archicad 22 and click on File > New.</li>
-<li>Select the template that matches your project type and location. For example, if you are working on a residential project in the US, you can choose the US Residential template.</li>
-<li>Click on OK to create a new project.</li>
-<li>Adjust the project settings according to your preferences. You can change the units, grid, layers, stories, zones, dimensions, and other options by clicking on Options > Project Preferences.</li>
-<li>Save your project by clicking on File > Save As and choosing a name and location for your file.</li>
-</ol>
-<p>You have now set up a new project in Archicad 22. You can start designing your building in the next step.</p>
-
-<h2>Creating Building Elements</h2>
-<p>The next step in using Archicad 22 is to create the building elements that make up your structure. You can use the tools in the Toolbox palette to draw walls, slabs, roofs, columns, beams, stairs, railings, and other elements. To create a building element, you need to:</p>
-<ol>
-<li>Select the tool that corresponds to the element you want to create. For example, if you want to create a wall, select the Wall tool.</li>
-<li>Choose the settings for the element in the Info Box palette. You can change the height, thickness, material, layer, and other properties of the element.</li>
-<li>Draw the element on the floor plan by clicking and dragging on the screen. You can use the Tracker palette to enter precise coordinates and dimensions for the element.</li>
-<li>Repeat steps 1-3 for each element you want to create.</li>
-</ol>
-<p>You have now created some building elements in Archicad 22. You can add more details and objects to your model in the next step.</p>
-
-<h2>Adding Objects</h2>
-<p>The next step in using Archicad 22 is to add objects to your model. Objects are predefined or custom-made components that represent doors, windows, furniture, fixtures, appliances, plants, cars, people, and other items. You can use the Object tool or the Library Manager palette to insert objects into your model. To add an object, you need to:</p>
-<ol>
-<li>Select the Object tool or open the Library Manager palette.</li>
-<li>Browse through the library folders and find the object you want to add. You can also search for an object by name or keyword.</li>
-<li>Drag and drop the object onto your floor plan or click on Insert > Place Object.</li>
-<li>Adjust the settings for the object in the Info Box palette. You can change the size, orientation, layer, and other properties of the object.</li>
-<li>Move and rotate the object as needed by using the Edit > Move or Edit > Rotate commands.</li>
-<li>Repeat steps 1-5 for each object you want to add.</li>
-</ol>
-<p>You have now added some objects to your model in Archicad 22. You can apply materials and colors to your model in the next step.</p>
-
-<h2>Applying Materials and Colors</h2>
-<p>The next step in using</p> ddb901b051<br />
-<br />
-<br />
spaces/1acneusushi/gradio-2dmoleculeeditor/data/Best Ways to Download GTA 5 in Myanmar Without Any Hassle.md
DELETED
@@ -1,37 +0,0 @@
-
-<h1>How to Download GTA 5 in Myanmar</h1>
-<p>GTA 5 is one of the most popular and exciting games in the world. It is an open-world action-adventure game that lets you explore the city of Los Santos and its surrounding areas. You can play as three different characters: Michael, Franklin and Trevor, each with their own story and skills. You can also switch between them at any time and experience different missions, activities and challenges.</p>
-<h2>gta 5 download myanmar</h2><br /><p><b><b>Download</b> ===== <a href="https://byltly.com/2uKykq">https://byltly.com/2uKykq</a></b></p><br /><br />
-<p>However, if you live in Myanmar, you may have some difficulties in downloading GTA 5. This is because the game is not officially available in the country due to some legal issues. Moreover, the internet speed and bandwidth in Myanmar are not very reliable, which can make the download process very slow and frustrating.</p>
-<p>But don't worry, there are still some ways to download GTA 5 in Myanmar. In this article, we will show you how to do it step by step.</p>
-<h2>Method 1: Use a VPN</h2>
-<p>A VPN (Virtual Private Network) is a service that allows you to connect to a server in another country and access the internet from there. This way, you can bypass the geo-restrictions and censorship that may prevent you from downloading GTA 5 in Myanmar.</p>
-<p>Here are the steps to use a VPN to download GTA 5 in Myanmar:</p>
-<ol>
-<li>Choose a VPN service that has servers in countries where GTA 5 is available, such as the US, UK, Canada, Australia, etc. Some of the best VPNs for gaming are ExpressVPN, NordVPN, Surfshark and CyberGhost.</li>
-<li>Download and install the VPN app on your device. You can use a VPN on your PC, laptop, smartphone or tablet.</li>
-<li>Launch the VPN app and sign in with your account.</li>
-<li>Select a server location where GTA 5 is available and connect to it.</li>
-<li>Open your browser and go to the official website of GTA 5: https://www.rockstargames.com/V/</li>
-<li>Click on the "Buy Now" button and choose your preferred edition and platform.</li>
-<li>Follow the instructions to complete the purchase and download process.</li>
-<li>Enjoy playing GTA 5 in Myanmar!</li>
-</ol>
-<h2>Method 2: Use a Torrent</h2>
-<p>A torrent is a file that contains information about other files that are shared by users on a peer-to-peer network. You can use a torrent client such as BitTorrent or uTorrent to download these files on your device.</p>
-<p></p>
-<p>However, using a torrent to download GTA 5 in Myanmar is not recommended for several reasons. First of all, it is illegal and may violate the copyright laws of both Myanmar and the country where GTA 5 is produced. Secondly, it may expose you to malware and viruses that can harm your device and data. Thirdly, it may not guarantee the quality and performance of the game.</p>
-<p>If you still want to use a torrent to download GTA 5 in Myanmar, do it at your own risk. Here are the steps to do it:</p>
-<ol>
-<li>Find a reliable torrent website that has GTA 5 available for download. Some of the popular torrent sites are The Pirate Bay, RARBG, 1337x and Kickass Torrents.</li>
-<li>Download and install a torrent client on your device.</li>
-<li>Search for GTA 5 on the torrent website and choose a torrent file that has a high number of seeders (users who have the complete file) and leechers (users who are downloading the file).</li>
-<li>Download the torrent file and open it with your torrent client.</li>
-<li>Select a folder where you want to save the game files and start the download process.</li>
-<li>Wait for the download to finish. It may take several hours or days depending on your internet speed and the size of the game.</li>
-<li>Once the download is complete, open the folder where you saved the game files and run the setup.exe file.</li>
-<li>Follow the instructions to install GTA 5 on your device.</li>
-<li>Enjoy playing GTA 5 in Myanmar!</li>
-</ol></p> ddb901b051<br />
-<br />
-<br />
spaces/1line/AutoGPT/tests/local_cache_test.py
DELETED
@@ -1,67 +0,0 @@
-# sourcery skip: snake-case-functions
-"""Tests for LocalCache class"""
-import os
-import sys
-import unittest
-
-import pytest
-
-from autogpt.memory.local import LocalCache
-
-
-def mock_config() -> dict:
-    """Mock the Config class"""
-    return type(
-        "MockConfig",
-        (object,),
-        {
-            "debug_mode": False,
-            "continuous_mode": False,
-            "speak_mode": False,
-            "memory_index": "auto-gpt",
-        },
-    )
-
-
-@pytest.mark.integration_test
-class TestLocalCache(unittest.TestCase):
-    """Tests for LocalCache class"""
-
-    def setUp(self) -> None:
-        """Set up the test environment"""
-        self.cfg = mock_config()
-        self.cache = LocalCache(self.cfg)
-
-    def test_add(self) -> None:
-        """Test adding a text to the cache"""
-        text = "Sample text"
-        self.cache.add(text)
-        self.assertIn(text, self.cache.data.texts)
-
-    def test_clear(self) -> None:
-        """Test clearing the cache"""
-        self.cache.clear()
-        self.assertEqual(self.cache.data.texts, [])
-
-    def test_get(self) -> None:
-        """Test getting a text from the cache"""
-        text = "Sample text"
-        self.cache.add(text)
-        result = self.cache.get(text)
-        self.assertEqual(result, [text])
-
-    def test_get_relevant(self) -> None:
-        """Test getting relevant texts from the cache"""
-        text1 = "Sample text 1"
-        text2 = "Sample text 2"
-        self.cache.add(text1)
-        self.cache.add(text2)
-        result = self.cache.get_relevant(text1, 1)
-        self.assertEqual(result, [text1])
-
-    def test_get_stats(self) -> None:
-        """Test getting the cache stats"""
-        text = "Sample text"
-        self.cache.add(text)
-        stats = self.cache.get_stats()
-        self.assertEqual(stats, (4, self.cache.data.embeddings.shape))
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Convert Your RPGMVP Images to JPG with This Online Tool.md
DELETED
@@ -1,145 +0,0 @@
-
-<h1>RPGMVP to JPG Converter Download: How to Convert RPG Maker MV Encrypted PNG Files</h1>
-<p>If you are a fan of role-playing games (RPGs) created with RPG Maker MV, you may have encountered some image files with the extension .rpgmvp. These are encrypted PNG files that are used by the game engine to protect the assets from modification. However, sometimes you may want to convert these files to JPG format for various purposes, such as viewing, editing, sharing, or printing.</p>
-<p>In this article, we will show you how to convert rpgmvp to jpg using different methods, both online and offline. We will also explain the advantages and disadvantages of each method, and how to use an encryption key if you have one. By the end of this article, you will be able to choose the best rpgmvp to jpg converter software for your needs.</p>
-<h2>rpgmvp to jpg converter download</h2><br /><p><b><b>Download File</b> 🔗 <a href="https://urlin.us/2uSTSd">https://urlin.us/2uSTSd</a></b></p><br /><br />
-<h2>How to Convert RPGMVP to JPG Online</h2>
-<p>One of the easiest ways to convert rpgmvp to jpg is to use an online converter tool. There are many websites that offer this service for free, but we recommend using Docpose RPGMVP Converter, which is fast, secure, and simple. Here are the steps to follow:</p>
-<ol>
-<li>Go to [Docpose RPGMVP Converter](^13^) website.</li>
-<li>Upload your rpgmvp file(s) by dragging and dropping them or clicking on the "Choose File" button.</li>
-<li>Select "jpg" as the output format.</li>
-<li>Click on the "Convert" button and wait for the process to finish.</li>
-<li>Download or view your converted jpg file(s).</li>
-</ol>
-<p>The pros of using an online converter are:</p>
-<ul>
-<li>You don't need to install any software on your device.</li>
-<li>You can access it from any browser and any device.</li>
-<li>You can convert multiple files at once.</li>
-</ul>
-<p>The cons of using an online converter are:</p>
-<p>rpgmvp to jpg converter online free<br />
-rpgmvp to jpg converter github<br />
-rpgmvp to jpg converter mac<br />
-rpgmvp to jpg converter windows<br />
-rpgmvp to jpg converter linux<br />
-rpgmvp to jpg converter software<br />
-rpgmvp to jpg converter tool<br />
-rpgmvp to jpg converter app<br />
-rpgmvp to jpg converter apk<br />
-rpgmvp to jpg converter exe<br />
-rpgmvp to jpg converter python<br />
-rpgmvp to jpg converter rust<br />
-rpgmvp to jpg converter petschko<br />
-rpgmvp to jpg converter catink123<br />
-rpgmvp to jpg converter fileproinfo<br />
-rpgmvp to jpg converter batch<br />
-rpgmvp to jpg converter command line<br />
-rpgmvp to jpg converter drag and drop<br />
-rpgmvp to jpg converter open source<br />
-rpgmvp to jpg converter no watermark<br />
-rpgmvp to jpg converter without encryption key<br />
-rpgmvp to jpg converter decrypter<br />
-rpgmvp to jpg converter encryption key finder<br />
-rpgmvp to jpg converter for android<br />
-rpgmvp to jpg converter for ios<br />
-rpgmvp to jpg converter for windows 10<br />
-rpgmvp to jpg converter for mac os x<br />
-rpgmvp to jpg converter for linux ubuntu<br />
-rpgmvp to jpg converter for chromebook<br />
-rpgmvp to jpg converter for web browser<br />
-rpgmvp to jpg converter for RPG Maker MV games<br />
-rpgmvp to jpg converter for RPG Maker MZ games<br />
-rpgmvp to jpg converter for RPG Maker VX Ace games<br />
-rpgmvp to jpg converter for RPG Maker XP games<br />
-rpgmvp to jpg converter for RPG Maker 2003 games<br />
-how to convert rpgmvp to jpg online free<br />
-how to convert rpgmvp to jpg on mac<br />
-how to convert rpgmvp to jpg on windows<br />
-how to convert rpgmvp to jpg on linux<br />
-how to convert rpgmvp to jpg using github tools<br />
-how to convert rpgmvp to jpg using software tools<br />
-how to convert rpgmvp to jpg using command line tools<br />
-how to convert rpgmvp files into JPG files easily and quickly</p>
-<ul>
-<li>You need a stable internet connection.</li>
-<li>You may lose some image quality due to compression.</li>
-<li>You may not be able to convert files that are too large or encrypted with a key.</li>
-</ul>
-<h2>How to Convert RPGMVP to JPG Offline</h2>
-<p>If you prefer to convert rpgmvp to jpg offline, you can use a program that runs on your device. One of the best options is rpgmvp_converter, which is a simple program that can convert an (almost) proprietary picture format from the RPG Maker V game engine. Here are the steps to follow:</p>
-<ol>
-<li>Download the program from [GitHub](^1^) or from the [Releases](^15^) section.</li>
-<li>Extract the program executable from the zip file and place it in a folder of your choice.</li>
-<li>Open a terminal window in the folder with the program executable.</li>
-<li>Type out this command: <code>rpgmvp_converter path_to_your_file</code>, replacing <code>path_to_your_file</code> with the actual path to your file. For example, if your file is called picture.rpgmvp, you would enter this in terminal: <code>rpgmvp_converter picture.rpgmvp</code>.</li>
-<li>The program will create a new file called picture.png in the same folder as the original file.</li>
-<li>Open the png file with any image editor and save it as jpg.</li>
-</ol>
-<p>The pros of using an offline converter are:</p>
-<ul>
-<li>You don't need an internet connection.</li>
-<li>You have more control over the image quality and size.</li>
-<li>You can convert files that are encrypted with a key (see next section).</li>
-</ul>
-<p>The cons of using an offline converter are:</p>
-<ul>
-<li>You need to install and run a program on your device.</li>
-<li>You may encounter compatibility issues with different operating systems or devices.</li>
-<li>You can only convert one file at a time.</li>
-</ul>
-<h2>How to Convert RPGMVP to JPG with Encryption Key</h2>
-<p>Some rpgmvp files may be encrypted with a key that prevents them from being converted by normal methods. This is usually done by the game developers to protect their intellectual property. However, if you have the permission and the key to decrypt these files, you can use a tool called Petschko RPG-Maker MV-Decrypter, which is a web-based application that can decrypt and encrypt RPG Maker MV files. Here are the steps to follow:</p>
-<ol>
-<li>Go to [Petschko RPG-Maker MV-Decrypter] website.</li>
-<li>Click on the "Choose File" button and select your rpgmvp file.</li>
-<li>Enter the encryption key in the "Key" field. You can find the key in the game folder, under www/js/rpg_core.js, in a line that looks like this: <code>var encryptionKey = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx';</code>, where xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx is the key.</li>
-<li>Click on the "Decrypt" button and wait for the process to finish.</li>
-<li>Download or view your decrypted png file.</li>
-<li>Open the png file with any image editor and save it as jpg.</li>
-</ol>
-<p>The pros of using a decrypter with key are:</p>
-<ul>
-<li>You can convert files that are otherwise inaccessible.</li>
-<li>You can respect the game developers' wishes and rights.</li>
-<li>You can enjoy the original graphics and art of the game.</li>
-</ul>
-<p>The cons of using a decrypter with key are:</p>
-<ul>
-<li>You need to have the permission and the key from the game developers.</li>
-<li>You need an internet connection.</li>
-<li>You may lose some image quality due to compression.</li>
-</ul>
-<h2>Conclusion</h2>
-<p>In this article, we have shown you how to convert rpgmvp to jpg using different methods, both online and offline. We have also explained the advantages and disadvantages of each method, and how to use an encryption key if you have one. We hope that this article has helped you choose the best rpgmvp to jpg converter software for your needs.</p>
-<p>If you want to learn more about rpgmvp files and how they work, you can check out this [article] by Petschko, which explains the technical details of the encryption process. You can also visit [RPG Maker Web], which is the official website of RPG Maker MV, where you can find more resources, tutorials, and community support for creating your own RPGs.</p>
-<p>Thank you for reading this article. If you have any questions or feedback, please leave a comment below. Happy gaming!</p>
-<h2>FAQs</h2>
-<h3>What is the difference between rpgmvp and png files?</h3>
-<p>RPGMVP files are encrypted PNG files that are used by RPG Maker MV game engine. PNG files are standard image files that can be opened by any image viewer or editor. RPGMVP files need to be decrypted before they can be converted to other formats, such as JPG.</p>
-<h3>How can I open rpgmvp files without converting them?</h3>
-<p>The easiest way to open rpgmvp files without converting them is to play the RPG Maker MV game that contains them. You can also use a program called [RPG Maker MV Player], which is a free application that allows you to play any RPG Maker MV game without installing it on your device.</p>
-<h3>What are the best settings for rpgmvp to jpg conversion?</h3>
-<p>The best settings for rpgmvp to jpg conversion depend on your purpose and preference. Generally, you want to balance between image quality and file size. JPG files use lossy compression, which means that some data is lost when converting from png to jpg. Therefore, you may notice some loss of detail, sharpness, or color accuracy in the converted jpg files. To minimize this, you can choose a higher quality setting for the jpg output, such as 80% or 90%. However, this will also increase the file size, which may affect the loading speed or storage space of your device. To reduce the file size, you can choose a lower quality setting, such as 50% or 60%, but this will also reduce the image quality. You can experiment with different settings until you find the optimal balance for your needs.</p>
-<h3>How can I optimize jpg files for web publishing?</h3>
-<p>If you want to publish your converted jpg files on the web, such as on a blog, a website, or a social media platform, you may want to optimize them for faster loading and better performance. There are several ways to do this, such as:</p>
-<ul>
-<li>Resizing the images to fit the dimensions of your web page or screen.</li>
-<li>Cropping the images to remove unnecessary parts or focus on the main subject.</li>
-<li>Compressing the images to reduce the file size without compromising the quality too much.</li>
-<li>Using progressive jpg format, which loads the image gradually from low to high resolution, instead of baseline jpg format, which loads the image from top to bottom.</li>
-<li>Using a CDN (content delivery network), which is a service that distributes your images across multiple servers around the world, to improve the loading speed and reliability of your images.</li>
-</ul>
-<p>You can use various online tools or programs to perform these optimization tasks, such as [TinyJPG], [Image Resizer], [Crop Image], [Progressive JPEG Converter], or [Cloudflare].</p>
-<h3>How can I protect my rpgmvp files from unauthorized use?</h3>
-<p>If you are a game developer who uses RPG Maker MV to create your own RPGs, you may want to protect your rpgmvp files from unauthorized use, such as copying, modifying, or distributing them without your permission. There are several ways to do this, such as:</p>
-<ul>
-<li>Encrypting your rpgmvp files with a key that only you know.</li>
-<li>Using a digital watermark or signature on your rpgmvp files that identifies you as the owner.</li>
-<li>Using a license agreement or terms of service that specifies how your rpgmvp files can be used by others.</li>
-<li>Using a DRM (digital rights management) system that restricts how your rpgmvp files can be accessed or played by others.</li>
-</ul>
-<p>However, you should also be aware that none of these methods are foolproof, and there may be ways to bypass or break them. Therefore, you should always keep a backup of your original rpgmvp files and monitor their usage and distribution. You should also respect the rights and wishes of other game developers who use RPG Maker MV and do not use their rpgmvp files without their permission.</p> 197e85843d<br />
-<br />
-<br />
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download APK Bar Bar Everything You Need to Know About Live Streaming Apps.md
DELETED
@@ -1,88 +0,0 @@
-
-<h1>How to Download APK Bar Bar: A Guide for Android Users</h1>
-<p>If you are looking for a way to spice up your online entertainment, you might want to check out <strong>APK Bar Bar</strong>. This is an app that lets you watch live streams of various hosts from different countries, such as Indonesia, Vietnam, Thailand, Russia, Korea, America, and more. You can interact with them, send gifts, chat with other viewers, and enjoy a variety of content.</p>
-<p>In this article, we will show you how to download APK Bar Bar safely and easily on your Android device. We will also explain what APK Bar Bar is, why you might want to download it, how to install and use it, and answer some frequently asked questions. So, let's get started!</p>
-<h2>download apk bar bar</h2><br /><p><b><b>Download</b> ★★★★★ <a href="https://urlin.us/2uSS5M">https://urlin.us/2uSS5M</a></b></p><br /><br />
-<h2>What is APK Bar Bar?</h2>
-<p>APK Bar Bar is an app that allows you to watch live broadcasts of various hosts from different countries. You can choose from a wide range of categories, such as music, dance, comedy, gaming, beauty, lifestyle, education, and more. You can also filter by language, region, gender, age, popularity, and other criteria.</p>
-<p>Some of the features and benefits of using APK Bar Bar are:</p>
-<ul>
-<li>You can watch high-quality live streams for free.</li>
-<li>You can interact with the hosts and other viewers through chat messages.</li>
-<li>You can send gifts to your favorite hosts and show your support.</li>
-<li>You can follow your favorite hosts and get notified when they go live.</li>
-<li>You can discover new hosts and content that suit your preferences.</li>
-<li>You can enjoy a diverse and multicultural online community.</li> <h2>Why Download APK Bar Bar?</h2>
-<p>You might be wondering why you should download APK Bar Bar when there are so many other live streaming apps available. Well, here are some of the reasons why APK Bar Bar stands out from the crowd:</p>
-<ul>
-<li>It offers a unique and diverse selection of hosts and content from different countries and cultures. You can learn new things, explore new perspectives, and have fun at the same time.</li>
-<li>It has a user-friendly and intuitive interface that makes it easy to navigate and find what you are looking for. You can also customize your profile, settings, and preferences to suit your needs.</li>
-<li>It has a loyal and active community of users who share their opinions, feedback, and suggestions. You can make new friends, join groups, and participate in events and contests.</li>
-<li>It has a responsive and helpful customer service team that is ready to assist you with any issues or questions you might have. You can also report any inappropriate or abusive behavior and get it resolved quickly.</li>
-<li>It is constantly updated and improved to provide you with the best possible online experience. You can also enjoy new features, functions, and content regularly.</li>
-</ul>
-<p>So, if you are looking for a live streaming app that offers you more than just entertainment, APK Bar Bar is the one for you!</p>
-<h2>How to Download APK Bar Bar Safely and Easily?</h2>
-<p>Now that you know what APK Bar Bar is and why you should download it, let's see how you can do it safely and easily on your Android device. Here are the steps you need to follow:</p>
-<ol>
-<li>Go to the official website of APK Bar Bar at <a href="">https://apkbarbar.com/</a>. This is the most trusted and reliable source to download APK Bar Bar. Do not download APK Bar Bar from any other websites or sources, as they might contain viruses, malware, or other harmful elements.</li>
-<li>On the homepage, you will see a button that says "Download APK". Tap on it and wait for the download to start. You might see a pop-up message that asks you to allow downloads from unknown sources. Tap on "OK" or "Allow" to proceed.</li>
-<li>Once the download is complete, go to your file manager or downloads folder and locate the APK file of APK Bar Bar. Tap on it and follow the instructions to install it on your device. You might need to grant some permissions to APK Bar Bar to access your device's features and functions.</li>
-<li>After the installation is done, you will see an icon of APK Bar Bar on your home screen or app drawer. Tap on it and launch the app. You will need to create an account or log in with your existing account to use APK Bar Bar.</li>
-<li>Congratulations! You have successfully downloaded and installed APK Bar Bar on your Android device. You can now enjoy watching live streams of various hosts from different countries and interact with them.</li>
-</ol>
-<p>Note: Downloading APK files from unknown sources can be risky and dangerous for your device's security and performance. Make sure you have a reliable antivirus or anti-malware app installed on your device before downloading any APK files. Also, scan the APK file before installing it to make sure it is safe and clean.</p>
-<h2>How to Install and Use APK Bar Bar?</h2>
-<p>Installing APK Bar Bar is easy, but how do you use it effectively and efficiently? Here are some tips on how to use APK Bar Bar:</p>
-<p>download apk navigation bar<br />
-download apk live bar bar indonesia<br />
-download apk status bar<br />
-download apk sound bar<br />
-download apk live bar bar vietnam<br />
-download apk action bar<br />
-download apk live bar bar thailand<br />
-download apk bottom bar<br />
-download apk live bar bar rusia<br />
-download apk volume bar<br />
-download apk live bar bar korea<br />
-download apk swipe bar<br />
-download apk live bar bar amerika<br />
-download apk notification bar<br />
-download apk live bar bar bebas parah<br />
-download apk search bar<br />
-download apk live bar bar no sensor<br />
-download apk gesture bar<br />
-download apk live bar bar 2023<br />
-download apk control center ios 14 - screen recorder, assistive touch, night mode, screen capture, video recorder, screenshot, screen recorder with audio, virtual home button, touch screen anywhere with one hand mode, lock screen, quick settings, smart control - iphone x control center, music control, brightness control, volume control, auto rotation, flashlight, do not disturb mode, calculator, camera, alarm clock and more.<br />
-download apk live streaming hot - watch live stream videos of talented stars from all over the world on your phone. You can chat with them and send gifts to show your support. You can also join the fun by broadcasting your own talents and skills. Whether you like singing, dancing, gaming, cooking, or anything else, you can find your audience here. You can also follow your favorite streamers and get notified when they go live. Join the community and enjoy the best live entertainment on your phone.<br />
-download apk floating toolbar - a handy tool that allows you to access various functions and apps from anywhere on your screen. You can customize the toolbar with your favorite apps and shortcuts. You can also adjust the size, position, and transparency of the toolbar. You can use the floating toolbar to launch apps, switch tasks, take screenshots, record videos, adjust settings, and more. You can also hide the toolbar when you don't need it.<br />
-download apk video editor - a powerful and easy-to-use video editing app that lets you create amazing videos with your photos and clips. You can trim, crop, rotate, merge, split, add music, apply filters, stickers, texts, transitions, effects, and more. You can also adjust the speed, brightness, contrast, saturation, and other parameters of your videos. You can export your videos in HD quality and share them on social media platforms.<br />
-download apk music player - a stylish and feature-rich music player app that lets you enjoy your favorite songs on your phone. You can browse and play music by albums, artists, genres, playlists, folders, and more. You can also create and edit your own playlists. You can customize the sound quality with the equalizer and bass booster. You can also change the theme and appearance of the music player according to your preference.<br />
-download apk photo editor - a fun and creative photo editing app that lets you enhance your pictures with various tools and effects. You can crop, resize, rotate, flip, adjust color, brightness,</p>
-<ul>
-<li>To watch live streams of hosts from different countries, tap on the "Live" tab at the bottom of the screen. You will see a list of categories that you can choose from, such as music, dance, comedy, gaming, beauty, lifestyle, education, etc. Tap on any category that interests you and browse through the available live streams.</li>
-<li>To filter by language, region, gender, age, popularity, or other criteria, tap on the "Filter" icon at the top right corner of the screen. You will see a menu that allows you to adjust your preferences. Tap on "Apply" when you are done.</li>
-<li>To interact with the hosts and other viewers, tap on the chat box at the bottom of the screen. You can type your message and send it by tapping on the "Send" icon. You can also use emojis, stickers, gifs, or voice messages to express yourself.</li>
-<li>To send gifts to your favorite hosts, tap on the "Gift" icon at the bottom right corner of the screen. You will see a menu that shows you various types of gifts that you can send, such as flowers, hearts, stars, diamonds, etc. Tap on any gift that you want to send and confirm your purchase. You will need coins or diamonds to buy or earn them by completing tasks or watching ads.</li>
-<li>To follow your favorite hosts, tap on the "Follow" button at the top of the screen. You will see a list of hosts that you have followed and get notified when they go live. You can also unfollow them by tapping on the "Unfollow" button.</li>
-<li>To discover new hosts and content, tap on the "Discover" tab at the bottom of the screen. You will see a list of recommended live streams that you might like based on your preferences and history. You can also swipe left or right to see more options.</li>
-<li>To troubleshoot any issues or errors, tap on the "Settings" icon at the top left corner of the screen. You will see a menu that allows you to access various options, such as feedback, help, privacy, terms, etc. Tap on any option that you need and follow the instructions.</li>
-</ul>
-<p>Using APK Bar Bar is simple and fun, but make sure you follow the rules and guidelines of the app and respect the hosts and other users. Do not engage in any inappropriate or abusive behavior, such as spamming, trolling, harassing, bullying, or violating any laws or regulations. If you encounter any such behavior, report it immediately and block the user.</p>
-<h2>Conclusion</h2>
-<p>APK Bar Bar is an app that lets you watch live streams of various hosts from different countries and interact with them. It offers you a unique and diverse online entertainment experience that you can enjoy for free. You can also send gifts, follow your favorite hosts, discover new content, and join a vibrant and multicultural community.</p>
-<p>If you are interested in trying out APK Bar Bar, you can download it safely and easily from its official website at <a href="">https://apkbarbar.com/</a>. You can also follow the steps we have provided in this article to install and use it on your Android device. We hope you have fun with APK Bar Bar and share your feedback with us!</p>
-<p>Thank you for reading this article. We hope you found it helpful and informative. If you have any questions or comments, please feel free to leave them below. We would love to hear from you!</p>
-<h2>FAQs</h2>
-<h3>What are the requirements to use APK Bar Bar?</h3>
-<p>To use APK Bar Bar, you need an Android device that runs on Android 4.4 or higher and has at least 1 GB of RAM and 100 MB of free storage space. You also need a stable internet connection and a valid email address or phone number to create an account.</p>
-<h3>Is APK Bar Bar safe and legal?</h3>
-<p>APK Bar Bar is safe and legal as long as you download it from its official website at <a href="">https://apkbarbar.com/</a>. Do not download it from any other websites or sources, as they might contain viruses, malware, or other harmful elements. Also, make sure you scan the APK file before installing it to make sure it is safe and clean.</p>
-<h3>How can I earn coins or diamonds on APK Bar Bar?</h3>
-<p>You can earn coins or diamonds on APK Bar Bar by completing tasks or watching ads. You can also buy them with real money through various payment methods. Coins or diamonds are used to send gifts to your favorite hosts or exchange them for cash or other rewards.</p>
-<h3>How can I contact APK Bar Bar customer service?</h3>
-<p>You can contact APK Bar Bar customer service by tapping on the "Settings" icon at the top left corner of the screen and then tapping on "Feedback". You can also email them at <a href="mailto:[email protected]">[email protected]</a> or visit their Facebook page at <a href="">https://www.facebook.com/apkbarbar</a>.</p>
-<h3>How can I delete my APK Bar Bar account?</h3>
-<p>You can delete your APK Bar Bar account by tapping on the "Settings" icon at the top left corner of the screen and then tapping on "Account". You will see an option that says "Delete Account". Tap on it and confirm your decision. Once you delete your account, all your data and history will be erased and you will not be able to recover them.</p> 197e85843d<br />
-<br />
-<br />
spaces/1phancelerku/anime-remove-background/ 2 2 .md
DELETED
@@ -1,99 +0,0 @@
-<br />
-<h1>Case Simulator 2 Standoff: A Guide for Beginners</h1>
-<p>If you are a fan of Standoff 2, the dynamic first-person shooter game with realistic graphics and animation, you might have wondered what it would be like to open more cases and boxes and get rare and powerful skins for your weapons. Well, wonder no more, because there is a game that lets you do just that: Case Simulator 2 Standoff.</p>
-<p>Case Simulator 2 Standoff is a simulation game that mimics the case opening mechanics of Standoff 2. You can open all kinds of cases and boxes, from the basic ones to the newest collections, and get a chance to win skins, stickers, charms, and even knives. You can also simulate battles in the background to earn gold and other items, and try your luck in various game modes such as Upgrade, Jackpot, Crash, Quiz, Tower, Bomb Defuse, and more.</p>
-<h2>скачать кейс симулятор 2 стандофф</h2><br /><p><b><b>Download Zip</b> … <a href="https://jinyurl.com/2uNKnu">https://jinyurl.com/2uNKnu</a></b></p><br /><br />
-<p>In this article, we will give you a brief overview of what Case Simulator 2 Standoff is, what features it has, how to play it, and some tips and tricks to help you get the most out of it. We will also answer some frequently asked questions about the game. So, let's get started!</p>
-<h2>What is Case Simulator 2 Standoff?</h2>
-<p>Case Simulator 2 Standoff is a game created by fans of Standoff 2 for fans of Standoff 2. It is not an official game by the developers of Standoff 2, nor is it affiliated with them in any way. It is simply a fun and entertaining way to experience the thrill of opening cases and boxes without spending real money or affecting your progress in the original game.</p>
-<p>скачать кейс симулятор 2 стандофф на пк<br />
-скачать кейс симулятор 2 стандофф мод много денег<br />
-скачать кейс симулятор 2 стандофф последняя версия<br />
-скачать кейс симулятор 2 стандофф бесплатно<br />
-скачать кейс симулятор 2 стандофф взлом<br />
-скачать кейс симулятор 2 стандофф на андроид<br />
-скачать кейс симулятор 2 стандофф на ios<br />
-скачать кейс симулятор 2 стандофф на компьютер<br />
-скачать кейс симулятор 2 стандофф на телефон<br />
-скачать кейс симулятор 2 стандофф на windows<br />
-как скачать кейс симулятор 2 стандофф<br />
-где скачать кейс симулятор 2 стандофф<br />
-отзывы о кейс симулятор 2 стандофф<br />
-обзор кейс симулятор 2 стандофф<br />
-видео про кейс симулятор 2 стандофф<br />
-играть в кейс симулятор 2 стандофф онлайн<br />
-играть в кейс симулятор 2 стандофф без интернета<br />
-играть в кейс симулятор 2 стандофф без регистрации<br />
-играть в кейс симулятор 2 стандофф бесплатно<br />
-играть в кейс симулятор 2 стандофф на пк<br />
-как играть в кейс симулятор 2 стандофф<br />
-как выигрывать в кейс симулятор 2 стандофф<br />
-как получить легендарное оружие в кейс симулятор 2 стандофф<br />
-как открыть все кейсы в кейс симулятор 2 стандофф<br />
-как заработать голду в кейс симулятор 2 стандофф<br />
-коды для кейс симулятор 2 стандофф<br />
-читы для кейс симулятор 2 стандофф<br />
-хаки для кейс симулятор 2 стандофф<br />
-трюки для кейс симулятор 2 стандофф<br />
-подсказки для кейс симулятор 2 стандофф<br />
-руководство по кейс симулятор 2 стандофф<br />
-инструкция по кейс симулятор 2 стандофвв<br />
-установить кейс симулятор 2 стандоввв <br />
-обновить кейс симулятор 2 становвв <br />
-удалить кейс симулятор 2 становвв <br />
-переустановить кейс симулятор 2 становвв <br />
-запустить кейс симулятор 2 становвв <br />
-остановить кейс симулятор 2 становвв <br />
-выйти из кейс симулятор 2 становвв <br />
-перезапустить кейс симулятор 2 становвв</p>
-<p>The game is available for Android devices on Google Play Store , and for iOS devices on App Store . You can also play it online on Yandex Games . The game is free to download and play, but it contains ads and offers in-app purchases.</p>
-<h2>Features of Case Simulator 2 Standoff</h2>
-<p>Case Simulator 2 Standoff has many features that make it an enjoyable and addictive game for anyone who loves Standoff 2. Here are some of them:</p>
-<ul>
-<li><b>All cases and boxes from Standoff 2</b>: You can open any case or box that is available in Standoff 2, from the basic ones like Standard or Special to the newest ones like Dreams & Nightmares or Recoil. You can also open souvenir cases that contain special skins with stickers from tournaments.</li>
-<li><b>Various skins, stickers, charms, and knives</b>: You can get all kinds of items from the cases and boxes, including skins for your weapons with different rarities and qualities, stickers that you can apply on your skins, charms that you can hang on your weapons, and knives that you can use as melee weapons.</li>
-<li><b>Simulated battles</b>: You can simulate battles in the background while you open cases and boxes. This way, you can earn gold and other items that you can use to buy more cases or boxes or play other game modes.</li>
-<li><b>Multiple game modes</b>: You can try your luck in different game modes such as Upgrade, Jackpot, Crash, Quiz, Tower, Bomb Defuse, and more. Each game mode has its own rules and rewards. For example, in Upgrade mode, you can upgrade your skins up to x10 times of their original value; in Jackpot mode, you can gamble your skins away with other players; in Crash mode, you can bet on a multiplier that can crash any second; in Quiz mode, you can test your knowledge about Standoff 2; in Tower mode, you can build a tower of skins; in Bomb Defuse mode, you can defuse a bomb before it explodes.</li>
-<li><b>Marketplace </li>
-<li><b>Marketplace</b>: You can buy and sell skins, stickers, charms, and knives with other players in the marketplace. You can also trade your items with other players or exchange them for gold or gems.</li>
-<li><b>Inventory</b>: You can view and manage your items in your inventory. You can also apply stickers and charms on your skins, or equip your skins and knives on your weapons.</li>
-<li><b>Statistics</b>: You can track your progress and achievements in the game. You can see how many cases and boxes you have opened, how many skins, stickers, charms, and knives you have collected, how much gold and gems you have earned, and more.</li>
-<li><b>Settings</b>: You can customize your game experience by changing the language, sound, graphics, notifications, and other options.</li>
-</ul>
-<h2>How to play Case Simulator 2 Standoff?</h2>
-<p>Playing Case Simulator 2 Standoff is very easy and intuitive. Here are the basic steps to get you started:</p>
-<ol>
-<li><b>Download and install the game</b>: You can download and install the game from Google Play Store , App Store , or Yandex Games . The game is free to play, but it contains ads and offers in-app purchases.</li>
-<li><b>Open the game and choose a mode</b>: When you open the game, you will see a menu with different modes to choose from. You can start by opening cases and boxes in the main mode, or you can try other modes such as Upgrade, Jackpot, Crash, Quiz, Tower, Bomb Defuse, and more.</li>
-<li><b>Open cases and boxes</b>: To open a case or a box, you need to have enough gold or gems to buy it. You can earn gold by simulating battles in the background or by playing other modes. You can earn gems by watching ads or by buying them with real money. Once you have enough currency, you can select a case or a box from the list and tap on it to open it. You will see a spinning wheel with different items on it. The item that stops at the center of the wheel is the item that you get. You can also skip the animation by tapping on the screen.</li>
-<li><b>Collect items</b>: The items that you get from opening cases and boxes are automatically added to your inventory. You can view and manage your items in your inventory. You can also apply stickers and charms on your skins, or equip your skins and knives on your weapons.</li>
-<li><b>Buy, sell, trade, or exchange items</b>: If you want to get more items or get rid of some items that you don't need, you can use the marketplace. In the marketplace, you can buy and sell skins, stickers, charms, and knives with other players using gold or gems. You can also trade your items with other players or exchange them for gold or gems.</li>
-<li><b>Have fun</b>: The most important thing is to have fun while playing Case Simulator 2 Standoff. You can enjoy opening cases and boxes and collecting items without spending real money or affecting your progress in Standoff 2. You can also challenge yourself in different game modes and compete with other players. You can also share your results with your friends on social media.</li>
-</ol>
-<h2>Tips and tricks for Case Simulator 2 Standoff</h2>
-<p>To help you get the most out of Case Simulator 2 Standoff, here are some tips and tricks that you might find useful:</p>
-<ul>
-<li><b>Watch ads for free gems</b>: If you want to get more gems without spending real money, you can watch ads in exchange for free gems. You can watch up to 10 ads per day and get 10 gems per ad. You can use these gems to buy more cases or boxes or play other modes.</li>
-<li><b>Simulate battles for free gold</b>: If you want to get more gold without playing other modes, you can simulate battles in the background while you open cases and boxes. You can choose between two teams: Alpha or Bravo. The team that wins the battle will earn more gold than the team that loses. You can also change the difficulty level of the battle: Easy, Normal, Hard, or Extreme. The higher the difficulty level, the more gold you will earn.</li>
-<li><b>Upgrade your skins for higher value</b>: If you want to increase the value of your skins without selling them, you can upgrade them in the Upgrade mode. In this mode, you can choose a skin that you want to upgrade and a skin that you want to get. The chance of success depends on the difference between the values of the two skins. You can also use gems to increase the chance of success. If you succeed, you will get the upgraded skin; if you fail, you will lose the original skin.</li>
-<li><b>Play Jackpot for high-risk high-reward</b>: If you want to gamble your skins away with other players, you can play Jackpot mode. In this mode, you can join a room with up to 10 players and put your skins into a common pot. The value of the pot depends on the total value of the skins in it. The higher the value of the pot, the higher the fee that is deducted from it. After a countdown, a random winner is chosen and gets the whole pot minus the fee. The chance of winning depends on the value of your skins compared to the value of the pot. The higher the value of your skins, the higher the chance of winning.</li>
-<li><b>Play Crash for fast-paced action</b>: If you want to bet on a multiplier that can crash any second, you can play Crash mode. In this mode, you can place a bet on a multiplier that starts at 1x and increases exponentially until it crashes. You can cash out at any time before it crashes and get your bet multiplied by the current multiplier. The longer you wait, the higher the multiplier, but also the higher the risk of crashing. You can also use auto-cash out to set a target multiplier that will automatically cash out your bet when it is reached.</li>
-<li><b>Play Quiz for fun and knowledge</b>: If you want to test your knowledge about Standoff 2, you can play Quiz mode. In this mode, you will be asked 10 questions about Standoff 2, such as maps, weapons, skins, modes, etc. You will have 10 seconds to answer each question. The more questions you answer correctly, the more gold you will earn.</li>
-<li><b>Play Tower for strategic thinking</b>: If you want to build a tower of skins, you can play Tower mode. In this mode, you will be given a random skin and a tower with 10 slots. You can place the skin on any slot of the tower, but you have to follow these rules: You can only place a skin on an empty slot or on a slot with a skin of lower value; You can only place a skin on a slot that is adjacent to another slot with a skin; You can only place a skin on a slot that is not blocked by another skin above it. The goal is to fill all 10 slots with skins without breaking any rule. The higher the value of your tower, the more gold you will earn.</li>
-<li><b>Play Bomb Defuse for quick reflexes</b>: If you want to defuse a bomb before it explodes, you can play Bomb Defuse mode. In this mode, you will be shown a bomb with four wires: red, blue, green, and yellow. You will also be shown a sequence of colors that indicates which wire to cut. You have to cut the wires in the correct order and in time to defuse the bomb. The faster you defuse the bomb, the more gold you will earn.</li>
-</ul>
-<h2>FAQ about Case Simulator 2 Standoff</h2>
-<p>Here are some frequently asked questions about Case Simulator 2 Standoff and their answers:</p>
-<ol>
-<li><b>Can I transfer my items from Case Simulator 2 Standoff to Standoff 2?</b>: No, you cannot transfer your items from Case Simulator 2 Standoff to Standoff 2 or vice versa. Case Simulator 2 Standoff is not an official game by the developers of Standoff 2, nor is it affiliated with them in any way. It is simply a simulation game that mimics the case opening mechanics of Standoff 2.</li>
-<li><b>Can I play Case Simulator 2 Standoff offline?</b>: Yes, you can play Case Simulator 2 Standoff offline without an internet connection. However, some features such as marketplace, statistics, and ads may not work properly offline.</li>
-<li><b>How can I get more gems in Case Simulator 2 Standoff?</b>: There are several ways to get more gems in Case Simulator 2 Standoff: You can watch ads for free gems; You can buy gems with real money; You can exchange gold for gems; You can sell or trade your items for gems; You can win gems in some game modes such as Jackpot or Crash.</li>
-<li><b>How can I get rare and legendary skins in Case Simulator 2 Standoff?</b>: There is no guaranteed way to get rare and legendary skins in Case Simulator 2 Standoff. It all depends on your luck and probability. However, there are some factors that may increase your chances of getting rare and legendary skins: You can open more cases or boxes; You can open higher-tier cases or boxes that have higher chances of dropping rare and legendary skins; You can upgrade your skins in the Upgrade mode; You can gamble your skins in the Jackpot or Crash mode; You can buy or trade rare and legendary skins in the marketplace.</li>
-<li><b>Is Case Simulator 2 Standoff safe to play?</b>: Yes, Case Simulator 2 Standoff is safe to play as long as you download it from a trusted source such as Google Play Store , App Store , or Yandex Games . The game does not contain any viruses, malware, or harmful content. However, you should be careful when making in-app purchases or watching ads, as they may lead you to external websites or apps that may not be safe or secure.</li>
-</ol>
-<h2>Conclusion</h2>
-<p>Case Simulator 2 Standoff is a game that simulates opening cases and boxes from the popular first-person shooter game Standoff 2. You can get various skins, stickers, charms, and knives for your weapons and test your luck in different game modes. You can also simulate battles in the background to earn gold and other items, and buy, sell, trade, or exchange items with other players in the marketplace. The game is free to play, but it contains ads and offers in-app purchases.</p>
-<p>If you are a fan of Standoff 2, you might enjoy playing Case Simulator 2 Standoff as a way to experience the thrill of opening cases and boxes without spending real money or affecting your progress in the original game. You can also challenge yourself in different game modes and compete with other players. You can also share your results with your friends on social media.</p>
-<p>We hope that this article has given you a brief overview of what Case Simulator 2 Standoff is, what features it has, how to play it, and some tips and tricks to help you get the most out of it. We also hope that we have answered some of your questions about the game. If you have any more questions or feedback, feel free to leave a comment below. Thank you for reading!</p> 197e85843d<br />
-<br />
-<br />
spaces/1phancelerku/anime-remove-background/Download Eureka Season 1 and Discover the Hidden Secrets of a Rustic Town of Genius.md
DELETED
@@ -1,140 +0,0 @@
<h1>How to Download Eureka Season 1</h1>
<p>If you are looking for a fun and smart sci-fi show to watch, you might want to check out Eureka. This is a series that follows the adventures of a U.S. Marshal who becomes the sheriff of a secret town where the best minds in America work on cutting-edge inventions for the government. In this article, we will tell you what Eureka is, why you should watch it, and where you can download season 1 legally and safely.</p>
<h2>download eureka season 1</h2><br /><p><b><b>Download</b> ->>> <a href="https://jinyurl.com/2uNPHn">https://jinyurl.com/2uNPHn</a></b></p><br /><br />
<h2>What is Eureka?</h2>
<h3>A brief summary of the show's premise and main characters</h3>
<p>Eureka is an American science fiction comedy drama that aired on Syfy from 2006 to 2012. The show is set in a fictional town called Eureka, located in Oregon, where the most brilliant scientists and engineers live and work on various projects for the Department of Defense. The town is also home to Global Dynamics, a top-secret research facility that houses some of the most advanced technologies in the world.</p>
<p>The show's protagonist is Jack Carter, a U.S. Marshal who stumbles upon Eureka while transporting his rebellious daughter Zoe back to Los Angeles. He soon discovers that the town is not as normal as it seems, and that he has to deal with strange phenomena, rogue experiments, and eccentric residents on a daily basis. He also develops a friendship with Henry Deacon, a jack-of-all-trades who knows everything about Eureka, and a romantic interest in Allison Blake, a Department of Defense agent who oversees Global Dynamics.</p>
<p>Other main characters include Jo Lupo, Carter's deputy sheriff who is a former Army Ranger; Douglas Fargo, a clumsy but brilliant scientist who works at Global Dynamics; Nathan Stark, a Nobel Prize-winning physicist who is Allison's ex-husband and rival; Zoe Carter, Jack's teenage daughter who attends Eureka High School; Jim Taggart, an Australian zoologist who specializes in capturing dangerous creatures; and Beverly Barlowe, a psychiatrist who has ulterior motives.</p>
<h3>A list of the episodes in season 1 and their titles</h3>
<p>The first season of Eureka consists of 12 episodes that aired from July to October 2006. Here is the list of the episodes and their titles:</p>
<ul>
<li>Episode 1: Pilot</li>
<li>Episode 2: Many Happy Returns</li>
<li>Episode 3: Before I Forget</li>
<li>Episode 4: Alienated</li>
<li>Episode 5: Invincible</li>
<li>Episode 6: Dr. Nobel</li>
<li>Episode 7: Blink</li>
<li>Episode 8: Right as Raynes</li>
<li>Episode 9: Primal</li>
<li>Episode 10: Purple Haze</li>
<li>Episode 11: H.O.U.S.E. Rules</li>
<li>Episode 12: Once in a Lifetime</li>
</ul>
<h2>Why watch Eureka season 1?</h2>
<h3>The benefits of watching a sci-fi comedy drama</h3>
<p>Eureka is not your typical sci-fi show. It is not only about futuristic gadgets and scientific concepts, but also about humor, drama, and human relationships. It is a show that can make you laugh, think, and feel at the same time. It is a show that can appeal to a wide range of audiences, from sci-fi fans to comedy lovers, from adults to kids, from casual viewers to binge-watchers.</p>
<p>Some of the benefits of watching a sci-fi comedy drama like Eureka are:</p>
<ul>
<li>It stimulates your imagination and creativity. You can marvel at the amazing inventions and discoveries that the characters make, and wonder what it would be like to live in a town like Eureka. You can also learn some interesting facts and trivia about science and technology along the way.</li>
<li>It entertains and educates you. You can enjoy the witty dialogue, the hilarious situations, and the quirky personalities of the characters. You can also appreciate the deeper themes and messages that the show explores, such as the ethical dilemmas of scientific progress, the importance of teamwork and friendship, and the value of diversity and acceptance.</li>
<li>It relaxes and inspires you. You can escape from the stress and boredom of your daily life, and immerse yourself in a world of wonder and adventure. You can also feel inspired by the courage and resilience of the characters, who face various challenges and dangers with humor and optimism.</li>
</ul>
<h3>The positive reviews and ratings of the show</h3>
<p>Eureka is not only a fan-favorite show, but also a critically acclaimed one. The show has received positive reviews and ratings from both critics and viewers alike. Here are some of the accolades that the show has earned:</p>
<ul>
<li>The show has an average rating of 7.9 out of 10 on IMDb, based on over 46,000 user ratings. It also has an 88% audience score on Rotten Tomatoes, based on over 1,000 user ratings.</li>
<li>The show has been nominated for several awards, including four Primetime Emmy Awards, two Saturn Awards, two Visual Effects Society Awards, and one Hugo Award. It has also won one Leo Award for Best Visual Effects in a Dramatic Series.</li>
<li>The show has been praised by critics for its originality, humor, charm, and intelligence. For example, The New York Times called it "a smart new series" that "mixes science fiction with comedy". TV Guide described it as "a delightful surprise" that "combines elements of The X-Files, Twin Peaks and Northern Exposure". And Entertainment Weekly said it was "a clever sci-fi series" that "has a lot going for it".</li>
</ul>
<h3>The awards and nominations of the show</h3>
<p>Eureka is not only a popular show, but also a prestigious one. The show has been recognized by various organizations and institutions for its excellence in different aspects of television production. Here are some of the awards and nominations that the show has received:</p>
<table>
<tr><th>Award</th><th>Category</th><th>Year</th><th>Result</th></tr>
<tr><td>Primetime Emmy Award</td><td>Outstanding Special Visual Effects for a Series</td><td>2007</td><td>Nominated</td></tr>
<tr><td>Primetime Emmy Award</td><td>Outstanding Special Visual Effects for a Series</td><td>2008</td><td>Nominated</td></tr>
<tr><td>Primetime Emmy Award</td><td>Outstanding Special Visual Effects for a Series</td><td>2010</td><td>Nominated</td></tr>
<tr><td>Primetime Emmy Award</td><td>Outstanding Original Main Title Theme Music</td><td>2011</td><td>Nominated</td></tr>
<tr><td>Saturn Award</td><td>Best Syndicated/Cable Television Series</td><td>2008</td><td>Nominated</td></tr>
<tr><td>Saturn Award</td><td>Best Syndicated/Cable Television Series</td><td>2010</td><td>Nominated</td></tr>
<tr><td>Visual Effects Society Award</td><td>Outstanding Visual Effects in a Broadcast Series</td><td>2007</td><td>Nominated</td></tr>
<tr><td>Visual Effects Society Award</td><td>Outstanding Visual Effects in a Broadcast Series</td><td>2008</td><td>Nominated</td></tr>
<tr><td>Hugo Award</td><td>Best Dramatic Presentation, Short Form (for episode "Your Face or Mine")</td><td>2008</td><td>Nominated</td></tr>
<tr><td>Leo Award</td><td>Best Visual Effects in a Dramatic Series</td><td>2009</td><td>Won</td></tr>
</table>
<h2>Where to download Eureka season 1?</h2>
<h3>The legal and safe options for streaming or buying the show online</h3>
<p>If you are interested in watching Eureka season 1, you might be wondering where you can download it legally and safely. The good news is that there are several options available for you to choose from, depending on your preferences and budget. Here are some of the legal and safe options for streaming or buying the show online:</p>
<ul>
<li>Amazon Prime Video: You can stream Eureka season 1 on Amazon Prime Video, which is a subscription-based service that offers a wide range of movies and TV shows. You can also buy or rent individual episodes or the whole season on Amazon Prime Video, which is a pay-per-view service that allows you to download or watch online. The price for buying the whole season is $19.99, while the price for renting the whole season is $9.99. The price for buying or renting individual episodes is $1.99.</li>
<li>iTunes: You can buy Eureka season 1 on iTunes, which is a digital media store that offers music, movies, TV shows, and more. You can download or watch online the episodes or the whole season on iTunes, which is compatible with various devices such as iPhone, iPad, iPod, Apple TV, Mac, and PC. The price for buying the whole season is $19.99, while the price for buying individual episodes is $1.99.</li>
<li>Google Play: You can buy Eureka season 1 on Google Play, which is a digital distribution platform that offers apps, games, music, movies, TV shows, and more. You can download or watch online the episodes or the whole season on Google Play, which is compatible with various devices such as Android phones and tablets, Chromebooks, Chromecast, and Smart TVs. The price for buying the whole season is $19.99, while the price for buying individual episodes is $1.99.</li>
<li>Vudu: You can buy or rent Eureka season 1 on Vudu, which is a video-on-demand service that offers movies and TV shows. You can download or watch online the episodes or the whole season on Vudu, which is compatible with various devices such as Roku, PlayStation, Xbox, Smart TVs, and more. The price for buying the whole season is $19.99, while the price for renting the whole season is $9.99. The price for buying or renting individual episodes is $1.99.</li>
</ul>
<h3>The comparison of the prices and features of different platforms</h3>
<p>To help you decide which option is best for you, here is a comparison of the prices and features of different platforms that offer Eureka season 1:</p>
<table>
<tr><th>Platform</th><th>Price</th><th>Features</th></tr>
<tr><td>Amazon Prime Video</td><td>$19.99 (buy) / $9.99 (rent) / $0 (stream)</td><td>- Stream with Prime membership ($12.99/month or $119/year)<br>- Download or watch online<br>- HD quality<br>- Closed captions<br>- Watch on multiple devices<br>- No ads</td></tr>
<tr><td>iTunes</td><td>$19.99 (buy) / N/A (rent) / N/A (stream)</td><td>- Download or watch online<br>- HD quality<br>- Closed captions<br>- Watch on multiple devices<br>- No ads</td></tr>
<tr><td>Google Play</td><td>$19.99 (buy) / N/A (rent) / N/A (stream)</td><td>- Download or watch online<br>- HD quality<br>- Closed captions<br>- Watch on multiple devices<br>- No ads</td></tr>
<tr><td>Vudu</td><td>$19.99 (buy) / $9.99 (rent) / N/A (stream)</td><td>- Download or watch online<br>- HD quality<br>- Closed captions<br>- Watch on multiple devices<br>- No ads</td></tr>
</table>
<h3>The table of the download options and their links</h3>
<p>Here is a table of the download options and their links for Eureka season 1:</p>
<table>
<tr><th>Platform</th><th>Link</th></tr>
<tr><td>Amazon Prime Video</td><td>[Download Eureka Season 1 on Amazon Prime Video]</td></tr>
<tr><td>iTunes</td><td>[Download Eureka Season 1 on iTunes]</td></tr>
<tr><td>Google Play</td><td>[Download Eureka Season 1 on Google Play]</td></tr>
<tr><td>Vudu</td><td>[Download Eureka Season 1 on Vudu]</td></tr>
</table>
<h2>Conclusion</h2>
<p>Eureka is a sci-fi comedy drama that you don't want to miss. It is a show that combines science, humor, and heart in a unique and entertaining way. It is a show that features a talented cast, a creative plot, and stunning visual effects. It is a show that has received rave reviews and awards from critics and fans alike.</p>
<p>If you are interested in watching Eureka season 1, you have several options to download it legally and safely. You can stream it on Amazon Prime Video, or buy or rent it on iTunes, Google Play, or Vudu. You can compare the prices and features of different platforms, and choose the one that suits you best. You can also click on the links provided in the table above to access the download options directly.</p>
<p>So what are you waiting for? Download Eureka season 1 today, and enjoy the amazing adventures of Jack Carter and his friends in the town of Eureka. You will not regret it!</p>
<h2>FAQs</h2>
<h3>Q1: Is Eureka based on a true story?</h3>
<p>A1: No, Eureka is not based on a true story. It is a fictional show that was created by Andrew Cosby and Jaime Paglia. However, some of the scientific concepts and inventions that are featured in the show are inspired by real-life research and technology.</p>
<h3>Q2: How many seasons are there in Eureka?</h3>
<p>A2: There are five seasons in Eureka, with a total of 77 episodes. The show ran from 2006 to 2012, and ended with a special Christmas episode.</p>
<h3>Q3: Who are the creators and stars of Eureka?</h3>
<p>A3: The creators of Eureka are Andrew Cosby and Jaime Paglia, who also served as executive producers and writers for the show. The stars of Eureka are Colin Ferguson as Jack Carter, Salli Richardson-Whitfield as Allison Blake, Joe Morton as Henry Deacon, Erica Cerra as Jo Lupo, Neil Grayston as Douglas Fargo, Ed Quinn as Nathan Stark, Jordan Hinson as Zoe Carter, Matt Frewer as Jim Taggart, and Debrah Farentino as Beverly Barlowe.</p>
<h3>Q4: What is the spin-off series of Eureka?</h3>
<p>A4: The spin-off series of Eureka is Warehouse 13, which is another sci-fi comedy drama that aired on Syfy from 2009 to 2014. Warehouse 13 is about a secret government facility that stores supernatural artifacts collected from around the world. The show has crossed over with Eureka several times, featuring guest appearances from some of the characters.</p>
<h3>Q5: Where was Eureka filmed?</h3>
<p>A5: Eureka was filmed mostly in Vancouver, British Columbia, Canada. Some of the locations that were used for filming include Burnaby Village Museum, Riverview Hospital, Lynn Canyon Park, Britannia Beach, and Fort Langley.</p>
spaces/1phancelerku/anime-remove-background/Final Thoughts The Punjabi Song by AP Dhillon Shinda Kahlon Gminxr that Everyone is Talking About.md
DELETED
@@ -1,101 +0,0 @@
<h1>Final Thoughts by AP Dhillon: A Punjabi Song That Will Make You Feel</h1>
<h2>Introduction</h2>
<p>Final Thoughts is a Punjabi song by AP Dhillon, Shinda Kahlon, and Gminxr, released in 2022. The song is a heartfelt expression of love, regret, and hope, as the singer reflects on his past relationship and wishes for a better future. The song is part of AP Dhillon's album Hidden Gems, which showcases his versatility and talent as a singer, rapper, songwriter, and record producer.</p>
<p>The song has gained immense popularity among Punjabi music lovers, as well as listeners from other regions and countries. The song has over 100 million views on YouTube, and has also been featured on various music streaming services, such as Spotify, Apple Music, Amazon Music, and more. You can listen to the song on any of these platforms, or download it legally from websites that offer free music downloads, such as [SoundCloud], [Amazon], or [Spotify].</p>
<h2>final thoughts ap dhillon song download</h2><br /><p><b><b>Download</b> ❤ <a href="https://jinyurl.com/2uNJNP">https://jinyurl.com/2uNJNP</a></b></p><br /><br />
<h2>The Lyrics and Meaning of Final Thoughts</h2>
<p>The lyrics of Final Thoughts are written by Shinda Kahlon, who is also a singer and rapper. The lyrics are in Punjabi, with some English words mixed in. The lyrics convey a mix of emotions, such as sadness, nostalgia, anger, guilt, forgiveness, and optimism. The singer addresses his ex-girlfriend, who has left him for someone else. He tells her that he still loves her, but he also blames her for breaking his heart. He admits that he made some mistakes, but he also asks her to remember the good times they had together. He hopes that she will come back to him someday, but he also wishes her happiness with her new partner. He says that these are his final thoughts before he moves on with his life.</p>
<p>Some of the key lines and verses from the song are:</p>
<ul>
<li>"Mainu chhad ke tu kise hor nu mil gayi ae" (You left me and found someone else)</li>
<li>"Mainu pata si tu kade mere naal ni rahugi" (I knew you would never stay with me)</li>
<li>"Par main tainu pyar karda si" (But I loved you)</li>
<li>"Tere naal bitaye din raat yaad karke" (Remembering the days and nights I spent with you)</li>
<li>"Mainu lagda si tu meri zindagi ae" (I thought you were my life)</li>
<li>"Par tu taan meri zindagi hi chhin gayi ae" (But you took away my life)</li>
<li>"Mainu maaf kar de" (Forgive me)</li>
<li>"Main tainu khush dekhna chaunda si" (I wanted to see you happy)</li>
<li>"Par tere bina main khush ni hona" (But I can't be happy without you)</li>
<li>"Eh mere final thoughts ne" (These are my final thoughts)</li>
</ul>
<h2>The Music and Video of Final Thoughts</h2>
<p>The music of Final Thoughts is composed by Gminxr, who is also a record producer and DJ. The music is in the genre of hip hop, with elements of trap, R&B, and pop. The music is catchy, upbeat, and melodic, with a smooth and soothing vocal delivery by AP Dhillon. The music also features some samples and effects, such as sirens, gunshots, and echoes, that add to the mood and intensity of the song. The music is well-produced and mixed, with a clear and balanced sound quality.</p>
<p>The video of Final Thoughts is directed by Sukh Sanghera, who is also a filmmaker and photographer. The video is shot in various locations, such as a beach, a park, a street, and a house. The video shows AP Dhillon singing and rapping the song, while also acting out some scenes from his past relationship with his ex-girlfriend. The video also features some other actors and models, who play the roles of his friends, his new girlfriend, and his ex-girlfriend's new boyfriend. The video is well edited and shot, with a vibrant and colorful visual style.</p>
<h2>The Impact and Reception of Final Thoughts</h2>
<p>The impact and reception of Final Thoughts have been phenomenal and positive. The song has influenced the Punjabi music industry and culture by bringing a fresh and unique perspective to the genre of hip hop. The song has also showcased the talent and potential of AP Dhillon and his collaborators, who have been praised for their creativity and originality. The song has also inspired many young and aspiring artists to pursue their passion and dreams in music.</p>
<p>The song has been received by critics and fans with admiration and appreciation. The song has been hailed as one of the best Punjabi songs of 2022, and one of the most emotional and relatable songs ever. The song has also been lauded for its lyrics, music, video, and performance, which have been described as captivating, powerful, authentic, and impressive. The song has also been nominated for several awards and honors, such as the BritAsia TV Music Awards, the PTC Punjabi Music Awards, the Mirchi Music Awards, and more.</p>
<p>The song has performed well on various charts and platforms, both nationally and internationally. The song has topped the charts in India, Canada, UK, Australia, New Zealand, and more. The song has also been featured on several playlists and radio stations around the world. The song has also broken several records and milestones, such as the most viewed Punjabi song on YouTube in 24 hours, the most streamed Punjabi song on Spotify in a week, the most liked Punjabi song on Instagram in a month, and more.</p>
<h2>Conclusion</h2>
<p>In conclusion, Final Thoughts by AP Dhillon is a Punjabi song that will make you feel a range of emotions, from sadness to happiness. The song is a masterpiece of artistry and expression, that showcases the skills and talents of AP Dhillon and his collaborators. The song is also a hit among listeners and critics alike, who have praised it for its quality and impact. The song is definitely worth listening to and downloading, as it will touch your heart and soul.</p>
<p>If you are looking for a Punjabi song that will make you feel something deep and real, then you should definitely check out Final Thoughts by AP Dhillon. You can download the song from any of the links below, or listen to it on your favorite music streaming service. You will not regret it, as it will make you appreciate the beauty and complexity of love and life.</p>
<h2>FAQs</h2>
<h3>Q: Who is AP Dhillon?</h3>
<p>A: AP Dhillon is a Punjabi singer, rapper, songwriter, and record producer, based in Canada. He is known for his fusion of hip hop, R&B, and Punjabi music, and his collaborations with other artists, such as Gurinder Gill, Money Musik, Shinda Kahlon, and Gminxr. He is also the founder of Run-Up Records, an independent music label.</p>
<h3>Q: What is the meaning of the title Final Thoughts?</h3>
<p>A: The title Final Thoughts refers to the last thoughts that the singer has about his ex-girlfriend, before he decides to move on with his life. The title also implies that this is the final song that he will make about her, as he closes this chapter of his life.</p>
<h3>Q: How can I download Final Thoughts by AP Dhillon for free?</h3>
<p>A: You can download Final Thoughts by AP Dhillon for free from websites that offer free music downloads, such as [SoundCloud], [Amazon], or [Spotify]. You can also use a YouTube to MP3 converter to download the song from YouTube. However, you should always respect the rights and wishes of the artists and support them by buying their music legally.</p>
<h3>Q: What are some other songs by AP Dhillon that I should listen to?</h3>
<p>A: Some other songs by AP Dhillon that you should listen to are:</p>
<table>
<tr><th>Song</th><th>Album</th><th>Year</th></tr>
<tr><td>Majhail</td><td>Hidden Gems</td><td>2022</td></tr>
<tr><td>Brown Munde</td><td>Brown Munde</td><td>2021</td></tr>
<tr><td>All Good</td><td>All Good</td><td>2021</td></tr>
<tr><td>Excuses</td><td>Excuses</td><td>2020</td></tr>
<tr><td>Tinted Windows</td><td>Tinted Windows</td><td>2020</td></tr>
</table>
<h3>Q: Where can I find more information about AP Dhillon and his music?</h3>
<p>A: You can find more information about AP Dhillon and his music on his official website [apdhillon.com], or on his social media accounts, such as [Instagram], [Twitter], [Facebook], or [YouTube]. You can also follow his music label Run-Up Records on [Instagram] or [YouTube].</p>
spaces/4Taps/SadTalker/src/face3d/models/__init__.py
DELETED
@@ -1,67 +0,0 @@
"""This package contains modules related to objective functions, optimizations, and network architectures.

To add a custom model class called 'dummy', you need to add a file called 'dummy_model.py' and define a subclass DummyModel inherited from BaseModel.
You need to implement the following five functions:
    -- <__init__>: initialize the class; first call BaseModel.__init__(self, opt).
    -- <set_input>: unpack data from dataset and apply preprocessing.
    -- <forward>: produce intermediate results.
    -- <optimize_parameters>: calculate loss, gradients, and update network weights.
    -- <modify_commandline_options>: (optionally) add model-specific options and set default options.

In the function <__init__>, you need to define four lists:
    -- self.loss_names (str list): specify the training losses that you want to plot and save.
    -- self.model_names (str list): define networks used in our training.
    -- self.visual_names (str list): specify the images that you want to display and save.
    -- self.optimizers (optimizer list): define and initialize optimizers. You can define one optimizer for each network. If two networks are updated at the same time, you can use itertools.chain to group them. See cycle_gan_model.py for a usage example.

Now you can use the model class by specifying flag '--model dummy'.
See our template model class 'template_model.py' for more details.
"""

import importlib
from src.face3d.models.base_model import BaseModel


def find_model_using_name(model_name):
    """Import the module "models/[model_name]_model.py".

    In the file, the class called DatasetNameModel() will
    be instantiated. It has to be a subclass of BaseModel,
    and it is case-insensitive.
    """
    model_filename = "face3d.models." + model_name + "_model"
    modellib = importlib.import_module(model_filename)
    model = None
    target_model_name = model_name.replace('_', '') + 'model'
    for name, cls in modellib.__dict__.items():
        if name.lower() == target_model_name.lower() \
                and issubclass(cls, BaseModel):
            model = cls

    if model is None:
        print("In %s.py, there should be a subclass of BaseModel with a class name that matches %s in lowercase." % (model_filename, target_model_name))
        exit(1)  # exit with a non-zero status to signal the failure

    return model


def get_option_setter(model_name):
    """Return the static method <modify_commandline_options> of the model class."""
    model_class = find_model_using_name(model_name)
    return model_class.modify_commandline_options


def create_model(opt):
    """Create a model given the option.

    This function wraps the class CustomDatasetDataLoader.
    This is the main interface between this package and 'train.py'/'test.py'

    Example:
        >>> from models import create_model
        >>> model = create_model(opt)
    """
    model = find_model_using_name(opt.model)
    instance = model(opt)
    print("model [%s] was created" % type(instance).__name__)
    return instance
spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/vocoder_infer/hifigan.py
DELETED
@@ -1,31 +0,0 @@
|
import torch

from text_to_speech.modules.vocoder.hifigan.hifigan import HifiGanGenerator
from tasks.tts.vocoder_infer.base_vocoder import register_vocoder, BaseVocoder
from text_to_speech.utils.commons.ckpt_utils import load_ckpt
from text_to_speech.utils.commons.hparams import set_hparams, hparams
from text_to_speech.utils.commons.meters import Timer

total_time = 0


@register_vocoder('HifiGAN')
class HifiGAN(BaseVocoder):
    def __init__(self):
        base_dir = hparams['vocoder_ckpt']
        config_path = f'{base_dir}/config.yaml'
        self.config = config = set_hparams(config_path, global_hparams=False)
        self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.model = HifiGanGenerator(config)
        load_ckpt(self.model, base_dir, 'model_gen')
        self.model.to(self.device)
        self.model.eval()

    def spec2wav(self, mel, **kwargs):
        device = self.device
        with torch.no_grad():
            # (frames, n_mels) -> (1, frames, n_mels) -> (1, n_mels, frames)
            c = torch.FloatTensor(mel).unsqueeze(0).to(device)
            c = c.transpose(2, 1)
            with Timer('hifigan', enable=hparams['profile_infer']):
                y = self.model(c).view(-1)
        wav_out = y.cpu().numpy()
        return wav_out
|
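The `@register_vocoder('HifiGAN')` decorator above implies a simple string-to-class registry. A minimal sketch of that pattern (names here are illustrative, not the actual `base_vocoder` implementation):

```python
# Minimal registry pattern, as implied by @register_vocoder('HifiGAN').
VOCODERS = {}


def register_vocoder(name):
    def wrapper(cls):
        VOCODERS[name] = cls
        return cls  # return the class unchanged, so the decorator is transparent
    return wrapper


def get_vocoder_cls(name):
    return VOCODERS[name]


@register_vocoder('Dummy')
class DummyVocoder:
    def spec2wav(self, mel, **kwargs):
        return mel  # identity stand-in for a real generator


print(get_vocoder_cls('Dummy') is DummyVocoder)  # -> True
```

Because the decorator returns the class unchanged, registration is a pure side effect and the class stays usable under its own name.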
spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/commons/dataset_utils.py
DELETED
@@ -1,247 +0,0 @@
-import os
-import sys
-import traceback
-import types
-from functools import wraps
-from itertools import chain
-import numpy as np
-import torch.utils.data
-from torch.utils.data import ConcatDataset
-from text_to_speech.utils.commons.hparams import hparams
-
-
-def collate_1d_or_2d(values, pad_idx=0, left_pad=False, shift_right=False, max_len=None, shift_id=1):
-    if len(values[0].shape) == 1:
-        return collate_1d(values, pad_idx, left_pad, shift_right, max_len, shift_id)
-    else:
-        return collate_2d(values, pad_idx, left_pad, shift_right, max_len)
-
-
-def collate_1d(values, pad_idx=0, left_pad=False, shift_right=False, max_len=None, shift_id=1):
-    """Convert a list of 1d tensors into a padded 2d tensor."""
-    size = max(v.size(0) for v in values) if max_len is None else max_len
-    res = values[0].new(len(values), size).fill_(pad_idx)
-
-    def copy_tensor(src, dst):
-        assert dst.numel() == src.numel()
-        if shift_right:
-            dst[1:] = src[:-1]
-            dst[0] = shift_id
-        else:
-            dst.copy_(src)
-
-    for i, v in enumerate(values):
-        copy_tensor(v, res[i][size - len(v):] if left_pad else res[i][:len(v)])
-    return res
-
-
-def collate_2d(values, pad_idx=0, left_pad=False, shift_right=False, max_len=None):
-    """Convert a list of 2d tensors into a padded 3d tensor."""
-    size = max(v.size(0) for v in values) if max_len is None else max_len
-    res = values[0].new(len(values), size, values[0].shape[1]).fill_(pad_idx)
-
-    def copy_tensor(src, dst):
-        assert dst.numel() == src.numel()
-        if shift_right:
-            dst[1:] = src[:-1]
-        else:
-            dst.copy_(src)
-
-    for i, v in enumerate(values):
-        copy_tensor(v, res[i][size - len(v):] if left_pad else res[i][:len(v)])
-    return res
-
-
-def _is_batch_full(batch, num_tokens, max_tokens, max_sentences):
-    if len(batch) == 0:
-        return 0
-    if len(batch) == max_sentences:
-        return 1
-    if num_tokens > max_tokens:
-        return 1
-    return 0
-
-
-def batch_by_size(
-        indices, num_tokens_fn, max_tokens=None, max_sentences=None,
-        required_batch_size_multiple=1, distributed=False
-):
-    """
-    Yield mini-batches of indices bucketed by size. Batches may contain
-    sequences of different lengths.
-
-    Args:
-        indices (List[int]): ordered list of dataset indices
-        num_tokens_fn (callable): function that returns the number of tokens at
-            a given index
-        max_tokens (int, optional): max number of tokens in each batch
-            (default: None).
-        max_sentences (int, optional): max number of sentences in each
-            batch (default: None).
-        required_batch_size_multiple (int, optional): require batch size to
-            be a multiple of N (default: 1).
-    """
-    max_tokens = max_tokens if max_tokens is not None else sys.maxsize
-    max_sentences = max_sentences if max_sentences is not None else sys.maxsize
-    bsz_mult = required_batch_size_multiple
-
-    if isinstance(indices, types.GeneratorType):
-        indices = np.fromiter(indices, dtype=np.int64, count=-1)
-
-    sample_len = 0
-    sample_lens = []
-    batch = []
-    batches = []
-    for i in range(len(indices)):
-        idx = indices[i]
-        num_tokens = num_tokens_fn(idx)
-        sample_lens.append(num_tokens)
-        sample_len = max(sample_len, num_tokens)
-
-        assert sample_len <= max_tokens, (
-            "sentence at index {} of size {} exceeds max_tokens "
-            "limit of {}!".format(idx, sample_len, max_tokens)
-        )
-        num_tokens = (len(batch) + 1) * sample_len
-
-        if _is_batch_full(batch, num_tokens, max_tokens, max_sentences):
-            mod_len = max(
-                bsz_mult * (len(batch) // bsz_mult),
-                len(batch) % bsz_mult,
-            )
-            batches.append(batch[:mod_len])
-            batch = batch[mod_len:]
-            sample_lens = sample_lens[mod_len:]
-            sample_len = max(sample_lens) if len(sample_lens) > 0 else 0
-        batch.append(idx)
-    if len(batch) > 0:
-        batches.append(batch)
-    return batches
-
-
-def unpack_dict_to_list(samples):
-    samples_ = []
-    bsz = samples.get('outputs').size(0)
-    for i in range(bsz):
-        res = {}
-        for k, v in samples.items():
-            try:
-                res[k] = v[i]
-            except:
-                pass
-        samples_.append(res)
-    return samples_
-
-
-def remove_padding(x, padding_idx=0):
-    if x is None:
-        return None
-    assert len(x.shape) in [1, 2]
-    if len(x.shape) == 2:  # [T, H]
-        return x[np.abs(x).sum(-1) != padding_idx]
-    elif len(x.shape) == 1:  # [T]
-        return x[x != padding_idx]
-
-
-def data_loader(fn):
-    """
-    Decorator to make any fx with this use the lazy property
-    :param fn:
-    :return:
-    """
-
-    wraps(fn)
-    attr_name = '_lazy_' + fn.__name__
-
-    def _get_data_loader(self):
-        try:
-            value = getattr(self, attr_name)
-        except AttributeError:
-            try:
-                value = fn(self)  # Lazy evaluation, done only once.
-            except AttributeError as e:
-                # Guard against AttributeError suppression. (Issue #142)
-                traceback.print_exc()
-                error = f'{fn.__name__}: An AttributeError was encountered: ' + str(e)
-                raise RuntimeError(error) from e
-            setattr(self, attr_name, value)  # Memoize evaluation.
-        return value
-
-    return _get_data_loader
-
-
-class BaseDataset(torch.utils.data.Dataset):
-    def __init__(self, shuffle):
-        super().__init__()
-        self.hparams = hparams
-        self.shuffle = shuffle
-        self.sort_by_len = hparams['sort_by_len']
-        self.sizes = None
-
-    @property
-    def _sizes(self):
-        return self.sizes
-
-    def __getitem__(self, index):
-        raise NotImplementedError
-
-    def collater(self, samples):
-        raise NotImplementedError
-
-    def __len__(self):
-        return len(self._sizes)
-
-    def num_tokens(self, index):
-        return self.size(index)
-
-    def size(self, index):
-        """Return an example's size as a float or tuple. This value is used when
-        filtering a dataset with ``--max-positions``."""
-        return min(self._sizes[index], hparams['max_frames'])
-
-    def ordered_indices(self):
-        """Return an ordered list of indices. Batches will be constructed based
-        on this order."""
-        if self.shuffle:
-            indices = np.random.permutation(len(self))
-            if self.sort_by_len:
-                indices = indices[np.argsort(np.array(self._sizes)[indices], kind='mergesort')]
-        else:
-            indices = np.arange(len(self))
-        return indices
-
-    @property
-    def num_workers(self):
-        return int(os.getenv('NUM_WORKERS', hparams['ds_workers']))
-
-
-class BaseConcatDataset(ConcatDataset):
-    def collater(self, samples):
-        return self.datasets[0].collater(samples)
-
-    @property
-    def _sizes(self):
-        if not hasattr(self, 'sizes'):
-            self.sizes = list(chain.from_iterable([d._sizes for d in self.datasets]))
-        return self.sizes
-
-    def size(self, index):
-        return min(self._sizes[index], hparams['max_frames'])
-
-    def num_tokens(self, index):
-        return self.size(index)
-
-    def ordered_indices(self):
-        """Return an ordered list of indices. Batches will be constructed based
-        on this order."""
-        if self.datasets[0].shuffle:
-            indices = np.random.permutation(len(self))
-            if self.datasets[0].sort_by_len:
-                indices = indices[np.argsort(np.array(self._sizes)[indices], kind='mergesort')]
-        else:
-            indices = np.arange(len(self))
-        return indices
-
-    @property
-    def num_workers(self):
-        return self.datasets[0].num_workers
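The bucketing logic in `batch_by_size` caps each batch at `max_sentences` items and at `max_tokens` under the padded cost `(len(batch) + 1) * longest_sample`. A pure-Python sketch of that accounting, simplified to omit the batch-size-multiple rounding and overflow-carry of the original:

```python
def batch_by_size_sketch(lengths, max_tokens, max_sentences):
    """Group indices so (batch size) * (longest sample in batch) never
    exceeds max_tokens, and no batch exceeds max_sentences items."""
    batches, batch, longest = [], [], 0
    for idx, n in enumerate(lengths):
        assert n <= max_tokens, f'sample {idx} of size {n} exceeds max_tokens'
        cand = max(longest, n)  # longest sample if idx joined the batch
        if batch and (len(batch) == max_sentences
                      or (len(batch) + 1) * cand > max_tokens):
            batches.append(batch)          # flush the full batch
            batch, cand = [], n            # start a new one with this sample
        batch.append(idx)
        longest = cand
    if batch:
        batches.append(batch)
    return batches


print(batch_by_size_sketch([3, 3, 3, 10], max_tokens=10, max_sentences=8))
# -> [[0, 1, 2], [3]]
```

The padded cost matters because samples in a batch are padded to the longest one, so one long sample inflates the cost of every row.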
spaces/AIGC-Audio/Make_An_Audio/ldm/modules/losses_audio/__init__.py
DELETED
@@ -1,7 +0,0 @@
-from ldm.modules.losses_audio.vqperceptual import DummyLoss
-
-# relative imports pain
-import os
-import sys
-path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'vggishish')
-sys.path.append(path)
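Appending a sibling `vggishish` directory to `sys.path`, as above, is a common workaround for vendored packages that use absolute imports. A small sketch of the same idea with a guard against duplicate entries (the anchor path below is a placeholder, not a file in this repository):

```python
import os
import sys


def add_vendored_dir(anchor_file, subdir):
    """Append <dir of anchor_file>/<subdir> to sys.path once, so modules
    inside it can be imported by their bare names."""
    path = os.path.join(os.path.dirname(os.path.realpath(anchor_file)), subdir)
    if path not in sys.path:  # avoid stacking duplicates on re-import
        sys.path.append(path)
    return path


p = add_vendored_dir(os.path.abspath('dummy.py'), 'vggishish')
print(p.endswith('vggishish'))  # -> True
```

The original appends unconditionally, which is harmless but grows `sys.path` each time the module is re-executed.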
spaces/AIatUIUC/CodeLATS/README.md
DELETED
@@ -1,28 +0,0 @@
----
-title: CodeLATS
-emoji: 🏃
-colorFrom: gray
-colorTo: yellow
-sdk: streamlit
-sdk_version: 1.27.1
-app_file: app.py
-pinned: true
-license: mit
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
-
-## Citations
-Feel free to contact [email protected] for any questions
-
-```bibtex
-@misc{zhou2023language,
-      title={Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models},
-      author={Andy Zhou and Kai Yan and Michal Shlapentokh-Rothman and Haohan Wang and Yu-Xiong Wang},
-      year={2023},
-      eprint={2310.04406},
-      archivePrefix={arXiv},
-      primaryClass={cs.AI}
-}
-
-```
spaces/AchyuthGamer/OpenGPT/g4f/Provider/deprecated/FastGpt.py
DELETED
@@ -1,87 +0,0 @@
-from __future__ import annotations
-
-import json
-import random
-from abc import ABC, abstractmethod
-
-import requests
-
-from ...typing import Any, CreateResult
-from ..base_provider import BaseProvider
-
-
-class FastGpt(BaseProvider):
-    url: str = 'https://chat9.fastgpt.me/'
-    working = False
-    needs_auth = False
-    supports_stream = True
-    supports_gpt_35_turbo = True
-    supports_gpt_4 = False
-
-    @staticmethod
-    @abstractmethod
-    def create_completion(
-        model: str,
-        messages: list[dict[str, str]],
-        stream: bool, **kwargs: Any) -> CreateResult:
-
-        headers = {
-            'authority'         : 'chat9.fastgpt.me',
-            'accept'            : 'text/event-stream',
-            'accept-language'   : 'en,fr-FR;q=0.9,fr;q=0.8,es-ES;q=0.7,es;q=0.6,en-US;q=0.5,am;q=0.4,de;q=0.3',
-            'cache-control'     : 'no-cache',
-            'content-type'      : 'application/json',
-            'origin'            : 'https://chat9.fastgpt.me',
-            'plugins'           : '0',
-            'pragma'            : 'no-cache',
-            'referer'           : 'https://chat9.fastgpt.me/',
-            'sec-ch-ua'         : '"Not/A)Brand";v="99", "Google Chrome";v="115", "Chromium";v="115"',
-            'sec-ch-ua-mobile'  : '?0',
-            'sec-ch-ua-platform': '"macOS"',
-            'sec-fetch-dest'    : 'empty',
-            'sec-fetch-mode'    : 'cors',
-            'sec-fetch-site'    : 'same-origin',
-            'user-agent'        : 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36',
-            'usesearch'         : 'false',
-            'x-requested-with'  : 'XMLHttpRequest',
-        }
-
-        json_data = {
-            'messages'          : messages,
-            'stream'            : stream,
-            'model'             : model,
-            'temperature'       : kwargs.get('temperature', 0.5),
-            'presence_penalty'  : kwargs.get('presence_penalty', 0),
-            'frequency_penalty' : kwargs.get('frequency_penalty', 0),
-            'top_p'             : kwargs.get('top_p', 1),
-        }
-
-        subdomain = random.choice([
-            'jdaen979ew',
-            'chat9'
-        ])
-
-        response = requests.post(f'https://{subdomain}.fastgpt.me/api/openai/v1/chat/completions',
-                                 headers=headers, json=json_data, stream=stream)
-
-        for line in response.iter_lines():
-            if line:
-                try:
-                    if b'content' in line:
-                        line_json = json.loads(line.decode('utf-8').split('data: ')[1])
-                        token = line_json['choices'][0]['delta'].get('content')
-                        if token:
-                            yield token
-                except:
-                    continue
-
-    @classmethod
-    @property
-    def params(cls):
-        params = [
-            ("model", "str"),
-            ("messages", "list[dict[str, str]]"),
-            ("stream", "bool"),
-        ]
-        param = ", ".join([": ".join(p) for p in params])
-        return f"g4f.provider.{cls.__name__} supports: ({param})"
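The streaming loop above splits each `data: {...}` event line and pulls `choices[0].delta.content` out of the JSON. That parsing step can be exercised against a canned response; the payload below is fabricated for illustration:

```python
import json


def iter_delta_tokens(lines):
    """Yield content tokens from OpenAI-style SSE lines (bytes),
    mirroring the parsing in FastGpt.create_completion."""
    for line in lines:
        if not line or b'content' not in line:
            continue
        try:
            payload = json.loads(line.decode('utf-8').split('data: ')[1])
            token = payload['choices'][0]['delta'].get('content')
            if token:
                yield token
        except (IndexError, KeyError, json.JSONDecodeError):
            continue  # malformed and non-data lines are skipped, as upstream


fake_stream = [
    b'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    b'',
    b'data: {"choices": [{"delta": {"content": "lo"}}]}',
    b'data: [DONE]',
]
print(''.join(iter_delta_tokens(fake_stream)))  # -> Hello
```

The `b'content' in line` pre-check is the same cheap filter the original uses to skip keep-alive and `[DONE]` lines before attempting JSON decoding.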
spaces/Adapter/CoAdapter/configs/mm/hrnet_w48_coco_256x192.py
DELETED
@@ -1,169 +0,0 @@
-# _base_ = [
-#     '../../../../_base_/default_runtime.py',
-#     '../../../../_base_/datasets/coco.py'
-# ]
-evaluation = dict(interval=10, metric='mAP', save_best='AP')
-
-optimizer = dict(
-    type='Adam',
-    lr=5e-4,
-)
-optimizer_config = dict(grad_clip=None)
-# learning policy
-lr_config = dict(
-    policy='step',
-    warmup='linear',
-    warmup_iters=500,
-    warmup_ratio=0.001,
-    step=[170, 200])
-total_epochs = 210
-channel_cfg = dict(
-    num_output_channels=17,
-    dataset_joints=17,
-    dataset_channel=[
-        [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16],
-    ],
-    inference_channel=[
-        0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
-    ])
-
-# model settings
-model = dict(
-    type='TopDown',
-    pretrained='https://download.openmmlab.com/mmpose/'
-    'pretrain_models/hrnet_w48-8ef0771d.pth',
-    backbone=dict(
-        type='HRNet',
-        in_channels=3,
-        extra=dict(
-            stage1=dict(
-                num_modules=1,
-                num_branches=1,
-                block='BOTTLENECK',
-                num_blocks=(4, ),
-                num_channels=(64, )),
-            stage2=dict(
-                num_modules=1,
-                num_branches=2,
-                block='BASIC',
-                num_blocks=(4, 4),
-                num_channels=(48, 96)),
-            stage3=dict(
-                num_modules=4,
-                num_branches=3,
-                block='BASIC',
-                num_blocks=(4, 4, 4),
-                num_channels=(48, 96, 192)),
-            stage4=dict(
-                num_modules=3,
-                num_branches=4,
-                block='BASIC',
-                num_blocks=(4, 4, 4, 4),
-                num_channels=(48, 96, 192, 384))),
-    ),
-    keypoint_head=dict(
-        type='TopdownHeatmapSimpleHead',
-        in_channels=48,
-        out_channels=channel_cfg['num_output_channels'],
-        num_deconv_layers=0,
-        extra=dict(final_conv_kernel=1, ),
-        loss_keypoint=dict(type='JointsMSELoss', use_target_weight=True)),
-    train_cfg=dict(),
-    test_cfg=dict(
-        flip_test=True,
-        post_process='default',
-        shift_heatmap=True,
-        modulate_kernel=11))
-
-data_cfg = dict(
-    image_size=[192, 256],
-    heatmap_size=[48, 64],
-    num_output_channels=channel_cfg['num_output_channels'],
-    num_joints=channel_cfg['dataset_joints'],
-    dataset_channel=channel_cfg['dataset_channel'],
-    inference_channel=channel_cfg['inference_channel'],
-    soft_nms=False,
-    nms_thr=1.0,
-    oks_thr=0.9,
-    vis_thr=0.2,
-    use_gt_bbox=False,
-    det_bbox_thr=0.0,
-    bbox_file='data/coco/person_detection_results/'
-    'COCO_val2017_detections_AP_H_56_person.json',
-)
-
-train_pipeline = [
-    dict(type='LoadImageFromFile'),
-    dict(type='TopDownGetBboxCenterScale', padding=1.25),
-    dict(type='TopDownRandomShiftBboxCenter', shift_factor=0.16, prob=0.3),
-    dict(type='TopDownRandomFlip', flip_prob=0.5),
-    dict(
-        type='TopDownHalfBodyTransform',
-        num_joints_half_body=8,
-        prob_half_body=0.3),
-    dict(
-        type='TopDownGetRandomScaleRotation', rot_factor=40, scale_factor=0.5),
-    dict(type='TopDownAffine'),
-    dict(type='ToTensor'),
-    dict(
-        type='NormalizeTensor',
-        mean=[0.485, 0.456, 0.406],
-        std=[0.229, 0.224, 0.225]),
-    dict(type='TopDownGenerateTarget', sigma=2),
-    dict(
-        type='Collect',
-        keys=['img', 'target', 'target_weight'],
-        meta_keys=[
-            'image_file', 'joints_3d', 'joints_3d_visible', 'center', 'scale',
-            'rotation', 'bbox_score', 'flip_pairs'
-        ]),
-]
-
-val_pipeline = [
-    dict(type='LoadImageFromFile'),
-    dict(type='TopDownGetBboxCenterScale', padding=1.25),
-    dict(type='TopDownAffine'),
-    dict(type='ToTensor'),
-    dict(
-        type='NormalizeTensor',
-        mean=[0.485, 0.456, 0.406],
-        std=[0.229, 0.224, 0.225]),
-    dict(
-        type='Collect',
-        keys=['img'],
-        meta_keys=[
-            'image_file', 'center', 'scale', 'rotation', 'bbox_score',
-            'flip_pairs'
-        ]),
-]
-
-test_pipeline = val_pipeline
-
-data_root = 'data/coco'
-data = dict(
-    samples_per_gpu=32,
-    workers_per_gpu=2,
-    val_dataloader=dict(samples_per_gpu=32),
-    test_dataloader=dict(samples_per_gpu=32),
-    train=dict(
-        type='TopDownCocoDataset',
-        ann_file=f'{data_root}/annotations/person_keypoints_train2017.json',
-        img_prefix=f'{data_root}/train2017/',
-        data_cfg=data_cfg,
-        pipeline=train_pipeline,
-        dataset_info={{_base_.dataset_info}}),
-    val=dict(
-        type='TopDownCocoDataset',
-        ann_file=f'{data_root}/annotations/person_keypoints_val2017.json',
-        img_prefix=f'{data_root}/val2017/',
-        data_cfg=data_cfg,
-        pipeline=val_pipeline,
-        dataset_info={{_base_.dataset_info}}),
-    test=dict(
-        type='TopDownCocoDataset',
-        ann_file=f'{data_root}/annotations/person_keypoints_val2017.json',
-        img_prefix=f'{data_root}/val2017/',
-        data_cfg=data_cfg,
-        pipeline=test_pipeline,
-        dataset_info={{_base_.dataset_info}}),
-)
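The `lr_config` in this file describes a linear warmup for 500 iterations starting at 0.001x the base rate, followed by step decay at epochs 170 and 200. The schedule can be sketched as below; the 0.1 decay factor per milestone is mmcv's default for the `step` policy and is stated here as an assumption, not spelled out in the config:

```python
def lr_at(base_lr, epoch, cur_iter,
          warmup_iters=500, warmup_ratio=0.001, steps=(170, 200), gamma=0.1):
    """Linear-warmup + step-decay schedule matching the lr_config above."""
    # Step decay by epoch: multiply by gamma for each milestone passed.
    lr = base_lr * gamma ** sum(epoch >= s for s in steps)
    if cur_iter < warmup_iters:
        # Linearly ramp from warmup_ratio * lr up to lr over warmup_iters.
        k = warmup_ratio + (1 - warmup_ratio) * cur_iter / warmup_iters
        lr *= k
    return lr


print(lr_at(5e-4, epoch=0, cur_iter=500))      # -> 0.0005 (warmup finished)
print(lr_at(5e-4, epoch=175, cur_iter=10000))  # -> 5e-05 (after first step)
```

With `total_epochs = 210`, the final 10 epochs therefore run at 0.01x the base rate.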
spaces/Admin08077/Record/app.py
DELETED
@@ -1,26 +0,0 @@
-import gradio as gr
-import openai
-
-# Set your OpenAI API key
-openai.api_key = "your_api_key_here"
-
-# Function to run simulations based on user input
-def run_simulation(input_text):
-    # Use the input_text to define simulation parameters and run the simulation
-    # Replace this with your actual simulation code
-    simulation_result = "Simulation result: Your simulation code goes here."
-
-    return simulation_result
-
-# Create a Gradio interface for running simulations
-iface = gr.Interface(
-    fn=run_simulation,
-    inputs="text",
-    outputs="text",
-    live=True,
-    title="Simulation Collaborator",
-    description="Run simulations and view the results.",
-)
-
-# Launch the Gradio interface
-iface.launch()
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/Maker.d.ts
DELETED
@@ -1,35 +0,0 @@
-import Make from './Make';
-
-export default Maker;
-
-declare namespace Maker {
-    type BuilderType = Make.BuilderType;
-    type BuildersType = Make.BuildersType;
-}
-
-declare class Maker {
-    constructor(
-        scene: Phaser.Scene,
-        styles?: Object | string,
-        customBuilders?: Maker.BuildersType
-    );
-
-    setScene(scene: Phaser.Scene): this;
-    scene: Phaser.Scene;
-
-    setStyles(styles?: Object | string): this;
-    addStyle(key: string, style: Object | string): this;
-    addStyle(styles: Object | string): this;
-    clearStyles(): this;
-    styles: Object | undefined;
-
-    setBuilders(builders?: Maker.BuildersType): this;
-    addBuilder(key: string, builder: Maker.BuilderType): this;
-    clearBuilder(): this;
-    customBuilders: Maker.BuildersType | undefined;
-
-    make(
-        data: Object | string,
-        view?: Object | string
-    ): Phaser.GameObjects.GameObject;
-}
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/AddChildMethods.js
DELETED
@@ -1,102 +0,0 @@
-import AddChild from '../basesizer/utils/AddChild.js';
-import ALIGNMODE from '../utils/AlignConst.js';
-import GetBoundsConfig from '../utils/GetBoundsConfig.js';
-import { GetDisplayWidth, GetDisplayHeight } from '../../../plugins/utils/size/GetDisplaySize.js';
-
-const IsPlainObject = Phaser.Utils.Objects.IsPlainObject;
-const GetValue = Phaser.Utils.Objects.GetValue;
-const ALIGN_CENTER = Phaser.Display.Align.CENTER;
-const UUID = Phaser.Utils.String.UUID;
-
-var Add = function (gameObject, childKey, align, padding, expand, minWidth, minHeight, offsetX, offsetY) {
-    AddChild.call(this, gameObject);
-
-    if (IsPlainObject(childKey)) {
-        var config = childKey;
-        childKey = GetValue(config, 'key', undefined);
-        align = GetValue(config, 'align', ALIGN_CENTER);
-        offsetX = GetValue(config, 'offsetX', 0);
-        offsetY = GetValue(config, 'offsetY', 0);
-        padding = GetValue(config, 'padding', 0);
-        expand = GetValue(config, 'expand', true);
-
-        if (!gameObject.isRexSizer) {
-            // Get minWidth,minHeight from config
-            minWidth = GetValue(config, 'minWidth', gameObject._minWidth);
-            minHeight = GetValue(config, 'minHeight', gameObject._minHeighted);
-        }
-    }
-
-    var hasValidKey = (childKey !== undefined);
-    if (!hasValidKey) {
-        childKey = UUID();
-    }
-
-    if (typeof (align) === 'string') {
-        align = ALIGNMODE[align];
-    }
-
-    if (align === undefined) {
-        align = ALIGN_CENTER;
-    }
-    if (offsetX === undefined) {
-        offsetX = 0;
-    }
-    if (offsetY === undefined) {
-        offsetY = 0;
-    }
-    if (padding === undefined) {
-        padding = 0;
-    }
-    if (expand === undefined) {
-        expand = true;
-    }
-    if (!gameObject.isRexSizer) {
-        // Get minWidth,minHeight from game object
-        if (minWidth === undefined) {
-            minWidth = gameObject._minWidth;
-        }
-        if (minHeight === undefined) {
-            minHeight = gameObject._minHeight;
-        }
-    }
-
-    var config = this.getSizerConfig(gameObject);
-    config.align = align;
-    config.alignOffsetX = offsetX;
-    config.alignOffsetY = offsetY;
-    config.padding = GetBoundsConfig(padding);
-
-    if (IsPlainObject(expand)) {
-        config.expandWidth = GetValue(expand, 'width', false);
-        config.expandHeight = GetValue(expand, 'height', false);
-    } else {
-        config.expandWidth = expand;
-        config.expandHeight = expand;
-    }
-
-    if (!gameObject.isRexSizer) { // Expand normal game object
-        if (config.expandWidth) {
-            // minWidth is still undefined, uses current display width
-            gameObject.minWidth = (minWidth === undefined) ? GetDisplayWidth(gameObject) : minWidth;
-        }
-        if (config.expandHeight) {
-            // minHeight is still undefined, uses current display height
-            gameObject.minHeight = (minHeight === undefined) ? GetDisplayHeight(gameObject) : minHeight;
-        }
-    }
-
-    if (this.sizerChildren.hasOwnProperty(childKey)) {
-        this.sizerChildren[childKey].destroy();
-    }
-    this.sizerChildren[childKey] = gameObject;
-
-    if (hasValidKey) {
-        this.addChildrenMap(childKey, gameObject)
-    }
-    return this;
-}
-
-export default {
-    add: Add
-}
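The long preamble in `Add` normalizes arguments: a plain-object config in the `childKey` slot overrides the positional parameters, and any still-undefined values fall back to defaults. The same pattern, rendered in Python for clarity (names are illustrative, not the rex-plugins API):

```python
def add_child(child, key=None, align='center', padding=0, expand=True):
    """Normalize arguments the way Add() does: a dict in the 'key' slot
    is treated as a config object that overrides the other parameters."""
    if isinstance(key, dict):
        config = key
        key = config.get('key')
        align = config.get('align', 'center')
        padding = config.get('padding', 0)
        expand = config.get('expand', True)
    if key is None:
        key = f'child-{id(child)}'  # stand-in for Phaser's UUID()
    return {'child': child, 'key': key, 'align': align,
            'padding': padding, 'expand': expand}


print(add_child('obj', {'key': 'icon', 'align': 'left'})['align'])  # -> left
```

Accepting both calling styles in one signature keeps the public API flexible at the cost of this normalization boilerplate up front.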
spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/dnnlib/tflib/autosummary.py
DELETED
@@ -1,193 +0,0 @@
-# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-"""Helper for adding automatically tracked values to Tensorboard.
-
-Autosummary creates an identity op that internally keeps track of the input
-values and automatically shows up in TensorBoard. The reported value
-represents an average over input components. The average is accumulated
-constantly over time and flushed when save_summaries() is called.
-
-Notes:
-- The output tensor must be used as an input for something else in the
-  graph. Otherwise, the autosummary op will not get executed, and the average
-  value will not get accumulated.
-- It is perfectly fine to include autosummaries with the same name in
-  several places throughout the graph, even if they are executed concurrently.
-- It is ok to also pass in a python scalar or numpy array. In this case, it
-  is added to the average immediately.
-"""
-
-from collections import OrderedDict
-import numpy as np
-import tensorflow as tf
-from tensorboard import summary as summary_lib
-from tensorboard.plugins.custom_scalar import layout_pb2
-
-from . import tfutil
-from .tfutil import TfExpression
-from .tfutil import TfExpressionEx
-
-# Enable "Custom scalars" tab in TensorBoard for advanced formatting.
-# Disabled by default to reduce tfevents file size.
-enable_custom_scalars = False
-
-_dtype = tf.float64
-_vars = OrderedDict()  # name => [var, ...]
-_immediate = OrderedDict()  # name => update_op, update_value
-_finalized = False
-_merge_op = None
-
-
-def _create_var(name: str, value_expr: TfExpression) -> TfExpression:
-    """Internal helper for creating autosummary accumulators."""
-    assert not _finalized
-    name_id = name.replace("/", "_")
-    v = tf.cast(value_expr, _dtype)
-
-    if v.shape.is_fully_defined():
-        size = np.prod(v.shape.as_list())
-        size_expr = tf.constant(size, dtype=_dtype)
-    else:
-        size = None
-        size_expr = tf.reduce_prod(tf.cast(tf.shape(v), _dtype))
-
-    if size == 1:
-        if v.shape.ndims != 0:
-            v = tf.reshape(v, [])
-        v = [size_expr, v, tf.square(v)]
-    else:
-        v = [size_expr, tf.reduce_sum(v), tf.reduce_sum(tf.square(v))]
-    v = tf.cond(tf.is_finite(v[1]), lambda: tf.stack(v), lambda: tf.zeros(3, dtype=_dtype))
-
-    with tfutil.absolute_name_scope("Autosummary/" + name_id), tf.control_dependencies(None):
-        var = tf.Variable(tf.zeros(3, dtype=_dtype), trainable=False)  # [sum(1), sum(x), sum(x**2)]
-        update_op = tf.cond(tf.is_variable_initialized(var), lambda: tf.assign_add(var, v), lambda: tf.assign(var, v))
-
-    if name in _vars:
-        _vars[name].append(var)
-    else:
-        _vars[name] = [var]
-    return update_op
-
-
-def autosummary(name: str, value: TfExpressionEx, passthru: TfExpressionEx = None, condition: TfExpressionEx = True) -> TfExpressionEx:
-    """Create a new autosummary.
-
-    Args:
-        name: Name to use in TensorBoard
-        value: TensorFlow expression or python value to track
-        passthru: Optionally return this TF node without modifications but tack an autosummary update side-effect to this node.
-
-    Example use of the passthru mechanism:
-
-    n = autosummary('l2loss', loss, passthru=n)
-
-    This is a shorthand for the following code:
-
-    with tf.control_dependencies([autosummary('l2loss', loss)]):
-        n = tf.identity(n)
-    """
-    tfutil.assert_tf_initialized()
-    name_id = name.replace("/", "_")
-
-    if tfutil.is_tf_expression(value):
-        with tf.name_scope("summary_" + name_id), tf.device(value.device):
-            condition = tf.convert_to_tensor(condition, name='condition')
-            update_op = tf.cond(condition, lambda: tf.group(_create_var(name, value)), tf.no_op)
-            with tf.control_dependencies([update_op]):
-                return tf.identity(value if passthru is None else passthru)
-
-    else:  # python scalar or numpy array
-        assert not tfutil.is_tf_expression(passthru)
-        assert not tfutil.is_tf_expression(condition)
-
if condition:
|
110 |
-
if name not in _immediate:
|
111 |
-
with tfutil.absolute_name_scope("Autosummary/" + name_id), tf.device(None), tf.control_dependencies(None):
|
112 |
-
update_value = tf.placeholder(_dtype)
|
113 |
-
update_op = _create_var(name, update_value)
|
114 |
-
_immediate[name] = update_op, update_value
|
115 |
-
update_op, update_value = _immediate[name]
|
116 |
-
tfutil.run(update_op, {update_value: value})
|
117 |
-
return value if passthru is None else passthru
|
118 |
-
|
119 |
-
|
120 |
-
def finalize_autosummaries() -> None:
|
121 |
-
"""Create the necessary ops to include autosummaries in TensorBoard report.
|
122 |
-
Note: This should be done only once per graph.
|
123 |
-
"""
|
124 |
-
global _finalized
|
125 |
-
tfutil.assert_tf_initialized()
|
126 |
-
|
127 |
-
if _finalized:
|
128 |
-
return None
|
129 |
-
|
130 |
-
_finalized = True
|
131 |
-
tfutil.init_uninitialized_vars([var for vars_list in _vars.values() for var in vars_list])
|
132 |
-
|
133 |
-
# Create summary ops.
|
134 |
-
with tf.device(None), tf.control_dependencies(None):
|
135 |
-
for name, vars_list in _vars.items():
|
136 |
-
name_id = name.replace("/", "_")
|
137 |
-
with tfutil.absolute_name_scope("Autosummary/" + name_id):
|
138 |
-
moments = tf.add_n(vars_list)
|
139 |
-
moments /= moments[0]
|
140 |
-
with tf.control_dependencies([moments]): # read before resetting
|
141 |
-
reset_ops = [tf.assign(var, tf.zeros(3, dtype=_dtype)) for var in vars_list]
|
142 |
-
with tf.name_scope(None), tf.control_dependencies(reset_ops): # reset before reporting
|
143 |
-
mean = moments[1]
|
144 |
-
std = tf.sqrt(moments[2] - tf.square(moments[1]))
|
145 |
-
tf.summary.scalar(name, mean)
|
146 |
-
if enable_custom_scalars:
|
147 |
-
tf.summary.scalar("xCustomScalars/" + name + "/margin_lo", mean - std)
|
148 |
-
tf.summary.scalar("xCustomScalars/" + name + "/margin_hi", mean + std)
|
149 |
-
|
150 |
-
# Setup layout for custom scalars.
|
151 |
-
layout = None
|
152 |
-
if enable_custom_scalars:
|
153 |
-
cat_dict = OrderedDict()
|
154 |
-
for series_name in sorted(_vars.keys()):
|
155 |
-
p = series_name.split("/")
|
156 |
-
cat = p[0] if len(p) >= 2 else ""
|
157 |
-
chart = "/".join(p[1:-1]) if len(p) >= 3 else p[-1]
|
158 |
-
if cat not in cat_dict:
|
159 |
-
cat_dict[cat] = OrderedDict()
|
160 |
-
if chart not in cat_dict[cat]:
|
161 |
-
cat_dict[cat][chart] = []
|
162 |
-
cat_dict[cat][chart].append(series_name)
|
163 |
-
categories = []
|
164 |
-
for cat_name, chart_dict in cat_dict.items():
|
165 |
-
charts = []
|
166 |
-
for chart_name, series_names in chart_dict.items():
|
167 |
-
series = []
|
168 |
-
for series_name in series_names:
|
169 |
-
series.append(layout_pb2.MarginChartContent.Series(
|
170 |
-
value=series_name,
|
171 |
-
lower="xCustomScalars/" + series_name + "/margin_lo",
|
172 |
-
upper="xCustomScalars/" + series_name + "/margin_hi"))
|
173 |
-
margin = layout_pb2.MarginChartContent(series=series)
|
174 |
-
charts.append(layout_pb2.Chart(title=chart_name, margin=margin))
|
175 |
-
categories.append(layout_pb2.Category(title=cat_name, chart=charts))
|
176 |
-
layout = summary_lib.custom_scalar_pb(layout_pb2.Layout(category=categories))
|
177 |
-
return layout
|
178 |
-
|
179 |
-
def save_summaries(file_writer, global_step=None):
|
180 |
-
"""Call FileWriter.add_summary() with all summaries in the default graph,
|
181 |
-
automatically finalizing and merging them on the first call.
|
182 |
-
"""
|
183 |
-
global _merge_op
|
184 |
-
tfutil.assert_tf_initialized()
|
185 |
-
|
186 |
-
if _merge_op is None:
|
187 |
-
layout = finalize_autosummaries()
|
188 |
-
if layout is not None:
|
189 |
-
file_writer.add_summary(layout)
|
190 |
-
with tf.device(None), tf.control_dependencies(None):
|
191 |
-
_merge_op = tf.summary.merge_all()
|
192 |
-
|
193 |
-
file_writer.add_summary(_merge_op.eval(), global_step)
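The accumulator above stores three running sums per name, `[sum(1), sum(x), sum(x**2)]`, and `finalize_autosummaries()` divides by the count to recover the mean and a standard-deviation margin band before resetting. A plain-Python sketch of that bookkeeping (illustrative only, no TensorFlow; the `MomentAccumulator` name is made up for this example):

```python
import math

class MomentAccumulator:
    """Mimics the [sum(1), sum(x), sum(x**2)] accumulator used above."""

    def __init__(self):
        self.moments = [0.0, 0.0, 0.0]

    def update(self, values):
        # Non-finite batches are skipped, matching the tf.is_finite guard.
        total = sum(values)
        if not math.isfinite(total):
            return
        self.moments[0] += len(values)
        self.moments[1] += total
        self.moments[2] += sum(v * v for v in values)

    def flush(self):
        # Normalize by the count, then reset, like finalize_autosummaries().
        n, s1, s2 = self.moments
        mean = s1 / n
        std = math.sqrt(max(s2 / n - mean * mean, 0.0))
        self.moments = [0.0, 0.0, 0.0]
        return mean, std

acc = MomentAccumulator()
acc.update([1.0, 2.0, 3.0])
acc.update([4.0])
mean, std = acc.flush()  # mean = 2.5
```

Accumulating sums rather than raw values keeps the per-summary state at a fixed three floats regardless of how many updates arrive between flushes, which is why the TF variable is a length-3 vector.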
spaces/Amrrs/DragGan-Inversion/torch_utils/ops/bias_act.py
DELETED
@@ -1,220 +0,0 @@
# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.

"""Custom PyTorch ops for efficient bias and activation."""

import os
import numpy as np
import torch
import dnnlib

from .. import custom_ops
from .. import misc

# ----------------------------------------------------------------------------

activation_funcs = {
    'linear':   dnnlib.EasyDict(func=lambda x, **_: x, def_alpha=0, def_gain=1, cuda_idx=1, ref='', has_2nd_grad=False),
    'relu':     dnnlib.EasyDict(func=lambda x, **_: torch.nn.functional.relu(x), def_alpha=0, def_gain=np.sqrt(2), cuda_idx=2, ref='y', has_2nd_grad=False),
    'lrelu':    dnnlib.EasyDict(func=lambda x, alpha, **_: torch.nn.functional.leaky_relu(x, alpha), def_alpha=0.2, def_gain=np.sqrt(2), cuda_idx=3, ref='y', has_2nd_grad=False),
    'tanh':     dnnlib.EasyDict(func=lambda x, **_: torch.tanh(x), def_alpha=0, def_gain=1, cuda_idx=4, ref='y', has_2nd_grad=True),
    'sigmoid':  dnnlib.EasyDict(func=lambda x, **_: torch.sigmoid(x), def_alpha=0, def_gain=1, cuda_idx=5, ref='y', has_2nd_grad=True),
    'elu':      dnnlib.EasyDict(func=lambda x, **_: torch.nn.functional.elu(x), def_alpha=0, def_gain=1, cuda_idx=6, ref='y', has_2nd_grad=True),
    'selu':     dnnlib.EasyDict(func=lambda x, **_: torch.nn.functional.selu(x), def_alpha=0, def_gain=1, cuda_idx=7, ref='y', has_2nd_grad=True),
    'softplus': dnnlib.EasyDict(func=lambda x, **_: torch.nn.functional.softplus(x), def_alpha=0, def_gain=1, cuda_idx=8, ref='y', has_2nd_grad=True),
    'swish':    dnnlib.EasyDict(func=lambda x, **_: torch.sigmoid(x) * x, def_alpha=0, def_gain=np.sqrt(2), cuda_idx=9, ref='x', has_2nd_grad=True),
}

# ----------------------------------------------------------------------------

_plugin = None
_null_tensor = torch.empty([0])


def _init():
    global _plugin
    if _plugin is None:
        _plugin = custom_ops.get_plugin(
            module_name='bias_act_plugin',
            sources=['bias_act.cpp', 'bias_act.cu'],
            headers=['bias_act.h'],
            source_dir=os.path.dirname(__file__),
            extra_cuda_cflags=['--use_fast_math', '--allow-unsupported-compiler'],
        )
    return True

# ----------------------------------------------------------------------------


def bias_act(x, b=None, dim=1, act='linear', alpha=None, gain=None, clamp=None, impl='cuda'):
    r"""Fused bias and activation function.

    Adds bias `b` to activation tensor `x`, evaluates activation function `act`,
    and scales the result by `gain`. Each of the steps is optional. In most cases,
    the fused op is considerably more efficient than performing the same calculation
    using standard PyTorch ops. It supports first and second order gradients,
    but not third order gradients.

    Args:
        x:      Input activation tensor. Can be of any shape.
        b:      Bias vector, or `None` to disable. Must be a 1D tensor of the same type
                as `x`. The shape must be known, and it must match the dimension of `x`
                corresponding to `dim`.
        dim:    The dimension in `x` corresponding to the elements of `b`.
                The value of `dim` is ignored if `b` is not specified.
        act:    Name of the activation function to evaluate, or `"linear"` to disable.
                Can be e.g. `"relu"`, `"lrelu"`, `"tanh"`, `"sigmoid"`, `"swish"`, etc.
                See `activation_funcs` for a full list. `None` is not allowed.
        alpha:  Shape parameter for the activation function, or `None` to use the default.
        gain:   Scaling factor for the output tensor, or `None` to use default.
                See `activation_funcs` for the default scaling of each activation function.
                If unsure, consider specifying 1.
        clamp:  Clamp the output values to `[-clamp, +clamp]`, or `None` to disable
                the clamping (default).
        impl:   Name of the implementation to use. Can be `"ref"` or `"cuda"` (default).

    Returns:
        Tensor of the same shape and datatype as `x`.
    """
    assert isinstance(x, torch.Tensor)
    assert impl in ['ref', 'cuda']
    if impl == 'cuda' and x.device.type == 'cuda' and _init():
        return _bias_act_cuda(dim=dim, act=act, alpha=alpha, gain=gain, clamp=clamp).apply(x, b)
    return _bias_act_ref(x=x, b=b, dim=dim, act=act, alpha=alpha, gain=gain, clamp=clamp)

# ----------------------------------------------------------------------------


@misc.profiled_function
def _bias_act_ref(x, b=None, dim=1, act='linear', alpha=None, gain=None, clamp=None):
    """Slow reference implementation of `bias_act()` using standard PyTorch ops.
    """
    assert isinstance(x, torch.Tensor)
    assert clamp is None or clamp >= 0
    spec = activation_funcs[act]
    alpha = float(alpha if alpha is not None else spec.def_alpha)
    gain = float(gain if gain is not None else spec.def_gain)
    clamp = float(clamp if clamp is not None else -1)

    # Add bias.
    if b is not None:
        assert isinstance(b, torch.Tensor) and b.ndim == 1
        assert 0 <= dim < x.ndim
        assert b.shape[0] == x.shape[dim]
        x = x + b.reshape([-1 if i == dim else 1 for i in range(x.ndim)])

    # Evaluate activation function.
    alpha = float(alpha)
    x = spec.func(x, alpha=alpha)

    # Scale by gain.
    gain = float(gain)
    if gain != 1:
        x = x * gain

    # Clamp.
    if clamp >= 0:
        x = x.clamp(-clamp, clamp)  # pylint: disable=invalid-unary-operand-type
    return x

# ----------------------------------------------------------------------------

_bias_act_cuda_cache = dict()


def _bias_act_cuda(dim=1, act='linear', alpha=None, gain=None, clamp=None):
    """Fast CUDA implementation of `bias_act()` using custom ops.
    """
    # Parse arguments.
    assert clamp is None or clamp >= 0
    spec = activation_funcs[act]
    alpha = float(alpha if alpha is not None else spec.def_alpha)
    gain = float(gain if gain is not None else spec.def_gain)
    clamp = float(clamp if clamp is not None else -1)

    # Lookup from cache.
    key = (dim, act, alpha, gain, clamp)
    if key in _bias_act_cuda_cache:
        return _bias_act_cuda_cache[key]

    # Forward op.
    class BiasActCuda(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, b):  # pylint: disable=arguments-differ
            ctx.memory_format = torch.channels_last if x.ndim > 2 and x.stride(1) == 1 else torch.contiguous_format
            x = x.contiguous(memory_format=ctx.memory_format)
            b = b.contiguous() if b is not None else _null_tensor
            y = x
            if act != 'linear' or gain != 1 or clamp >= 0 or b is not _null_tensor:
                y = _plugin.bias_act(x, b, _null_tensor, _null_tensor, _null_tensor, 0, dim, spec.cuda_idx, alpha, gain, clamp)
            ctx.save_for_backward(
                x if 'x' in spec.ref or spec.has_2nd_grad else _null_tensor,
                b if 'x' in spec.ref or spec.has_2nd_grad else _null_tensor,
                y if 'y' in spec.ref else _null_tensor)
            return y

        @staticmethod
        def backward(ctx, dy):  # pylint: disable=arguments-differ
            dy = dy.contiguous(memory_format=ctx.memory_format)
            x, b, y = ctx.saved_tensors
            dx = None
            db = None

            if ctx.needs_input_grad[0] or ctx.needs_input_grad[1]:
                dx = dy
                if act != 'linear' or gain != 1 or clamp >= 0:
                    dx = BiasActCudaGrad.apply(dy, x, b, y)

            if ctx.needs_input_grad[1]:
                db = dx.sum([i for i in range(dx.ndim) if i != dim])

            return dx, db

    # Backward op.
    class BiasActCudaGrad(torch.autograd.Function):
        @staticmethod
        def forward(ctx, dy, x, b, y):  # pylint: disable=arguments-differ
            ctx.memory_format = torch.channels_last if dy.ndim > 2 and dy.stride(1) == 1 else torch.contiguous_format
            dx = _plugin.bias_act(dy, b, x, y, _null_tensor, 1, dim, spec.cuda_idx, alpha, gain, clamp)
            ctx.save_for_backward(
                dy if spec.has_2nd_grad else _null_tensor,
                x, b, y)
            return dx

        @staticmethod
        def backward(ctx, d_dx):  # pylint: disable=arguments-differ
            d_dx = d_dx.contiguous(memory_format=ctx.memory_format)
            dy, x, b, y = ctx.saved_tensors
            d_dy = None
            d_x = None
            d_b = None
            d_y = None

            if ctx.needs_input_grad[0]:
                d_dy = BiasActCudaGrad.apply(d_dx, x, b, y)

            if spec.has_2nd_grad and (ctx.needs_input_grad[1] or ctx.needs_input_grad[2]):
                d_x = _plugin.bias_act(d_dx, b, x, y, dy, 2, dim, spec.cuda_idx, alpha, gain, clamp)

            if spec.has_2nd_grad and ctx.needs_input_grad[2]:
                d_b = d_x.sum([i for i in range(d_x.ndim) if i != dim])

            return d_dy, d_x, d_b, d_y

    # Add to cache.
    _bias_act_cuda_cache[key] = BiasActCuda
    return BiasActCuda

# ----------------------------------------------------------------------------
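The reference path is a simple four-stage pipeline: add bias, apply the activation, scale by gain, then optionally clamp. Its per-element behaviour can be sketched without PyTorch; the `lrelu` alpha of 0.2 and the sqrt(2) default gain mirror the `activation_funcs` table above, while the function name and everything else here is purely illustrative:

```python
import math

def bias_act_scalar(x, b=0.0, act='lrelu', alpha=0.2, gain=None, clamp=None):
    """Per-element sketch of the reference bias_act pipeline."""
    # 1. Add bias.
    x = x + b
    # 2. Activation (only 'linear' and 'lrelu' shown here).
    if act == 'lrelu':
        x = x if x >= 0 else x * alpha
        default_gain = math.sqrt(2)  # matches def_gain for lrelu
    else:
        default_gain = 1.0
    # 3. Scale by gain.
    x *= gain if gain is not None else default_gain
    # 4. Optional symmetric clamp to [-clamp, +clamp].
    if clamp is not None:
        x = max(-clamp, min(clamp, x))
    return x

print(bias_act_scalar(-1.0, b=0.5, act='lrelu'))  # -0.1 * sqrt(2) ≈ -0.1414
```

Fusing these four stages into one CUDA kernel avoids materialising three intermediate tensors, which is the efficiency win the docstring refers to.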
spaces/AnandSoni2001/StockMarket/app.py
DELETED
@@ -1,656 +0,0 @@
#Import Libraries
import streamlit as st
import plotly.graph_objects as go
import pandas as pd
import plotly.express as px
from yahoo_fin import stock_info
from yahoo_fin.stock_info import *
import math
import numpy as np
from sklearn.preprocessing import MinMaxScaler
import joblib
import yfinance as yf
import time
import requests
from bs4 import BeautifulSoup

#Heading
st.title('Stock Market Analysis and Prediction')
st.write("#")

#TCS Data Taken
tcsdaily = stock_info.get_data("TCS.NS", interval="1d")
tcsmonthly = stock_info.get_data("TCS.NS", interval="1mo")
tcsyearly = pd.read_csv('data/tcs-yearly.csv')

#Reliance Data Taken
reldaily = stock_info.get_data("RELIANCE.NS", interval="1d")
relmonthly = stock_info.get_data("RELIANCE.NS", interval="1mo")
relyearly = pd.read_csv('data/relianceind-yearly.csv')

#Infosys Data Taken
infdaily = stock_info.get_data("INFY.NS", interval="1d")
infmonthly = stock_info.get_data("INFY.NS", interval="1mo")
infyearly = pd.read_csv('data/infosys-yearly.csv')

#Select Box
comp = st.selectbox('Select a Company from the below options :', ('Tata Consultancy Services - TCS', 'Reliance Industries - RELIANCE', 'Infosys - INFY'))

if comp == 'Tata Consultancy Services - TCS':

    page = requests.get('https://groww.in/stocks/tata-consultancy-services-ltd')
    soup = BeautifulSoup(page.content, 'html.parser')
    fund = soup.find_all('td', class_="ft785Value")

    #Fundamental Values
    pb = float(fund[4].text)
    pe = float(fund[2].text)
    de = float(fund[8].text)
    div = float(fund[5].text.replace('%', ''))
    roe = float(fund[1].text.replace('%', ''))
    indpe = float(fund[6].text)

    col1, col2, col3, col4 = st.columns(4)
    x = round(stock_info.get_live_price("TCS.NS"), 2)
    y = round(tcsdaily['close'].iloc[-2], 2)
    tcs = get_stats('TCS.NS')['Value']

    col1.metric(label="Market Price", value=x, delta=round(x - y, 2))
    col2.metric(label="52 Week High", value=tcs[3])
    col3.metric(label="52 Week Low", value=tcs[4])
    col4.metric(label="Return on Equity", value=tcs[34])

    col1, col2, col3, col4 = st.columns(4)
    col1.metric(label='P/B Ratio', value=pb)
    col2.metric(label="P/E Ratio", value=pe)
    col3.metric(label='Industry P/E', value=indpe)
    col4.metric(label="Debt to Equity", value=de)

    col1, col2, col3, col4 = st.columns(4)
    col1.metric(label='Previous Close', value=y)
    col2.metric(label="Book Value Per Share", value=tcs[48])
    col3.metric(label='Earning Per Share', value=tcs[41])
    col4.metric(label="Dividend Yield", value=tcs[22])


if comp == 'Reliance Industries - RELIANCE':

    page = requests.get('https://groww.in/stocks/reliance-industries-ltd')
    soup = BeautifulSoup(page.content, 'html.parser')
    fund = soup.find_all('td', class_="ft785Value")

    #Fundamental Values
    pb = float(fund[4].text)
    pe = float(fund[2].text)
    de = float(fund[8].text)
    div = float(fund[5].text.replace('%', ''))
    roe = float(fund[1].text.replace('%', ''))
    indpe = float(fund[6].text)

    col1, col2, col3, col4 = st.columns(4)
    x = round(stock_info.get_live_price("RELIANCE.NS"), 2)
    y = round(reldaily['close'].iloc[-2], 2)
    rel = get_stats('RELIANCE.NS')['Value']
    col1.metric(label="Market Price", value=x, delta=round(x - y, 2))
    col2.metric(label="52 Week High", value=rel[3])
    col3.metric(label="52 Week Low", value=rel[4])
    col4.metric(label="Return on Equity", value='8.21%')

    col1, col2, col3, col4 = st.columns(4)
    col1.metric(label='P/B Ratio', value=pb)
    col2.metric(label="P/E Ratio", value=pe)
    col3.metric(label='Industry P/E', value=indpe)
    col4.metric(label="Debt to Equity", value=de)

    col1, col2, col3, col4 = st.columns(4)
    col1.metric(label='Previous Close', value=y)
    col2.metric(label="Book Value Per Share", value=float(fund[7].text))
    col3.metric(label='Earning Per Share', value=float(fund[3].text))
    col4.metric(label="Dividend Yield", value=div)

if comp == 'Infosys - INFY':

    page = requests.get('https://groww.in/stocks/infosys-ltd')
    soup = BeautifulSoup(page.content, 'html.parser')
    fund = soup.find_all('td', class_="ft785Value")

    #Fundamental Values
    pb = float(fund[4].text)
    pe = float(fund[2].text)
    de = float(fund[8].text)
    div = float(fund[5].text.replace('%', ''))
    roe = float(fund[1].text.replace('%', ''))
    indpe = float(fund[6].text)

    col1, col2, col3, col4 = st.columns(4)
    x = round(stock_info.get_live_price("INFY.NS"), 2)
    y = round(infdaily['close'].iloc[-2], 2)
    inf = get_stats('INFY.NS')['Value']
    col1.metric(label="Market Price", value=x, delta=round(x - y, 2))
    col2.metric(label="52 Week High", value=inf[3])
    col3.metric(label="52 Week Low", value=inf[4])
    col4.metric(label="Return on Equity", value=inf[34])

    col1, col2, col3, col4 = st.columns(4)
    col1.metric(label='P/B Ratio', value=pb)
    col2.metric(label="P/E Ratio", value=pe)
    col3.metric(label='Industry P/E', value=indpe)
    col4.metric(label="Debt to Equity", value=de)

    col1, col2, col3, col4 = st.columns(4)
    col1.metric(label='Previous Close', value=y)
    col2.metric(label="Book Value Per Share", value=inf[48])
    col3.metric(label='Earning Per Share', value=inf[41])
    col4.metric(label="Dividend Yield", value=inf[22])

#Tab for Hist Data
st.write("#")
st.subheader('Historic data : ')
option1, option2, option3 = st.tabs(["Daily", "Monthly", "Yearly"])

cl1, cl2, cl3, cl4 = st.columns(4)
with cl1:
    ag1 = st.checkbox('Close', value=True)
with cl2:
    ag2 = st.checkbox('Open', value=True)
with cl3:
    ag3 = st.checkbox('High', value=True)
with cl4:
    ag4 = st.checkbox('Low', value=True)

with option1:
    opt = st.radio("Select timelength :", ('All Time', '1 Week', '1 Month', '1 Year'))
    st.write('<style>div.row-widget.stRadio > div{flex-direction:row;}</style>', unsafe_allow_html=True)

    if comp == 'Tata Consultancy Services - TCS':
        if opt == 'All Time':
            fig = px.line(tcsdaily, y='close', markers=False, title='Tata Consultancy Services daily data of all time')
        if opt == '1 Week':
            fig = px.line(tcsdaily.tail(5), y='close', markers=False, title='Tata Consultancy Services daily data of 1 week')
        if opt == '1 Month':
            fig = px.line(tcsdaily.tail(20), y='close', markers=False, title='Tata Consultancy Services daily data of 1 month')
        if opt == '1 Year':
            fig = px.line(tcsdaily.tail(251), y='close', markers=False, title='Tata Consultancy Services daily data of 1 year')
        st.plotly_chart(fig, use_container_width=True)

        fig = go.Figure()
        if ag1:
            fig.add_trace(go.Scatter(x=tcsdaily.index, y=tcsdaily['close'], name='Closing'))
        if ag2:
            fig.add_trace(go.Scatter(x=tcsdaily.index, y=tcsdaily['open'], name='Opening', line=dict(color='yellow')))
        if ag3:
            fig.add_trace(go.Scatter(x=tcsdaily.index, y=tcsdaily['high'], name='High', line=dict(color='green')))
        if ag4:
            fig.add_trace(go.Scatter(x=tcsdaily.index, y=tcsdaily['low'], name='Low', line=dict(color='red')))
        fig.update_layout(xaxis_title='Date', yaxis_title='Price', title='Comparing other relevant parameters along close')
        st.plotly_chart(fig, use_container_width=True)

    if comp == 'Infosys - INFY':
        if opt == 'All Time':
            fig = px.line(infdaily, y='close', markers=False, title='Infosys daily data of all time')
        if opt == '1 Week':
            fig = px.line(infdaily.tail(5), y='close', markers=False, title='Infosys daily data of 1 week')
        if opt == '1 Month':
            fig = px.line(infdaily.tail(20), y='close', markers=False, title='Infosys daily data of 1 month')
        if opt == '1 Year':
            fig = px.line(infdaily.tail(251), y='close', markers=False, title='Infosys daily data of 1 year')
        st.plotly_chart(fig, use_container_width=True)

        fig = go.Figure()
        if ag1:
            fig.add_trace(go.Scatter(x=infdaily.index, y=infdaily['close'], name='Closing', line=dict(color='blue')))
        if ag2:
            fig.add_trace(go.Scatter(x=infdaily.index, y=infdaily['open'], name='Opening', line=dict(color='yellow')))
        if ag3:
            fig.add_trace(go.Scatter(x=infdaily.index, y=infdaily['high'], name='High', line=dict(color='green')))
        if ag4:
            fig.add_trace(go.Scatter(x=infdaily.index, y=infdaily['low'], name='Low', line=dict(color='red')))
        fig.update_layout(xaxis_title='Date', yaxis_title='Price', title='Comparing other relevant parameters')
        st.plotly_chart(fig, use_container_width=True)

    if comp == 'Reliance Industries - RELIANCE':
        if opt == 'All Time':
            fig = px.line(reldaily, y='close', markers=False, title='Reliance Industries daily data of all time')
        if opt == '1 Week':
            fig = px.line(reldaily.tail(5), y='close', markers=False, title='Reliance Industries daily data of 1 week')
        if opt == '1 Month':
            fig = px.line(reldaily.tail(20), y='close', markers=False, title='Reliance Industries daily data of 1 month')
        if opt == '1 Year':
            fig = px.line(reldaily.tail(251), y='close', markers=False, title='Reliance Industries daily data of 1 year')
        st.plotly_chart(fig, use_container_width=True)

        fig = go.Figure()
        if ag1:
            fig.add_trace(go.Scatter(x=reldaily.index, y=reldaily['close'], name='Closing', line=dict(color='blue')))
        if ag2:
            fig.add_trace(go.Scatter(x=reldaily.index, y=reldaily['open'], name='Opening', line=dict(color='yellow')))
        if ag3:
            fig.add_trace(go.Scatter(x=reldaily.index, y=reldaily['high'], name='High', line=dict(color='green')))
        if ag4:
            fig.add_trace(go.Scatter(x=reldaily.index, y=reldaily['low'], name='Low', line=dict(color='red')))
        fig.update_layout(xaxis_title='Date', yaxis_title='Price', title='Comparing other relevant parameters along close')
        st.plotly_chart(fig, use_container_width=True)

with option2:
    if comp == 'Tata Consultancy Services - TCS':
        fig = px.line(tcsmonthly, y='close', markers=False, title='Tata Consultancy Services monthly data')
        st.plotly_chart(fig, use_container_width=True)

        fig = go.Figure()
        if ag1:
            fig.add_trace(go.Scatter(x=tcsmonthly.index, y=tcsmonthly['close'], name='Closing', line=dict(color='blue')))
        if ag2:
            fig.add_trace(go.Scatter(x=tcsmonthly.index, y=tcsmonthly['open'], name='Opening', line=dict(color='yellow')))
        if ag3:
            fig.add_trace(go.Scatter(x=tcsmonthly.index, y=tcsmonthly['high'], name='High', line=dict(color='green')))
        if ag4:
            fig.add_trace(go.Scatter(x=tcsmonthly.index, y=tcsmonthly['low'], name='Low', line=dict(color='red')))
        fig.update_layout(xaxis_title='Month', yaxis_title='Price', title='Comparing other relevant parameters')
        st.plotly_chart(fig, use_container_width=True)

    if comp == 'Infosys - INFY':
        fig = px.line(infmonthly, y='close', markers=False, title='Infosys monthly data')
        st.plotly_chart(fig, use_container_width=True)

        fig = go.Figure()
        if ag1:
            fig.add_trace(go.Scatter(x=infmonthly.index, y=infmonthly['close'], name='Closing', line=dict(color='blue')))
        if ag2:
            fig.add_trace(go.Scatter(x=infmonthly.index, y=infmonthly['open'], name='Opening', line=dict(color='yellow')))
        if ag3:
            fig.add_trace(go.Scatter(x=infmonthly.index, y=infmonthly['high'], name='High', line=dict(color='green')))
        if ag4:
            fig.add_trace(go.Scatter(x=infmonthly.index, y=infmonthly['low'], name='Low', line=dict(color='red')))
        fig.update_layout(xaxis_title='Month', yaxis_title='Price', title='Comparing other relevant parameters')
        st.plotly_chart(fig, use_container_width=True)

    if comp == 'Reliance Industries - RELIANCE':
        fig = px.line(relmonthly, y='close', markers=False, title='Reliance Industries monthly data')
        st.plotly_chart(fig, use_container_width=True)

        fig = go.Figure()
        if ag1:
            fig.add_trace(go.Scatter(x=relmonthly.index, y=relmonthly['close'], name='Closing', line=dict(color='blue')))
        if ag2:
            fig.add_trace(go.Scatter(x=relmonthly.index, y=relmonthly['open'], name='Opening', line=dict(color='yellow')))
        if ag3:
            fig.add_trace(go.Scatter(x=relmonthly.index, y=relmonthly['high'], name='High', line=dict(color='green')))
        if ag4:
            fig.add_trace(go.Scatter(x=relmonthly.index, y=relmonthly['low'], name='Low', line=dict(color='red')))
        fig.update_layout(xaxis_title='Month', yaxis_title='Price', title='Comparing other relevant parameters')
        st.plotly_chart(fig, use_container_width=True)

with option3:
    if comp == 'Tata Consultancy Services - TCS':
        fig = px.line(tcsyearly, x='Year', y='Close Price', markers=True, title='Tata Consultancy Services Yearly Data from 2004')
        st.plotly_chart(fig, use_container_width=True)

        fig = go.Figure()
        if ag1:
            fig.add_trace(go.Scatter(x=tcsyearly['Year'], y=tcsyearly['Close Price'], name='Closing', line=dict(color='blue')))
        if ag2:
            fig.add_trace(go.Scatter(x=tcsyearly['Year'], y=tcsyearly['Open Price'], name='Opening', line=dict(color='yellow')))
        if ag3:
            fig.add_trace(go.Scatter(x=tcsyearly['Year'], y=tcsyearly['High Price'], name='High', line=dict(color='green')))
        if ag4:
            fig.add_trace(go.Scatter(x=tcsyearly['Year'], y=tcsyearly['Low Price'], name='Low', line=dict(color='red')))
        fig.update_layout(xaxis_title='Year', yaxis_title='Price', title='Comparing other relevant parameters along close price')
        st.plotly_chart(fig, use_container_width=True)

    if comp == 'Infosys - INFY':
        fig = px.line(infyearly, x='Year', y='Close Price', markers=True, title='Infosys Yearly Data from 2004')
        st.plotly_chart(fig, use_container_width=True)

        fig = go.Figure()
        if ag1:
            fig.add_trace(go.Scatter(x=infyearly['Year'], y=infyearly['Close Price'], name='Closing', line=dict(color='blue')))
        if ag2:
            fig.add_trace(go.Scatter(x=infyearly['Year'], y=infyearly['Open Price'], name='Opening', line=dict(color='yellow')))
        if ag3:
|
310 |
-
fig.add_trace(go.Scatter(x=infyearly['Year'], y=infyearly['High Price'], name = 'High', line=dict(color='green')))
|
311 |
-
if(ag4):
|
312 |
-
fig.add_trace(go.Scatter(x=infyearly['Year'], y=infyearly['Low Price'], name = 'Low', line=dict(color='red')))
|
313 |
-
fig.update_layout(xaxis_title='Year', yaxis_title='Price', title='Comparing other relevant parameters')
|
314 |
-
st.plotly_chart(fig, use_container_width=True)
|
315 |
-
|
316 |
-
if comp == 'Reliance Industries - RELIANCE':
|
317 |
-
fig = px.line(relyearly, x='Year', y='Close Price',markers=True, title='Reliance Industries Yearly Data from 2004')
|
318 |
-
st.plotly_chart(fig, use_container_width=True)
|
319 |
-
|
320 |
-
fig = go.Figure()
|
321 |
-
if(ag1):
|
322 |
-
fig.add_trace(go.Scatter(x=relyearly['Year'], y=relyearly['Close Price'], name='Closing', line=dict(color='blue')))
|
323 |
-
if(ag2):
|
324 |
-
fig.add_trace(go.Scatter(x=relyearly['Year'], y=relyearly['Open Price'], name = 'Opening', line=dict(color='yellow')))
|
325 |
-
if(ag3):
|
326 |
-
fig.add_trace(go.Scatter(x=relyearly['Year'], y=relyearly['High Price'], name = 'High', line=dict(color='green')))
|
327 |
-
if(ag4):
|
328 |
-
fig.add_trace(go.Scatter(x=relyearly['Year'], y=relyearly['Low Price'], name = 'Low', line=dict(color='red')))
|
329 |
-
fig.update_layout(xaxis_title='Year', yaxis_title='Price', title='Comparing other relevant parameters')
|
330 |
-
st.plotly_chart(fig, use_container_width=True)
|
331 |
-
|
332 |
-
st.write("#")

# Riskometer
def get_info(url, x):
    score = 0
    page = requests.get(url)
    soup = BeautifulSoup(page.content, 'html.parser')
    fund = soup.find_all('td', class_="ft785Value")

    # Fundamental values scraped from the page
    pb = float(fund[4].text)
    pe = float(fund[2].text)
    de = float(fund[8].text)
    div = float(fund[5].text.replace('%', ''))
    roe = float(fund[1].text.replace('%', ''))
    indpe = float(fund[6].text)
    pat = soup.find_all('div', class_="shp76TextRight")
    promo = float(pat[0].text.replace('%', ''))
    df = get_stats(x)
    l_52 = float(df['Value'][4])
    h_52 = float(df['Value'][3])
    live = round(stock_info.get_live_price(x), 2)

    # 1 - 52-week range: live price closer to the high than to the low
    if abs(live - h_52) <= abs(live - l_52):
        score = score + 1

    # 2 - Revenue
    if x == 'TCS.NS':
        score = score + 1

    # 3 - Price-to-book
    if pb < 3:
        score = score + 1
    elif pb > 3:
        score = score - 1

    # 4 - Shareholding pattern (promoter holding)
    if promo > 50:
        score = score + 1
    elif promo < 50:
        score = score - 1

    # 5 - All three stocks made a profit in each of the last 5 years
    score = score + 1

    # 6 - Price-to-earnings
    if pe < 30:
        score = score + 1
    elif pe > 100:
        score = score - 1

    # 7 - Debt-to-equity
    if de < 1:
        score = score + 1
    elif de > 2:
        score = score - 1

    # 8 - Dividend yield (div is already a float here, so a string
    # comparison against 'NULL'/'NA' could never match and was removed)
    if div > 2:
        score = score + 1

    # 9 - Return on equity
    if roe > 25:
        score = score + 1
    elif roe < 5:
        score = score - 1

    # 10 - PE relative to industry PE
    if pe > indpe:
        score = score - 1
    elif abs(pe - indpe) < (indpe * 0.1):
        score = score + 1

    return score
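The ten scoring rules in `get_info` can be factored into a side-effect-free helper so the logic is testable without scraping or live prices; this is a minimal sketch, and the function name and boolean inputs below are illustrative rather than part of the app:

```python
def fundamental_score(pb, pe, de, div_yield, roe, ind_pe, promoter_pct,
                      near_52w_high, revenue_growing, profitable_5y=True):
    """Score an equity on the ten rules used in get_info above.

    Each rule adds 1 for a healthy value and (for some rules) subtracts
    1 for an unhealthy one; the app then reports risk as 10 - score.
    """
    score = 0
    score += 1 if near_52w_high else 0        # 1: 52-week position
    score += 1 if revenue_growing else 0      # 2: revenue trend
    if pb < 3:                                # 3: price-to-book
        score += 1
    elif pb > 3:
        score -= 1
    if promoter_pct > 50:                     # 4: promoter holding
        score += 1
    elif promoter_pct < 50:
        score -= 1
    score += 1 if profitable_5y else 0        # 5: 5-year profitability
    if pe < 30:                               # 6: price-to-earnings
        score += 1
    elif pe > 100:
        score -= 1
    if de < 1:                                # 7: debt-to-equity
        score += 1
    elif de > 2:
        score -= 1
    if div_yield > 2:                         # 8: dividend yield
        score += 1
    if roe > 25:                              # 9: return on equity
        score += 1
    elif roe < 5:
        score -= 1
    if pe > ind_pe:                           # 10: P/E vs industry P/E
        score -= 1
    elif abs(pe - ind_pe) < ind_pe * 0.1:
        score += 1
    return score

# A stock that passes every rule scores 10, i.e. risk 10 - 10 = 0.
best = fundamental_score(pb=2, pe=25, de=0.5, div_yield=3, roe=30,
                         ind_pe=26, promoter_pct=72,
                         near_52w_high=True, revenue_growing=True)
```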
# Access URL object
if comp == 'Tata Consultancy Services - TCS':
    ans = get_info('https://groww.in/stocks/tata-consultancy-services-ltd', 'TCS.NS')
if comp == 'Infosys - INFY':
    ans = get_info('https://groww.in/stocks/infosys-ltd', 'INFY.NS')
if comp == 'Reliance Industries - RELIANCE':
    ans = get_info('https://groww.in/stocks/reliance-industries-ltd', 'RELIANCE.NS')

score = 10 - ans

st.subheader('Riskometer')
if score >= 9:
    progress_text = "Very High Risk"
    my_bar = st.progress(0, text=progress_text)

    score = 10 if score > 10 else score
    for percent_complete in range(score * 10):
        time.sleep(0.02)
        my_bar.progress(percent_complete + 1, text=progress_text)
    st.write(score * 10, '%')

elif score <= 1:
    progress_text = "Very Low Risk"
    my_bar = st.progress(0, text=progress_text)

    for percent_complete in range(score * 10):
        time.sleep(0.02)
        my_bar.progress(percent_complete + 1, text=progress_text)

    st.write(score * 10, '%')

elif 2 <= score <= 3:
    progress_text = "Low Risk"
    my_bar = st.progress(40, text=progress_text)

    for percent_complete in range(score * 10):
        time.sleep(0.02)
        my_bar.progress(percent_complete + 1, text=progress_text)

    st.write(score * 10, '%')

elif 4 <= score <= 6:
    progress_text = "Moderate Risk"
    my_bar = st.progress(60, text=progress_text)

    for percent_complete in range(score * 10):
        time.sleep(0.02)
        my_bar.progress(percent_complete + 1, text=progress_text)
    st.write(score * 10, '%')

elif 7 <= score <= 8:
    progress_text = "High Risk"
    my_bar = st.progress(80, text=progress_text)

    for percent_complete in range(score * 10):
        time.sleep(0.02)
        my_bar.progress(percent_complete + 1, text=progress_text)
    st.write(score * 10, '%')

st.caption('Based on 10 fundamental aspects of an equity.')
# Predictions
st.write("#")
st.subheader('Predict : ')

if st.button('Click Here'):
    if comp == 'Tata Consultancy Services - TCS':
        x = round(stock_info.get_live_price("TCS.NS"), 2)
        tcsweekly = stock_info.get_data("TCS.NS", interval="1d")
        tcsweekly = tcsweekly.dropna()
        values = tcsweekly['close'].values
        data_len = math.ceil(len(values) * 0.8)
        scaler = MinMaxScaler(feature_range=(0, 1))
        scaled_data = scaler.fit_transform(values.reshape(-1, 1))
        test_data = scaled_data[data_len - 60:, :]
        x_test = []
        for i in range(60, len(test_data)):
            x_test.append(test_data[i - 60:i, 0])
        x_test = np.array(x_test)
        x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))
        new = joblib.load('New/tcsmodelnew.pkl')
        ans = new.predict(x_test)
        ans1 = scaler.inverse_transform(ans)
        val = np.around(ans1[-1][0], decimals=2)
        st.metric(label="Prediction", value=val, delta=round(val - x, 2))

    if comp == 'Reliance Industries - RELIANCE':
        x = round(stock_info.get_live_price("RELIANCE.NS"), 2)
        relweekly = stock_info.get_data("RELIANCE.NS", interval="1d")
        relweekly = relweekly.dropna()
        values = relweekly['close'].values
        data_len = math.ceil(len(values) * 0.8)
        scaler = MinMaxScaler(feature_range=(0, 1))
        scaled_data = scaler.fit_transform(values.reshape(-1, 1))
        test_data = scaled_data[data_len - 60:, :]
        x_test = []
        for i in range(60, len(test_data)):
            x_test.append(test_data[i - 60:i, 0])
        x_test = np.array(x_test)
        x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))
        new = joblib.load('New/relmodelnew.pkl')
        ans = new.predict(x_test)
        ans1 = scaler.inverse_transform(ans)
        val = np.around(ans1[-1][0], decimals=2)
        st.metric(label="Prediction", value=val, delta=round(val - x, 2))

    if comp == 'Infosys - INFY':
        x = round(stock_info.get_live_price("INFY.NS"), 2)
        infweekly = stock_info.get_data("INFY.NS", interval="1d")
        infweekly = infweekly.dropna()
        values = infweekly['close'].values
        data_len = math.ceil(len(values) * 0.8)
        scaler = MinMaxScaler(feature_range=(0, 1))
        scaled_data = scaler.fit_transform(values.reshape(-1, 1))
        test_data = scaled_data[data_len - 60:, :]
        x_test = []
        for i in range(60, len(test_data)):
            x_test.append(test_data[i - 60:i, 0])
        x_test = np.array(x_test)
        x_test = np.reshape(x_test, (x_test.shape[0], x_test.shape[1], 1))
        new = joblib.load('New/infymodelnew.pkl')
        ans = new.predict(x_test)
        ans1 = scaler.inverse_transform(ans)
        val = np.around(ans1[-1][0], decimals=2)
        st.metric(label="Prediction", value=val, delta=round(val - x, 2))
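Each prediction branch above builds the same 60-step sliding window before calling its saved model; that windowing can be isolated into a small helper and checked on plain numbers. The helper name is illustrative; the window length, loop bounds, and output shape mirror the loop above:

```python
import numpy as np

def make_windows(series, window=60):
    """Split a 1-D series into overlapping windows of `window` consecutive
    values, shaped (n_windows, window, 1) as the model input above expects.

    Mirrors the app's loop `for i in range(window, len(series))`, so a
    series of length N yields N - window windows.
    """
    series = np.asarray(series, dtype=float).reshape(-1)
    windows = [series[i - window:i] for i in range(window, len(series))]
    return np.array(windows).reshape(-1, window, 1)

# 100 values with window=60 give 40 windows; the last window ends at
# the second-to-last element, exactly as in the loop above.
x_test = make_windows(np.arange(100), window=60)
```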
# Support & Resistance
st.write("#")
st.subheader('Support and Resistance Indicators : ')

def supp_resis(x):
    tcsdaily = stock_info.get_data(x, interval="1d")
    new = tcsdaily.tail(15).head(1)

    high = new['high']
    low = new['low']
    close = new['close']
    pp = (high + low + close) / 3
    r1 = 2 * pp - low
    s1 = 2 * pp - high
    r2 = pp + (r1 - s1)
    s2 = pp - (r1 - s1)
    r3 = high + 2 * (pp - low)
    s3 = low - 2 * (high - pp)

    fig = px.line(tcsdaily.tail(20), y='close', markers=False, title=x + ' daily data of 1 month')
    fig.add_hline(y=r1[0], line_dash="dash", line_color="orange", annotation_text="1st Resistance")
    fig.add_hline(y=s1[0], line_dash="dash", line_color="lime", annotation_text="1st Support")

    fig.add_hline(y=r2[0], line_dash="dash", line_color="red", annotation_text="2nd Resistance")
    fig.add_hline(y=s2[0], line_dash="dash", line_color="green", annotation_text="2nd Support")

    fig.add_hline(y=r3[0], line_dash="dash", line_color="darkred", annotation_text="3rd Resistance")
    fig.add_hline(y=s3[0], line_dash="dash", line_color="darkgreen", annotation_text="3rd Support")

    st.plotly_chart(fig, use_container_width=True)

    data = yf.download(
        tickers=x,
        period="5d",
        interval="60m",
        group_by='ticker',
        auto_adjust=True,
        prepost=False,
        threads=True,
        proxy=None)

    fig = px.line(data, y='Close', markers=False, title=x + ' hourly data of 5 days')
    fig.add_hline(y=r1[0], line_dash="dash", line_color="orange", annotation_text="1st Resistance")
    fig.add_hline(y=s1[0], line_dash="dash", line_color="lime", annotation_text="1st Support")

    fig.add_hline(y=r2[0], line_dash="dash", line_color="red", annotation_text="2nd Resistance")
    fig.add_hline(y=s2[0], line_dash="dash", line_color="green", annotation_text="2nd Support")

    fig.add_hline(y=r3[0], line_dash="dash", line_color="darkred", annotation_text="3rd Resistance")
    fig.add_hline(y=s3[0], line_dash="dash", line_color="darkgreen", annotation_text="3rd Support")

    st.plotly_chart(fig, use_container_width=True)

if comp == 'Tata Consultancy Services - TCS':
    supp_resis('TCS.NS')
if comp == 'Infosys - INFY':
    supp_resis('INFY.NS')
if comp == 'Reliance Industries - RELIANCE':
    supp_resis('RELIANCE.NS')
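The levels computed in `supp_resis` follow the classic floor-trader pivot-point formulas. As a sketch, the arithmetic can be pulled out and verified on round numbers (the helper name here is illustrative):

```python
def pivot_levels(high, low, close):
    """Classic floor-trader pivot points: the central pivot plus three
    resistance/support pairs, matching the formulas in supp_resis."""
    pp = (high + low + close) / 3
    r1, s1 = 2 * pp - low, 2 * pp - high
    r2, s2 = pp + (r1 - s1), pp - (r1 - s1)
    r3, s3 = high + 2 * (pp - low), low - 2 * (high - pp)
    return pp, (r1, s1), (r2, s2), (r3, s3)

# With high=110, low=90, close=100 the pivot is 100 and the pairs are
# symmetric around it: (110, 90), (120, 80), (130, 70).
pp, (r1, s1), (r2, s2), (r3, s3) = pivot_levels(110, 90, 100)
```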
# Tabs for historical financial data
st.write("#")
st.subheader('Financial data : ')
a1, a2, a3 = st.tabs(["Revenue & Profit", "Net Worth", "Shareholding Pattern"])

tier = ['Promoters', 'Mutual Funds', 'Retail', 'Foreign Institutions', 'Others']
y = ['2018', '2019', '2020', '2021', '2022']

with a1:
    st.caption('All values in Crs')
    if comp == 'Infosys - INFY':
        chart_data = pd.DataFrame([[70522, 16029], [82675, 15404], [90791, 16594], [100472, 19351], [121641, 22110]],
                                  index=y, columns=["Revenue", "Profit"])
        st.bar_chart(chart_data, height=350)

    if comp == 'Tata Consultancy Services - TCS':
        chart_data = pd.DataFrame([[123104, 25826], [146463, 31472], [156949, 32430], [164177, 32430], [191754, 38327]],
                                  index=y, columns=["Revenue", "Profit"])
        st.bar_chart(chart_data, height=350)

    if comp == 'Reliance Industries - RELIANCE':
        chart_data = pd.DataFrame([[408265, 36075], [583094, 39588], [611645, 39354], [486326, 49128], [721634, 60705]],
                                  index=y, columns=["Revenue", "Profit"])
        st.bar_chart(chart_data, height=350)

with a2:
    st.caption('All values in Crs')
    if comp == 'Infosys - INFY':
        chart_data = pd.DataFrame([64923, 64948, 65450, 76351, 75350], index=y, columns=['Net Worth'])
        st.bar_chart(chart_data, height=350)

    if comp == 'Tata Consultancy Services - TCS':
        chart_data = pd.DataFrame([85128, 89446, 84126, 86433, 89139], index=y, columns=['Net Worth'])
        st.bar_chart(chart_data, height=350)

    if comp == 'Reliance Industries - RELIANCE':
        chart_data = pd.DataFrame([293506, 387112, 453331, 700172, 779485], index=y, columns=['Net Worth'])
        st.bar_chart(chart_data, height=350)

with a3:
    st.caption('As of March, 2023')
    if comp == 'Infosys - INFY':
        x = [15.11, 17.71, 18.22, 36.28, 12.68]
        fig = px.pie(values=x, names=tier)
        st.plotly_chart(fig, use_container_width=True, height=350)

    if comp == 'Tata Consultancy Services - TCS':
        x = [72.30, 3.31, 5.96, 12.94, 5.49]
        fig = px.pie(values=x, names=tier)
        st.plotly_chart(fig, use_container_width=True, height=350)

    if comp == 'Reliance Industries - RELIANCE':
        x = [50.49, 5.81, 11.64, 23.43, 8.63]
        fig = px.pie(values=x, names=tier)
        st.plotly_chart(fig, use_container_width=True, height=350)

st.write('Thanks ! We hope our webpage was useful.')
st.caption('The Web Application was made by Anand Soni and Deepak Rathore.')
spaces/Andres99/Tune-A-Video-Training-UI/trainer.py
DELETED
@@ -1,166 +0,0 @@
from __future__ import annotations

import datetime
import os
import pathlib
import shlex
import shutil
import subprocess
import sys

import gradio as gr
import slugify
import torch
from huggingface_hub import HfApi
from omegaconf import OmegaConf

from app_upload import ModelUploader
from utils import save_model_card

sys.path.append('Tune-A-Video')

URL_TO_JOIN_MODEL_LIBRARY_ORG = 'https://huggingface.co/organizations/Tune-A-Video-library/share/YjTcaNJmKyeHFpMBioHhzBcTzCYddVErEk'
ORIGINAL_SPACE_ID = 'Tune-A-Video-library/Tune-A-Video-Training-UI'
SPACE_ID = os.getenv('SPACE_ID', ORIGINAL_SPACE_ID)


class Trainer:
    def __init__(self, hf_token: str | None = None):
        self.hf_token = hf_token
        self.model_uploader = ModelUploader(hf_token)

        self.checkpoint_dir = pathlib.Path('checkpoints')
        self.checkpoint_dir.mkdir(exist_ok=True)

    def download_base_model(self, base_model_id: str) -> str:
        model_dir = self.checkpoint_dir / base_model_id
        if not model_dir.exists():
            org_name = base_model_id.split('/')[0]
            org_dir = self.checkpoint_dir / org_name
            org_dir.mkdir(exist_ok=True)
            subprocess.run(shlex.split(
                f'git clone https://huggingface.co/{base_model_id}'),
                           cwd=org_dir)
        return model_dir.as_posix()

    def join_model_library_org(self, token: str) -> None:
        subprocess.run(
            shlex.split(
                f'curl -X POST -H "Authorization: Bearer {token}" -H "Content-Type: application/json" {URL_TO_JOIN_MODEL_LIBRARY_ORG}'
            ))

    def run(
        self,
        training_video: str,
        training_prompt: str,
        output_model_name: str,
        overwrite_existing_model: bool,
        validation_prompt: str,
        base_model: str,
        resolution_s: str,
        n_steps: int,
        learning_rate: float,
        gradient_accumulation: int,
        seed: int,
        fp16: bool,
        use_8bit_adam: bool,
        checkpointing_steps: int,
        validation_epochs: int,
        upload_to_hub: bool,
        use_private_repo: bool,
        delete_existing_repo: bool,
        upload_to: str,
        remove_gpu_after_training: bool,
        input_token: str,
    ) -> str:
        if SPACE_ID == ORIGINAL_SPACE_ID:
            raise gr.Error(
                'This Space does not work on this Shared UI. Duplicate the Space and attribute a GPU'
            )
        if not torch.cuda.is_available():
            raise gr.Error('CUDA is not available.')
        if training_video is None:
            raise gr.Error('You need to upload a video.')
        if not training_prompt:
            raise gr.Error('The training prompt is missing.')
        if not validation_prompt:
            raise gr.Error('The validation prompt is missing.')

        resolution = int(resolution_s)

        if not output_model_name:
            timestamp = datetime.datetime.now().strftime('%Y-%m-%d-%H-%M-%S')
            output_model_name = f'tune-a-video-{timestamp}'
        output_model_name = slugify.slugify(output_model_name)

        repo_dir = pathlib.Path(__file__).parent
        output_dir = repo_dir / 'experiments' / output_model_name
        if overwrite_existing_model or upload_to_hub:
            shutil.rmtree(output_dir, ignore_errors=True)
        output_dir.mkdir(parents=True)

        if upload_to_hub:
            self.join_model_library_org(
                self.hf_token if self.hf_token else input_token)

        config = OmegaConf.load('Tune-A-Video/configs/man-surfing.yaml')
        config.pretrained_model_path = self.download_base_model(base_model)
        config.output_dir = output_dir.as_posix()
        config.train_data.video_path = training_video.name  # type: ignore
        config.train_data.prompt = training_prompt
        config.train_data.n_sample_frames = 8
        config.train_data.width = resolution
        config.train_data.height = resolution
        config.train_data.sample_start_idx = 0
        config.train_data.sample_frame_rate = 1
        config.validation_data.prompts = [validation_prompt]
        config.validation_data.video_length = 8
        config.validation_data.width = resolution
        config.validation_data.height = resolution
        config.validation_data.num_inference_steps = 50
        config.validation_data.guidance_scale = 7.5
        config.learning_rate = learning_rate
        config.gradient_accumulation_steps = gradient_accumulation
        config.train_batch_size = 1
        config.max_train_steps = n_steps
        config.checkpointing_steps = checkpointing_steps
        config.validation_steps = validation_epochs
        config.seed = seed
        config.mixed_precision = 'fp16' if fp16 else ''
        config.use_8bit_adam = use_8bit_adam

        config_path = output_dir / 'config.yaml'
        with open(config_path, 'w') as f:
            OmegaConf.save(config, f)

        command = f'accelerate launch Tune-A-Video/train_tuneavideo.py --config {config_path}'
        subprocess.run(shlex.split(command))
        save_model_card(save_dir=output_dir,
                        base_model=base_model,
                        training_prompt=training_prompt,
                        test_prompt=validation_prompt,
                        test_image_dir='samples')

        message = 'Training completed!'
        print(message)

        if upload_to_hub:
            upload_message = self.model_uploader.upload_model(
                folder_path=output_dir.as_posix(),
                repo_name=output_model_name,
                upload_to=upload_to,
                private=use_private_repo,
                delete_existing_repo=delete_existing_repo,
                input_token=input_token)
            print(upload_message)
            message = message + '\n' + upload_message

        if remove_gpu_after_training:
            space_id = os.getenv('SPACE_ID')
            if space_id:
                api = HfApi(
                    token=self.hf_token if self.hf_token else input_token)
                api.request_space_hardware(repo_id=space_id,
                                           hardware='cpu-basic')

        return message
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/deepfloyd_if/pipeline_if_inpainting.py
DELETED
@@ -1,1059 +0,0 @@
|
|
1 |
-
import html
|
2 |
-
import inspect
|
3 |
-
import re
|
4 |
-
import urllib.parse as ul
|
5 |
-
from typing import Any, Callable, Dict, List, Optional, Union
|
6 |
-
|
7 |
-
import numpy as np
|
8 |
-
import PIL
|
9 |
-
import torch
|
10 |
-
from transformers import CLIPImageProcessor, T5EncoderModel, T5Tokenizer
|
11 |
-
|
12 |
-
from ...loaders import LoraLoaderMixin
|
13 |
-
from ...models import UNet2DConditionModel
|
14 |
-
from ...schedulers import DDPMScheduler
|
15 |
-
from ...utils import (
|
16 |
-
BACKENDS_MAPPING,
|
17 |
-
PIL_INTERPOLATION,
|
18 |
-
is_accelerate_available,
|
19 |
-
is_accelerate_version,
|
20 |
-
is_bs4_available,
|
21 |
-
is_ftfy_available,
|
22 |
-
logging,
|
23 |
-
randn_tensor,
|
24 |
-
replace_example_docstring,
|
25 |
-
)
|
26 |
-
from ..pipeline_utils import DiffusionPipeline
|
27 |
-
from . import IFPipelineOutput
|
28 |
-
from .safety_checker import IFSafetyChecker
|
29 |
-
from .watermark import IFWatermarker
|
30 |
-
|
31 |
-
|
32 |
-
logger = logging.get_logger(__name__) # pylint: disable=invalid-name
|
33 |
-
|
34 |
-
if is_bs4_available():
|
35 |
-
from bs4 import BeautifulSoup
|
36 |
-
|
37 |
-
if is_ftfy_available():
|
38 |
-
import ftfy
|
39 |
-
|
40 |
-
|
41 |
-
# Copied from diffusers.pipelines.deepfloyd_if.pipeline_if_img2img.resize
|
42 |
-
def resize(images: PIL.Image.Image, img_size: int) -> PIL.Image.Image:
|
43 |
-
w, h = images.size
|
44 |
-
|
45 |
-
coef = w / h
|
46 |
-
|
47 |
-
w, h = img_size, img_size
|
48 |
-
|
49 |
-
if coef >= 1:
|
50 |
-
w = int(round(img_size / 8 * coef) * 8)
|
51 |
-
else:
|
52 |
-
h = int(round(img_size / 8 / coef) * 8)
|
53 |
-
|
54 |
-
images = images.resize((w, h), resample=PIL_INTERPOLATION["bicubic"], reducing_gap=None)
|
55 |
-
|
56 |
-
return images
|
57 |
-
|
58 |
-
|
59 |
-
EXAMPLE_DOC_STRING = """
|
60 |
-
Examples:
|
61 |
-
```py
|
62 |
-
>>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline
|
63 |
-
>>> from diffusers.utils import pt_to_pil
|
64 |
-
>>> import torch
|
65 |
-
>>> from PIL import Image
|
66 |
-
>>> import requests
|
67 |
-
>>> from io import BytesIO
|
68 |
-
|
69 |
-
>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png"
|
70 |
-
>>> response = requests.get(url)
|
71 |
-
>>> original_image = Image.open(BytesIO(response.content)).convert("RGB")
|
72 |
-
>>> original_image = original_image
|
73 |
-
|
74 |
-
>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png"
|
75 |
-
>>> response = requests.get(url)
|
76 |
-
>>> mask_image = Image.open(BytesIO(response.content))
|
77 |
-
>>> mask_image = mask_image
|
78 |
-
|
79 |
-
>>> pipe = IFInpaintingPipeline.from_pretrained(
|
80 |
-
... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16
|
81 |
-
... )
|
82 |
-
>>> pipe.enable_model_cpu_offload()
|
83 |
-
|
84 |
-
>>> prompt = "blue sunglasses"
|
85 |
-
>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt)
|
86 |
-
|
87 |
-
>>> image = pipe(
|
88 |
-
... image=original_image,
|
89 |
-
... mask_image=mask_image,
|
90 |
-
... prompt_embeds=prompt_embeds,
|
91 |
-
... negative_prompt_embeds=negative_embeds,
|
92 |
-
... output_type="pt",
|
93 |
-
... ).images
|
94 |
-
|
95 |
-
>>> # save intermediate image
|
96 |
-
>>> pil_image = pt_to_pil(image)
|
97 |
-
>>> pil_image[0].save("./if_stage_I.png")
|
98 |
-
|
99 |
-
>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained(
|
100 |
-
... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
|
101 |
-
... )
|
102 |
-
>>> super_res_1_pipe.enable_model_cpu_offload()
|
103 |
-
|
104 |
-
>>> image = super_res_1_pipe(
|
105 |
-
... image=image,
|
106 |
-
... mask_image=mask_image,
|
107 |
-
... original_image=original_image,
|
108 |
-
... prompt_embeds=prompt_embeds,
|
109 |
-
... negative_prompt_embeds=negative_embeds,
|
110 |
-
... ).images
|
111 |
-
>>> image[0].save("./if_stage_II.png")
|
112 |
-
```
|
113 |
-
"""
|
114 |
-
|
115 |
-
|
116 |
-
class IFInpaintingPipeline(DiffusionPipeline, LoraLoaderMixin):
|
117 |
-
tokenizer: T5Tokenizer
|
118 |
-
text_encoder: T5EncoderModel
|
119 |
-
|
120 |
-
unet: UNet2DConditionModel
|
121 |
-
scheduler: DDPMScheduler
|
122 |
-
|
123 |
-
feature_extractor: Optional[CLIPImageProcessor]
|
124 |
-
safety_checker: Optional[IFSafetyChecker]
|
125 |
-
|
126 |
-
watermarker: Optional[IFWatermarker]
|
127 |
-
|
128 |
-
bad_punct_regex = re.compile(
|
129 |
-
r"[" + "#®•©™&@·º½¾¿¡§~" + "\)" + "\(" + "\]" + "\[" + "\}" + "\{" + "\|" + "\\" + "\/" + "\*" + r"]{1,}"
|
130 |
-
) # noqa
|
131 |
-
|
132 |
-
_optional_components = ["tokenizer", "text_encoder", "safety_checker", "feature_extractor", "watermarker"]
|
133 |
-
|
134 |
-
    def __init__(
        self,
        tokenizer: T5Tokenizer,
        text_encoder: T5EncoderModel,
        unet: UNet2DConditionModel,
        scheduler: DDPMScheduler,
        safety_checker: Optional[IFSafetyChecker],
        feature_extractor: Optional[CLIPImageProcessor],
        watermarker: Optional[IFWatermarker],
        requires_safety_checker: bool = True,
    ):
        super().__init__()

        if safety_checker is None and requires_safety_checker:
            logger.warning(
                f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
                " that you abide by the conditions of the IF license and do not expose unfiltered"
                " results in services or applications open to the public. Both the diffusers team and Hugging Face"
                " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
                " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
                " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
            )

        if safety_checker is not None and feature_extractor is None:
            raise ValueError(
                f"Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
                " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
            )

        self.register_modules(
            tokenizer=tokenizer,
            text_encoder=text_encoder,
            unet=unet,
            scheduler=scheduler,
            safety_checker=safety_checker,
            feature_extractor=feature_extractor,
            watermarker=watermarker,
        )
        self.register_to_config(requires_safety_checker=requires_safety_checker)

    # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline.enable_model_cpu_offload
    def enable_model_cpu_offload(self, gpu_id=0):
        r"""
        Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared
        to `enable_sequential_cpu_offload`, this method moves one whole model at a time to the GPU when its `forward`
        method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than
        with `enable_sequential_cpu_offload`, but performance is much better due to the iterative execution of the
        `unet`.
        """
        if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
            from accelerate import cpu_offload_with_hook
        else:
            raise ImportError("`enable_model_cpu_offload` requires `accelerate v0.17.0` or higher.")

        device = torch.device(f"cuda:{gpu_id}")

        if self.device.type != "cpu":
            self.to("cpu", silence_dtype_warnings=True)
            torch.cuda.empty_cache()  # otherwise we don't see the memory savings (but they probably exist)

        hook = None

        if self.text_encoder is not None:
            _, hook = cpu_offload_with_hook(self.text_encoder, device, prev_module_hook=hook)

            # Accelerate will move the next model to the device _before_ calling the offload hook of the
            # previous model. This will cause both models to be present on the device at the same time.
            # IF uses T5 for its text encoder which is really large. We can manually call the offload
            # hook for the text encoder to ensure it's moved to the cpu before the unet is moved to
            # the GPU.
            self.text_encoder_offload_hook = hook

        _, hook = cpu_offload_with_hook(self.unet, device, prev_module_hook=hook)

        # if the safety checker isn't called, `unet_offload_hook` will have to be called to manually offload the unet
        self.unet_offload_hook = hook

        if self.safety_checker is not None:
            _, hook = cpu_offload_with_hook(self.safety_checker, device, prev_module_hook=hook)

        # We'll offload the last model manually.
        self.final_offload_hook = hook

    # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline.remove_all_hooks
    def remove_all_hooks(self):
        if is_accelerate_available():
            from accelerate.hooks import remove_hook_from_module
        else:
            raise ImportError("Please install accelerate via `pip install accelerate`")

        for model in [self.text_encoder, self.unet, self.safety_checker]:
            if model is not None:
                remove_hook_from_module(model, recurse=True)

        self.unet_offload_hook = None
        self.text_encoder_offload_hook = None
        self.final_offload_hook = None

    @torch.no_grad()
    # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline.encode_prompt
    def encode_prompt(
        self,
        prompt,
        do_classifier_free_guidance=True,
        num_images_per_prompt=1,
        device=None,
        negative_prompt=None,
        prompt_embeds: Optional[torch.FloatTensor] = None,
        negative_prompt_embeds: Optional[torch.FloatTensor] = None,
        clean_caption: bool = False,
    ):
        r"""
        Encodes the prompt into text encoder hidden states.

        Args:
            prompt (`str` or `List[str]`, *optional*):
                prompt to be encoded
            device: (`torch.device`, *optional*):
                torch device to place the resulting embeddings on
            num_images_per_prompt (`int`, *optional*, defaults to 1):
                number of images that should be generated per prompt
            do_classifier_free_guidance (`bool`, *optional*, defaults to `True`):
                whether to use classifier free guidance or not
            negative_prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts not to guide the image generation. If not defined, one has to pass
                `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale`
                is less than `1`).
            prompt_embeds (`torch.FloatTensor`, *optional*):
                Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
                provided, text embeddings will be generated from `prompt` input argument.
            negative_prompt_embeds (`torch.FloatTensor`, *optional*):
                Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
                weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
                argument.
        """
        if prompt is not None and negative_prompt is not None:
            if type(prompt) is not type(negative_prompt):
                raise TypeError(
                    f"`negative_prompt` should be the same type as `prompt`, but got {type(negative_prompt)} !="
                    f" {type(prompt)}."
                )

        if device is None:
            device = self._execution_device

        if prompt is not None and isinstance(prompt, str):
            batch_size = 1
        elif prompt is not None and isinstance(prompt, list):
            batch_size = len(prompt)
        else:
            batch_size = prompt_embeds.shape[0]

        # while T5 can handle much longer input sequences than 77, the text encoder was trained with a max length of 77 for IF
        max_length = 77

        if prompt_embeds is None:
            prompt = self._text_preprocessing(prompt, clean_caption=clean_caption)
            text_inputs = self.tokenizer(
                prompt,
                padding="max_length",
                max_length=max_length,
                truncation=True,
                add_special_tokens=True,
                return_tensors="pt",
            )
            text_input_ids = text_inputs.input_ids
            untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids

            if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
                text_input_ids, untruncated_ids
            ):
                removed_text = self.tokenizer.batch_decode(untruncated_ids[:, max_length - 1 : -1])
                logger.warning(
                    "The following part of your input was truncated because the text encoder can only handle"
                    f" sequences up to {max_length} tokens: {removed_text}"
                )

            attention_mask = text_inputs.attention_mask.to(device)

            prompt_embeds = self.text_encoder(
                text_input_ids.to(device),
                attention_mask=attention_mask,
            )
            prompt_embeds = prompt_embeds[0]

        if self.text_encoder is not None:
            dtype = self.text_encoder.dtype
        elif self.unet is not None:
            dtype = self.unet.dtype
        else:
            dtype = None

        prompt_embeds = prompt_embeds.to(dtype=dtype, device=device)

        bs_embed, seq_len, _ = prompt_embeds.shape
        # duplicate text embeddings for each generation per prompt, using mps friendly method
        prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
        prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)

        # get unconditional embeddings for classifier free guidance
        if do_classifier_free_guidance and negative_prompt_embeds is None:
            uncond_tokens: List[str]
            if negative_prompt is None:
                uncond_tokens = [""] * batch_size
            elif isinstance(negative_prompt, str):
                uncond_tokens = [negative_prompt]
            elif batch_size != len(negative_prompt):
                raise ValueError(
                    f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
                    f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
                    " the batch size of `prompt`."
                )
            else:
                uncond_tokens = negative_prompt

            uncond_tokens = self._text_preprocessing(uncond_tokens, clean_caption=clean_caption)
            max_length = prompt_embeds.shape[1]
            uncond_input = self.tokenizer(
                uncond_tokens,
                padding="max_length",
                max_length=max_length,
                truncation=True,
                return_attention_mask=True,
                add_special_tokens=True,
                return_tensors="pt",
            )
            attention_mask = uncond_input.attention_mask.to(device)

            negative_prompt_embeds = self.text_encoder(
                uncond_input.input_ids.to(device),
                attention_mask=attention_mask,
            )
            negative_prompt_embeds = negative_prompt_embeds[0]

        if do_classifier_free_guidance:
            # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
            seq_len = negative_prompt_embeds.shape[1]

            negative_prompt_embeds = negative_prompt_embeds.to(dtype=dtype, device=device)

            negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
            negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)

            # For classifier free guidance, we need to do two forward passes.
            # Here we concatenate the unconditional and text embeddings into a single batch
            # to avoid doing two forward passes
        else:
            negative_prompt_embeds = None

        return prompt_embeds, negative_prompt_embeds

    # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline.run_safety_checker
    def run_safety_checker(self, image, device, dtype):
        if self.safety_checker is not None:
            safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device)
            image, nsfw_detected, watermark_detected = self.safety_checker(
                images=image,
                clip_input=safety_checker_input.pixel_values.to(dtype=dtype),
            )
        else:
            nsfw_detected = None
            watermark_detected = None

        if hasattr(self, "unet_offload_hook") and self.unet_offload_hook is not None:
            self.unet_offload_hook.offload()

        return image, nsfw_detected, watermark_detected

    # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline.prepare_extra_step_kwargs
    def prepare_extra_step_kwargs(self, generator, eta):
        # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
        # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
        # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
        # and should be between [0, 1]

        accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
        extra_step_kwargs = {}
        if accepts_eta:
            extra_step_kwargs["eta"] = eta

        # check if the scheduler accepts generator
        accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
        if accepts_generator:
            extra_step_kwargs["generator"] = generator
        return extra_step_kwargs

    def check_inputs(
        self,
        prompt,
        image,
        mask_image,
        batch_size,
        callback_steps,
        negative_prompt=None,
        prompt_embeds=None,
        negative_prompt_embeds=None,
    ):
        if (callback_steps is None) or (
            callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
        ):
            raise ValueError(
                f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
                f" {type(callback_steps)}."
            )

        if prompt is not None and prompt_embeds is not None:
            raise ValueError(
                f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
                " only forward one of the two."
            )
        elif prompt is None and prompt_embeds is None:
            raise ValueError(
                "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
            )
        elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
            raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")

        if negative_prompt is not None and negative_prompt_embeds is not None:
            raise ValueError(
                f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
                f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
            )

        if prompt_embeds is not None and negative_prompt_embeds is not None:
            if prompt_embeds.shape != negative_prompt_embeds.shape:
                raise ValueError(
                    "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
                    f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
                    f" {negative_prompt_embeds.shape}."
                )

        # image

        if isinstance(image, list):
            check_image_type = image[0]
        else:
            check_image_type = image

        if (
            not isinstance(check_image_type, torch.Tensor)
            and not isinstance(check_image_type, PIL.Image.Image)
            and not isinstance(check_image_type, np.ndarray)
        ):
            raise ValueError(
                "`image` has to be of type `torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, or List[...] but is"
                f" {type(check_image_type)}"
            )

        if isinstance(image, list):
            image_batch_size = len(image)
        elif isinstance(image, torch.Tensor):
            image_batch_size = image.shape[0]
        elif isinstance(image, PIL.Image.Image):
            image_batch_size = 1
        elif isinstance(image, np.ndarray):
            image_batch_size = image.shape[0]
        else:
            assert False

        if batch_size != image_batch_size:
            raise ValueError(f"image batch size: {image_batch_size} must be same as prompt batch size {batch_size}")

        # mask_image

        if isinstance(mask_image, list):
            check_image_type = mask_image[0]
        else:
            check_image_type = mask_image

        if (
            not isinstance(check_image_type, torch.Tensor)
            and not isinstance(check_image_type, PIL.Image.Image)
            and not isinstance(check_image_type, np.ndarray)
        ):
            raise ValueError(
                "`mask_image` has to be of type `torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, or List[...]"
                f" but is {type(check_image_type)}"
            )

        if isinstance(mask_image, list):
            image_batch_size = len(mask_image)
        elif isinstance(mask_image, torch.Tensor):
            image_batch_size = mask_image.shape[0]
        elif isinstance(mask_image, PIL.Image.Image):
            image_batch_size = 1
        elif isinstance(mask_image, np.ndarray):
            image_batch_size = mask_image.shape[0]
        else:
            assert False

        if image_batch_size != 1 and batch_size != image_batch_size:
            raise ValueError(
                f"mask_image batch size: {image_batch_size} must be `1` or the same as prompt batch size {batch_size}"
            )

    # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline._text_preprocessing
    def _text_preprocessing(self, text, clean_caption=False):
        if clean_caption and not is_bs4_available():
            logger.warning(BACKENDS_MAPPING["bs4"][-1].format("Setting `clean_caption=True`"))
            logger.warning("Setting `clean_caption` to False...")
            clean_caption = False

        if clean_caption and not is_ftfy_available():
            logger.warning(BACKENDS_MAPPING["ftfy"][-1].format("Setting `clean_caption=True`"))
            logger.warning("Setting `clean_caption` to False...")
            clean_caption = False

        if not isinstance(text, (tuple, list)):
            text = [text]

        def process(text: str):
            if clean_caption:
                # cleaning is applied twice on purpose: a first pass can expose new matches
                text = self._clean_caption(text)
                text = self._clean_caption(text)
            else:
                text = text.lower().strip()
            return text

        return [process(t) for t in text]

    # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if.IFPipeline._clean_caption
    def _clean_caption(self, caption):
        caption = str(caption)
        caption = ul.unquote_plus(caption)
        caption = caption.strip().lower()
        caption = re.sub("<person>", "person", caption)
        # urls:
        caption = re.sub(
            r"\b((?:https?:(?:\/{1,3}|[a-zA-Z0-9%])|[a-zA-Z0-9.\-]+[.](?:com|co|ru|net|org|edu|gov|it)[\w/-]*\b\/?(?!@)))",  # noqa
            "",
            caption,
        )  # regex for urls
        caption = re.sub(
            r"\b((?:www:(?:\/{1,3}|[a-zA-Z0-9%])|[a-zA-Z0-9.\-]+[.](?:com|co|ru|net|org|edu|gov|it)[\w/-]*\b\/?(?!@)))",  # noqa
            "",
            caption,
        )  # regex for urls
        # html:
        caption = BeautifulSoup(caption, features="html.parser").text

        # @<nickname>
        caption = re.sub(r"@[\w\d]+\b", "", caption)

        # 31C0—31EF CJK Strokes
        # 31F0—31FF Katakana Phonetic Extensions
        # 3200—32FF Enclosed CJK Letters and Months
        # 3300—33FF CJK Compatibility
        # 3400—4DBF CJK Unified Ideographs Extension A
        # 4DC0—4DFF Yijing Hexagram Symbols
        # 4E00—9FFF CJK Unified Ideographs
        caption = re.sub(r"[\u31c0-\u31ef]+", "", caption)
        caption = re.sub(r"[\u31f0-\u31ff]+", "", caption)
        caption = re.sub(r"[\u3200-\u32ff]+", "", caption)
        caption = re.sub(r"[\u3300-\u33ff]+", "", caption)
        caption = re.sub(r"[\u3400-\u4dbf]+", "", caption)
        caption = re.sub(r"[\u4dc0-\u4dff]+", "", caption)
        caption = re.sub(r"[\u4e00-\u9fff]+", "", caption)
        #######################################################

        # all types of dash --> "-"
        caption = re.sub(
            r"[\u002D\u058A\u05BE\u1400\u1806\u2010-\u2015\u2E17\u2E1A\u2E3A\u2E3B\u2E40\u301C\u3030\u30A0\uFE31\uFE32\uFE58\uFE63\uFF0D]+",  # noqa
            "-",
            caption,
        )

        # normalize quotation marks to one standard
        caption = re.sub(r"[`´«»“”¨]", '"', caption)
        caption = re.sub(r"[‘’]", "'", caption)

        # &quot;
        caption = re.sub(r"&quot;?", "", caption)
        # &amp
        caption = re.sub(r"&amp", "", caption)

        # ip addresses:
        caption = re.sub(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}", " ", caption)

        # article ids:
        caption = re.sub(r"\d:\d\d\s+$", "", caption)

        # \n
        caption = re.sub(r"\\n", " ", caption)

        # "#123"
        caption = re.sub(r"#\d{1,3}\b", "", caption)
        # "#12345.."
        caption = re.sub(r"#\d{5,}\b", "", caption)
        # "123456.."
        caption = re.sub(r"\b\d{6,}\b", "", caption)
        # filenames:
        caption = re.sub(r"[\S]+\.(?:png|jpg|jpeg|bmp|webp|eps|pdf|apk|mp4)", "", caption)

        #
        caption = re.sub(r"[\"\']{2,}", r'"', caption)  # """AUSVERKAUFT"""
        caption = re.sub(r"[\.]{2,}", r" ", caption)  # """AUSVERKAUFT"""

        caption = re.sub(self.bad_punct_regex, r" ", caption)  # ***AUSVERKAUFT***, #AUSVERKAUFT
        caption = re.sub(r"\s+\.\s+", r" ", caption)  # " . "

        # this-is-my-cute-cat / this_is_my_cute_cat
        regex2 = re.compile(r"(?:\-|\_)")
        if len(re.findall(regex2, caption)) > 3:
            caption = re.sub(regex2, " ", caption)

        caption = ftfy.fix_text(caption)
        caption = html.unescape(html.unescape(caption))

        caption = re.sub(r"\b[a-zA-Z]{1,3}\d{3,15}\b", "", caption)  # jc6640
        caption = re.sub(r"\b[a-zA-Z]+\d+[a-zA-Z]+\b", "", caption)  # jc6640vc
        caption = re.sub(r"\b\d+[a-zA-Z]+\d+\b", "", caption)  # 6640vc231

        caption = re.sub(r"(worldwide\s+)?(free\s+)?shipping", "", caption)
        caption = re.sub(r"(free\s)?download(\sfree)?", "", caption)
        caption = re.sub(r"\bclick\b\s(?:for|on)\s\w+", "", caption)
        caption = re.sub(r"\b(?:png|jpg|jpeg|bmp|webp|eps|pdf|apk|mp4)(\simage[s]?)?", "", caption)
        caption = re.sub(r"\bpage\s+\d+\b", "", caption)

        caption = re.sub(r"\b\d*[a-zA-Z]+\d+[a-zA-Z]+\d+[a-zA-Z\d]*\b", r" ", caption)  # j2d1a2a...

        caption = re.sub(r"\b\d+\.?\d*[xх×]\d+\.?\d*\b", "", caption)

        caption = re.sub(r"\b\s+\:\s+", r": ", caption)
        caption = re.sub(r"(\D[,\./])\b", r"\1 ", caption)
        caption = re.sub(r"\s+", " ", caption)

        caption = caption.strip()

        caption = re.sub(r"^[\"\']([\w\W]+)[\"\']$", r"\1", caption)
        caption = re.sub(r"^[\'\_,\-\:;]", r"", caption)
        caption = re.sub(r"[\'\_,\-\:\-\+]$", r"", caption)
        caption = re.sub(r"^\.\S+$", "", caption)

        return caption.strip()

    # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if_img2img.IFImg2ImgPipeline.preprocess_image
    def preprocess_image(self, image: PIL.Image.Image) -> torch.Tensor:
        if not isinstance(image, list):
            image = [image]

        def numpy_to_pt(images):
            if images.ndim == 3:
                images = images[..., None]

            images = torch.from_numpy(images.transpose(0, 3, 1, 2))
            return images

        if isinstance(image[0], PIL.Image.Image):
            new_image = []

            for image_ in image:
                image_ = image_.convert("RGB")
                image_ = resize(image_, self.unet.sample_size)
                image_ = np.array(image_)
                image_ = image_.astype(np.float32)
                image_ = image_ / 127.5 - 1
                new_image.append(image_)

            image = new_image

            image = np.stack(image, axis=0)  # to np
            image = numpy_to_pt(image)  # to pt

        elif isinstance(image[0], np.ndarray):
            image = np.concatenate(image, axis=0) if image[0].ndim == 4 else np.stack(image, axis=0)
            image = numpy_to_pt(image)

        elif isinstance(image[0], torch.Tensor):
            image = torch.cat(image, axis=0) if image[0].ndim == 4 else torch.stack(image, axis=0)

        return image

    def preprocess_mask_image(self, mask_image) -> torch.Tensor:
        if not isinstance(mask_image, list):
            mask_image = [mask_image]

        if isinstance(mask_image[0], torch.Tensor):
            mask_image = torch.cat(mask_image, axis=0) if mask_image[0].ndim == 4 else torch.stack(mask_image, axis=0)

            if mask_image.ndim == 2:
                # Batch and add channel dim for single mask
                mask_image = mask_image.unsqueeze(0).unsqueeze(0)
            elif mask_image.ndim == 3 and mask_image.shape[0] == 1:
                # Single mask, the 0'th dimension is considered to be
                # the existing batch size of 1
                mask_image = mask_image.unsqueeze(0)
            elif mask_image.ndim == 3 and mask_image.shape[0] != 1:
                # Batch of mask, the 0'th dimension is considered to be
                # the batching dimension
                mask_image = mask_image.unsqueeze(1)

            mask_image[mask_image < 0.5] = 0
            mask_image[mask_image >= 0.5] = 1

        elif isinstance(mask_image[0], PIL.Image.Image):
            new_mask_image = []

            for mask_image_ in mask_image:
                mask_image_ = mask_image_.convert("L")
                mask_image_ = resize(mask_image_, self.unet.sample_size)
                mask_image_ = np.array(mask_image_)
                mask_image_ = mask_image_[None, None, :]
                new_mask_image.append(mask_image_)

            mask_image = new_mask_image

            mask_image = np.concatenate(mask_image, axis=0)
            mask_image = mask_image.astype(np.float32) / 255.0
            mask_image[mask_image < 0.5] = 0
            mask_image[mask_image >= 0.5] = 1
            mask_image = torch.from_numpy(mask_image)

        elif isinstance(mask_image[0], np.ndarray):
            mask_image = np.concatenate([m[None, None, :] for m in mask_image], axis=0)

            mask_image[mask_image < 0.5] = 0
            mask_image[mask_image >= 0.5] = 1
            mask_image = torch.from_numpy(mask_image)

        return mask_image

    # Copied from diffusers.pipelines.deepfloyd_if.pipeline_if_img2img.IFImg2ImgPipeline.get_timesteps
    def get_timesteps(self, num_inference_steps, strength):
        # get the original timestep using init_timestep
        init_timestep = min(int(num_inference_steps * strength), num_inference_steps)

        t_start = max(num_inference_steps - init_timestep, 0)
        timesteps = self.scheduler.timesteps[t_start:]

        return timesteps, num_inference_steps - t_start

    def prepare_intermediate_images(
        self, image, timestep, batch_size, num_images_per_prompt, dtype, device, mask_image, generator=None
    ):
        image_batch_size, channels, height, width = image.shape

        batch_size = batch_size * num_images_per_prompt

        shape = (batch_size, channels, height, width)

        if isinstance(generator, list) and len(generator) != batch_size:
            raise ValueError(
                f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
                f" size of {batch_size}. Make sure the batch size matches the length of the generators."
            )

        noise = randn_tensor(shape, generator=generator, device=device, dtype=dtype)

        image = image.repeat_interleave(num_images_per_prompt, dim=0)
        noised_image = self.scheduler.add_noise(image, noise, timestep)

        image = (1 - mask_image) * image + mask_image * noised_image

        return image

    @torch.no_grad()
    @replace_example_docstring(EXAMPLE_DOC_STRING)
    def __call__(
        self,
        prompt: Union[str, List[str]] = None,
        image: Union[
            PIL.Image.Image, torch.Tensor, np.ndarray, List[PIL.Image.Image], List[torch.Tensor], List[np.ndarray]
        ] = None,
        mask_image: Union[
            PIL.Image.Image, torch.Tensor, np.ndarray, List[PIL.Image.Image], List[torch.Tensor], List[np.ndarray]
        ] = None,
        strength: float = 1.0,
        num_inference_steps: int = 50,
        timesteps: List[int] = None,
        guidance_scale: float = 7.0,
        negative_prompt: Optional[Union[str, List[str]]] = None,
        num_images_per_prompt: Optional[int] = 1,
        eta: float = 0.0,
        generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
        prompt_embeds: Optional[torch.FloatTensor] = None,
        negative_prompt_embeds: Optional[torch.FloatTensor] = None,
        output_type: Optional[str] = "pil",
        return_dict: bool = True,
        callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
        callback_steps: int = 1,
        clean_caption: bool = True,
        cross_attention_kwargs: Optional[Dict[str, Any]] = None,
    ):
        """
        Function invoked when calling the pipeline for generation.

        Args:
            prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
                instead.
            image (`torch.FloatTensor` or `PIL.Image.Image`):
                `Image`, or tensor representing an image batch, that will be used as the starting point for the
                process.
            mask_image (`PIL.Image.Image`):
                `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
                repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted
                to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
                instead of 3, so the expected shape would be `(B, H, W, 1)`.
            strength (`float`, *optional*, defaults to 1.0):
                Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
                will be used as a starting point, adding more noise to it the larger the `strength`. The number of
                denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
                be maximum and the denoising process will run for the full number of iterations specified in
                `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
            num_inference_steps (`int`, *optional*, defaults to 50):
                The number of denoising steps. More denoising steps usually lead to a higher quality image at the
                expense of slower inference.
            timesteps (`List[int]`, *optional*):
                Custom timesteps to use for the denoising process. If not defined, equal spaced `num_inference_steps`
                timesteps are used. Must be in descending order.
            guidance_scale (`float`, *optional*, defaults to 7.0):
                Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
                `guidance_scale` is defined as `w` of equation 2. of [Imagen
                Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
                1`. A higher guidance scale encourages generating images that are closely linked to the text `prompt`,
                usually at the expense of lower image quality.
            negative_prompt (`str` or `List[str]`, *optional*):
                The prompt or prompts not to guide the image generation. If not defined, one has to pass
                `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
                less than `1`).
            num_images_per_prompt (`int`, *optional*, defaults to 1):
                The number of images to generate per prompt.
            eta (`float`, *optional*, defaults to 0.0):
                Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
                [`schedulers.DDIMScheduler`], will be ignored for others.
            generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
                One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
                to make generation deterministic.
            prompt_embeds (`torch.FloatTensor`, *optional*):
                Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
                provided, text embeddings will be generated from `prompt` input argument.
            negative_prompt_embeds (`torch.FloatTensor`, *optional*):
                Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
                weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
                argument.
            output_type (`str`, *optional*, defaults to `"pil"`):
                The output format of the generated image. Choose between
                [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
|
871 |
-
return_dict (`bool`, *optional*, defaults to `True`):
|
872 |
-
Whether or not to return a [`~pipelines.stable_diffusion.IFPipelineOutput`] instead of a plain tuple.
|
873 |
-
callback (`Callable`, *optional*):
|
874 |
-
A function that will be called every `callback_steps` steps during inference. The function will be
|
875 |
-
called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
|
876 |
-
callback_steps (`int`, *optional*, defaults to 1):
|
877 |
-
The frequency at which the `callback` function will be called. If not specified, the callback will be
|
878 |
-
called at every step.
|
879 |
-
clean_caption (`bool`, *optional*, defaults to `True`):
|
880 |
-
Whether or not to clean the caption before creating embeddings. Requires `beautifulsoup4` and `ftfy` to
|
881 |
-
be installed. If the dependencies are not installed, the embeddings will be created from the raw
|
882 |
-
prompt.
|
883 |
-
cross_attention_kwargs (`dict`, *optional*):
|
884 |
-
A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
|
885 |
-
`self.processor` in
|
886 |
-
[diffusers.cross_attention](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/cross_attention.py).
|
887 |
-
|
888 |
-
Examples:
|
889 |
-
|
890 |
-
Returns:
|
891 |
-
[`~pipelines.stable_diffusion.IFPipelineOutput`] or `tuple`:
|
892 |
-
[`~pipelines.stable_diffusion.IFPipelineOutput`] if `return_dict` is True, otherwise a `tuple. When
|
893 |
-
returning a tuple, the first element is a list with the generated images, and the second element is a list
|
894 |
-
of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw)
|
895 |
-
or watermarked content, according to the `safety_checker`.
|
896 |
-
"""
|
897 |
-
# 1. Check inputs. Raise error if not correct
|
898 |
-
if prompt is not None and isinstance(prompt, str):
|
899 |
-
batch_size = 1
|
900 |
-
elif prompt is not None and isinstance(prompt, list):
|
901 |
-
batch_size = len(prompt)
|
902 |
-
else:
|
903 |
-
batch_size = prompt_embeds.shape[0]
|
904 |
-
|
905 |
-
self.check_inputs(
|
906 |
-
prompt,
|
907 |
-
image,
|
908 |
-
mask_image,
|
909 |
-
batch_size,
|
910 |
-
callback_steps,
|
911 |
-
negative_prompt,
|
912 |
-
prompt_embeds,
|
913 |
-
negative_prompt_embeds,
|
914 |
-
)
|
915 |
-
|
916 |
-
# 2. Define call parameters
|
917 |
-
device = self._execution_device
|
918 |
-
|
919 |
-
# here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
|
920 |
-
# of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
|
921 |
-
# corresponds to doing no classifier free guidance.
|
922 |
-
do_classifier_free_guidance = guidance_scale > 1.0
|
923 |
-
|
924 |
-
# 3. Encode input prompt
|
925 |
-
prompt_embeds, negative_prompt_embeds = self.encode_prompt(
|
926 |
-
prompt,
|
927 |
-
do_classifier_free_guidance,
|
928 |
-
num_images_per_prompt=num_images_per_prompt,
|
929 |
-
device=device,
|
930 |
-
negative_prompt=negative_prompt,
|
931 |
-
prompt_embeds=prompt_embeds,
|
932 |
-
negative_prompt_embeds=negative_prompt_embeds,
|
933 |
-
clean_caption=clean_caption,
|
934 |
-
)
|
935 |
-
|
936 |
-
if do_classifier_free_guidance:
|
937 |
-
prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds])
|
938 |
-
|
939 |
-
dtype = prompt_embeds.dtype
|
940 |
-
|
941 |
-
# 4. Prepare timesteps
|
942 |
-
if timesteps is not None:
|
943 |
-
self.scheduler.set_timesteps(timesteps=timesteps, device=device)
|
944 |
-
timesteps = self.scheduler.timesteps
|
945 |
-
num_inference_steps = len(timesteps)
|
946 |
-
else:
|
947 |
-
self.scheduler.set_timesteps(num_inference_steps, device=device)
|
948 |
-
timesteps = self.scheduler.timesteps
|
949 |
-
|
950 |
-
timesteps, num_inference_steps = self.get_timesteps(num_inference_steps, strength)
|
951 |
-
|
952 |
-
# 5. Prepare intermediate images
|
953 |
-
image = self.preprocess_image(image)
|
954 |
-
image = image.to(device=device, dtype=dtype)
|
955 |
-
|
956 |
-
mask_image = self.preprocess_mask_image(mask_image)
|
957 |
-
mask_image = mask_image.to(device=device, dtype=dtype)
|
958 |
-
|
959 |
-
if mask_image.shape[0] == 1:
|
960 |
-
mask_image = mask_image.repeat_interleave(batch_size * num_images_per_prompt, dim=0)
|
961 |
-
else:
|
962 |
-
mask_image = mask_image.repeat_interleave(num_images_per_prompt, dim=0)
|
963 |
-
|
964 |
-
noise_timestep = timesteps[0:1]
|
965 |
-
noise_timestep = noise_timestep.repeat(batch_size * num_images_per_prompt)
|
966 |
-
|
967 |
-
intermediate_images = self.prepare_intermediate_images(
|
968 |
-
image, noise_timestep, batch_size, num_images_per_prompt, dtype, device, mask_image, generator
|
969 |
-
)
|
970 |
-
|
971 |
-
# 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
|
972 |
-
extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
|
973 |
-
|
974 |
-
# HACK: see comment in `enable_model_cpu_offload`
|
975 |
-
if hasattr(self, "text_encoder_offload_hook") and self.text_encoder_offload_hook is not None:
|
976 |
-
self.text_encoder_offload_hook.offload()
|
977 |
-
|
978 |
-
# 7. Denoising loop
|
979 |
-
num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
|
980 |
-
with self.progress_bar(total=num_inference_steps) as progress_bar:
|
981 |
-
for i, t in enumerate(timesteps):
|
982 |
-
model_input = (
|
983 |
-
torch.cat([intermediate_images] * 2) if do_classifier_free_guidance else intermediate_images
|
984 |
-
)
|
985 |
-
model_input = self.scheduler.scale_model_input(model_input, t)
|
986 |
-
|
987 |
-
# predict the noise residual
|
988 |
-
noise_pred = self.unet(
|
989 |
-
model_input,
|
990 |
-
t,
|
991 |
-
encoder_hidden_states=prompt_embeds,
|
992 |
-
cross_attention_kwargs=cross_attention_kwargs,
|
993 |
-
return_dict=False,
|
994 |
-
)[0]
|
995 |
-
|
996 |
-
# perform guidance
|
997 |
-
if do_classifier_free_guidance:
|
998 |
-
noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
|
999 |
-
noise_pred_uncond, _ = noise_pred_uncond.split(model_input.shape[1], dim=1)
|
1000 |
-
noise_pred_text, predicted_variance = noise_pred_text.split(model_input.shape[1], dim=1)
|
1001 |
-
noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
|
1002 |
-
noise_pred = torch.cat([noise_pred, predicted_variance], dim=1)
|
1003 |
-
|
1004 |
-
if self.scheduler.config.variance_type not in ["learned", "learned_range"]:
|
1005 |
-
noise_pred, _ = noise_pred.split(model_input.shape[1], dim=1)
|
1006 |
-
|
1007 |
-
# compute the previous noisy sample x_t -> x_t-1
|
1008 |
-
prev_intermediate_images = intermediate_images
|
1009 |
-
|
1010 |
-
intermediate_images = self.scheduler.step(
|
1011 |
-
noise_pred, t, intermediate_images, **extra_step_kwargs, return_dict=False
|
1012 |
-
)[0]
|
1013 |
-
|
1014 |
-
intermediate_images = (1 - mask_image) * prev_intermediate_images + mask_image * intermediate_images
|
1015 |
-
|
1016 |
-
# call the callback, if provided
|
1017 |
-
if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
|
1018 |
-
progress_bar.update()
|
1019 |
-
if callback is not None and i % callback_steps == 0:
|
1020 |
-
callback(i, t, intermediate_images)
|
1021 |
-
|
1022 |
-
image = intermediate_images
|
1023 |
-
|
1024 |
-
if output_type == "pil":
|
1025 |
-
# 8. Post-processing
|
1026 |
-
image = (image / 2 + 0.5).clamp(0, 1)
|
1027 |
-
image = image.cpu().permute(0, 2, 3, 1).float().numpy()
|
1028 |
-
|
1029 |
-
# 9. Run safety checker
|
1030 |
-
image, nsfw_detected, watermark_detected = self.run_safety_checker(image, device, prompt_embeds.dtype)
|
1031 |
-
|
1032 |
-
# 10. Convert to PIL
|
1033 |
-
image = self.numpy_to_pil(image)
|
1034 |
-
|
1035 |
-
# 11. Apply watermark
|
1036 |
-
if self.watermarker is not None:
|
1037 |
-
self.watermarker.apply_watermark(image, self.unet.config.sample_size)
|
1038 |
-
elif output_type == "pt":
|
1039 |
-
nsfw_detected = None
|
1040 |
-
watermark_detected = None
|
1041 |
-
|
1042 |
-
if hasattr(self, "unet_offload_hook") and self.unet_offload_hook is not None:
|
1043 |
-
self.unet_offload_hook.offload()
|
1044 |
-
else:
|
1045 |
-
# 8. Post-processing
|
1046 |
-
image = (image / 2 + 0.5).clamp(0, 1)
|
1047 |
-
image = image.cpu().permute(0, 2, 3, 1).float().numpy()
|
1048 |
-
|
1049 |
-
# 9. Run safety checker
|
1050 |
-
image, nsfw_detected, watermark_detected = self.run_safety_checker(image, device, prompt_embeds.dtype)
|
1051 |
-
|
1052 |
-
# Offload last model to CPU
|
1053 |
-
if hasattr(self, "final_offload_hook") and self.final_offload_hook is not None:
|
1054 |
-
self.final_offload_hook.offload()
|
1055 |
-
|
1056 |
-
if not return_dict:
|
1057 |
-
return (image, nsfw_detected, watermark_detected)
|
1058 |
-
|
1059 |
-
return IFPipelineOutput(images=image, nsfw_detected=nsfw_detected, watermark_detected=watermark_detected)
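The two per-pixel update rules at the heart of the denoising loop — classifier-free guidance and the inpainting mask blend — are easy to check in isolation. A minimal sketch with scalar stand-ins for tensor values (the helper names are ours, not part of the pipeline):

```python
def guided_noise(noise_uncond, noise_text, guidance_scale):
    # Classifier-free guidance (weight w of eq. 2 in the Imagen paper):
    # extrapolate from the unconditional prediction toward the text-conditional one.
    return noise_uncond + guidance_scale * (noise_text - noise_uncond)


def blend_with_mask(prev, denoised, mask):
    # Inpainting blend: keep the previous sample where mask == 0 (preserved),
    # take the freshly denoised sample where mask == 1 (repainted).
    return (1 - mask) * prev + mask * denoised


print(guided_noise(0.0, 1.0, 7.5))     # 7.5 (uncond 0, text 1, scale 7.5)
print(blend_with_mask(2.0, 5.0, 0.0))  # 2.0 -> black mask pixel preserved
print(blend_with_mask(2.0, 5.0, 1.0))  # 5.0 -> white mask pixel repainted
```

In the pipeline the same arithmetic runs elementwise on `torch` tensors, with `mask_image` broadcast across the batch.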
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/schedulers/scheduling_ddpm_flax.py
DELETED
@@ -1,299 +0,0 @@
```python
# Copyright 2023 UC Berkeley Team and The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# DISCLAIMER: This file is strongly influenced by https://github.com/ermongroup/ddim

from dataclasses import dataclass
from typing import Optional, Tuple, Union

import flax
import jax
import jax.numpy as jnp

from ..configuration_utils import ConfigMixin, register_to_config
from .scheduling_utils_flax import (
    CommonSchedulerState,
    FlaxKarrasDiffusionSchedulers,
    FlaxSchedulerMixin,
    FlaxSchedulerOutput,
    add_noise_common,
    get_velocity_common,
)


@flax.struct.dataclass
class DDPMSchedulerState:
    common: CommonSchedulerState

    # setable values
    init_noise_sigma: jnp.ndarray
    timesteps: jnp.ndarray
    num_inference_steps: Optional[int] = None

    @classmethod
    def create(cls, common: CommonSchedulerState, init_noise_sigma: jnp.ndarray, timesteps: jnp.ndarray):
        return cls(common=common, init_noise_sigma=init_noise_sigma, timesteps=timesteps)


@dataclass
class FlaxDDPMSchedulerOutput(FlaxSchedulerOutput):
    state: DDPMSchedulerState


class FlaxDDPMScheduler(FlaxSchedulerMixin, ConfigMixin):
    """
    Denoising diffusion probabilistic models (DDPMs) explore the connections between denoising score matching and
    Langevin dynamics sampling.

    [`~ConfigMixin`] takes care of storing all config attributes that are passed in the scheduler's `__init__`
    function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
    [`SchedulerMixin`] provides general loading and saving functionality via the [`SchedulerMixin.save_pretrained`] and
    [`~SchedulerMixin.from_pretrained`] functions.

    For more details, see the original paper: https://arxiv.org/abs/2006.11239

    Args:
        num_train_timesteps (`int`): number of diffusion steps used to train the model.
        beta_start (`float`): the starting `beta` value of inference.
        beta_end (`float`): the final `beta` value.
        beta_schedule (`str`):
            the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from
            `linear`, `scaled_linear`, or `squaredcos_cap_v2`.
        trained_betas (`np.ndarray`, optional):
            option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end` etc.
        variance_type (`str`):
            options to clip the variance used when adding noise to the denoised sample. Choose from `fixed_small`,
            `fixed_small_log`, `fixed_large`, `fixed_large_log`, `learned` or `learned_range`.
        clip_sample (`bool`, default `True`):
            option to clip predicted sample between -1 and 1 for numerical stability.
        prediction_type (`str`, default `epsilon`):
            indicates whether the model predicts the noise (epsilon), or the samples. One of `epsilon`, `sample`.
            `v-prediction` is not supported for this scheduler.
        dtype (`jnp.dtype`, *optional*, defaults to `jnp.float32`):
            the `dtype` used for params and computation.
    """

    _compatibles = [e.name for e in FlaxKarrasDiffusionSchedulers]

    dtype: jnp.dtype

    @property
    def has_state(self):
        return True

    @register_to_config
    def __init__(
        self,
        num_train_timesteps: int = 1000,
        beta_start: float = 0.0001,
        beta_end: float = 0.02,
        beta_schedule: str = "linear",
        trained_betas: Optional[jnp.ndarray] = None,
        variance_type: str = "fixed_small",
        clip_sample: bool = True,
        prediction_type: str = "epsilon",
        dtype: jnp.dtype = jnp.float32,
    ):
        self.dtype = dtype

    def create_state(self, common: Optional[CommonSchedulerState] = None) -> DDPMSchedulerState:
        if common is None:
            common = CommonSchedulerState.create(self)

        # standard deviation of the initial noise distribution
        init_noise_sigma = jnp.array(1.0, dtype=self.dtype)

        timesteps = jnp.arange(0, self.config.num_train_timesteps).round()[::-1]

        return DDPMSchedulerState.create(
            common=common,
            init_noise_sigma=init_noise_sigma,
            timesteps=timesteps,
        )

    def scale_model_input(
        self, state: DDPMSchedulerState, sample: jnp.ndarray, timestep: Optional[int] = None
    ) -> jnp.ndarray:
        """
        Args:
            state (`PNDMSchedulerState`): the `FlaxPNDMScheduler` state data class instance.
            sample (`jnp.ndarray`): input sample
            timestep (`int`, optional): current timestep

        Returns:
            `jnp.ndarray`: scaled input sample
        """
        return sample

    def set_timesteps(
        self, state: DDPMSchedulerState, num_inference_steps: int, shape: Tuple = ()
    ) -> DDPMSchedulerState:
        """
        Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference.

        Args:
            state (`DDPMSchedulerState`):
                the `FlaxDDPMScheduler` state data class instance.
            num_inference_steps (`int`):
                the number of diffusion steps used when generating samples with a pre-trained model.
        """

        step_ratio = self.config.num_train_timesteps // num_inference_steps
        # creates integer timesteps by multiplying by the ratio,
        # rounding to avoid issues when num_inference_steps is a power of 3
        timesteps = (jnp.arange(0, num_inference_steps) * step_ratio).round()[::-1]

        return state.replace(
            num_inference_steps=num_inference_steps,
            timesteps=timesteps,
        )

    def _get_variance(self, state: DDPMSchedulerState, t, predicted_variance=None, variance_type=None):
        alpha_prod_t = state.common.alphas_cumprod[t]
        alpha_prod_t_prev = jnp.where(t > 0, state.common.alphas_cumprod[t - 1], jnp.array(1.0, dtype=self.dtype))

        # For t > 0, compute predicted variance βt (see formula (6) and (7) from https://arxiv.org/pdf/2006.11239.pdf)
        # and sample from it to get previous sample
        # x_{t-1} ~ N(pred_prev_sample, variance) == add variance to pred_sample
        variance = (1 - alpha_prod_t_prev) / (1 - alpha_prod_t) * state.common.betas[t]

        if variance_type is None:
            variance_type = self.config.variance_type

        # hacks - were probably added for training stability
        if variance_type == "fixed_small":
            variance = jnp.clip(variance, a_min=1e-20)
        # for rl-diffuser https://arxiv.org/abs/2205.09991
        elif variance_type == "fixed_small_log":
            variance = jnp.log(jnp.clip(variance, a_min=1e-20))
        elif variance_type == "fixed_large":
            variance = state.common.betas[t]
        elif variance_type == "fixed_large_log":
            # Glide max_log
            variance = jnp.log(state.common.betas[t])
        elif variance_type == "learned":
            return predicted_variance
        elif variance_type == "learned_range":
            min_log = variance
            max_log = state.common.betas[t]
            frac = (predicted_variance + 1) / 2
            variance = frac * max_log + (1 - frac) * min_log

        return variance

    def step(
        self,
        state: DDPMSchedulerState,
        model_output: jnp.ndarray,
        timestep: int,
        sample: jnp.ndarray,
        key: Optional[jax.random.KeyArray] = None,
        return_dict: bool = True,
    ) -> Union[FlaxDDPMSchedulerOutput, Tuple]:
        """
        Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion
        process from the learned model outputs (most often the predicted noise).

        Args:
            state (`DDPMSchedulerState`): the `FlaxDDPMScheduler` state data class instance.
            model_output (`jnp.ndarray`): direct output from learned diffusion model.
            timestep (`int`): current discrete timestep in the diffusion chain.
            sample (`jnp.ndarray`):
                current instance of sample being created by diffusion process.
            key (`jax.random.KeyArray`): a PRNG key.
            return_dict (`bool`): option for returning tuple rather than FlaxDDPMSchedulerOutput class

        Returns:
            [`FlaxDDPMSchedulerOutput`] or `tuple`: [`FlaxDDPMSchedulerOutput`] if `return_dict` is True, otherwise a
            `tuple`. When returning a tuple, the first element is the sample tensor.
        """
        t = timestep

        if key is None:
            key = jax.random.PRNGKey(0)

        if model_output.shape[1] == sample.shape[1] * 2 and self.config.variance_type in ["learned", "learned_range"]:
            model_output, predicted_variance = jnp.split(model_output, sample.shape[1], axis=1)
        else:
            predicted_variance = None

        # 1. compute alphas, betas
        alpha_prod_t = state.common.alphas_cumprod[t]
        alpha_prod_t_prev = jnp.where(t > 0, state.common.alphas_cumprod[t - 1], jnp.array(1.0, dtype=self.dtype))
        beta_prod_t = 1 - alpha_prod_t
        beta_prod_t_prev = 1 - alpha_prod_t_prev

        # 2. compute predicted original sample from predicted noise also called
        # "predicted x_0" of formula (15) from https://arxiv.org/pdf/2006.11239.pdf
        if self.config.prediction_type == "epsilon":
            pred_original_sample = (sample - beta_prod_t ** (0.5) * model_output) / alpha_prod_t ** (0.5)
        elif self.config.prediction_type == "sample":
            pred_original_sample = model_output
        elif self.config.prediction_type == "v_prediction":
            pred_original_sample = (alpha_prod_t**0.5) * sample - (beta_prod_t**0.5) * model_output
        else:
            raise ValueError(
                f"prediction_type given as {self.config.prediction_type} must be one of `epsilon`, `sample` "
                " for the FlaxDDPMScheduler."
            )

        # 3. Clip "predicted x_0"
        if self.config.clip_sample:
            pred_original_sample = jnp.clip(pred_original_sample, -1, 1)

        # 4. Compute coefficients for pred_original_sample x_0 and current sample x_t
        # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
        pred_original_sample_coeff = (alpha_prod_t_prev ** (0.5) * state.common.betas[t]) / beta_prod_t
        current_sample_coeff = state.common.alphas[t] ** (0.5) * beta_prod_t_prev / beta_prod_t

        # 5. Compute predicted previous sample µ_t
        # See formula (7) from https://arxiv.org/pdf/2006.11239.pdf
        pred_prev_sample = pred_original_sample_coeff * pred_original_sample + current_sample_coeff * sample

        # 6. Add noise
        def random_variance():
            split_key = jax.random.split(key, num=1)
            noise = jax.random.normal(split_key, shape=model_output.shape, dtype=self.dtype)
            return (self._get_variance(state, t, predicted_variance=predicted_variance) ** 0.5) * noise

        variance = jnp.where(t > 0, random_variance(), jnp.zeros(model_output.shape, dtype=self.dtype))

        pred_prev_sample = pred_prev_sample + variance

        if not return_dict:
            return (pred_prev_sample, state)

        return FlaxDDPMSchedulerOutput(prev_sample=pred_prev_sample, state=state)

    def add_noise(
        self,
        state: DDPMSchedulerState,
        original_samples: jnp.ndarray,
        noise: jnp.ndarray,
        timesteps: jnp.ndarray,
    ) -> jnp.ndarray:
        return add_noise_common(state.common, original_samples, noise, timesteps)

    def get_velocity(
        self,
        state: DDPMSchedulerState,
        sample: jnp.ndarray,
        noise: jnp.ndarray,
        timesteps: jnp.ndarray,
    ) -> jnp.ndarray:
        return get_velocity_common(state.common, sample, noise, timesteps)

    def __len__(self):
        return self.config.num_train_timesteps
```
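The posterior mean computed in `step` (steps 4–5, formula (7) of https://arxiv.org/abs/2006.11239) can be reproduced with plain Python scalars. A minimal sketch with a toy two-step beta schedule (the values are illustrative, not the scheduler's linear default):

```python
betas = [0.1, 0.2]                       # toy schedule, not the real default
alphas = [1 - b for b in betas]
alphas_cumprod = []
prod = 1.0
for a in alphas:
    prod *= a
    alphas_cumprod.append(prod)          # [0.9, 0.72]


def ddpm_posterior_mean(x0, xt, t):
    # mu_t(x_t, x_0) from eq. (7): a weighted sum of the predicted x_0
    # and the current sample x_t, matching steps 4-5 of `step` above.
    alpha_prod_t = alphas_cumprod[t]
    alpha_prod_t_prev = alphas_cumprod[t - 1] if t > 0 else 1.0
    beta_prod_t = 1 - alpha_prod_t
    beta_prod_t_prev = 1 - alpha_prod_t_prev
    coeff_x0 = (alpha_prod_t_prev ** 0.5 * betas[t]) / beta_prod_t
    coeff_xt = (alphas[t] ** 0.5 * beta_prod_t_prev) / beta_prod_t
    return coeff_x0 * x0 + coeff_xt * xt


print(round(ddpm_posterior_mean(0.5, 0.8, 1), 4))
# At t == 0 the x_t coefficient vanishes (beta_prod_t_prev == 0),
# so the posterior mean collapses to the predicted x_0:
print(round(ddpm_posterior_mean(0.5, 0.8, 0), 4))
```

The same vanishing coefficient is why the scheduler only adds noise for `t > 0` in step 6.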
spaces/Andy1621/uniformer_image_detection/configs/rpn/rpn_r50_caffe_c4_1x_coco.py
DELETED
@@ -1,38 +0,0 @@
```python
_base_ = [
    '../_base_/models/rpn_r50_caffe_c4.py',
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
]
# dataset settings
img_norm_cfg = dict(
    mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True, with_label=False),
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes']),
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1333, 800),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
data = dict(
    train=dict(pipeline=train_pipeline),
    val=dict(pipeline=test_pipeline),
    test=dict(pipeline=test_pipeline))
evaluation = dict(interval=1, metric='proposal_fast')
```
spaces/Andy1621/uniformer_image_detection/configs/sabl/sabl_retinanet_r101_fpn_gn_1x_coco.py
DELETED
@@ -1,54 +0,0 @@
-_base_ = [
-    '../_base_/models/retinanet_r50_fpn.py',
-    '../_base_/datasets/coco_detection.py',
-    '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-# model settings
-norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)
-model = dict(
-    pretrained='torchvision://resnet101',
-    backbone=dict(depth=101),
-    bbox_head=dict(
-        _delete_=True,
-        type='SABLRetinaHead',
-        num_classes=80,
-        in_channels=256,
-        stacked_convs=4,
-        feat_channels=256,
-        approx_anchor_generator=dict(
-            type='AnchorGenerator',
-            octave_base_scale=4,
-            scales_per_octave=3,
-            ratios=[0.5, 1.0, 2.0],
-            strides=[8, 16, 32, 64, 128]),
-        square_anchor_generator=dict(
-            type='AnchorGenerator',
-            ratios=[1.0],
-            scales=[4],
-            strides=[8, 16, 32, 64, 128]),
-        norm_cfg=norm_cfg,
-        bbox_coder=dict(
-            type='BucketingBBoxCoder', num_buckets=14, scale_factor=3.0),
-        loss_cls=dict(
-            type='FocalLoss',
-            use_sigmoid=True,
-            gamma=2.0,
-            alpha=0.25,
-            loss_weight=1.0),
-        loss_bbox_cls=dict(
-            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.5),
-        loss_bbox_reg=dict(
-            type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.5)),
-    # training and testing settings
-    train_cfg=dict(
-        assigner=dict(
-            type='ApproxMaxIoUAssigner',
-            pos_iou_thr=0.5,
-            neg_iou_thr=0.4,
-            min_pos_iou=0.0,
-            ignore_iof_thr=-1),
-        allowed_border=-1,
-        pos_weight=-1,
-        debug=False))
-# optimizer
-optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
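The `_delete_=True` key in the `bbox_head` above tells the config system to replace the inherited `RetinaHead` settings wholesale rather than merge into them. A minimal sketch of that merge rule (a simplified re-implementation for illustration, not mmcv's actual `Config` code):

```python
def merge(base, override):
    # If the override dict carries _delete_=True, discard all inherited keys
    # and keep only the override (this mutates the override dict, which is
    # fine for a sketch).
    if override.pop('_delete_', False):
        return override
    out = dict(base)
    for k, v in override.items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = merge(out[k], v)   # recursive merge for nested dicts
        else:
            out[k] = v
    return out

base = {'model': {'bbox_head': {'type': 'RetinaHead', 'octave_base_scale': 4}}}
child = {'model': {'bbox_head': {'_delete_': True, 'type': 'SABLRetinaHead'}}}
cfg = merge(base, child)
# cfg['model']['bbox_head'] keeps only the child's keys; without _delete_,
# 'octave_base_scale' would have been inherited from the base head.
```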
spaces/Andy1621/uniformer_image_segmentation/configs/point_rend/README.md
DELETED
@@ -1,31 +0,0 @@
-# PointRend: Image Segmentation as Rendering
-
-## Introduction
-
-<!-- [ALGORITHM] -->
-
-```
-@inproceedings{kirillov2020pointrend,
-  title={Pointrend: Image segmentation as rendering},
-  author={Kirillov, Alexander and Wu, Yuxin and He, Kaiming and Girshick, Ross},
-  booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
-  pages={9799--9808},
-  year={2020}
-}
-```
-
-## Results and models
-
-### Cityscapes
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| --------- | -------- | --------- | ------: | -------: | -------------- | ----: | ------------- | ------ | -------- |
-| PointRend | R-50 | 512x1024 | 80000 | 3.1 | 8.48 | 76.47 | 78.13 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/point_rend/pointrend_r50_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/point_rend/pointrend_r50_512x1024_80k_cityscapes/pointrend_r50_512x1024_80k_cityscapes_20200711_015821-bb1ff523.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/point_rend/pointrend_r50_512x1024_80k_cityscapes/pointrend_r50_512x1024_80k_cityscapes-20200715_214714.log.json) |
-| PointRend | R-101 | 512x1024 | 80000 | 4.2 | 7.00 | 78.30 | 79.97 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/point_rend/pointrend_r101_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/point_rend/pointrend_r101_512x1024_80k_cityscapes/pointrend_r101_512x1024_80k_cityscapes_20200711_170850-d0ca84be.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/point_rend/pointrend_r101_512x1024_80k_cityscapes/pointrend_r101_512x1024_80k_cityscapes-20200715_214824.log.json) |
-
-### ADE20K
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| --------- | -------- | --------- | ------: | -------: | -------------- | ----: | ------------- | ------ | -------- |
-| PointRend | R-50 | 512x512 | 160000 | 5.1 | 17.31 | 37.64 | 39.17 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/point_rend/pointrend_r50_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/point_rend/pointrend_r50_512x512_160k_ade20k/pointrend_r50_512x512_160k_ade20k_20200807_232644-ac3febf2.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/point_rend/pointrend_r50_512x512_160k_ade20k/pointrend_r50_512x512_160k_ade20k-20200807_232644.log.json) |
-| PointRend | R-101 | 512x512 | 160000 | 6.1 | 15.50 | 40.02 | 41.60 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/point_rend/pointrend_r101_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/point_rend/pointrend_r101_512x512_160k_ade20k/pointrend_r101_512x512_160k_ade20k_20200808_030852-8834902a.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/point_rend/pointrend_r101_512x512_160k_ade20k/pointrend_r101_512x512_160k_ade20k-20200808_030852.log.json) |
spaces/Andy1621/uniformer_image_segmentation/configs/psanet/README.md
DELETED
@@ -1,48 +0,0 @@
-# PSANet: Point-wise Spatial Attention Network for Scene Parsing
-
-## Introduction
-
-<!-- [ALGORITHM] -->
-
-```latex
-@inproceedings{zhao2018psanet,
-  title={Psanet: Point-wise spatial attention network for scene parsing},
-  author={Zhao, Hengshuang and Zhang, Yi and Liu, Shu and Shi, Jianping and Change Loy, Chen and Lin, Dahua and Jia, Jiaya},
-  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
-  pages={267--283},
-  year={2018}
-}
-```
-
-## Results and models
-
-### Cityscapes
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------ | -------- |
-| PSANet | R-50-D8 | 512x1024 | 40000 | 7 | 3.17 | 77.63 | 79.04 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet/psanet_r50-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r50-d8_512x1024_40k_cityscapes/psanet_r50-d8_512x1024_40k_cityscapes_20200606_103117-99fac37c.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r50-d8_512x1024_40k_cityscapes/psanet_r50-d8_512x1024_40k_cityscapes_20200606_103117.log.json) |
-| PSANet | R-101-D8 | 512x1024 | 40000 | 10.5 | 2.20 | 79.14 | 80.19 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet/psanet_r101-d8_512x1024_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r101-d8_512x1024_40k_cityscapes/psanet_r101-d8_512x1024_40k_cityscapes_20200606_001418-27b9cfa7.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r101-d8_512x1024_40k_cityscapes/psanet_r101-d8_512x1024_40k_cityscapes_20200606_001418.log.json) |
-| PSANet | R-50-D8 | 769x769 | 40000 | 7.9 | 1.40 | 77.99 | 79.64 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet/psanet_r50-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r50-d8_769x769_40k_cityscapes/psanet_r50-d8_769x769_40k_cityscapes_20200530_033717-d5365506.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r50-d8_769x769_40k_cityscapes/psanet_r50-d8_769x769_40k_cityscapes_20200530_033717.log.json) |
-| PSANet | R-101-D8 | 769x769 | 40000 | 11.9 | 0.98 | 78.43 | 80.26 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet/psanet_r101-d8_769x769_40k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r101-d8_769x769_40k_cityscapes/psanet_r101-d8_769x769_40k_cityscapes_20200530_035107-997da1e6.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r101-d8_769x769_40k_cityscapes/psanet_r101-d8_769x769_40k_cityscapes_20200530_035107.log.json) |
-| PSANet | R-50-D8 | 512x1024 | 80000 | - | - | 77.24 | 78.69 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet/psanet_r50-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r50-d8_512x1024_80k_cityscapes/psanet_r50-d8_512x1024_80k_cityscapes_20200606_161842-ab60a24f.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r50-d8_512x1024_80k_cityscapes/psanet_r50-d8_512x1024_80k_cityscapes_20200606_161842.log.json) |
-| PSANet | R-101-D8 | 512x1024 | 80000 | - | - | 79.31 | 80.53 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet/psanet_r101-d8_512x1024_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r101-d8_512x1024_80k_cityscapes/psanet_r101-d8_512x1024_80k_cityscapes_20200606_161823-0f73a169.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r101-d8_512x1024_80k_cityscapes/psanet_r101-d8_512x1024_80k_cityscapes_20200606_161823.log.json) |
-| PSANet | R-50-D8 | 769x769 | 80000 | - | - | 79.31 | 80.91 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet/psanet_r50-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r50-d8_769x769_80k_cityscapes/psanet_r50-d8_769x769_80k_cityscapes_20200606_225134-fe42f49e.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r50-d8_769x769_80k_cityscapes/psanet_r50-d8_769x769_80k_cityscapes_20200606_225134.log.json) |
-| PSANet | R-101-D8 | 769x769 | 80000 | - | - | 79.69 | 80.89 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet/psanet_r101-d8_769x769_80k_cityscapes.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r101-d8_769x769_80k_cityscapes/psanet_r101-d8_769x769_80k_cityscapes_20200606_214550-7665827b.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r101-d8_769x769_80k_cityscapes/psanet_r101-d8_769x769_80k_cityscapes_20200606_214550.log.json) |
-
-### ADE20K
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------ | -------- |
-| PSANet | R-50-D8 | 512x512 | 80000 | 9 | 18.91 | 41.14 | 41.91 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet/psanet_r50-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r50-d8_512x512_80k_ade20k/psanet_r50-d8_512x512_80k_ade20k_20200614_144141-835e4b97.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r50-d8_512x512_80k_ade20k/psanet_r50-d8_512x512_80k_ade20k_20200614_144141.log.json) |
-| PSANet | R-101-D8 | 512x512 | 80000 | 12.5 | 13.13 | 43.80 | 44.75 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet/psanet_r101-d8_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r101-d8_512x512_80k_ade20k/psanet_r101-d8_512x512_80k_ade20k_20200614_185117-1fab60d4.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r101-d8_512x512_80k_ade20k/psanet_r101-d8_512x512_80k_ade20k_20200614_185117.log.json) |
-| PSANet | R-50-D8 | 512x512 | 160000 | - | - | 41.67 | 42.95 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet/psanet_r50-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r50-d8_512x512_160k_ade20k/psanet_r50-d8_512x512_160k_ade20k_20200615_161258-148077dd.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r50-d8_512x512_160k_ade20k/psanet_r50-d8_512x512_160k_ade20k_20200615_161258.log.json) |
-| PSANet | R-101-D8 | 512x512 | 160000 | - | - | 43.74 | 45.38 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet/psanet_r101-d8_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r101-d8_512x512_160k_ade20k/psanet_r101-d8_512x512_160k_ade20k_20200615_161537-dbfa564c.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r101-d8_512x512_160k_ade20k/psanet_r101-d8_512x512_160k_ade20k_20200615_161537.log.json) |
-
-### Pascal VOC 2012 + Aug
-
-| Method | Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
-| ------ | -------- | --------- | ------: | -------- | -------------- | ----: | ------------: | ------ | -------- |
-| PSANet | R-50-D8 | 512x512 | 20000 | 6.9 | 18.24 | 76.39 | 77.34 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet/psanet_r50-d8_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r50-d8_512x512_20k_voc12aug/psanet_r50-d8_512x512_20k_voc12aug_20200617_102413-2f1bbaa1.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r50-d8_512x512_20k_voc12aug/psanet_r50-d8_512x512_20k_voc12aug_20200617_102413.log.json) |
-| PSANet | R-101-D8 | 512x512 | 20000 | 10.4 | 12.63 | 77.91 | 79.30 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet/psanet_r101-d8_512x512_20k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r101-d8_512x512_20k_voc12aug/psanet_r101-d8_512x512_20k_voc12aug_20200617_110624-946fef11.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r101-d8_512x512_20k_voc12aug/psanet_r101-d8_512x512_20k_voc12aug_20200617_110624.log.json) |
-| PSANet | R-50-D8 | 512x512 | 40000 | - | - | 76.30 | 77.35 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet/psanet_r50-d8_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r50-d8_512x512_40k_voc12aug/psanet_r50-d8_512x512_40k_voc12aug_20200613_161946-f596afb5.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r50-d8_512x512_40k_voc12aug/psanet_r50-d8_512x512_40k_voc12aug_20200613_161946.log.json) |
-| PSANet | R-101-D8 | 512x512 | 40000 | - | - | 77.73 | 79.05 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/psanet/psanet_r101-d8_512x512_40k_voc12aug.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r101-d8_512x512_40k_voc12aug/psanet_r101-d8_512x512_40k_voc12aug_20200613_161946-1f560f9e.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/psanet/psanet_r101-d8_512x512_40k_voc12aug/psanet_r101-d8_512x512_40k_voc12aug_20200613_161946.log.json) |
spaces/AngoHF/ANGO-Leaderboard/components/data.py
DELETED
@@ -1,77 +0,0 @@
-import os
-
-import gradio as gr
-import pandas as pd
-import plotly
-import plotly.graph_objects as go
-
-from assets.color import color_dict
-from assets.content import KEYPOINT_DISTRIBUTION, DIFFICULTY_DISTRIBUTION
-from assets.path import SEASON
-
-
-def read_testset(season):
-    return pd.read_json(os.path.join("results", SEASON[season], "test_dataset.json"))
-
-
-def build_keypoint_plot(dataset):
-    labels, parents, values, colors = {}, [], [], []
-    for categories, count in dataset['categories'].value_counts().items():
-        for category in categories:
-            parent = ""
-            for keypoint in category:
-                if not keypoint:
-                    keypoint = "未分类"
-                if keypoint not in labels:
-                    labels[keypoint] = len(labels)
-                    values.append(0)
-                    parents.append(parent)
-                    colors.append(color_dict[category[0]])
-                values[labels[keypoint]] += count
-                parent = keypoint
-
-    fig = go.Figure(go.Sunburst(
-        labels=list(labels),
-        parents=parents,
-        values=values,
-        branchvalues="total",
-        insidetextorientation='radial',
-        marker={"colors": colors}
-    ))
-    return fig
-
-
-def build_difficulty_plot(dataset):
-    xs, ys = [], []
-    for x, y in dataset['difficulty'].value_counts().sort_index().items():
-        xs.append(x)
-        ys.append(y)
-
-    fig = go.Figure([go.Bar(x=xs, y=ys, marker={"color": ys, "colorscale": "Viridis",
-                                                "colorbar": {"title": "Total"}})])
-    fig.update_layout(yaxis=dict(type='log'))
-    return fig
-
-
-def build_plot(season):
-    dataset = pd.read_json(os.path.join("results", SEASON[season], "test_dataset.json"))
-    return build_keypoint_plot(dataset), build_difficulty_plot(dataset)
-
-
-def create_data(top_components):
-    k_fig, d_fig = build_plot("latest")
-    with gr.Tab("All data"):
-        with gr.Row():
-            all_keypoint_plot = gr.Plot(
-                plotly.io.from_json(KEYPOINT_DISTRIBUTION),
-                label="Keypoint Distribution")
-            all_difficulty_plot = gr.Plot(
-                plotly.io.from_json(DIFFICULTY_DISTRIBUTION),
-                label="Difficulty Distribution")
-    with gr.Tab("Test Data"):
-        with gr.Row():
-            test_keypoint_plot = gr.Plot(k_fig, label="Keypoint Distribution")
-            test_difficulty_plot = gr.Plot(d_fig, label="Difficulty Distribution")
-
-    return {"all_keypoint": all_keypoint_plot, "all_difficulty": all_difficulty_plot,
-            "test_keypoint": test_keypoint_plot, "test_difficulty": test_difficulty_plot}
spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/quarto-html/popper.min.js
DELETED
@@ -1,6 +0,0 @@
-/**
- * @popperjs/core v2.11.4 - MIT License
- */
-
!function(e,t){"object"==typeof exports&&"undefined"!=typeof module?t(exports):"function"==typeof define&&define.amd?define(["exports"],t):t((e="undefined"!=typeof globalThis?globalThis:e||self).Popper={})}(this,(function(e){"use strict";function t(e){if(null==e)return window;if("[object Window]"!==e.toString()){var t=e.ownerDocument;return t&&t.defaultView||window}return e}function n(e){return e instanceof t(e).Element||e instanceof Element}function r(e){return e instanceof t(e).HTMLElement||e instanceof HTMLElement}function o(e){return"undefined"!=typeof ShadowRoot&&(e instanceof t(e).ShadowRoot||e instanceof ShadowRoot)}var i=Math.max,a=Math.min,s=Math.round;function f(e,t){void 0===t&&(t=!1);var n=e.getBoundingClientRect(),o=1,i=1;if(r(e)&&t){var a=e.offsetHeight,f=e.offsetWidth;f>0&&(o=s(n.width)/f||1),a>0&&(i=s(n.height)/a||1)}return{width:n.width/o,height:n.height/i,top:n.top/i,right:n.right/o,bottom:n.bottom/i,left:n.left/o,x:n.left/o,y:n.top/i}}function c(e){var n=t(e);return{scrollLeft:n.pageXOffset,scrollTop:n.pageYOffset}}function p(e){return e?(e.nodeName||"").toLowerCase():null}function u(e){return((n(e)?e.ownerDocument:e.document)||window.document).documentElement}function l(e){return f(u(e)).left+c(e).scrollLeft}function d(e){return t(e).getComputedStyle(e)}function h(e){var t=d(e),n=t.overflow,r=t.overflowX,o=t.overflowY;return/auto|scroll|overlay|hidden/.test(n+o+r)}function m(e,n,o){void 0===o&&(o=!1);var i,a,d=r(n),m=r(n)&&function(e){var t=e.getBoundingClientRect(),n=s(t.width)/e.offsetWidth||1,r=s(t.height)/e.offsetHeight||1;return 1!==n||1!==r}(n),v=u(n),g=f(e,m),y={scrollLeft:0,scrollTop:0},b={x:0,y:0};return(d||!d&&!o)&&(("body"!==p(n)||h(v))&&(y=(i=n)!==t(i)&&r(i)?{scrollLeft:(a=i).scrollLeft,scrollTop:a.scrollTop}:c(i)),r(n)?((b=f(n,!0)).x+=n.clientLeft,b.y+=n.clientTop):v&&(b.x=l(v))),{x:g.left+y.scrollLeft-b.x,y:g.top+y.scrollTop-b.y,width:g.width,height:g.height}}function v(e){var t=f(e),n=e.offsetWidth,r=e.offsetHeight;return 
Math.abs(t.width-n)<=1&&(n=t.width),Math.abs(t.height-r)<=1&&(r=t.height),{x:e.offsetLeft,y:e.offsetTop,width:n,height:r}}function g(e){return"html"===p(e)?e:e.assignedSlot||e.parentNode||(o(e)?e.host:null)||u(e)}function y(e){return["html","body","#document"].indexOf(p(e))>=0?e.ownerDocument.body:r(e)&&h(e)?e:y(g(e))}function b(e,n){var r;void 0===n&&(n=[]);var o=y(e),i=o===(null==(r=e.ownerDocument)?void 0:r.body),a=t(o),s=i?[a].concat(a.visualViewport||[],h(o)?o:[]):o,f=n.concat(s);return i?f:f.concat(b(g(s)))}function x(e){return["table","td","th"].indexOf(p(e))>=0}function w(e){return r(e)&&"fixed"!==d(e).position?e.offsetParent:null}function O(e){for(var n=t(e),i=w(e);i&&x(i)&&"static"===d(i).position;)i=w(i);return i&&("html"===p(i)||"body"===p(i)&&"static"===d(i).position)?n:i||function(e){var t=-1!==navigator.userAgent.toLowerCase().indexOf("firefox");if(-1!==navigator.userAgent.indexOf("Trident")&&r(e)&&"fixed"===d(e).position)return null;var n=g(e);for(o(n)&&(n=n.host);r(n)&&["html","body"].indexOf(p(n))<0;){var i=d(n);if("none"!==i.transform||"none"!==i.perspective||"paint"===i.contain||-1!==["transform","perspective"].indexOf(i.willChange)||t&&"filter"===i.willChange||t&&i.filter&&"none"!==i.filter)return n;n=n.parentNode}return null}(e)||n}var j="top",E="bottom",D="right",A="left",L="auto",P=[j,E,D,A],M="start",k="end",W="viewport",B="popper",H=P.reduce((function(e,t){return e.concat([t+"-"+M,t+"-"+k])}),[]),T=[].concat(P,[L]).reduce((function(e,t){return e.concat([t,t+"-"+M,t+"-"+k])}),[]),R=["beforeRead","read","afterRead","beforeMain","main","afterMain","beforeWrite","write","afterWrite"];function S(e){var t=new Map,n=new Set,r=[];function o(e){n.add(e.name),[].concat(e.requires||[],e.requiresIfExists||[]).forEach((function(e){if(!n.has(e)){var r=t.get(e);r&&o(r)}})),r.push(e)}return e.forEach((function(e){t.set(e.name,e)})),e.forEach((function(e){n.has(e.name)||o(e)})),r}function C(e){return e.split("-")[0]}function q(e,t){var 
n=t.getRootNode&&t.getRootNode();if(e.contains(t))return!0;if(n&&o(n)){var r=t;do{if(r&&e.isSameNode(r))return!0;r=r.parentNode||r.host}while(r)}return!1}function V(e){return Object.assign({},e,{left:e.x,top:e.y,right:e.x+e.width,bottom:e.y+e.height})}function N(e,r){return r===W?V(function(e){var n=t(e),r=u(e),o=n.visualViewport,i=r.clientWidth,a=r.clientHeight,s=0,f=0;return o&&(i=o.width,a=o.height,/^((?!chrome|android).)*safari/i.test(navigator.userAgent)||(s=o.offsetLeft,f=o.offsetTop)),{width:i,height:a,x:s+l(e),y:f}}(e)):n(r)?function(e){var t=f(e);return t.top=t.top+e.clientTop,t.left=t.left+e.clientLeft,t.bottom=t.top+e.clientHeight,t.right=t.left+e.clientWidth,t.width=e.clientWidth,t.height=e.clientHeight,t.x=t.left,t.y=t.top,t}(r):V(function(e){var t,n=u(e),r=c(e),o=null==(t=e.ownerDocument)?void 0:t.body,a=i(n.scrollWidth,n.clientWidth,o?o.scrollWidth:0,o?o.clientWidth:0),s=i(n.scrollHeight,n.clientHeight,o?o.scrollHeight:0,o?o.clientHeight:0),f=-r.scrollLeft+l(e),p=-r.scrollTop;return"rtl"===d(o||n).direction&&(f+=i(n.clientWidth,o?o.clientWidth:0)-a),{width:a,height:s,x:f,y:p}}(u(e)))}function I(e,t,o){var s="clippingParents"===t?function(e){var t=b(g(e)),o=["absolute","fixed"].indexOf(d(e).position)>=0&&r(e)?O(e):e;return n(o)?t.filter((function(e){return n(e)&&q(e,o)&&"body"!==p(e)})):[]}(e):[].concat(t),f=[].concat(s,[o]),c=f[0],u=f.reduce((function(t,n){var r=N(e,n);return t.top=i(r.top,t.top),t.right=a(r.right,t.right),t.bottom=a(r.bottom,t.bottom),t.left=i(r.left,t.left),t}),N(e,c));return u.width=u.right-u.left,u.height=u.bottom-u.top,u.x=u.left,u.y=u.top,u}function _(e){return e.split("-")[1]}function F(e){return["top","bottom"].indexOf(e)>=0?"x":"y"}function U(e){var t,n=e.reference,r=e.element,o=e.placement,i=o?C(o):null,a=o?_(o):null,s=n.x+n.width/2-r.width/2,f=n.y+n.height/2-r.height/2;switch(i){case j:t={x:s,y:n.y-r.height};break;case E:t={x:s,y:n.y+n.height};break;case D:t={x:n.x+n.width,y:f};break;case 
A:t={x:n.x-r.width,y:f};break;default:t={x:n.x,y:n.y}}var c=i?F(i):null;if(null!=c){var p="y"===c?"height":"width";switch(a){case M:t[c]=t[c]-(n[p]/2-r[p]/2);break;case k:t[c]=t[c]+(n[p]/2-r[p]/2)}}return t}function z(e){return Object.assign({},{top:0,right:0,bottom:0,left:0},e)}function X(e,t){return t.reduce((function(t,n){return t[n]=e,t}),{})}function Y(e,t){void 0===t&&(t={});var r=t,o=r.placement,i=void 0===o?e.placement:o,a=r.boundary,s=void 0===a?"clippingParents":a,c=r.rootBoundary,p=void 0===c?W:c,l=r.elementContext,d=void 0===l?B:l,h=r.altBoundary,m=void 0!==h&&h,v=r.padding,g=void 0===v?0:v,y=z("number"!=typeof g?g:X(g,P)),b=d===B?"reference":B,x=e.rects.popper,w=e.elements[m?b:d],O=I(n(w)?w:w.contextElement||u(e.elements.popper),s,p),A=f(e.elements.reference),L=U({reference:A,element:x,strategy:"absolute",placement:i}),M=V(Object.assign({},x,L)),k=d===B?M:A,H={top:O.top-k.top+y.top,bottom:k.bottom-O.bottom+y.bottom,left:O.left-k.left+y.left,right:k.right-O.right+y.right},T=e.modifiersData.offset;if(d===B&&T){var R=T[i];Object.keys(H).forEach((function(e){var t=[D,E].indexOf(e)>=0?1:-1,n=[j,E].indexOf(e)>=0?"y":"x";H[e]+=R[n]*t}))}return H}var G={placement:"bottom",modifiers:[],strategy:"absolute"};function J(){for(var e=arguments.length,t=new Array(e),n=0;n<e;n++)t[n]=arguments[n];return!t.some((function(e){return!(e&&"function"==typeof e.getBoundingClientRect)}))}function K(e){void 0===e&&(e={});var t=e,r=t.defaultModifiers,o=void 0===r?[]:r,i=t.defaultOptions,a=void 0===i?G:i;return function(e,t,r){void 0===r&&(r=a);var i,s,f={placement:"bottom",orderedModifiers:[],options:Object.assign({},G,a),modifiersData:{},elements:{reference:e,popper:t},attributes:{},styles:{}},c=[],p=!1,u={state:f,setOptions:function(r){var i="function"==typeof r?r(f.options):r;l(),f.options=Object.assign({},a,f.options,i),f.scrollParents={reference:n(e)?b(e):e.contextElement?b(e.contextElement):[],popper:b(t)};var s,p,d=function(e){var t=S(e);return 
R.reduce((function(e,n){return e.concat(t.filter((function(e){return e.phase===n})))}),[])}((s=[].concat(o,f.options.modifiers),p=s.reduce((function(e,t){var n=e[t.name];return e[t.name]=n?Object.assign({},n,t,{options:Object.assign({},n.options,t.options),data:Object.assign({},n.data,t.data)}):t,e}),{}),Object.keys(p).map((function(e){return p[e]}))));return f.orderedModifiers=d.filter((function(e){return e.enabled})),f.orderedModifiers.forEach((function(e){var t=e.name,n=e.options,r=void 0===n?{}:n,o=e.effect;if("function"==typeof o){var i=o({state:f,name:t,instance:u,options:r}),a=function(){};c.push(i||a)}})),u.update()},forceUpdate:function(){if(!p){var e=f.elements,t=e.reference,n=e.popper;if(J(t,n)){f.rects={reference:m(t,O(n),"fixed"===f.options.strategy),popper:v(n)},f.reset=!1,f.placement=f.options.placement,f.orderedModifiers.forEach((function(e){return f.modifiersData[e.name]=Object.assign({},e.data)}));for(var r=0;r<f.orderedModifiers.length;r++)if(!0!==f.reset){var o=f.orderedModifiers[r],i=o.fn,a=o.options,s=void 0===a?{}:a,c=o.name;"function"==typeof i&&(f=i({state:f,options:s,name:c,instance:u})||f)}else f.reset=!1,r=-1}}},update:(i=function(){return new Promise((function(e){u.forceUpdate(),e(f)}))},function(){return s||(s=new Promise((function(e){Promise.resolve().then((function(){s=void 0,e(i())}))}))),s}),destroy:function(){l(),p=!0}};if(!J(e,t))return u;function l(){c.forEach((function(e){return e()})),c=[]}return u.setOptions(r).then((function(e){!p&&r.onFirstUpdate&&r.onFirstUpdate(e)})),u}}var Q={passive:!0};var Z={name:"eventListeners",enabled:!0,phase:"write",fn:function(){},effect:function(e){var n=e.state,r=e.instance,o=e.options,i=o.scroll,a=void 0===i||i,s=o.resize,f=void 0===s||s,c=t(n.elements.popper),p=[].concat(n.scrollParents.reference,n.scrollParents.popper);return 
a&&p.forEach((function(e){e.addEventListener("scroll",r.update,Q)})),f&&c.addEventListener("resize",r.update,Q),function(){a&&p.forEach((function(e){e.removeEventListener("scroll",r.update,Q)})),f&&c.removeEventListener("resize",r.update,Q)}},data:{}};var $={name:"popperOffsets",enabled:!0,phase:"read",fn:function(e){var t=e.state,n=e.name;t.modifiersData[n]=U({reference:t.rects.reference,element:t.rects.popper,strategy:"absolute",placement:t.placement})},data:{}},ee={top:"auto",right:"auto",bottom:"auto",left:"auto"};function te(e){var n,r=e.popper,o=e.popperRect,i=e.placement,a=e.variation,f=e.offsets,c=e.position,p=e.gpuAcceleration,l=e.adaptive,h=e.roundOffsets,m=e.isFixed,v=f.x,g=void 0===v?0:v,y=f.y,b=void 0===y?0:y,x="function"==typeof h?h({x:g,y:b}):{x:g,y:b};g=x.x,b=x.y;var w=f.hasOwnProperty("x"),L=f.hasOwnProperty("y"),P=A,M=j,W=window;if(l){var B=O(r),H="clientHeight",T="clientWidth";if(B===t(r)&&"static"!==d(B=u(r)).position&&"absolute"===c&&(H="scrollHeight",T="scrollWidth"),B=B,i===j||(i===A||i===D)&&a===k)M=E,b-=(m&&B===W&&W.visualViewport?W.visualViewport.height:B[H])-o.height,b*=p?1:-1;if(i===A||(i===j||i===E)&&a===k)P=D,g-=(m&&B===W&&W.visualViewport?W.visualViewport.width:B[T])-o.width,g*=p?1:-1}var R,S=Object.assign({position:c},l&&ee),C=!0===h?function(e){var t=e.x,n=e.y,r=window.devicePixelRatio||1;return{x:s(t*r)/r||0,y:s(n*r)/r||0}}({x:g,y:b}):{x:g,y:b};return g=C.x,b=C.y,p?Object.assign({},S,((R={})[M]=L?"0":"",R[P]=w?"0":"",R.transform=(W.devicePixelRatio||1)<=1?"translate("+g+"px, "+b+"px)":"translate3d("+g+"px, "+b+"px, 0)",R)):Object.assign({},S,((n={})[M]=L?b+"px":"",n[P]=w?g+"px":"",n.transform="",n))}var ne={name:"computeStyles",enabled:!0,phase:"beforeWrite",fn:function(e){var t=e.state,n=e.options,r=n.gpuAcceleration,o=void 0===r||r,i=n.adaptive,a=void 0===i||i,s=n.roundOffsets,f=void 
0===s||s,c={placement:C(t.placement),variation:_(t.placement),popper:t.elements.popper,popperRect:t.rects.popper,gpuAcceleration:o,isFixed:"fixed"===t.options.strategy};null!=t.modifiersData.popperOffsets&&(t.styles.popper=Object.assign({},t.styles.popper,te(Object.assign({},c,{offsets:t.modifiersData.popperOffsets,position:t.options.strategy,adaptive:a,roundOffsets:f})))),null!=t.modifiersData.arrow&&(t.styles.arrow=Object.assign({},t.styles.arrow,te(Object.assign({},c,{offsets:t.modifiersData.arrow,position:"absolute",adaptive:!1,roundOffsets:f})))),t.attributes.popper=Object.assign({},t.attributes.popper,{"data-popper-placement":t.placement})},data:{}};var re={name:"applyStyles",enabled:!0,phase:"write",fn:function(e){var t=e.state;Object.keys(t.elements).forEach((function(e){var n=t.styles[e]||{},o=t.attributes[e]||{},i=t.elements[e];r(i)&&p(i)&&(Object.assign(i.style,n),Object.keys(o).forEach((function(e){var t=o[e];!1===t?i.removeAttribute(e):i.setAttribute(e,!0===t?"":t)})))}))},effect:function(e){var t=e.state,n={popper:{position:t.options.strategy,left:"0",top:"0",margin:"0"},arrow:{position:"absolute"},reference:{}};return Object.assign(t.elements.popper.style,n.popper),t.styles=n,t.elements.arrow&&Object.assign(t.elements.arrow.style,n.arrow),function(){Object.keys(t.elements).forEach((function(e){var o=t.elements[e],i=t.attributes[e]||{},a=Object.keys(t.styles.hasOwnProperty(e)?t.styles[e]:n[e]).reduce((function(e,t){return e[t]="",e}),{});r(o)&&p(o)&&(Object.assign(o.style,a),Object.keys(i).forEach((function(e){o.removeAttribute(e)})))}))}},requires:["computeStyles"]};var oe={name:"offset",enabled:!0,phase:"main",requires:["popperOffsets"],fn:function(e){var t=e.state,n=e.options,r=e.name,o=n.offset,i=void 0===o?[0,0]:o,a=T.reduce((function(e,n){return e[n]=function(e,t,n){var r=C(e),o=[A,j].indexOf(r)>=0?-1:1,i="function"==typeof n?n(Object.assign({},t,{placement:e})):n,a=i[0],s=i[1];return 
a=a||0,s=(s||0)*o,[A,D].indexOf(r)>=0?{x:s,y:a}:{x:a,y:s}}(n,t.rects,i),e}),{}),s=a[t.placement],f=s.x,c=s.y;null!=t.modifiersData.popperOffsets&&(t.modifiersData.popperOffsets.x+=f,t.modifiersData.popperOffsets.y+=c),t.modifiersData[r]=a}},ie={left:"right",right:"left",bottom:"top",top:"bottom"};function ae(e){return e.replace(/left|right|bottom|top/g,(function(e){return ie[e]}))}var se={start:"end",end:"start"};function fe(e){return e.replace(/start|end/g,(function(e){return se[e]}))}function ce(e,t){void 0===t&&(t={});var n=t,r=n.placement,o=n.boundary,i=n.rootBoundary,a=n.padding,s=n.flipVariations,f=n.allowedAutoPlacements,c=void 0===f?T:f,p=_(r),u=p?s?H:H.filter((function(e){return _(e)===p})):P,l=u.filter((function(e){return c.indexOf(e)>=0}));0===l.length&&(l=u);var d=l.reduce((function(t,n){return t[n]=Y(e,{placement:n,boundary:o,rootBoundary:i,padding:a})[C(n)],t}),{});return Object.keys(d).sort((function(e,t){return d[e]-d[t]}))}var pe={name:"flip",enabled:!0,phase:"main",fn:function(e){var t=e.state,n=e.options,r=e.name;if(!t.modifiersData[r]._skip){for(var o=n.mainAxis,i=void 0===o||o,a=n.altAxis,s=void 0===a||a,f=n.fallbackPlacements,c=n.padding,p=n.boundary,u=n.rootBoundary,l=n.altBoundary,d=n.flipVariations,h=void 0===d||d,m=n.allowedAutoPlacements,v=t.options.placement,g=C(v),y=f||(g===v||!h?[ae(v)]:function(e){if(C(e)===L)return[];var t=ae(e);return[fe(e),t,fe(t)]}(v)),b=[v].concat(y).reduce((function(e,n){return e.concat(C(n)===L?ce(t,{placement:n,boundary:p,rootBoundary:u,padding:c,flipVariations:h,allowedAutoPlacements:m}):n)}),[]),x=t.rects.reference,w=t.rects.popper,O=new Map,P=!0,k=b[0],W=0;W<b.length;W++){var B=b[W],H=C(B),T=_(B)===M,R=[j,E].indexOf(H)>=0,S=R?"width":"height",q=Y(t,{placement:B,boundary:p,rootBoundary:u,altBoundary:l,padding:c}),V=R?T?D:A:T?E:j;x[S]>w[S]&&(V=ae(V));var N=ae(V),I=[];if(i&&I.push(q[H]<=0),s&&I.push(q[V]<=0,q[N]<=0),I.every((function(e){return e}))){k=B,P=!1;break}O.set(B,I)}if(P)for(var F=function(e){var 
t=b.find((function(t){var n=O.get(t);if(n)return n.slice(0,e).every((function(e){return e}))}));if(t)return k=t,"break"},U=h?3:1;U>0;U--){if("break"===F(U))break}t.placement!==k&&(t.modifiersData[r]._skip=!0,t.placement=k,t.reset=!0)}},requiresIfExists:["offset"],data:{_skip:!1}};function ue(e,t,n){return i(e,a(t,n))}var le={name:"preventOverflow",enabled:!0,phase:"main",fn:function(e){var t=e.state,n=e.options,r=e.name,o=n.mainAxis,s=void 0===o||o,f=n.altAxis,c=void 0!==f&&f,p=n.boundary,u=n.rootBoundary,l=n.altBoundary,d=n.padding,h=n.tether,m=void 0===h||h,g=n.tetherOffset,y=void 0===g?0:g,b=Y(t,{boundary:p,rootBoundary:u,padding:d,altBoundary:l}),x=C(t.placement),w=_(t.placement),L=!w,P=F(x),k="x"===P?"y":"x",W=t.modifiersData.popperOffsets,B=t.rects.reference,H=t.rects.popper,T="function"==typeof y?y(Object.assign({},t.rects,{placement:t.placement})):y,R="number"==typeof T?{mainAxis:T,altAxis:T}:Object.assign({mainAxis:0,altAxis:0},T),S=t.modifiersData.offset?t.modifiersData.offset[t.placement]:null,q={x:0,y:0};if(W){if(s){var V,N="y"===P?j:A,I="y"===P?E:D,U="y"===P?"height":"width",z=W[P],X=z+b[N],G=z-b[I],J=m?-H[U]/2:0,K=w===M?B[U]:H[U],Q=w===M?-H[U]:-B[U],Z=t.elements.arrow,$=m&&Z?v(Z):{width:0,height:0},ee=t.modifiersData["arrow#persistent"]?t.modifiersData["arrow#persistent"].padding:{top:0,right:0,bottom:0,left:0},te=ee[N],ne=ee[I],re=ue(0,B[U],$[U]),oe=L?B[U]/2-J-re-te-R.mainAxis:K-re-te-R.mainAxis,ie=L?-B[U]/2+J+re+ne+R.mainAxis:Q+re+ne+R.mainAxis,ae=t.elements.arrow&&O(t.elements.arrow),se=ae?"y"===P?ae.clientTop||0:ae.clientLeft||0:0,fe=null!=(V=null==S?void 0:S[P])?V:0,ce=z+ie-fe,pe=ue(m?a(X,z+oe-fe-se):X,z,m?i(G,ce):G);W[P]=pe,q[P]=pe-z}if(c){var le,de="x"===P?j:A,he="x"===P?E:D,me=W[k],ve="y"===k?"height":"width",ge=me+b[de],ye=me-b[he],be=-1!==[j,A].indexOf(x),xe=null!=(le=null==S?void 0:S[k])?le:0,we=be?ge:me-B[ve]-H[ve]-xe+R.altAxis,Oe=be?me+B[ve]+H[ve]-xe-R.altAxis:ye,je=m&&be?function(e,t,n){var r=ue(e,t,n);return 
r>n?n:r}(we,me,Oe):ue(m?we:ge,me,m?Oe:ye);W[k]=je,q[k]=je-me}t.modifiersData[r]=q}},requiresIfExists:["offset"]};var de={name:"arrow",enabled:!0,phase:"main",fn:function(e){var t,n=e.state,r=e.name,o=e.options,i=n.elements.arrow,a=n.modifiersData.popperOffsets,s=C(n.placement),f=F(s),c=[A,D].indexOf(s)>=0?"height":"width";if(i&&a){var p=function(e,t){return z("number"!=typeof(e="function"==typeof e?e(Object.assign({},t.rects,{placement:t.placement})):e)?e:X(e,P))}(o.padding,n),u=v(i),l="y"===f?j:A,d="y"===f?E:D,h=n.rects.reference[c]+n.rects.reference[f]-a[f]-n.rects.popper[c],m=a[f]-n.rects.reference[f],g=O(i),y=g?"y"===f?g.clientHeight||0:g.clientWidth||0:0,b=h/2-m/2,x=p[l],w=y-u[c]-p[d],L=y/2-u[c]/2+b,M=ue(x,L,w),k=f;n.modifiersData[r]=((t={})[k]=M,t.centerOffset=M-L,t)}},effect:function(e){var t=e.state,n=e.options.element,r=void 0===n?"[data-popper-arrow]":n;null!=r&&("string"!=typeof r||(r=t.elements.popper.querySelector(r)))&&q(t.elements.popper,r)&&(t.elements.arrow=r)},requires:["popperOffsets"],requiresIfExists:["preventOverflow"]};function he(e,t,n){return void 0===n&&(n={x:0,y:0}),{top:e.top-t.height-n.y,right:e.right-t.width+n.x,bottom:e.bottom-t.height+n.y,left:e.left-t.width-n.x}}function me(e){return[j,D,E,A].some((function(t){return e[t]>=0}))}var ve={name:"hide",enabled:!0,phase:"main",requiresIfExists:["preventOverflow"],fn:function(e){var 
t=e.state,n=e.name,r=t.rects.reference,o=t.rects.popper,i=t.modifiersData.preventOverflow,a=Y(t,{elementContext:"reference"}),s=Y(t,{altBoundary:!0}),f=he(a,r),c=he(s,o,i),p=me(f),u=me(c);t.modifiersData[n]={referenceClippingOffsets:f,popperEscapeOffsets:c,isReferenceHidden:p,hasPopperEscaped:u},t.attributes.popper=Object.assign({},t.attributes.popper,{"data-popper-reference-hidden":p,"data-popper-escaped":u})}},ge=K({defaultModifiers:[Z,$,ne,re]}),ye=[Z,$,ne,re,oe,pe,le,de,ve],be=K({defaultModifiers:ye});e.applyStyles=re,e.arrow=de,e.computeStyles=ne,e.createPopper=be,e.createPopperLite=ge,e.defaultModifiers=ye,e.detectOverflow=Y,e.eventListeners=Z,e.flip=pe,e.hide=ve,e.offset=oe,e.popperGenerator=K,e.popperOffsets=$,e.preventOverflow=le,Object.defineProperty(e,"__esModule",{value:!0})}));
spaces/Apex-X/ROOPOK/roop/utilities.py
DELETED
@@ -1,149 +0,0 @@
import glob
import mimetypes
import os
import platform
import shutil
import ssl
import subprocess
import urllib
from pathlib import Path
from typing import List, Optional
from tqdm import tqdm

import roop.globals

TEMP_DIRECTORY = 'temp'
TEMP_VIDEO_FILE = 'temp.mp4'

# monkey patch ssl for mac
if platform.system().lower() == 'darwin':
    ssl._create_default_https_context = ssl._create_unverified_context


def run_ffmpeg(args: List[str]) -> bool:
    commands = ['ffmpeg', '-hide_banner', '-loglevel', roop.globals.log_level]
    commands.extend(args)
    try:
        subprocess.check_output(commands, stderr=subprocess.STDOUT)
        return True
    except Exception:
        pass
    return False


def detect_fps(target_path: str) -> float:
    command = ['ffprobe', '-v', 'error', '-select_streams', 'v:0', '-show_entries', 'stream=r_frame_rate', '-of', 'default=noprint_wrappers=1:nokey=1', target_path]
    output = subprocess.check_output(command).decode().strip().split('/')
    try:
        numerator, denominator = map(int, output)
        return numerator / denominator
    except Exception:
        pass
    return 30


def extract_frames(target_path: str, fps: float = 30) -> bool:
    temp_directory_path = get_temp_directory_path(target_path)
    temp_frame_quality = roop.globals.temp_frame_quality * 31 // 100
    return run_ffmpeg(['-hwaccel', 'auto', '-i', target_path, '-q:v', str(temp_frame_quality), '-pix_fmt', 'rgb24', '-vf', 'fps=' + str(fps), os.path.join(temp_directory_path, '%04d.' + roop.globals.temp_frame_format)])


def create_video(target_path: str, fps: float = 30) -> bool:
    temp_output_path = get_temp_output_path(target_path)
    temp_directory_path = get_temp_directory_path(target_path)
    output_video_quality = (roop.globals.output_video_quality + 1) * 51 // 100
    commands = ['-hwaccel', 'auto', '-r', str(fps), '-i', os.path.join(temp_directory_path, '%04d.' + roop.globals.temp_frame_format), '-c:v', roop.globals.output_video_encoder]
    if roop.globals.output_video_encoder in ['libx264', 'libx265', 'libvpx']:
        commands.extend(['-crf', str(output_video_quality)])
    if roop.globals.output_video_encoder in ['h264_nvenc', 'hevc_nvenc']:
        commands.extend(['-cq', str(output_video_quality)])
    commands.extend(['-pix_fmt', 'yuv420p', '-vf', 'colorspace=bt709:iall=bt601-6-625:fast=1', '-y', temp_output_path])
    return run_ffmpeg(commands)


def restore_audio(target_path: str, output_path: str) -> None:
    temp_output_path = get_temp_output_path(target_path)
    done = run_ffmpeg(['-i', temp_output_path, '-i', target_path, '-c:v', 'copy', '-map', '0:v:0', '-map', '1:a:0', '-y', output_path])
    if not done:
        move_temp(target_path, output_path)


def get_temp_frame_paths(target_path: str) -> List[str]:
    temp_directory_path = get_temp_directory_path(target_path)
    return glob.glob((os.path.join(glob.escape(temp_directory_path), '*.' + roop.globals.temp_frame_format)))


def get_temp_directory_path(target_path: str) -> str:
    target_name, _ = os.path.splitext(os.path.basename(target_path))
    target_directory_path = os.path.dirname(target_path)
    return os.path.join(target_directory_path, TEMP_DIRECTORY, target_name)


def get_temp_output_path(target_path: str) -> str:
    temp_directory_path = get_temp_directory_path(target_path)
    return os.path.join(temp_directory_path, TEMP_VIDEO_FILE)


def normalize_output_path(source_path: str, target_path: str, output_path: str) -> Optional[str]:
    if source_path and target_path and output_path:
        source_name, _ = os.path.splitext(os.path.basename(source_path))
        target_name, target_extension = os.path.splitext(os.path.basename(target_path))
        if os.path.isdir(output_path):
            return os.path.join(output_path, source_name + '-' + target_name + target_extension)
    return output_path


def create_temp(target_path: str) -> None:
    temp_directory_path = get_temp_directory_path(target_path)
    Path(temp_directory_path).mkdir(parents=True, exist_ok=True)


def move_temp(target_path: str, output_path: str) -> None:
    temp_output_path = get_temp_output_path(target_path)
    if os.path.isfile(temp_output_path):
        if os.path.isfile(output_path):
            os.remove(output_path)
        shutil.move(temp_output_path, output_path)


def clean_temp(target_path: str) -> None:
    temp_directory_path = get_temp_directory_path(target_path)
    parent_directory_path = os.path.dirname(temp_directory_path)
    if not roop.globals.keep_frames and os.path.isdir(temp_directory_path):
        shutil.rmtree(temp_directory_path)
    if os.path.exists(parent_directory_path) and not os.listdir(parent_directory_path):
        os.rmdir(parent_directory_path)


def has_image_extension(image_path: str) -> bool:
    return image_path.lower().endswith(('png', 'jpg', 'jpeg', 'webp'))


def is_image(image_path: str) -> bool:
    if image_path and os.path.isfile(image_path):
        mimetype, _ = mimetypes.guess_type(image_path)
        return bool(mimetype and mimetype.startswith('image/'))
    return False


def is_video(video_path: str) -> bool:
    if video_path and os.path.isfile(video_path):
        mimetype, _ = mimetypes.guess_type(video_path)
        return bool(mimetype and mimetype.startswith('video/'))
    return False


def conditional_download(download_directory_path: str, urls: List[str]) -> None:
    if not os.path.exists(download_directory_path):
        os.makedirs(download_directory_path)
    for url in urls:
        download_file_path = os.path.join(download_directory_path, os.path.basename(url))
        if not os.path.exists(download_file_path):
            request = urllib.request.urlopen(url)  # type: ignore[attr-defined]
            total = int(request.headers.get('Content-Length', 0))
            with tqdm(total=total, desc='Downloading', unit='B', unit_scale=True, unit_divisor=1024) as progress:
                urllib.request.urlretrieve(url, download_file_path, reporthook=lambda count, block_size, total_size: progress.update(block_size))  # type: ignore[attr-defined]


def resolve_relative_path(path: str) -> str:
    return os.path.abspath(os.path.join(os.path.dirname(__file__), path))
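The deleted utilities module above derives every temp location from the target video's own path. A minimal standalone sketch of that path convention (extracted from get_temp_directory_path and normalize_output_path so it runs without the roop.globals dependency; behavior outside these two helpers is not reproduced):

```python
import os

TEMP_DIRECTORY = 'temp'

def get_temp_directory_path(target_path: str) -> str:
    # frames for <dir>/<name>.<ext> are written under <dir>/temp/<name>/
    target_name, _ = os.path.splitext(os.path.basename(target_path))
    return os.path.join(os.path.dirname(target_path), TEMP_DIRECTORY, target_name)

def normalize_output_path(source_path: str, target_path: str, output_path: str):
    # when output_path is a directory, derive "<source>-<target><target ext>" inside it;
    # otherwise the caller's path is used as-is
    if source_path and target_path and output_path:
        source_name, _ = os.path.splitext(os.path.basename(source_path))
        target_name, target_extension = os.path.splitext(os.path.basename(target_path))
        if os.path.isdir(output_path):
            return os.path.join(output_path, source_name + '-' + target_name + target_extension)
    return output_path

print(get_temp_directory_path('/videos/clip.mp4'))  # /videos/temp/clip
print(normalize_output_path('face.jpg', 'movie.mp4', 'result.mp4'))  # result.mp4
```

Because the temp directory nests under the video's parent directory, clean_temp can safely remove only its own subtree and then the shared `temp` folder once it is empty.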
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/tenacity/wait.py
DELETED
@@ -1,228 +0,0 @@
# Copyright 2016–2021 Julien Danjou
# Copyright 2016 Joshua Harlow
# Copyright 2013-2014 Ray Holder
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import abc
import random
import typing

from pip._vendor.tenacity import _utils

if typing.TYPE_CHECKING:
    from pip._vendor.tenacity import RetryCallState


class wait_base(abc.ABC):
    """Abstract base class for wait strategies."""

    @abc.abstractmethod
    def __call__(self, retry_state: "RetryCallState") -> float:
        pass

    def __add__(self, other: "wait_base") -> "wait_combine":
        return wait_combine(self, other)

    def __radd__(self, other: "wait_base") -> typing.Union["wait_combine", "wait_base"]:
        # make it possible to use multiple waits with the built-in sum function
        if other == 0:  # type: ignore[comparison-overlap]
            return self
        return self.__add__(other)


WaitBaseT = typing.Union[wait_base, typing.Callable[["RetryCallState"], typing.Union[float, int]]]


class wait_fixed(wait_base):
    """Wait strategy that waits a fixed amount of time between each retry."""

    def __init__(self, wait: _utils.time_unit_type) -> None:
        self.wait_fixed = _utils.to_seconds(wait)

    def __call__(self, retry_state: "RetryCallState") -> float:
        return self.wait_fixed


class wait_none(wait_fixed):
    """Wait strategy that doesn't wait at all before retrying."""

    def __init__(self) -> None:
        super().__init__(0)


class wait_random(wait_base):
    """Wait strategy that waits a random amount of time between min/max."""

    def __init__(self, min: _utils.time_unit_type = 0, max: _utils.time_unit_type = 1) -> None:  # noqa
        self.wait_random_min = _utils.to_seconds(min)
        self.wait_random_max = _utils.to_seconds(max)

    def __call__(self, retry_state: "RetryCallState") -> float:
        return self.wait_random_min + (random.random() * (self.wait_random_max - self.wait_random_min))


class wait_combine(wait_base):
    """Combine several waiting strategies."""

    def __init__(self, *strategies: wait_base) -> None:
        self.wait_funcs = strategies

    def __call__(self, retry_state: "RetryCallState") -> float:
        return sum(x(retry_state=retry_state) for x in self.wait_funcs)


class wait_chain(wait_base):
    """Chain two or more waiting strategies.

    If all strategies are exhausted, the very last strategy is used
    thereafter.

    For example::

        @retry(wait=wait_chain(*[wait_fixed(1) for i in range(3)] +
                               [wait_fixed(2) for j in range(5)] +
                               [wait_fixed(5) for k in range(4)]))
        def wait_chained():
            print("Wait 1s for 3 attempts, 2s for 5 attempts and 5s
                   thereafter.")
    """

    def __init__(self, *strategies: wait_base) -> None:
        self.strategies = strategies

    def __call__(self, retry_state: "RetryCallState") -> float:
        wait_func_no = min(max(retry_state.attempt_number, 1), len(self.strategies))
        wait_func = self.strategies[wait_func_no - 1]
        return wait_func(retry_state=retry_state)


class wait_incrementing(wait_base):
    """Wait an incremental amount of time after each attempt.

    Starting at a starting value and incrementing by a value for each attempt
    (and restricting the upper limit to some maximum value).
    """

    def __init__(
        self,
        start: _utils.time_unit_type = 0,
        increment: _utils.time_unit_type = 100,
        max: _utils.time_unit_type = _utils.MAX_WAIT,  # noqa
    ) -> None:
        self.start = _utils.to_seconds(start)
        self.increment = _utils.to_seconds(increment)
        self.max = _utils.to_seconds(max)

    def __call__(self, retry_state: "RetryCallState") -> float:
        result = self.start + (self.increment * (retry_state.attempt_number - 1))
        return max(0, min(result, self.max))


class wait_exponential(wait_base):
    """Wait strategy that applies exponential backoff.

    It allows for a customized multiplier and an ability to restrict the
    upper and lower limits to some maximum and minimum value.

    The intervals are fixed (i.e. there is no jitter), so this strategy is
    suitable for balancing retries against latency when a required resource is
    unavailable for an unknown duration, but *not* suitable for resolving
    contention between multiple processes for a shared resource. Use
    wait_random_exponential for the latter case.
    """

    def __init__(
        self,
        multiplier: typing.Union[int, float] = 1,
        max: _utils.time_unit_type = _utils.MAX_WAIT,  # noqa
        exp_base: typing.Union[int, float] = 2,
        min: _utils.time_unit_type = 0,  # noqa
    ) -> None:
        self.multiplier = multiplier
        self.min = _utils.to_seconds(min)
        self.max = _utils.to_seconds(max)
        self.exp_base = exp_base

    def __call__(self, retry_state: "RetryCallState") -> float:
        try:
            exp = self.exp_base ** (retry_state.attempt_number - 1)
            result = self.multiplier * exp
        except OverflowError:
            return self.max
        return max(max(0, self.min), min(result, self.max))


class wait_random_exponential(wait_exponential):
    """Random wait with exponentially widening window.

    An exponential backoff strategy used to mediate contention between multiple
    uncoordinated processes for a shared resource in distributed systems. This
    is the sense in which "exponential backoff" is meant in e.g. Ethernet
    networking, and corresponds to the "Full Jitter" algorithm described in
    this blog post:

    https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/

    Each retry occurs at a random time in a geometrically expanding interval.
    It allows for a custom multiplier and an ability to restrict the upper
    limit of the random interval to some maximum value.

    Example::

        wait_random_exponential(multiplier=0.5,  # initial window 0.5s
                                max=60)          # max 60s timeout

    When waiting for an unavailable resource to become available again, as
    opposed to trying to resolve contention for a shared resource, the
    wait_exponential strategy (which uses a fixed interval) may be preferable.

    """

    def __call__(self, retry_state: "RetryCallState") -> float:
        high = super().__call__(retry_state=retry_state)
        return random.uniform(0, high)


class wait_exponential_jitter(wait_base):
    """Wait strategy that applies exponential backoff and jitter.

    It allows for a customized initial wait, maximum wait and jitter.

    This implements the strategy described here:
    https://cloud.google.com/storage/docs/retry-strategy

    The wait time is min(initial * 2**n + random.uniform(0, jitter), maximum)
    where n is the retry count.
    """

    def __init__(
        self,
        initial: float = 1,
        max: float = _utils.MAX_WAIT,  # noqa
        exp_base: float = 2,
        jitter: float = 1,
    ) -> None:
        self.initial = initial
        self.max = max
        self.exp_base = exp_base
        self.jitter = jitter

    def __call__(self, retry_state: "RetryCallState") -> float:
        jitter = random.uniform(0, self.jitter)
        try:
            exp = self.exp_base ** (retry_state.attempt_number - 1)
            result = self.initial * exp + jitter
        except OverflowError:
            result = self.max
        return max(0, min(result, self.max))
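The core of wait_exponential above is a clamped geometric series. A self-contained sketch of that formula (function and parameter names here are illustrative, not the tenacity API; the real class also converts time units via _utils.to_seconds):

```python
def exponential_wait(attempt_number, multiplier=1, exp_base=2, minimum=0, maximum=60.0):
    # wait = clamp(multiplier * exp_base**(attempt - 1), minimum, maximum),
    # mirroring wait_exponential.__call__ including its OverflowError guard
    try:
        result = multiplier * (exp_base ** (attempt_number - 1))
    except OverflowError:
        return maximum
    return max(max(0, minimum), min(result, maximum))

# back-off schedule for the first seven attempts, capped at 60 seconds
schedule = [exponential_wait(n) for n in range(1, 8)]
print(schedule)  # [1, 2, 4, 8, 16, 32, 60.0]
```

wait_random_exponential then simply draws `random.uniform(0, high)` from this deterministic ceiling, which is what gives it the "Full Jitter" behavior described in the docstring.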
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/helpers.py
DELETED
@@ -1,1088 +0,0 @@
|
|
1 |
-
# helpers.py
|
2 |
-
import html.entities
|
3 |
-
import re
|
4 |
-
import typing
|
5 |
-
|
6 |
-
from . import __diag__
|
7 |
-
from .core import *
|
8 |
-
from .util import _bslash, _flatten, _escape_regex_range_chars
|
9 |
-
|
10 |
-
|
11 |
-
#
|
12 |
-
# global helpers
|
13 |
-
#
|
14 |
-
def delimited_list(
|
15 |
-
expr: Union[str, ParserElement],
|
16 |
-
delim: Union[str, ParserElement] = ",",
|
17 |
-
combine: bool = False,
|
18 |
-
min: typing.Optional[int] = None,
|
19 |
-
max: typing.Optional[int] = None,
|
20 |
-
*,
|
21 |
-
allow_trailing_delim: bool = False,
|
22 |
-
) -> ParserElement:
|
23 |
-
"""Helper to define a delimited list of expressions - the delimiter
|
24 |
-
defaults to ','. By default, the list elements and delimiters can
|
25 |
-
have intervening whitespace, and comments, but this can be
|
26 |
-
overridden by passing ``combine=True`` in the constructor. If
|
27 |
-
``combine`` is set to ``True``, the matching tokens are
|
28 |
-
returned as a single token string, with the delimiters included;
|
29 |
-
otherwise, the matching tokens are returned as a list of tokens,
|
30 |
-
with the delimiters suppressed.
|
31 |
-
|
32 |
-
If ``allow_trailing_delim`` is set to True, then the list may end with
|
33 |
-
a delimiter.
|
34 |
-
|
35 |
-
Example::
|
36 |
-
|
37 |
-
delimited_list(Word(alphas)).parse_string("aa,bb,cc") # -> ['aa', 'bb', 'cc']
|
38 |
-
delimited_list(Word(hexnums), delim=':', combine=True).parse_string("AA:BB:CC:DD:EE") # -> ['AA:BB:CC:DD:EE']
|
39 |
-
"""
|
40 |
-
if isinstance(expr, str_type):
|
41 |
-
expr = ParserElement._literalStringClass(expr)
|
42 |
-
|
43 |
-
dlName = "{expr} [{delim} {expr}]...{end}".format(
|
44 |
-
expr=str(expr.copy().streamline()),
|
45 |
-
delim=str(delim),
|
46 |
-
end=" [{}]".format(str(delim)) if allow_trailing_delim else "",
|
47 |
-
)
|
48 |
-
|
49 |
-
if not combine:
|
50 |
-
delim = Suppress(delim)
|
51 |
-
|
52 |
-
if min is not None:
|
53 |
-
if min < 1:
|
54 |
-
raise ValueError("min must be greater than 0")
|
55 |
-
min -= 1
|
56 |
-
if max is not None:
|
57 |
-
if min is not None and max <= min:
|
58 |
-
raise ValueError("max must be greater than, or equal to min")
|
59 |
-
max -= 1
|
60 |
-
delimited_list_expr = expr + (delim + expr)[min, max]
|
61 |
-
|
62 |
-
if allow_trailing_delim:
|
63 |
-
delimited_list_expr += Opt(delim)
|
64 |
-
|
65 |
-
if combine:
|
66 |
-
return Combine(delimited_list_expr).set_name(dlName)
|
67 |
-
else:
|
68 |
-
return delimited_list_expr.set_name(dlName)
|
69 |
-
|
70 |
-
|
71 |
-
def counted_array(
|
72 |
-
expr: ParserElement,
|
73 |
-
int_expr: typing.Optional[ParserElement] = None,
|
74 |
-
*,
|
75 |
-
intExpr: typing.Optional[ParserElement] = None,
|
76 |
-
) -> ParserElement:
|
77 |
-
"""Helper to define a counted list of expressions.
|
78 |
-
|
79 |
-
This helper defines a pattern of the form::
|
80 |
-
|
81 |
-
integer expr expr expr...
|
82 |
-
|
83 |
-
where the leading integer tells how many expr expressions follow.
|
84 |
-
The matched tokens returns the array of expr tokens as a list - the
|
85 |
-
leading count token is suppressed.
|
86 |
-
|
87 |
-
If ``int_expr`` is specified, it should be a pyparsing expression
|
88 |
-
that produces an integer value.
|
89 |
-
|
90 |
-
Example::
|
91 |
-
|
92 |
-
counted_array(Word(alphas)).parse_string('2 ab cd ef') # -> ['ab', 'cd']
|
93 |
-
|
94 |
-
# in this parser, the leading integer value is given in binary,
|
95 |
-
# '10' indicating that 2 values are in the array
|
96 |
-
binary_constant = Word('01').set_parse_action(lambda t: int(t[0], 2))
|
97 |
-
counted_array(Word(alphas), int_expr=binary_constant).parse_string('10 ab cd ef') # -> ['ab', 'cd']
|
98 |
-
|
99 |
-
# if other fields must be parsed after the count but before the
|
100 |
-
# list items, give the fields results names and they will
|
101 |
-
# be preserved in the returned ParseResults:
|
102 |
-
count_with_metadata = integer + Word(alphas)("type")
|
103 |
-
typed_array = counted_array(Word(alphanums), int_expr=count_with_metadata)("items")
|
104 |
-
result = typed_array.parse_string("3 bool True True False")
|
105 |
-
print(result.dump())
|
106 |
-
|
107 |
-
# prints
|
108 |
-
# ['True', 'True', 'False']
|
109 |
-
# - items: ['True', 'True', 'False']
|
110 |
-
# - type: 'bool'
|
111 |
-
"""
|
112 |
-
intExpr = intExpr or int_expr
|
113 |
-
array_expr = Forward()
|
114 |
-
|
115 |
-
def count_field_parse_action(s, l, t):
|
116 |
-
nonlocal array_expr
|
117 |
-
n = t[0]
|
118 |
-
array_expr <<= (expr * n) if n else Empty()
|
119 |
-
# clear list contents, but keep any named results
|
120 |
-
del t[:]
|
121 |
-
|
122 |
-
if intExpr is None:
|
123 |
-
intExpr = Word(nums).set_parse_action(lambda t: int(t[0]))
|
124 |
-
else:
|
125 |
-
intExpr = intExpr.copy()
|
126 |
-
intExpr.set_name("arrayLen")
|
127 |
-
intExpr.add_parse_action(count_field_parse_action, call_during_try=True)
|
128 |
-
return (intExpr + array_expr).set_name("(len) " + str(expr) + "...")
|
129 |
-
|
130 |
-
|
131 |
-
def match_previous_literal(expr: ParserElement) -> ParserElement:
    """Helper to define an expression that is indirectly defined from
    the tokens matched in a previous expression, that is, it looks for
    a 'repeat' of a previous expression. For example::

        first = Word(nums)
        second = match_previous_literal(first)
        match_expr = first + ":" + second

    will match ``"1:1"``, but not ``"1:2"``. Because this
    matches a previous literal, it will also match the leading
    ``"1:1"`` in ``"1:10"``. If this is not desired, use
    :class:`match_previous_expr`. Do *not* use with packrat parsing
    enabled.
    """
    rep = Forward()

    def copy_token_to_repeater(s, l, t):
        if t:
            if len(t) == 1:
                rep << t[0]
            else:
                # flatten t tokens
                tflat = _flatten(t.as_list())
                rep << And(Literal(tt) for tt in tflat)
        else:
            rep << Empty()

    expr.add_parse_action(copy_token_to_repeater, callDuringTry=True)
    rep.set_name("(prev) " + str(expr))
    return rep


def match_previous_expr(expr: ParserElement) -> ParserElement:
    """Helper to define an expression that is indirectly defined from
    the tokens matched in a previous expression, that is, it looks for
    a 'repeat' of a previous expression. For example::

        first = Word(nums)
        second = match_previous_expr(first)
        match_expr = first + ":" + second

    will match ``"1:1"``, but not ``"1:2"``. Because this
    matches by expressions, it will *not* match the leading ``"1:1"``
    in ``"1:10"``; the expressions are evaluated first, and then
    compared, so ``"1"`` is compared with ``"10"``. Do *not* use
    with packrat parsing enabled.
    """
    rep = Forward()
    e2 = expr.copy()
    rep <<= e2

    def copy_token_to_repeater(s, l, t):
        matchTokens = _flatten(t.as_list())

        def must_match_these_tokens(s, l, t):
            theseTokens = _flatten(t.as_list())
            if theseTokens != matchTokens:
                raise ParseException(
                    s, l, "Expected {}, found {}".format(matchTokens, theseTokens)
                )

        rep.set_parse_action(must_match_these_tokens, callDuringTry=True)

    expr.add_parse_action(copy_token_to_repeater, callDuringTry=True)
    rep.set_name("(prev) " + str(expr))
    return rep


def one_of(
    strs: Union[typing.Iterable[str], str],
    caseless: bool = False,
    use_regex: bool = True,
    as_keyword: bool = False,
    *,
    useRegex: bool = True,
    asKeyword: bool = False,
) -> ParserElement:
    """Helper to quickly define a set of alternative :class:`Literal` s,
    and makes sure to do longest-first testing when there is a conflict,
    regardless of the input order, but returns
    a :class:`MatchFirst` for best performance.

    Parameters:

    - ``strs`` - a string of space-delimited literals, or a collection of
      string literals
    - ``caseless`` - treat all literals as caseless - (default= ``False``)
    - ``use_regex`` - as an optimization, will
      generate a :class:`Regex` object; otherwise, will generate
      a :class:`MatchFirst` object (if ``caseless=True`` or ``asKeyword=True``, or if
      creating a :class:`Regex` raises an exception) - (default= ``True``)
    - ``as_keyword`` - enforce :class:`Keyword`-style matching on the
      generated expressions - (default= ``False``)
    - ``asKeyword`` and ``useRegex`` are retained for pre-PEP8 compatibility,
      but will be removed in a future release

    Example::

        comp_oper = one_of("< = > <= >= !=")
        var = Word(alphas)
        number = Word(nums)
        term = var | number
        comparison_expr = term + comp_oper + term
        print(comparison_expr.search_string("B = 12 AA=23 B<=AA AA>12"))

    prints::

        [['B', '=', '12'], ['AA', '=', '23'], ['B', '<=', 'AA'], ['AA', '>', '12']]
    """
    asKeyword = asKeyword or as_keyword
    useRegex = useRegex and use_regex

    if (
        isinstance(caseless, str_type)
        and __diag__.warn_on_multiple_string_args_to_oneof
    ):
        warnings.warn(
            "More than one string argument passed to one_of, pass"
            " choices as a list or space-delimited string",
            stacklevel=2,
        )

    if caseless:
        isequal = lambda a, b: a.upper() == b.upper()
        masks = lambda a, b: b.upper().startswith(a.upper())
        parseElementClass = CaselessKeyword if asKeyword else CaselessLiteral
    else:
        isequal = lambda a, b: a == b
        masks = lambda a, b: b.startswith(a)
        parseElementClass = Keyword if asKeyword else Literal

    symbols: List[str] = []
    if isinstance(strs, str_type):
        symbols = strs.split()
    elif isinstance(strs, Iterable):
        symbols = list(strs)
    else:
        raise TypeError("Invalid argument to one_of, expected string or iterable")
    if not symbols:
        return NoMatch()

    # reorder given symbols to take care to avoid masking longer choices with shorter ones
    # (but only if the given symbols are not just single characters)
    if any(len(sym) > 1 for sym in symbols):
        i = 0
        while i < len(symbols) - 1:
            cur = symbols[i]
            for j, other in enumerate(symbols[i + 1 :]):
                if isequal(other, cur):
                    del symbols[i + j + 1]
                    break
                elif masks(cur, other):
                    del symbols[i + j + 1]
                    symbols.insert(i, other)
                    break
            else:
                i += 1

    if useRegex:
        re_flags: int = re.IGNORECASE if caseless else 0

        try:
            if all(len(sym) == 1 for sym in symbols):
                # symbols are just single characters, create range regex pattern
                patt = "[{}]".format(
                    "".join(_escape_regex_range_chars(sym) for sym in symbols)
                )
            else:
                patt = "|".join(re.escape(sym) for sym in symbols)

            # wrap with \b word break markers if defining as keywords
            if asKeyword:
                patt = r"\b(?:{})\b".format(patt)

            ret = Regex(patt, flags=re_flags).set_name(" | ".join(symbols))

            if caseless:
                # add parse action to return symbols as specified, not in random
                # casing as found in input string
                symbol_map = {sym.lower(): sym for sym in symbols}
                ret.add_parse_action(lambda s, l, t: symbol_map[t[0].lower()])

            return ret

        except re.error:
            warnings.warn(
                "Exception creating Regex for one_of, building MatchFirst", stacklevel=2
            )

    # last resort, just use MatchFirst
    return MatchFirst(parseElementClass(sym) for sym in symbols).set_name(
        " | ".join(symbols)
    )


def dict_of(key: ParserElement, value: ParserElement) -> ParserElement:
    """Helper to easily and clearly define a dictionary by specifying
    the respective patterns for the key and value. Takes care of
    defining the :class:`Dict`, :class:`ZeroOrMore`, and
    :class:`Group` tokens in the proper order. The key pattern
    can include delimiting markers or punctuation, as long as they are
    suppressed, thereby leaving the significant key text. The value
    pattern can include named results, so that the :class:`Dict` results
    can include named token fields.

    Example::

        text = "shape: SQUARE posn: upper left color: light blue texture: burlap"
        attr_expr = (label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join))
        print(attr_expr[1, ...].parse_string(text).dump())

        attr_label = label
        attr_value = Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join)

        # similar to Dict, but simpler call format
        result = dict_of(attr_label, attr_value).parse_string(text)
        print(result.dump())
        print(result['shape'])
        print(result.shape)  # object attribute access works too
        print(result.as_dict())

    prints::

        [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']]
        - color: 'light blue'
        - posn: 'upper left'
        - shape: 'SQUARE'
        - texture: 'burlap'
        SQUARE
        SQUARE
        {'color': 'light blue', 'shape': 'SQUARE', 'posn': 'upper left', 'texture': 'burlap'}
    """
    return Dict(OneOrMore(Group(key + value)))


def original_text_for(
    expr: ParserElement, as_string: bool = True, *, asString: bool = True
) -> ParserElement:
    """Helper to return the original, untokenized text for a given
    expression. Useful to restore the parsed fields of an HTML start
    tag into the raw tag text itself, or to revert separate tokens with
    intervening whitespace back to the original matching input text. By
    default, returns a string containing the original parsed text.

    If the optional ``as_string`` argument is passed as
    ``False``, then the return value is
    a :class:`ParseResults` containing any results names that
    were originally matched, and a single token containing the original
    matched text from the input string. So if the expression passed to
    :class:`original_text_for` contains expressions with defined
    results names, you must set ``as_string`` to ``False`` if you
    want to preserve those results name values.

    The ``asString`` pre-PEP8 argument is retained for compatibility,
    but will be removed in a future release.

    Example::

        src = "this is test <b> bold <i>text</i> </b> normal text "
        for tag in ("b", "i"):
            opener, closer = make_html_tags(tag)
            patt = original_text_for(opener + SkipTo(closer) + closer)
            print(patt.search_string(src)[0])

    prints::

        ['<b> bold <i>text</i> </b>']
        ['<i>text</i>']
    """
    asString = asString and as_string

    locMarker = Empty().set_parse_action(lambda s, loc, t: loc)
    endlocMarker = locMarker.copy()
    endlocMarker.callPreparse = False
    matchExpr = locMarker("_original_start") + expr + endlocMarker("_original_end")
    if asString:
        extractText = lambda s, l, t: s[t._original_start : t._original_end]
    else:

        def extractText(s, l, t):
            t[:] = [s[t.pop("_original_start") : t.pop("_original_end")]]

    matchExpr.set_parse_action(extractText)
    matchExpr.ignoreExprs = expr.ignoreExprs
    matchExpr.suppress_warning(Diagnostics.warn_ungrouped_named_tokens_in_collection)
    return matchExpr


def ungroup(expr: ParserElement) -> ParserElement:
    """Helper to undo pyparsing's default grouping of And expressions,
    even if all but one are non-empty.
    """
    return TokenConverter(expr).add_parse_action(lambda t: t[0])


def locatedExpr(expr: ParserElement) -> ParserElement:
    """
    (DEPRECATED - future code should use the Located class)
    Helper to decorate a returned token with its starting and ending
    locations in the input string.

    This helper adds the following results names:

    - ``locn_start`` - location where matched expression begins
    - ``locn_end`` - location where matched expression ends
    - ``value`` - the actual parsed results

    Be careful if the input text contains ``<TAB>`` characters, you
    may want to call :class:`ParserElement.parseWithTabs`

    Example::

        wd = Word(alphas)
        for match in locatedExpr(wd).searchString("ljsdf123lksdjjf123lkkjj1222"):
            print(match)

    prints::

        [[0, 'ljsdf', 5]]
        [[8, 'lksdjjf', 15]]
        [[18, 'lkkjj', 23]]
    """
    locator = Empty().set_parse_action(lambda ss, ll, tt: ll)
    return Group(
        locator("locn_start")
        + expr("value")
        + locator.copy().leaveWhitespace()("locn_end")
    )


def nested_expr(
    opener: Union[str, ParserElement] = "(",
    closer: Union[str, ParserElement] = ")",
    content: typing.Optional[ParserElement] = None,
    ignore_expr: ParserElement = quoted_string(),
    *,
    ignoreExpr: ParserElement = quoted_string(),
) -> ParserElement:
    """Helper method for defining nested lists enclosed in opening and
    closing delimiters (``"("`` and ``")"`` are the default).

    Parameters:
    - ``opener`` - opening character for a nested list
      (default= ``"("``); can also be a pyparsing expression
    - ``closer`` - closing character for a nested list
      (default= ``")"``); can also be a pyparsing expression
    - ``content`` - expression for items within the nested lists
      (default= ``None``)
    - ``ignore_expr`` - expression for ignoring opening and closing delimiters
      (default= :class:`quoted_string`)
    - ``ignoreExpr`` - this pre-PEP8 argument is retained for compatibility
      but will be removed in a future release

    If an expression is not provided for the content argument, the
    nested expression will capture all whitespace-delimited content
    between delimiters as a list of separate values.

    Use the ``ignore_expr`` argument to define expressions that may
    contain opening or closing characters that should not be treated as
    opening or closing characters for nesting, such as quoted_string or
    a comment expression. Specify multiple expressions using an
    :class:`Or` or :class:`MatchFirst`. The default is
    :class:`quoted_string`, but if no expressions are to be ignored, then
    pass ``None`` for this argument.

    Example::

        data_type = one_of("void int short long char float double")
        decl_data_type = Combine(data_type + Opt(Word('*')))
        ident = Word(alphas+'_', alphanums+'_')
        number = pyparsing_common.number
        arg = Group(decl_data_type + ident)
        LPAR, RPAR = map(Suppress, "()")

        code_body = nested_expr('{', '}', ignore_expr=(quoted_string | c_style_comment))

        c_function = (decl_data_type("type")
                      + ident("name")
                      + LPAR + Opt(delimited_list(arg), [])("args") + RPAR
                      + code_body("body"))
        c_function.ignore(c_style_comment)

        source_code = '''
            int is_odd(int x) {
                return (x%2);
            }

            int dec_to_hex(char hchar) {
                if (hchar >= '0' && hchar <= '9') {
                    return (ord(hchar)-ord('0'));
                } else {
                    return (10+ord(hchar)-ord('A'));
                }
            }
        '''
        for func in c_function.search_string(source_code):
            print("%(name)s (%(type)s) args: %(args)s" % func)


    prints::

        is_odd (int) args: [['int', 'x']]
        dec_to_hex (int) args: [['char', 'hchar']]
    """
    if ignoreExpr != ignore_expr:
        ignoreExpr = ignore_expr if ignoreExpr == quoted_string() else ignoreExpr
    if opener == closer:
        raise ValueError("opening and closing strings cannot be the same")
    if content is None:
        if isinstance(opener, str_type) and isinstance(closer, str_type):
            if len(opener) == 1 and len(closer) == 1:
                if ignoreExpr is not None:
                    content = Combine(
                        OneOrMore(
                            ~ignoreExpr
                            + CharsNotIn(
                                opener + closer + ParserElement.DEFAULT_WHITE_CHARS,
                                exact=1,
                            )
                        )
                    ).set_parse_action(lambda t: t[0].strip())
                else:
                    content = empty.copy() + CharsNotIn(
                        opener + closer + ParserElement.DEFAULT_WHITE_CHARS
                    ).set_parse_action(lambda t: t[0].strip())
            else:
                if ignoreExpr is not None:
                    content = Combine(
                        OneOrMore(
                            ~ignoreExpr
                            + ~Literal(opener)
                            + ~Literal(closer)
                            + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS, exact=1)
                        )
                    ).set_parse_action(lambda t: t[0].strip())
                else:
                    content = Combine(
                        OneOrMore(
                            ~Literal(opener)
                            + ~Literal(closer)
                            + CharsNotIn(ParserElement.DEFAULT_WHITE_CHARS, exact=1)
                        )
                    ).set_parse_action(lambda t: t[0].strip())
        else:
            raise ValueError(
                "opening and closing arguments must be strings if no content expression is given"
            )
    ret = Forward()
    if ignoreExpr is not None:
        ret <<= Group(
            Suppress(opener) + ZeroOrMore(ignoreExpr | ret | content) + Suppress(closer)
        )
    else:
        ret <<= Group(Suppress(opener) + ZeroOrMore(ret | content) + Suppress(closer))
    ret.set_name("nested %s%s expression" % (opener, closer))
    return ret


def _makeTags(tagStr, xml, suppress_LT=Suppress("<"), suppress_GT=Suppress(">")):
    """Internal helper to construct opening and closing tag expressions, given a tag name"""
    if isinstance(tagStr, str_type):
        resname = tagStr
        tagStr = Keyword(tagStr, caseless=not xml)
    else:
        resname = tagStr.name

    tagAttrName = Word(alphas, alphanums + "_-:")
    if xml:
        tagAttrValue = dbl_quoted_string.copy().set_parse_action(remove_quotes)
        openTag = (
            suppress_LT
            + tagStr("tag")
            + Dict(ZeroOrMore(Group(tagAttrName + Suppress("=") + tagAttrValue)))
            + Opt("/", default=[False])("empty").set_parse_action(
                lambda s, l, t: t[0] == "/"
            )
            + suppress_GT
        )
    else:
        tagAttrValue = quoted_string.copy().set_parse_action(remove_quotes) | Word(
            printables, exclude_chars=">"
        )
        openTag = (
            suppress_LT
            + tagStr("tag")
            + Dict(
                ZeroOrMore(
                    Group(
                        tagAttrName.set_parse_action(lambda t: t[0].lower())
                        + Opt(Suppress("=") + tagAttrValue)
                    )
                )
            )
            + Opt("/", default=[False])("empty").set_parse_action(
                lambda s, l, t: t[0] == "/"
            )
            + suppress_GT
        )
    closeTag = Combine(Literal("</") + tagStr + ">", adjacent=False)

    openTag.set_name("<%s>" % resname)
    # add start<tagname> results name in parse action now that ungrouped names are not reported at two levels
    openTag.add_parse_action(
        lambda t: t.__setitem__(
            "start" + "".join(resname.replace(":", " ").title().split()), t.copy()
        )
    )
    closeTag = closeTag(
        "end" + "".join(resname.replace(":", " ").title().split())
    ).set_name("</%s>" % resname)
    openTag.tag = resname
    closeTag.tag = resname
    openTag.tag_body = SkipTo(closeTag())
    return openTag, closeTag


def make_html_tags(
    tag_str: Union[str, ParserElement]
) -> Tuple[ParserElement, ParserElement]:
    """Helper to construct opening and closing tag expressions for HTML,
    given a tag name. Matches tags in either upper or lower case,
    attributes with namespaces and with quoted or unquoted values.

    Example::

        text = '<td>More info at the <a href="https://github.com/pyparsing/pyparsing/wiki">pyparsing</a> wiki page</td>'
        # make_html_tags returns pyparsing expressions for the opening and
        # closing tags as a 2-tuple
        a, a_end = make_html_tags("A")
        link_expr = a + SkipTo(a_end)("link_text") + a_end

        for link in link_expr.search_string(text):
            # attributes in the <A> tag (like "href" shown here) are
            # also accessible as named results
            print(link.link_text, '->', link.href)

    prints::

        pyparsing -> https://github.com/pyparsing/pyparsing/wiki
    """
    return _makeTags(tag_str, False)


def make_xml_tags(
    tag_str: Union[str, ParserElement]
) -> Tuple[ParserElement, ParserElement]:
    """Helper to construct opening and closing tag expressions for XML,
    given a tag name. Matches tags only in the given upper/lower case.

    Example: similar to :class:`make_html_tags`
    """
    return _makeTags(tag_str, True)


any_open_tag: ParserElement
any_close_tag: ParserElement
any_open_tag, any_close_tag = make_html_tags(
    Word(alphas, alphanums + "_:").set_name("any tag")
)

_htmlEntityMap = {k.rstrip(";"): v for k, v in html.entities.html5.items()}
common_html_entity = Regex("&(?P<entity>" + "|".join(_htmlEntityMap) + ");").set_name(
    "common HTML entity"
)


def replace_html_entity(t):
    """Helper parser action to replace common HTML entities with their special characters"""
    return _htmlEntityMap.get(t.entity)


class OpAssoc(Enum):
    LEFT = 1
    RIGHT = 2


InfixNotationOperatorArgType = Union[
    ParserElement, str, Tuple[Union[ParserElement, str], Union[ParserElement, str]]
]
InfixNotationOperatorSpec = Union[
    Tuple[
        InfixNotationOperatorArgType,
        int,
        OpAssoc,
        typing.Optional[ParseAction],
    ],
    Tuple[
        InfixNotationOperatorArgType,
        int,
        OpAssoc,
    ],
]


def infix_notation(
    base_expr: ParserElement,
    op_list: List[InfixNotationOperatorSpec],
    lpar: Union[str, ParserElement] = Suppress("("),
    rpar: Union[str, ParserElement] = Suppress(")"),
) -> ParserElement:
    """Helper method for constructing grammars of expressions made up of
    operators working in a precedence hierarchy. Operators may be unary
    or binary, left- or right-associative. Parse actions can also be
    attached to operator expressions. The generated parser will also
    recognize the use of parentheses to override operator precedences
    (see example below).

    Note: if you define a deep operator list, you may see performance
    issues when using infix_notation. See
    :class:`ParserElement.enable_packrat` for a mechanism to potentially
    improve your parser performance.

    Parameters:
    - ``base_expr`` - expression representing the most basic operand to
      be used in the expression
    - ``op_list`` - list of tuples, one for each operator precedence level
      in the expression grammar; each tuple is of the form ``(op_expr,
      num_operands, right_left_assoc, (optional)parse_action)``, where:

      - ``op_expr`` is the pyparsing expression for the operator; may also
        be a string, which will be converted to a Literal; if ``num_operands``
        is 3, ``op_expr`` is a tuple of two expressions, for the two
        operators separating the 3 terms
      - ``num_operands`` is the number of terms for this operator (must be 1,
        2, or 3)
      - ``right_left_assoc`` is the indicator whether the operator is right
        or left associative, using the pyparsing-defined constants
        ``OpAssoc.RIGHT`` and ``OpAssoc.LEFT``.
      - ``parse_action`` is the parse action to be associated with
        expressions matching this operator expression (the parse action
        tuple member may be omitted); if the parse action is passed
        a tuple or list of functions, this is equivalent to calling
        ``set_parse_action(*fn)``
        (:class:`ParserElement.set_parse_action`)
    - ``lpar`` - expression for matching left-parentheses; if passed as a
      str, then will be parsed as Suppress(lpar). If lpar is passed as
      an expression (such as ``Literal('(')``), then it will be kept in
      the parsed results, and grouped with them. (default= ``Suppress('(')``)
    - ``rpar`` - expression for matching right-parentheses; if passed as a
      str, then will be parsed as Suppress(rpar). If rpar is passed as
      an expression (such as ``Literal(')')``), then it will be kept in
      the parsed results, and grouped with them. (default= ``Suppress(')')``)

    Example::

        # simple example of four-function arithmetic with ints and
        # variable names
        integer = pyparsing_common.signed_integer
        varname = pyparsing_common.identifier

        arith_expr = infix_notation(integer | varname,
            [
            ('-', 1, OpAssoc.RIGHT),
            (one_of('* /'), 2, OpAssoc.LEFT),
            (one_of('+ -'), 2, OpAssoc.LEFT),
            ])

        arith_expr.run_tests('''
            5+3*6
            (5+3)*6
            -2--11
            ''', full_dump=False)

    prints::

        5+3*6
        [[5, '+', [3, '*', 6]]]

        (5+3)*6
        [[[5, '+', 3], '*', 6]]

        -2--11
        [[['-', 2], '-', ['-', 11]]]
    """
    # captive version of FollowedBy that does not do parse actions or capture results names
    class _FB(FollowedBy):
        def parseImpl(self, instring, loc, doActions=True):
            self.expr.try_parse(instring, loc)
            return loc, []

    _FB.__name__ = "FollowedBy>"

    ret = Forward()
    if isinstance(lpar, str):
        lpar = Suppress(lpar)
    if isinstance(rpar, str):
        rpar = Suppress(rpar)

    # if lpar and rpar are not suppressed, wrap in group
    if not (isinstance(lpar, Suppress) and isinstance(rpar, Suppress)):
        lastExpr = base_expr | Group(lpar + ret + rpar)
    else:
        lastExpr = base_expr | (lpar + ret + rpar)

    for i, operDef in enumerate(op_list):
        opExpr, arity, rightLeftAssoc, pa = (operDef + (None,))[:4]
        if isinstance(opExpr, str_type):
            opExpr = ParserElement._literalStringClass(opExpr)
        if arity == 3:
            if not isinstance(opExpr, (tuple, list)) or len(opExpr) != 2:
                raise ValueError(
                    "if numterms=3, opExpr must be a tuple or list of two expressions"
                )
            opExpr1, opExpr2 = opExpr
            term_name = "{}{} term".format(opExpr1, opExpr2)
        else:
            term_name = "{} term".format(opExpr)

        if not 1 <= arity <= 3:
            raise ValueError("operator must be unary (1), binary (2), or ternary (3)")

        if rightLeftAssoc not in (OpAssoc.LEFT, OpAssoc.RIGHT):
            raise ValueError("operator must indicate right or left associativity")

        thisExpr: Forward = Forward().set_name(term_name)
        if rightLeftAssoc is OpAssoc.LEFT:
            if arity == 1:
                matchExpr = _FB(lastExpr + opExpr) + Group(lastExpr + opExpr[1, ...])
            elif arity == 2:
                if opExpr is not None:
                    matchExpr = _FB(lastExpr + opExpr + lastExpr) + Group(
                        lastExpr + (opExpr + lastExpr)[1, ...]
                    )
                else:
                    matchExpr = _FB(lastExpr + lastExpr) + Group(lastExpr[2, ...])
            elif arity == 3:
                matchExpr = _FB(
                    lastExpr + opExpr1 + lastExpr + opExpr2 + lastExpr
                ) + Group(lastExpr + OneOrMore(opExpr1 + lastExpr + opExpr2 + lastExpr))
        elif rightLeftAssoc is OpAssoc.RIGHT:
            if arity == 1:
                # try to avoid LR with this extra test
                if not isinstance(opExpr, Opt):
                    opExpr = Opt(opExpr)
                matchExpr = _FB(opExpr.expr + thisExpr) + Group(opExpr + thisExpr)
            elif arity == 2:
                if opExpr is not None:
                    matchExpr = _FB(lastExpr + opExpr + thisExpr) + Group(
                        lastExpr + (opExpr + thisExpr)[1, ...]
                    )
                else:
                    matchExpr = _FB(lastExpr + thisExpr) + Group(
                        lastExpr + thisExpr[1, ...]
                    )
            elif arity == 3:
                matchExpr = _FB(
                    lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr
                ) + Group(lastExpr + opExpr1 + thisExpr + opExpr2 + thisExpr)
        if pa:
            if isinstance(pa, (tuple, list)):
                matchExpr.set_parse_action(*pa)
            else:
                matchExpr.set_parse_action(pa)
        thisExpr <<= (matchExpr | lastExpr).setName(term_name)
        lastExpr = thisExpr
    ret <<= lastExpr
    return ret


def indentedBlock(blockStatementExpr, indentStack, indent=True, backup_stacks=[]):
    """
    (DEPRECATED - use IndentedBlock class instead)
    Helper method for defining space-delimited indentation blocks,
    such as those used to define block statements in Python source code.

    Parameters:

    - ``blockStatementExpr`` - expression defining syntax of statement that
      is repeated within the indented block
    - ``indentStack`` - list created by caller to manage indentation stack
      (multiple ``statementWithIndentedBlock`` expressions within a single
      grammar should share a common ``indentStack``)
    - ``indent`` - boolean indicating whether block must be indented beyond
      the current level; set to ``False`` for block of left-most statements
      (default= ``True``)

    A valid block must contain at least one ``blockStatement``.

    (Note that indentedBlock uses internal parse actions which make it
    incompatible with packrat parsing.)

    Example::

        data = '''
        def A(z):
          A1
          B = 100
          G = A2
          A2
          A3
        B
        def BB(a,b,c):
          BB1
          def BBA():
            bba1
            bba2
            bba3
        C
        D
        def spam(x,y):
             def eggs(z):
                 pass
        '''


        indentStack = [1]
        stmt = Forward()

        identifier = Word(alphas, alphanums)
        funcDecl = ("def" + identifier + Group("(" + Opt(delimitedList(identifier)) + ")") + ":")
        func_body = indentedBlock(stmt, indentStack)
        funcDef = Group(funcDecl + func_body)

        rvalue = Forward()
        funcCall = Group(identifier + "(" + Opt(delimitedList(rvalue)) + ")")
        rvalue << (funcCall | identifier | Word(nums))
        assignment = Group(identifier + "=" + rvalue)
        stmt << (funcDef | assignment | identifier)

        module_body = stmt[1, ...]

        parseTree = module_body.parseString(data)
        parseTree.pprint()

    prints::

        [['def',
          'A',
          ['(', 'z', ')'],
          ':',
          [['A1'], [['B', '=', '100']], [['G', '=', 'A2']], ['A2'], ['A3']]],
         'B',
         ['def',
          'BB',
          ['(', 'a', 'b', 'c', ')'],
          ':',
          [['BB1'], [['def', 'BBA', ['(', ')'], ':', [['bba1'], ['bba2'], ['bba3']]]]]],
         'C',
         'D',
         ['def',
          'spam',
          ['(', 'x', 'y', ')'],
          ':',
          [[['def', 'eggs', ['(', 'z', ')'], ':', [['pass']]]]]]]
    """
    backup_stacks.append(indentStack[:])

    def reset_stack():
        indentStack[:] = backup_stacks[-1]

    def checkPeerIndent(s, l, t):
        if l >= len(s):
            return
        curCol = col(l, s)
        if curCol != indentStack[-1]:
            if curCol > indentStack[-1]:
                raise ParseException(s, l, "illegal nesting")
            raise ParseException(s, l, "not a peer entry")

    def checkSubIndent(s, l, t):
        curCol = col(l, s)
        if curCol > indentStack[-1]:
            indentStack.append(curCol)
        else:
            raise ParseException(s, l, "not a subentry")

    def checkUnindent(s, l, t):
        if l >= len(s):
            return
        curCol = col(l, s)
        if not (indentStack and curCol in indentStack):
            raise ParseException(s, l, "not an unindent")
        if curCol < indentStack[-1]:
            indentStack.pop()

    NL = OneOrMore(LineEnd().set_whitespace_chars("\t ").suppress())
    INDENT = (Empty() + Empty().set_parse_action(checkSubIndent)).set_name("INDENT")
    PEER = Empty().set_parse_action(checkPeerIndent).set_name("")
    UNDENT = Empty().set_parse_action(checkUnindent).set_name("UNINDENT")
    if indent:
        smExpr = Group(
            Opt(NL)
            + INDENT
|
1015 |
-
+ OneOrMore(PEER + Group(blockStatementExpr) + Opt(NL))
|
1016 |
-
+ UNDENT
|
1017 |
-
)
|
1018 |
-
else:
|
1019 |
-
smExpr = Group(
|
1020 |
-
Opt(NL)
|
1021 |
-
+ OneOrMore(PEER + Group(blockStatementExpr) + Opt(NL))
|
1022 |
-
+ Opt(UNDENT)
|
1023 |
-
)
|
1024 |
-
|
1025 |
-
# add a parse action to remove backup_stack from list of backups
|
1026 |
-
smExpr.add_parse_action(
|
1027 |
-
lambda: backup_stacks.pop(-1) and None if backup_stacks else None
|
1028 |
-
)
|
1029 |
-
smExpr.set_fail_action(lambda a, b, c, d: reset_stack())
|
1030 |
-
blockStatementExpr.ignore(_bslash + LineEnd())
|
1031 |
-
return smExpr.set_name("indented block")
|
1032 |
-
|
1033 |
-
|
1034 |
-
# it's easy to get these comment structures wrong - they're very common, so may as well make them available
|
1035 |
-
c_style_comment = Combine(Regex(r"/\*(?:[^*]|\*(?!/))*") + "*/").set_name(
|
1036 |
-
"C style comment"
|
1037 |
-
)
|
1038 |
-
"Comment of the form ``/* ... */``"
|
1039 |
-
|
1040 |
-
html_comment = Regex(r"<!--[\s\S]*?-->").set_name("HTML comment")
|
1041 |
-
"Comment of the form ``<!-- ... -->``"
|
1042 |
-
|
1043 |
-
rest_of_line = Regex(r".*").leave_whitespace().set_name("rest of line")
|
1044 |
-
dbl_slash_comment = Regex(r"//(?:\\\n|[^\n])*").set_name("// comment")
|
1045 |
-
"Comment of the form ``// ... (to end of line)``"
|
1046 |
-
|
1047 |
-
cpp_style_comment = Combine(
|
1048 |
-
Regex(r"/\*(?:[^*]|\*(?!/))*") + "*/" | dbl_slash_comment
|
1049 |
-
).set_name("C++ style comment")
|
1050 |
-
"Comment of either form :class:`c_style_comment` or :class:`dbl_slash_comment`"
|
1051 |
-
|
1052 |
-
java_style_comment = cpp_style_comment
|
1053 |
-
"Same as :class:`cpp_style_comment`"
|
1054 |
-
|
1055 |
-
python_style_comment = Regex(r"#.*").set_name("Python style comment")
|
1056 |
-
"Comment of the form ``# ... (to end of line)``"
|
1057 |
-
|
1058 |
-
|
1059 |
-
# build list of built-in expressions, for future reference if a global default value
|
1060 |
-
# gets updated
|
1061 |
-
_builtin_exprs: List[ParserElement] = [
|
1062 |
-
v for v in vars().values() if isinstance(v, ParserElement)
|
1063 |
-
]
|
1064 |
-
|
1065 |
-
|
1066 |
-
# pre-PEP8 compatible names
|
1067 |
-
delimitedList = delimited_list
|
1068 |
-
countedArray = counted_array
|
1069 |
-
matchPreviousLiteral = match_previous_literal
|
1070 |
-
matchPreviousExpr = match_previous_expr
|
1071 |
-
oneOf = one_of
|
1072 |
-
dictOf = dict_of
|
1073 |
-
originalTextFor = original_text_for
|
1074 |
-
nestedExpr = nested_expr
|
1075 |
-
makeHTMLTags = make_html_tags
|
1076 |
-
makeXMLTags = make_xml_tags
|
1077 |
-
anyOpenTag, anyCloseTag = any_open_tag, any_close_tag
|
1078 |
-
commonHTMLEntity = common_html_entity
|
1079 |
-
replaceHTMLEntity = replace_html_entity
|
1080 |
-
opAssoc = OpAssoc
|
1081 |
-
infixNotation = infix_notation
|
1082 |
-
cStyleComment = c_style_comment
|
1083 |
-
htmlComment = html_comment
|
1084 |
-
restOfLine = rest_of_line
|
1085 |
-
dblSlashComment = dbl_slash_comment
|
1086 |
-
cppStyleComment = cpp_style_comment
|
1087 |
-
javaStyleComment = java_style_comment
|
1088 |
-
pythonStyleComment = python_style_comment
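Stripped of the parser plumbing, the INDENT/PEER/UNDENT parse actions above reduce to a small indentation-stack protocol. The sketch below isolates that logic with illustrative names (this is not pyparsing's API, just the stack discipline its checks implement, assuming 1-based columns as pyparsing's `col()` returns):

```python
class IndentError(Exception):
    pass

def check_sub_indent(stack, column):
    # entering a nested block: the column must be strictly deeper
    if column > stack[-1]:
        stack.append(column)
    else:
        raise IndentError("not a subentry")

def check_peer_indent(stack, column):
    # statements in the same block must line up exactly
    if column != stack[-1]:
        if column > stack[-1]:
            raise IndentError("illegal nesting")
        raise IndentError("not a peer entry")

def check_unindent(stack, column):
    # leaving a block: the column must match some enclosing level
    if column not in stack:
        raise IndentError("not an unindent")
    if column < stack[-1]:
        stack.pop()

stack = [1]
check_sub_indent(stack, 5)   # indent to column 5 -> stack becomes [1, 5]
check_peer_indent(stack, 5)  # a peer statement at the same depth is fine
check_unindent(stack, 1)     # dedent back to column 1 pops the level
print(stack)  # [1]
```

This is also why a caller-supplied `indentStack` must be shared between multiple `indentedBlock` expressions in one grammar: the stack is the only record of which indentation levels are currently open.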
spaces/AzumaSeren100/XuanShen-Bert-VITS2/text/chinese_bert.py
DELETED
@@ -1,59 +0,0 @@
import torch
import sys
from transformers import AutoTokenizer, AutoModelForMaskedLM

device = torch.device(
    "cuda"
    if torch.cuda.is_available()
    else (
        "mps"
        if sys.platform == "darwin" and torch.backends.mps.is_available()
        else "cpu"
    )
)

tokenizer = AutoTokenizer.from_pretrained("./bert/chinese-roberta-wwm-ext-large")
model = AutoModelForMaskedLM.from_pretrained("./bert/chinese-roberta-wwm-ext-large").to(device)

def get_bert_feature(text, word2ph):
    with torch.no_grad():
        inputs = tokenizer(text, return_tensors='pt')
        for i in inputs:
            inputs[i] = inputs[i].to(device)
        res = model(**inputs, output_hidden_states=True)
        res = torch.cat(res['hidden_states'][-3:-2], -1)[0].cpu()

    assert len(word2ph) == len(text) + 2
    word2phone = word2ph
    phone_level_feature = []
    for i in range(len(word2phone)):
        repeat_feature = res[i].repeat(word2phone[i], 1)
        phone_level_feature.append(repeat_feature)

    phone_level_feature = torch.cat(phone_level_feature, dim=0)


    return phone_level_feature.T

if __name__ == '__main__':
    # feature = get_bert_feature('你好,我是说的道理。')
    import torch

    word_level_feature = torch.rand(38, 1024)  # 38 words, each with a 1024-dim feature
    word2phone = [1, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 1, 2, 2, 2, 2, 2, 1, 1, 2, 2, 1, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 1]

    # compute the total number of frames
    total_frames = sum(word2phone)
    print(word_level_feature.shape)
    print(word2phone)
    phone_level_feature = []
    for i in range(len(word2phone)):
        print(word_level_feature[i].shape)

        # repeat each word's feature word2phone[i] times
        repeat_feature = word_level_feature[i].repeat(word2phone[i], 1)
        phone_level_feature.append(repeat_feature)

    phone_level_feature = torch.cat(phone_level_feature, dim=0)
    print(phone_level_feature.shape)  # torch.Size([sum(word2phone), 1024])
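The word-to-phone expansion in `get_bert_feature` above is just "repeat each word-level vector once per phone it covers". The same idea can be shown with plain lists instead of torch tensors (the names here are illustrative, not part of the file above):

```python
def expand_to_phones(word_features, word2phone):
    """Repeat each word-level feature vector once per phone it covers."""
    phone_features = []
    for feat, n_phones in zip(word_features, word2phone):
        # one copy of the word's feature per phone
        phone_features.extend([feat] * n_phones)
    return phone_features

word_features = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
word2phone = [1, 2, 3]
phones = expand_to_phones(word_features, word2phone)
print(len(phones))  # 6, i.e. sum(word2phone)
```

In the torch version this is exactly what `res[i].repeat(word2phone[i], 1)` followed by `torch.cat(..., dim=0)` computes, giving a `(sum(word2phone), hidden_dim)` phone-level matrix.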
spaces/Banbri/zcvzcv/next.config.js
DELETED
@@ -1,11 +0,0 @@
/** @type {import('next').NextConfig} */
const nextConfig = {
  output: 'standalone',

  experimental: {
    serverActions: true,
    serverActionsBodySizeLimit: '8mb',
  },
}

module.exports = nextConfig
spaces/Bart92/RVC_HF/lib/infer_pack/modules/F0Predictor/PMF0Predictor.py
DELETED
@@ -1,97 +0,0 @@
from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor
import parselmouth
import numpy as np


class PMF0Predictor(F0Predictor):
    def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100):
        self.hop_length = hop_length
        self.f0_min = f0_min
        self.f0_max = f0_max
        self.sampling_rate = sampling_rate

    def interpolate_f0(self, f0):
        """
        Interpolate the F0 contour (fill in unvoiced frames).
        """

        data = np.reshape(f0, (f0.size, 1))

        vuv_vector = np.zeros((data.size, 1), dtype=np.float32)
        vuv_vector[data > 0.0] = 1.0
        vuv_vector[data <= 0.0] = 0.0

        ip_data = data

        frame_number = data.size
        last_value = 0.0
        for i in range(frame_number):
            if data[i] <= 0.0:
                j = i + 1
                for j in range(i + 1, frame_number):
                    if data[j] > 0.0:
                        break
                if j < frame_number - 1:
                    if last_value > 0.0:
                        step = (data[j] - data[i - 1]) / float(j - i)
                        for k in range(i, j):
                            ip_data[k] = data[i - 1] + step * (k - i + 1)
                    else:
                        for k in range(i, j):
                            ip_data[k] = data[j]
                else:
                    for k in range(i, frame_number):
                        ip_data[k] = last_value
            else:
                ip_data[i] = data[i]  # this may be an unnecessary copy
                last_value = data[i]

        return ip_data[:, 0], vuv_vector[:, 0]

    def compute_f0(self, wav, p_len=None):
        x = wav
        if p_len is None:
            p_len = x.shape[0] // self.hop_length
        else:
            assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
        time_step = self.hop_length / self.sampling_rate * 1000
        f0 = (
            parselmouth.Sound(x, self.sampling_rate)
            .to_pitch_ac(
                time_step=time_step / 1000,
                voicing_threshold=0.6,
                pitch_floor=self.f0_min,
                pitch_ceiling=self.f0_max,
            )
            .selected_array["frequency"]
        )

        pad_size = (p_len - len(f0) + 1) // 2
        if pad_size > 0 or p_len - len(f0) - pad_size > 0:
            f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
        f0, uv = self.interpolate_f0(f0)
        return f0

    def compute_f0_uv(self, wav, p_len=None):
        x = wav
        if p_len is None:
            p_len = x.shape[0] // self.hop_length
        else:
            assert abs(p_len - x.shape[0] // self.hop_length) < 4, "pad length error"
        time_step = self.hop_length / self.sampling_rate * 1000
        f0 = (
            parselmouth.Sound(x, self.sampling_rate)
            .to_pitch_ac(
                time_step=time_step / 1000,
                voicing_threshold=0.6,
                pitch_floor=self.f0_min,
                pitch_ceiling=self.f0_max,
            )
            .selected_array["frequency"]
        )

        pad_size = (p_len - len(f0) + 1) // 2
        if pad_size > 0 or p_len - len(f0) - pad_size > 0:
            f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant")
        f0, uv = self.interpolate_f0(f0)
        return f0, uv
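The core idea of `interpolate_f0` above is: unvoiced frames (where f0 is 0) are filled by linearly interpolating between the neighbouring voiced frames, while a voiced/unvoiced mask is returned alongside. A simplified list-based sketch of that idea (not the exact index arithmetic of the method above):

```python
def interpolate_f0(f0):
    """Fill unvoiced (zero) frames of an F0 contour; return (f0, vuv mask)."""
    vuv = [1.0 if v > 0.0 else 0.0 for v in f0]
    out = list(f0)
    n = len(out)
    i = 0
    while i < n:
        if out[i] <= 0.0:
            # find the end of this unvoiced run
            j = i
            while j < n and out[j] <= 0.0:
                j += 1
            left = out[i - 1] if i > 0 else None
            right = out[j] if j < n else None
            for k in range(i, j):
                if left is not None and right is not None:
                    # interior gap: linear interpolation between neighbours
                    step = (right - left) / (j - i + 1)
                    out[k] = left + step * (k - i + 1)
                elif right is not None:
                    out[k] = right  # leading unvoiced run: copy first voiced value
                elif left is not None:
                    out[k] = left   # trailing unvoiced run: hold last voiced value
            i = j
        else:
            i += 1
    return out, vuv

f0, vuv = interpolate_f0([0.0, 100.0, 0.0, 0.0, 130.0])
print(vuv)  # [0.0, 1.0, 0.0, 0.0, 1.0]
```

Here the gap between 100 Hz and 130 Hz is filled with 110 and 120 Hz, and the mask records which frames were originally voiced so downstream models can still distinguish them.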
spaces/Bart92/RVC_HF/mdx_processing_script.py
DELETED
@@ -1,146 +0,0 @@
import gc
import requests
import subprocess
import logging
import sys
from bs4 import BeautifulSoup
import torch, pdb, os, warnings, librosa
import soundfile as sf
from tqdm import tqdm
import numpy as np
import torch
now_dir = os.getcwd()
sys.path.append(now_dir)
import mdx
branch = "https://github.com/NaJeongMo/Colab-for-MDX_B"

model_params = "https://raw.githubusercontent.com/TRvlvr/application_data/main/mdx_model_data/model_data.json"
_Models = "https://github.com/TRvlvr/model_repo/releases/download/all_public_uvr_models/"
# _models = "https://pastebin.com/raw/jBzYB8vz"
_models = "https://raw.githubusercontent.com/TRvlvr/application_data/main/filelists/download_checks.json"
stem_naming = "https://pastebin.com/raw/mpH4hRcF"

file_folder = "Colab-for-MDX_B"
model_ids = requests.get(_models).json()
model_ids = model_ids["mdx_download_list"].values()
#print(model_ids)
model_params = requests.get(model_params).json()
stem_naming = requests.get(stem_naming).json()

os.makedirs("tmp_models", exist_ok=True)

warnings.filterwarnings("ignore")
cpu = torch.device("cpu")
if torch.cuda.is_available():
    device = torch.device("cuda:0")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")


def get_model_list():
    return model_ids

def id_to_ptm(mkey):
    if mkey in model_ids:
        mpath = f"{now_dir}/tmp_models/{mkey}"
        if not os.path.exists(f'{now_dir}/tmp_models/{mkey}'):
            print('Downloading model...', end=' ')
            subprocess.run(
                ["wget", _Models+mkey, "-O", mpath]
            )
            print(f'saved to {mpath}')
            # get_ipython().system(f'gdown {model_id} -O /content/tmp_models/{mkey}')
            return mpath
        else:
            return mpath
    else:
        mpath = f'models/{mkey}'
        return mpath

def prepare_mdx(onnx, custom_param=False, dim_f=None, dim_t=None, n_fft=None, stem_name=None, compensation=None):
    device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu')
    if custom_param:
        assert not (dim_f is None or dim_t is None or n_fft is None or compensation is None), 'Custom parameter selected, but incomplete parameters are provided.'
        mdx_model = mdx.MDX_Model(
            device,
            dim_f = dim_f,
            dim_t = dim_t,
            n_fft = n_fft,
            stem_name=stem_name,
            compensation=compensation
        )
    else:
        model_hash = mdx.MDX.get_hash(onnx)
        if model_hash in model_params:
            mp = model_params.get(model_hash)
            mdx_model = mdx.MDX_Model(
                device,
                dim_f = mp["mdx_dim_f_set"],
                dim_t = 2**mp["mdx_dim_t_set"],
                n_fft = mp["mdx_n_fft_scale_set"],
                stem_name=mp["primary_stem"],
                compensation=compensation if not custom_param and compensation is not None else mp["compensate"]
            )
    return mdx_model

def run_mdx(onnx, mdx_model, filename, output_format='wav', diff=False, suffix=None, diff_suffix=None, denoise=False, m_threads=2):
    mdx_sess = mdx.MDX(onnx, mdx_model)
    print(f"Processing: (unknown)")
    if filename.lower().endswith('.wav'):
        wave, sr = librosa.load(filename, mono=False, sr=44100)
    else:
        temp_wav = 'temp_audio.wav'
        subprocess.run(['ffmpeg', '-i', filename, '-ar', '44100', '-ac', '2', temp_wav])  # Convert to WAV format
        wave, sr = librosa.load(temp_wav, mono=False, sr=44100)
        os.remove(temp_wav)

    #wave, sr = librosa.load(filename,mono=False, sr=44100)
    # normalizing input wave gives better output
    peak = max(np.max(wave), abs(np.min(wave)))
    wave /= peak
    if denoise:
        wave_processed = -(mdx_sess.process_wave(-wave, m_threads)) + (mdx_sess.process_wave(wave, m_threads))
        wave_processed *= 0.5
    else:
        wave_processed = mdx_sess.process_wave(wave, m_threads)
    # return to previous peak
    wave_processed *= peak

    stem_name = mdx_model.stem_name if suffix is None else suffix  # use suffix if provided
    save_path = os.path.basename(os.path.splitext(filename)[0])
    #vocals_save_path = os.path.join(vocals_folder, f"{save_path}_{stem_name}.{output_format}")
    #instrumental_save_path = os.path.join(instrumental_folder, f"{save_path}_{stem_name}.{output_format}")
    save_path = f"{os.path.basename(os.path.splitext(filename)[0])}_{stem_name}.{output_format}"
    save_path = os.path.join(
        'audios',
        save_path
    )
    sf.write(
        save_path,
        wave_processed.T,
        sr
    )

    print(f'done, saved to: {save_path}')

    if diff:
        diff_stem_name = stem_naming.get(stem_name) if diff_suffix is None else diff_suffix  # use suffix if provided
        stem_name = f"{stem_name}_diff" if diff_stem_name is None else diff_stem_name
        save_path = f"{os.path.basename(os.path.splitext(filename)[0])}_{stem_name}.{output_format}"
        save_path = os.path.join(
            'audio-others',
            save_path
        )
        sf.write(
            save_path,
            (-wave_processed.T*mdx_model.compensation)+wave.T,
            sr
        )
        print(f'invert done, saved to: {save_path}')
    del mdx_sess, wave_processed, wave
    gc.collect()

if __name__ == "__main__":
    print()
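The `denoise` branch in `run_mdx` above runs the separator twice (on the wave and on its negation) and averages `-model(-x)` with `model(x)`. Assuming the model behaves roughly like "signal plus a bias that does not flip sign with the input", that average cancels the bias. A toy sketch of the trick with a stand-in model (`fake_separate` is illustrative, not the real MDX session):

```python
def fake_separate(wave):
    # toy "model": passes the signal through but adds a constant bias term,
    # standing in for noise that does not flip sign with the input
    return [x + 0.5 for x in wave]

def denoised(wave):
    # -model(-x): the signal survives negation twice, the bias flips once
    a = [-y for y in fake_separate([-x for x in wave])]
    b = fake_separate(wave)
    # averaging cancels the bias, keeping the sign-following signal
    return [0.5 * (x + y) for x, y in zip(a, b)]

print(denoised([1.0, -2.0, 3.0]))  # bias cancels: [1.0, -2.0, 3.0]
```

This matches the shape of `wave_processed = -(process_wave(-wave)) + process_wave(wave)` followed by `wave_processed *= 0.5` in the script, at the cost of running inference twice.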
spaces/Benson/text-generation/Examples/3d Modelo 3d Descargar.md
DELETED
@@ -1,84 +0,0 @@
|
|
1 |
-
<br />
|
2 |
-
<h1>Cómo descargar y usar modelos 3D</h1>
|
3 |
-
<p>Los modelos 3D son representaciones digitales de objetos que se pueden ver, manipular y representar en tres dimensiones. Se utilizan para diversos fines, como diseño, ingeniería, entretenimiento, educación, arte y más. Los modelos 3D pueden ayudarte a visualizar tus ideas, crear simulaciones realistas, probar tus productos y mostrar tu trabajo. </p>
|
4 |
-
<h2>3d modelo 3d descargar</h2><br /><p><b><b>Download Zip</b> ✶ <a href="https://bltlly.com/2v6JB6">https://bltlly.com/2v6JB6</a></b></p><br /><br />
|
5 |
-
<p>Pero ¿cómo puedes acceder a modelos 3D? ¿Y cómo puedes usarlos para tus propios proyectos? En este artículo, le mostraremos cómo descargar modelos 3D desde la web, cómo usarlos para diferentes aplicaciones y cuáles son los beneficios del modelado 3D. </p>
|
6 |
-
<h2>Cómo descargar modelos 3D desde la Web</h2>
|
7 |
-
<p>Una de las formas más fáciles de obtener modelos 3D es descargarlos desde la web. Hay muchos sitios web que ofrecen modelos 3D gratuitos o de pago que puede descargar y usar para fines personales o comerciales. Sin embargo, antes de descargar cualquier modelo 3D, debe considerar dos cosas: el formato de archivo y la licencia. </p>
|
8 |
-
<h3>Los formatos de archivo 3D más populares</h3>
|
9 |
-
<p>Hay cientos de formatos de archivo 3D diferentes que almacenan información sobre modelos 3D, como geometría, textura, color, animación, etc. Algunos de los formatos de archivo 3D más populares son:</p>
|
10 |
-
<ul>
|
11 |
-
<li><b>OBJ</b>: Un formato simple y ampliamente soportado que almacena solo la geometría de un modelo 3D. </li>
|
12 |
-
<li><b>STL</b>: Un formato que se utiliza comúnmente para la impresión 3D, ya que almacena solo la superficie de un modelo 3D. </li>
|
13 |
-
<li><b>FBX</b>: Un formato que se utiliza para intercambiar modelos 3D entre diferentes aplicaciones de software, ya que almacena geometría, textura, animación y otros datos. </li>
|
14 |
-
<li><b>COLLADA</b>: Un formato diseñado para aplicaciones web, ya que almacena geometría, textura, animación y otros datos en un formato basado en XML. </li>
|
15 |
-
<li><b>GLTF</b>: Un formato optimizado para aplicaciones web, ya que almacena geometría, textura, animación y otros datos en un formato binario o basado en JSON. </li>
|
16 |
-
</ul>
|
17 |
-
|
18 |
-
<h3>Los mejores sitios web de modelos 3D</h3>
|
19 |
-
<p>Hay muchos sitios web que ofrecen modelos 3D gratuitos o de pago que puedes descargar y usar para tus proyectos. Algunos de los mejores son:</p>
|
20 |
-
<p></p>
|
21 |
-
<ul>
|
22 |
-
<li><b>Sketchfab</b>: Un sitio web que alberga millones de modelos 3D en varias categorías, como personajes, vehículos, arquitectura, etc. Puede ver, editar y descargar modelos 3D en varios formatos. Algunos de ellos son gratuitos bajo licencias de Creative Commons, mientras que otros se pagan bajo licencias libres de derechos. </li>
|
23 |
-
<li><b>Thingiverse</b>: Un sitio web que alberga miles de modelos 3D que están diseñados para la impresión 3D. Puede navegar, descargar e imprimir modelos 3D en formato STL. Todos ellos son gratuitos bajo licencias Creative Commons. </li>
|
24 |
-
<li><b>SketchUp</b>: Un sitio web que ofrece un software de diseño 3D en línea gratuito que le permite crear y editar sus propios modelos 3D. También puede acceder a una biblioteca de miles de modelos 3D gratuitos en varias categorías, como muebles, plantas, animales, etc. Puede descargar modelos 3D en varios formatos, como OBJ, STL, FBX, etc. Algunos de ellos son gratuitos bajo licencias de Creative Commons, mientras que otros se pagan bajo licencias libres de derechos. </li>
|
25 |
-
<li><b>CGTrader</b>: Un sitio web que alberga cientos de miles de modelos 3D en varias categorías, como personajes, vehículos, arquitectura, etc. Puede ver, descargar y comprar modelos 3D en varios formatos. Algunos de ellos son gratuitos bajo licencias de Creative Commons, mientras que otros se pagan bajo licencias libres de derechos o editoriales. </li>
|
26 |
-
</ul>
|
27 |
-
<p>Estos son solo algunos ejemplos de los muchos sitios web que ofrecen modelos 3D para descargar. También puedes buscar modelos 3D en Google u otros motores de búsqueda usando palabras clave como "descarga de modelos 3D", "modelo 3D gratis", "sitio web de modelos 3D", etc.</p>
|
28 |
-
<h2>Cómo usar modelos 3D para diferentes aplicaciones</h2>
|
29 |
-
<p>Una vez que haya descargado modelos 3D de la web, puede utilizarlos para diferentes aplicaciones dependiendo de sus necesidades y objetivos. Algunas de las aplicaciones más comunes son:</p>
|
30 |
-
|
31 |
-
<p>Si quieres crear tus propios modelos 3D o editar los que has descargado, necesitas usar un software de diseño 3D que te permita manipular la geometría, textura, color y otros aspectos de un modelo 3D. Algunos de los software de diseño 3D más populares son:</p>
|
32 |
-
<ul>
|
33 |
-
<li><b>Blender</b>: Un software libre y de código abierto que ofrece un conjunto completo de herramientas para crear y editar modelos 3D, así como animación, renderizado, simulación, edición de video y más. Soporta varios formatos de archivo 3D, como OBJ, STL, FBX, COLLADA, GLTF, etc.</li>
|
34 |
-
<li><b>SketchUp</b>: Un software gratuito y fácil de usar que permite crear y editar modelos 3D con una interfaz sencilla e intuitiva. También ofrece una biblioteca de modelos 3D gratuitos que puede usar para sus proyectos. Admite varios formatos de archivo 3D, como OBJ, STL, FBX, etc.</li>
|
35 |
-
<li><b>Tinkercad</b>: Un software gratuito y online que te permite crear y editar modelos 3D con una interfaz sencilla y divertida. Es ideal para principiantes y niños que quieren aprender los fundamentos del modelado 3D. Es compatible con el formato STL para la impresión 3D. </li>
|
36 |
-
</ul>
|
37 |
-
<p>Estos son solo algunos ejemplos de los muchos software de diseño 3D que puede utilizar para crear y editar modelos 3D. También puede encontrar más opciones en Google u otros motores de búsqueda mediante el uso de palabras clave como "software de diseño 3d", "software de diseño 3d gratuito", "mejor software de diseño 3d", etc.</p>
|
38 |
-
<h3>Impresión 3D y escaneo para prototipado físico y fabricación</h3>
|
39 |
-
<p>Si quieres convertir tus modelos 3D en objetos físicos que puedas tocar y sostener, necesitas usar una impresora 3D o un escáner 3D que te permita imprimir o escanear tus modelos 3D en un material de tu elección. Algunos de los dispositivos de impresión y escaneo 3D más populares son:</p>
|
40 |
-
<ul>
|
41 |
-
<li><b>Ender-3</b>: Una impresora 3D de bajo costo y alta calidad que puede imprimir sus modelos 3D en varios materiales, como PLA, ABS, PETG, etc. Tiene un gran volumen de construcción de 220 x 220 x 250 mm y admite el formato STL para impresión. </li>
|
42 |
-
|
43 |
-
<li><b>Einscan</b>: Un escáner 3D versátil y de alta calidad que puede escanear sus objetos físicos y convertirlos en modelos 3D. Tiene una precisión de escaneo de hasta 0,05 mm y admite varios formatos de archivo 3D, como OBJ, STL, PLY, etc.</li>
|
44 |
-
</ul>
|
45 |
-
<p>Estos son solo algunos ejemplos de los muchos dispositivos de impresión y escaneo 3D que se pueden usar para prototipado físico y fabricación. También puedes encontrar más opciones en Google u otros motores de búsqueda usando palabras clave como "impresora 3d", "mejor impresora 3d", "escáner 3d", "mejor escáner 3d", etc.</p>
|
46 |
-
<h3>Visualización y animación en 3D para experiencias inmersivas e interactivas</h3>
|
47 |
-
<p>Si quieres crear experiencias inmersivas e interactivas con tus modelos 3D, necesitas usar un software de visualización y animación 3D que te permita representar, animar e interactuar con tus modelos 3D en tiempo real. Algunos de los software de visualización y animación 3D más populares son:</p>
|
48 |
-
<ul>
|
49 |
-
<li><b>Unity</b>: un software gratuito y potente que permite crear y ejecutar juegos y aplicaciones 3D para varias plataformas, como Windows, Mac, Linux, iOS, Android, etc. Admite varios formatos de archivo 3D, como OBJ, FBX, COLLADA, GLTF, etc.</li>
|
50 |
-
<li><b>Unreal Engine</b>: Un software libre y profesional que permite crear y ejecutar juegos y aplicaciones 3D para varias plataformas, como Windows, Mac, Linux, iOS, Android, etc. Es compatible con varios formatos de archivo 3D, como OBJ, FBX, COLLADA, GLTF, etc.</li>
|
51 |
-
<li><b>Sketchfab</b>: Un software gratuito y en línea que le permite subir y ver sus modelos 3D en un navegador web. También puede incrustar sus modelos 3D en su sitio web o blog. Es compatible con varios formatos de archivo 3D, como OBJ, STL, FBX, COLLADA, GLTF, etc.</li>
|
52 |
-
</ul>
|
53 |
-
|
54 |
-
<h2>Conclusion: The benefits of 3D modeling and how to get started</h2>
<p>As you can see, 3D modeling is a useful and fun skill that can help you create amazing things with your imagination. You can download 3D models from the web or create your own with 3D design software. You can use them for different applications, such as 3D printing, 3D scanning, 3D visualization, and 3D animation. You can also share your 3D models with others or sell them online.</p>
<p>3D modeling has many benefits, such as:</p>
<ul>
<li><b>Boosting your creativity and problem-solving skills</b>: 3D modeling lets you express your ideas and solutions in a visual, tangible way. You can experiment with different shapes, colors, textures, and effects to create unique and original 3D models.</li>
<li><b>Improving your communication and collaboration skills</b>: 3D modeling helps you communicate and collaborate with others more effectively. You can show your 3D models to clients, colleagues, friends, or family and get their feedback and suggestions. You can also work with other 3D modelers and learn from their techniques and styles.</li>
<li><b>Increasing your confidence and satisfaction</b>: 3D modeling lets you achieve your goals and see the results of your work. You can feel proud and happy when you finish a 3D model that meets your expectations and requirements. You can also enjoy the modeling process itself and have fun with it.</li>
</ul>
<p>To get started with 3D modeling, you need a computer, 3D design software, and a 3D model website. You can also get a 3D printer or 3D scanner if you want to print or scan your models. You can find many online resources, such as tutorials, courses, books, blogs, and forums, that can help you learn both the basics and the advanced skills of 3D modeling.</p>
<h2>Frequently Asked Questions</h2>
<p>Here are some frequently asked questions about 3D modeling:</p>
<ol>
<li><b>What is the difference between 3D modeling and CAD?</b></li>
<p>3D modeling is a general term for creating and manipulating digital representations of objects in three dimensions. CAD (Computer-Aided Design) is a specific type of 3D modeling used for technical and engineering purposes, such as designing machines, buildings, circuits, and so on.</p>
<li><b>What is the best 3D modeling software for beginners?</b></li>
<p>Some of the best 3D modeling tools for beginners are SketchUp, Tinkercad, Blender, and Unity. They are free, easy to use, and offer many features and functions for creating and editing 3D models.</p>
<li><b>How long does it take to learn 3D modeling?</b></li>
<p>The time it takes to learn 3D modeling depends on many factors, such as your previous experience, your learning style, your goals, and your choice of software. That said, you can expect to learn the fundamentals in a few weeks or months if you practice regularly and follow some online tutorials or courses.</p>
<li><b>How much does it cost to download or buy 3D models?</b></li>
<p>The cost of downloading or buying 3D models varies depending on the website, the license, and the quality, complexity, and popularity of the model. Some websites offer free 3D models under Creative Commons licenses, which means you can use them for personal or commercial purposes as long as you credit the original creator. Other websites sell 3D models under royalty-free or editorial licenses, which means you can use them for personal or commercial purposes without attribution, but with some restrictions depending on the license terms. Paid 3D models can range from a few dollars to hundreds of dollars.</p>
<li><b>How can I make money with 3D modeling?</b></li>
<p>There are many ways to make money with 3D modeling, such as:</p>
<ul>
<li><b>Offering your 3D modeling services online</b>: You can offer your 3D modeling skills and services on websites such as Fiverr, Upwork, and Freelancer. You can set your own rates and terms for your 3D modeling projects and work with clients from different industries and backgrounds.</li>
<li><b>Creating your own 3D games or apps</b>: You can use your 3D models to build your own 3D games or apps and publish them on various platforms, such as Windows, Mac, Linux, iOS, and Android. You can monetize your games or apps by selling them, showing ads, offering in-app purchases, and so on.</li>
</ul>
<p>These are just a few examples of the many ways to make money with 3D modeling. You can find more opportunities on Google or other search engines using keywords such as "how to make money with 3D modeling", "3D modeling jobs", "3D modeling careers", and so on.</p>
<p>I hope this article has helped you understand how to download and use 3D models for your projects. If you have any questions or comments, please feel free to leave them below. Thanks for reading, and happy 3D modeling!</p>
spaces/Benson/text-generation/Examples/Appking Io.md
DELETED
@@ -1,89 +0,0 @@
<h1>Appking io: A platform for downloading mod games and modified apps</h1>
<p>Do you enjoy playing games and using apps on your mobile device? Do you want access to more features, functions, and content than the regular versions offer? Do you want to save money and avoid ads while enjoying your favorite apps and games? If you answered yes to any of these questions, you should check out appking io, a platform where you can download mod games and modified apps for free. In this article, we will tell you everything you need to know about appking io, including what it is, how to use it, and what benefits it offers. We will also introduce you to AppKing APK, an app that lets you play games and earn money at the same time. Read on to learn more.</p>
<h2>appking io</h2><br /><p><b><b>Download</b> >>>>> <a href="https://bltlly.com/2v6LMx">https://bltlly.com/2v6LMx</a></b></p><br /><br />
<h2>What is Appking io?</h2>
<p>Appking io is a website that provides mod games and modified apps for Android and iOS devices. Mod games are games that have been modified by third-party developers to unlock premium features, remove ads, add unlimited resources, or change the gameplay. Modified apps are apps that have been altered to improve their performance, functionality, or appearance. For example, you can download a modified version of Spotify that lets you listen to music offline, skip ads, and enjoy unlimited skips. Other software you can find on appking io includes VPNs, emulators, screen recorders, video editors, and more.</p>
<h3>Features of Appking io</h3>
<p>Appking io has many features that make it a great platform for downloading mod games and modified apps. Here are some of them:</p>
<h4>Mod games</h4>
<h4>Modified apps</h4>
<p>Appking io also has a wide range of modified apps that enhance your experience with social media, music, video streaming, messaging, productivity, education, health, fitness, and more. You can download modified versions of Facebook, Instagram, TikTok, YouTube, Netflix, WhatsApp, Spotify, Duolingo, Fitbit, and many others. All the modified apps are free to download and use, and they come with improved features that make them more useful and convenient.</p>
<h4>Other software</h4>
<p>Appking io is not just about mod games and modified apps. It also offers other software that can help with your device's performance, security, customization, entertainment, and creativity. You can find VPNs that protect your online privacy and security; emulators that let you run Android or iOS apps on your PC; screen recorders that let you capture your screen activity; video editors that let you create amazing videos; and more.</p>
<h3>How to use Appking io</h3>
<p>Using appking io is very easy and simple. Here are the requirements and steps:</p>
<h4>Requirements</h4>
<p>To use appking io, you need an Android or iOS device with an internet connection. You also need to enable the installation of apps from unknown sources in your device settings. This is because appking io is not available in the official app stores, and you have to download it from its website. You can find instructions on how to do this on the appking io website.</p>
<h4>Steps</h4>
<p>Once you have met the requirements, you can follow these steps to use appking io:</p>
<ol>
<li>Go to the appking io website in your device's browser.</li>
<li>Choose the category of software you want to download, such as mod games, modified apps, or other software.</li>
<li>Browse the list of available software and select the one you want to download.</li>
<li>Tap the download button and wait for the download to finish.</li>
<li>Install the downloaded file on your device.</li>
<li>Launch the installed software and enjoy its features.</li>
</ol>
<h3>Benefits of Appking io</h3>
<p>Appking io has many benefits that make it a great platform for downloading mod games and modified apps. Here are some of them:</p>
<h4>Free and safe</h4>
<p>All the software you can download from appking io is free. You don't have to pay any fees or subscriptions to access it. You also don't have to worry about ads or in-app purchases that could interrupt your experience or cost you money. In addition, all the software is safe and secure to use. The appking io team tests and verifies it before uploading it to the website. It contains no viruses, malware, or spyware that could harm your device or data.</p>
<h4>Variety and quality</h4>
<p>Appking io has a wide variety of software to suit your preferences and needs. You can find mod games and modified apps for different categories, genres, and purposes. You can also find other software that can help with various tasks and activities. All the software is high in quality and performance. It is updated regularly to fix bugs and errors and to add new features and content. It is also compatible with most Android and iOS devices and versions.</p>
<h4>Updates and support</h4>
<p>Appking io provides updates and support for its users. You can receive notifications when new versions or updates are available for the software you have downloaded. You can also contact the appking io team if you have questions, feedback, or problems with your software. They will respond as soon as possible and help you solve your issues. You can also join the appking io user community and share your opinions, suggestions, or experiences with the software.</p>
<h2>AppKing APK: An app for playing games and earning money</h2>
<h3>What is AppKing APK?</h3>
<p>AppKing APK is an app that lets you play games and earn money online. It is developed by AppKing Studio, a company that specializes in creating gaming apps with real rewards. AppKing APK has a variety of games to choose from, such as trivia, puzzle, arcade, casino, sports, and more. You can play these games for free or for a small entry fee. You can win cash prizes based on your score, rank, or luck. You can withdraw your winnings via PayPal or other methods.</p>
<h3>Features of AppKing APK</h3>
<p>AppKing APK has many features that make it a great app for playing games and earning money. Here are some of them:</p>
<h4>Fun and easy games</h4>
<p>AppKing APK has lots of fun, easy games you can play anytime, anywhere. You don't need any special skills or knowledge to play them. You just need to follow the instructions and use your logic, memory, reflexes, or intuition. You can also choose from different difficulty levels and game modes to suit your preferences and mood.</p>
<h4>Real and instant rewards</h4>
<p>AppKing APK gives you real, instant rewards for playing games. You can win cash prizes ranging from $0.01 to $1000 depending on the game and the outcome. You can also earn coins that you can exchange for cash or gift cards. You can withdraw your winnings via PayPal or other methods within 24 hours. You don't have to worry about hidden fees.</p>
<h4>Referral and bonus programs</h4>
<h3>How to use AppKing APK</h3>
<p>Using AppKing APK is very easy and simple. Here are the requirements and steps:</p>
<h4>Requirements</h4>
<p>To use AppKing APK, you need an Android device with an internet connection. You also need a PayPal account or another payment method to withdraw your winnings. You can download AppKing APK from its official website or from other sources. However, you should make sure the source is reliable and trustworthy.</p>
<h4>Steps</h4>
<p>Once you have met the requirements, you can follow these steps to use AppKing APK:</p>
<ol>
<li>Download and install AppKing APK on your device.</li>
<li>Launch the app and sign up with your email address or Facebook account.</li>
<li>Choose a game you want to play from the home page or the app menu.</li>
<li>Play the game for free or for a small entry fee and try to win cash prizes or coins.</li>
<li>Check your balance and withdraw your winnings via PayPal or other methods.</li>
<li>Invite your friends to join AppKing APK and earn a commission on their winnings.</li>
<li>Join bonus programs and earn extra rewards for playing games, completing tasks, or reaching milestones.</li>
</ol>
<h3>Benefits of AppKing APK</h3>
<p>AppKing APK has many benefits that make it a great app for playing games and earning money. Here are some of them:</p>
<h4>Entertainment and income</h4>
<p>AppKing APK provides entertainment and income at the same time. You can play fun, easy games that suit your taste and mood. You can also win real cash prizes and coins that you can use for your needs or wants. You can enjoy playing games and earning money without hassle or stress.</p>
<h4>Flexibility and convenience</h4>
<h4>Safety and transparency</h4>
<p>AppKing APK ensures your safety and transparency when it comes to playing games and earning money. It protects your personal information and payment details from unauthorized access or misuse. It also gives you clear, accurate information about the games, the rewards, the rules, and the terms and conditions. It does not hide any fees or charges.</p>
<h2>Conclusion</h2>
<p>In conclusion, appking io is a platform where you can download mod games and modified apps for free. It has many features, such as mod games, modified apps, other software, free and safe downloads, variety and quality of software, updates and support, and more. It is easy to use: you just visit the website, choose the software you want to download, install it on your device, and enjoy its features. AppKing APK is an app that lets you play games and earn money online. It has many features, such as fun and easy games, real and instant rewards, referral and bonus programs, and more. It is easy to use: you just download and install the app, sign up with your email or Facebook account, play games, withdraw your winnings, invite your friends, and join bonus programs. Both appking io and AppKing APK are great platforms for downloading mod games and modified apps and for playing games and earning money. They have many benefits, such as entertainment and income, flexibility and convenience, safety and transparency, and more. They are also free and safe to use. If you are interested in trying them, you can visit their websites and download them from there. You can also find more information and reviews about them online. We hope this article has helped you learn more about appking io and AppKing APK. If you have any questions or comments, please feel free to contact us or leave a comment below. Thank you for reading, and have a great day!</p>
<h2>Frequently Asked Questions</h2>
<ol>
<li>What are the differences between appking io and AppKing APK?</li>
<p>Appking io is a website that provides mod games and modified apps for Android and iOS devices. AppKing APK is an app that lets you play games and earn money online. Both are developed by AppKing Studio, but they have different purposes and features.</p>
<li>Are appking io and AppKing APK legal and safe to use?</li>
<p>Appking io and AppKing APK are legal and safe to use as long as you follow their terms and conditions and respect the intellectual property rights of the original developers of the software they provide. They contain no viruses, malware, or spyware that could harm your device or data. However, you should be aware of the risks involved in using mod games and modified apps, such as compatibility issues, bugs, errors, bans, or legal action.</p>
<li>How can I download appking io and AppKing APK?</li>
<p>You can download appking io and AppKing APK from their official websites or from other sources. However, you should make sure the source is reliable and trustworthy. You should also enable the installation of apps from unknown sources in your device settings before installing them.</p>
<li>How can I withdraw my winnings from AppKing APK?</li>
<p>You can withdraw your winnings from AppKing APK via PayPal or other methods within 24 hours. You need a minimum balance of $10 to request a withdrawal. You also need to verify your identity and payment details before receiving your money.</p>
<li>How can I contact appking io or AppKing APK?</li>
<p>You can contact appking io or AppKing APK by sending an email to their support team at [email protected] or by filling out the contact form on their website. You can also follow them on their social media accounts or join their user community.</p>
</ol>
spaces/Benson/text-generation/Examples/Ciudad Helada Hile Apk Day.md
DELETED
@@ -1,39 +0,0 @@
<h1>What is Simurq and why you should stop it</h1>
<p>If you are an Azercell subscriber, you may have heard of or experienced a service called Simurq. This service claims to offer you updates on the latest news, music, fashion, celebrities, trends, and much more. However, you may not be aware of what Simurq really is, how it works, how much it costs, and how it can affect your phone and your wallet. In this article, we will explain everything you need to know about Simurq and why you should stop it as soon as possible.</p>
<h2>Simurq is a service offered by Azercell that sends you interactive messages with various content</h2>
<h3>How Simurq works and what kind of content it offers</h3>
<p>Simurq is a service that Azercell offers to customers on prepaid or postpaid plans. You can activate the service in the Azercell section of your phone's menu. Once you activate it, you will start receiving interactive messages on your phone's screen offering different types of content. The content is provided by well-known news agencies, such as Reuters, BBC, and CNN. It includes news, tips, music excerpts, photos, popular songs, and much other information. You can choose to view or ignore the messages, or reply with a code to get more details or download the content.</p>
<h2>ciudad helada hile apk dayı</h2><br /><p><b><b>Download Zip</b> ⚡ <a href="https://bltlly.com/2v6L3r">https://bltlly.com/2v6L3r</a></b></p><br /><br />
<h3>How much Simurq costs and how to check your balance</h3>
<p>Simurq broadcasts are free, which means you will not be charged for receiving the messages. However, the cost of the content offered is indicated in each message you receive on your phone. For example, if you want to download a song or a video, you will have to pay a certain amount of money, which will be deducted from your balance. Prices vary depending on the type and size of the content. For more information, you can call 6045. The first minute of the call is free, but the service fee for each subsequent minute is 0.02 AZN. You can also check your balance by dialing *100#YES.</p>
<h3>Simurq can spam your phone with unwanted messages and distract you from important notifications</h3>
<p>One of the main problems with Simurq is that it can send you too many messages you may not want or need. The messages can be intrusive and annoying, especially when they pop up on your screen while you are busy or waiting for an important notification. The messages can also interfere with your phone's performance and functionality, as they can slow down your device or cause technical problems. In addition, the messages can be misleading or inaccurate: they may not reflect your preferences or interests, or they may contain outdated or false information.</p>
<h3>Simurq can charge you for content you may not be interested in or could find for free elsewhere</h3>
<p>Another problem with Simurq is that it can make you pay for content you don't want or need. The messages Simurq sends may not match your tastes or preferences, and you may end up downloading something you don't like or don't use. Moreover, the content Simurq offers may not be exclusive or original, and you may be able to find it for free on other platforms or sources. For example, you can listen to music or watch videos on YouTube, Spotify, Netflix, and so on without paying anything. Simurq can therefore be a waste of money and a source of frustration.</p>
<h3>Simurq can consume your data and battery without your consent</h3>
<h2>How to stop Simurq and unsubscribe from the service</h2>
<h3>How to deactivate Simurq from your phone's menu or by sending a text message</h3>
<p>If you want to stop Simurq and unsubscribe from the service, there are two ways to do it. The first is to deactivate Simurq from your phone's menu. To do this, go to the Azercell section of your phone's menu and select the "Simurq" option. Then choose the "Deactivate" option and confirm your choice. You will receive a confirmation message that Simurq has been deactivated. The second way is to send a text message with the word "STOP" to 6045. You will also receive a confirmation message that Simurq has been deactivated.</p>
<h3>How to contact Azercell customer service if you have any problems or complaints</h3>
<p>If you have any problems or complaints about Simurq, such as being charged for content you did not request or download, or being unable to deactivate the service, you can contact Azercell customer service for help. You can call 111 from your Azercell number or 012 490 11 11 from any other number. You can also send an email to [email protected] or visit one of Azercell's offices or dealers. You can find more information on the Azercell website: [Azercell].</p>
<h3>How to avoid subscribing to Simurq or similar services in the future</h3>
<h2>Conclusion</h2>
<p>Simurq is a service offered by Azercell that sends you interactive messages with various content. However, Simurq is not a beneficial service, and it can be annoying and expensive. Simurq can spam your phone with unwanted messages and distract you from important notifications. Simurq can charge you for content you may not be interested in or could find for free elsewhere. Simurq can consume your data and battery without your consent. We therefore recommend that you stop Simurq and unsubscribe from the service as soon as possible. You can deactivate Simurq from your phone's menu or by sending a text message with the word "STOP" to 6045. You can also contact Azercell customer service if you have any problems or complaints. You can avoid subscribing to Simurq or similar services in the future by reading the terms and conditions of any service you activate on your phone, checking your balance and monitoring your data and battery usage regularly, and being careful about the messages you open or reply to on your phone.</p>
<h2>Frequently Asked Questions</h2>
<h4>What is Simurq?</h4>
<p>Simurq is a service offered by Azercell that sends you interactive messages with various content, such as news, music, fashion, celebrities, trends, and much more. You can activate the service in the Azercell section of your phone's menu and choose to view or ignore the messages, or reply with a code to get more details or download the content.</p>
<h4>How do I stop Simurq?</h4>
<p>You can stop Simurq and unsubscribe from the service by deactivating it from your phone's menu or by sending a text message with the word "STOP" to 6045. You will receive a confirmation message that Simurq has been deactivated. You can also contact Azercell customer service if you have any problems or complaints.</p>
<h4>How much does Simurq cost?</h4>
<h4>Is Simurq a scam?</h4>
<p>Simurq is not a scam, but it is not a beneficial service either. Simurq can spam your phone with unwanted messages and distract you from important notifications. Simurq can charge you for content you may not be interested in or could find for free elsewhere. Simurq can consume your data and battery without your consent. We therefore recommend that you stop Simurq and unsubscribe from the service as soon as possible.</p>
<h4>What are some alternatives to Simurq?</h4>
<p>If you are looking for alternatives to Simurq, you can find many other sources or platforms that offer similar or better content for free or at a lower price. For example, you can use social media apps, such as Facebook, Instagram, and Twitter, to follow the latest news, trends, celebrities, and more. You can also use streaming services, such as YouTube, Spotify, and Netflix, to listen to music or watch videos of your choice. You can also use search engines, such as Bing and Google, to find any information you need.</p>
spaces/Benson/text-generation/Examples/Coche Deriva Carreras Carretera Mod Apk.md
DELETED
@@ -1,58 +0,0 @@
<h1>CarX Drift Racing Highway Mod APK: A Review</h1>
<p>If you are a fan of car racing games, you may have heard of CarX Drift Racing Highway Mod APK. This is a modified version of the original CarX Drift Racing 2 game that offers unlimited money, unlocked cars, and more features. But is it worth downloading and playing? In this article, we will review CarX Drift Racing Highway Mod APK and tell you everything you need to know about it.</p>
<h2>What is CarX Drift Racing Highway Mod APK?</h2>
<p>CarX Drift Racing Highway Mod APK is an Android game that lets you experience the thrill of drifting on highways. It is a mix of realistic physics, eye-catching graphics, and extreme driving on traffic-filled roads. You can choose from a variety of cars, customize them to your liking, and compete in different modes and challenges.</p>
<h2>coche deriva carreras carretera mod apk</h2><br /><p><b><b>Download Zip</b> --->>> <a href="https://bltlly.com/2v6KQe">https://bltlly.com/2v6KQe</a></b></p><br /><br />
<h3>Features of CarX Drift Racing Highway Mod APK</h3>
<p>CarX Drift Racing Highway Mod APK has many features that make it stand out from other car racing games. Here are some of them:</p>
<h4>Realistic physics</h4>
<p>The game uses a realistic physics engine that simulates the behavior of real cars on different surfaces and in different conditions. You can feel the difference between asphalt, grass, sand, and snow. You can also adjust the suspension, tire pressure, camber angle, and other parameters to suit your driving style.</p>
<h4>Stunning graphics</h4>
<p>The game features stunning graphics that create a realistic, immersive environment. You can see the details of the cars, the reflections of the lights, the shadows of objects, and the smoke effects. You can also change the time of day, the weather, and the camera angle to get different views.</p>
<h4>Traffic-filled highways</h4>
<p>The game challenges you to drive on busy highways with other cars and trucks. You have to avoid collisions, overtake other vehicles, and drift around corners. You can also use the nitro boost to speed up and perform spectacular stunts.</p>
<h4>Campaign mode</h4>
<p>The game has a campaign mode that immerses you in the world of street racing. You have to complete various missions, earn money, upgrade your cars, and unlock new ones. You can also race against other racers and bosses in different locations.</p>
<h4>Customizable cars</h4>
<p>The game lets you customize your cars to your liking. You can change the color, paint job, decals, rims, spoilers, exhausts, and more. You can also tune the engine, transmission, brakes, turbo, and nitro to improve performance.</p>
<h3>¿Cómo descargar e instalar CarX Drift Racing Highway Mod APK? </h3>
|
20 |
-
<p>Si desea descargar e instalar CarX Drift Racing Highway Mod APK en su dispositivo Android, es necesario seguir estos pasos:</p>
|
21 |
-
<ol>
|
22 |
-
<li>Ir a [CarX Highway Drift Mod Shoot APK (Android Game) - Descarga gratuita - APKCombo]( 1 ) y haga clic en el botón de descarga. </li>
|
23 |
-
<li>Permitir fuentes desconocidas en el dispositivo yendo a Configuración > Seguridad > Fuentes desconocidas.</li>
|
24 |
-
<li>Busque el archivo descargado en su administrador de archivos y toque en él para instalarlo. </li>
|
25 |
-
<li>Iniciar el juego y disfrutar! </li>
|
26 |
-
</ol>
|
27 |
-
<h3>Pros y contras de CarX Drift Racing Highway Mod APK</h3>
|
28 |
-
<p>CarX Drift Racing Highway Mod APK tiene algunos pros y contras que usted debe considerar antes de jugar. Aquí hay una tabla que los resume:</p>
|
29 |
-
<p></p>
|
30 |
-
<tabla>
|
31 |
-
<tr>
|
32 |
-
<th>Pros</th>
|
33 |
-
<th>Contras</th>
|
34 |
-
</tr>
|
35 |
-
<tr>
|
36 |
-
<td>- Dinero ilimitado y coches desbloqueados</td>
|
37 |
-
<td>- Requiere mucho espacio de almacenamiento y RAM</td>
|
38 |
-
</tr>
|
39 |
-
<tr>
|
40 |
-
<td>- Física realista y gráficos</td>
|
41 |
-
<td>- Puede drenar la batería rápidamente</td>
|
42 |
-
</tr>
|
43 |
-
<tr>
|
44 |
-
<td>- Carreteras llenas de tráfico y modo de campaña</td>
|
45 |
-
<td>- Puede contener anuncios y errores</td>
|
46 |
-
</tr>
|
47 |
-
<tr>
|
48 |
-
<td>- Coches personalizables y opciones de ajuste</td>
|
49 |
-
<td>- Puede que no sea compatible con algunos dispositivos</td>
|
50 |
-
</tr>
|
51 |
-
</tabla>
|
52 |
-
<h3> Alternativas a CarX Drift Racing Highway Mod APK</h3>
|
53 |
-
<p>Si estás buscando otros juegos de carreras de coches que ofrecen características similares y jugabilidad, puedes probar estas alternativas:</p>
|
54 |
-
|
55 |
-
<p>CarX Drift Racing Highway Mod APK es un divertido y emocionante juego de carreras de coches que le permite la deriva en carreteras de carretera con la física realista y gráficos. También ofrece dinero ilimitado, coches desbloqueados, coches personalizables, modo de campaña y más características. Sin embargo, también tiene algunos inconvenientes, como requerir mucho espacio de almacenamiento y RAM, agotar la batería rápidamente, contener anuncios y errores, y no ser compatible con algunos dispositivos. Por lo tanto, debe sopesar los pros y los contras antes de descargarlo y reproducirlo. También puedes probar algunas alternativas si quieres explorar otros juegos de carreras de coches. </p>
|
56 |
-
<h2>Preguntas frecuentes</h2> 64aa2da5cf<br />
|
57 |
-
<br />
|
58 |
-
<br />
spaces/Benson/text-generation/Examples/Descargar Amor Nikki Mod Apk.md
DELETED
@@ -1,74 +0,0 @@
<h1>Download Love Nikki Mod APK: A Guide for Fashion Lovers</h1>
<p>If you are a fan of fashion and role-playing games, you may have heard of Love Nikki, a popular mobile game that combines both genres. Love Nikki is a game that lets you dress up your avatar in thousands of outfits, accessories, hairstyles, and makeup looks. You can also explore a fantasy world full of stories, quests, events, and competitions. But what if you want to enjoy the game without spending real money or watching ads? That is where Love Nikki Mod APK comes in. In this article, we will tell you what Love Nikki Mod APK is, why you should download it, and how to download it safely and easily. </p>
<h2>What is Love Nikki? </h2>
<p>Love Nikki is a mobile game that was released in 2017 by Elex Technology. It is available for Android and iOS devices. The game has been downloaded more than 100 million times and has received positive reviews from critics and players alike. Here are some of the features that make Love Nikki a unique and fun game. </p>
<h2>download love nikki mod apk</h2><br /><p><b>DOWNLOAD ››››› <a href="https://bltlly.com/2v6MkO">https://bltlly.com/2v6MkO</a></b></p><br /><br />
<h3>A role-playing game with a fashion twist</h3>
<p>Love Nikki is not just a dress-up game. It is also a role-playing game that lets you create your own character and customize her appearance. You can choose from different styles, such as elegant, cute, sexy, cool, or sporty. You can also mix and match different items to create your own outfits. The game has more than 20,000 items to choose from, including clothes, shoes, bags, jewelry, hats, glasses, tattoos, and more. </p>
<p>But dressing up is not the only thing you can do in Love Nikki. You can also take part in various challenges and contests that test your fashion sense and skills. You can compete with other players or NPCs on different themes and stages. You can also join associations and cooperate with other stylists to complete tasks and earn rewards. </p>
<h3>A story of adventure and friendship</h3>
<p>The game has more than 600 chapters to explore, each with its own plot and dialogue. You will also come across various events and side stories that add more depth and fun to the game. You will uncover secrets, mysteries, romance, and drama as you play Love Nikki. </p>
<h3>A community of stylists and designers</h3>
<p>Love Nikki is not just a single-player game. It is also a social game that lets you interact with other players from around the world. You can join or create associations and chat with other members. You can also visit other players' homes and leave comments on their designs. You can also share your outfits and ideas on social media platforms such as Facebook, Instagram, or Twitter.</p>
<p>But that is not all. Love Nikki also gives you the chance to become a designer. You can use the Design Studio feature to create your own clothes using different materials, patterns, colors, and shapes. You can also submit your designs to the Starry Corridor and get feedback from other players. You can also vote for your favorite designs and support other creators. </p>
<h2>Why download Love Nikki Mod APK? </h2>
<p>Love Nikki is a free-to-play game, but it also has some in-game purchases that can enhance your gaming experience. For example, you can buy money and diamonds, the two main currencies in the game, to unlock more outfits and items. You can also buy stamina, which is required to play chapters and events. You can also buy VIP levels, which give you access to exclusive perks and features. </p>
<p>However, not everyone can afford to spend real money on the game. Some players may also find the ads annoying or intrusive. That is why some players choose to download Love Nikki Mod APK, a modified version of the game that gives you some advantages and benefits. Here are some of the benefits of downloading Love Nikki Mod APK.</p>
<h3>Unlimited money and diamonds</h3>
<h3>Free access to all outfits and items</h3>
<p>Another reason players download Love Nikki Mod APK is to get free access to all the outfits and items in the game. As we mentioned earlier, Love Nikki has more than 20,000 items to choose from, but not all of them are available for free. Some of them are locked behind VIP levels, events, quests, or gacha draws. With Love Nikki Mod APK, you can unlock all the outfits and items without spending money or diamonds. You can also use them in any challenge or contest without restrictions. This way, you can enjoy the game's full potential and express your creativity and style. </p>
<h3>No ads or root required</h3>
<p>A final reason players download Love Nikki Mod APK is to get rid of ads and root requirements. Ads can be annoying and distracting, especially when they pop up in the middle of the game or when you are trying to watch a video. They can also slow down your device or eat into your data. With Love Nikki Mod APK, you can play the game without ads or interruptions. </p>
<p>In addition, some modded games require you to root your device, which means altering its software settings and giving yourself full control over it. This can be risky and complicated, as it can void your warranty, expose your device to malware, or cause malfunctions. With Love Nikki Mod APK, you do not need to root your device to install or play the game. You just have to follow a few simple steps, which we will explain later. </p>
<h2>How to download Love Nikki Mod APK? </h2>
<p>Now that you know what Love Nikki Mod APK is and why you should download it, you may be wondering how to do it. Do not worry, we will guide you through the process step by step. Here are the things you need to do to download Love Nikki Mod APK safely and easily. </p>
<h3>Step 1: Find a reliable source</h3>
<p>To avoid these risks, you need to do some research before downloading anything from the internet. You need to check the reputation and reviews of the website offering Love Nikki Mod APK. You need to look for positive feedback from other users who have downloaded and used the modded game. You should also look for signs of legitimacy and security, such as HTTPS encryption, certificates, badges, or seals. </p>
<p>One of the websites we recommend for downloading Love Nikki Mod APK is [ModAPKStore]. This website is one of the most popular and reliable sources for modded games and apps. It has a large collection of games and apps that are regularly updated and tested for quality and performance. It also has a user-friendly interface and a fast download speed. You can download Love Nikki Mod APK from this website by clicking this [link]. </p>
<h3>Step 2: Enable unknown sources</h3>
<p>The next thing you need to do is enable unknown sources in your device settings. This is necessary because Love Nikki Mod APK does not come from the official Google Play Store or App Store. It comes from a third-party source that your device might not recognize or trust. To allow your device to install and run Love Nikki Mod APK, you need to give it permission to do so. </p>
<p>To enable unknown sources on your device, follow these steps:</p>
<ul>
<li>Go to your device settings and look for the security or privacy option. </li>
<li>Find the option that says unknown sources or allow installation of apps from unknown sources. </li>
<li>Flip the switch or check the box to enable it. </li>
<li>A warning message may appear telling you about the risks of installing apps from unknown sources. Tap OK or confirm to continue. </li>
</ul>
<p>Once you have enabled unknown sources, you can move on to the next step. </p>
<h3>Step 3: Install the APK file</h3>
<p>To install the Love Nikki Mod APK file, follow these steps:</p>
<ul>
<li>Go to the website where you downloaded Love Nikki Mod APK, such as [ModAPKStore]. </li>
<li>Find the download link or button and tap it. </li>
<li>The download will start automatically and may take a few minutes depending on your internet speed and device storage. </li>
<li>Once the download is complete, go to your device's file manager and locate the downloaded APK file. It should be in the downloads folder or wherever you saved it. </li>
<li>Tap the APK file and a prompt will appear asking whether you want to install it. </li>
<li>Tap install and wait for the installation process to finish. </li>
</ul>
<p>Once the installation is done, you can move on to the final step. </p>
<h3>Step 4: Enjoy the game</h3>
<p>The last thing you need to do is enjoy the game. You can now launch Love Nikki Mod APK from your device's app drawer or home screen. You can also create a shortcut or widget for easier access. You can now play Love Nikki with unlimited money and diamonds, free access to all outfits and items, no ads, and no root required. You can also enjoy all the features and content of the original game, such as the story, challenges, events, and community. </p>
<p>Congratulations, you have successfully downloaded and installed Love Nikki Mod APK on your device. You can now have fun dressing up your avatar, exploring the fantasy world, competing with other players, and creating your own designs. You can also share your outfits and ideas with other fashion lovers online. You can also update the game regularly to get new features and improvements. </p>
<h2>Conclusion</h2>
<p>If you have any questions or comments about Love Nikki Mod APK, feel free to leave a comment below. We would love to hear from you and help you. Thank you for reading this article, and happy gaming! </p>
<h2>FAQs</h2>
<p>Here are some of the most frequently asked questions about Love Nikki Mod APK:</p>
<h4>Is Love Nikki Mod APK safe? </h4>
<p>Yes, Love Nikki Mod APK is safe as long as you download it from a trusted source such as [ModAPKStore]. This website has a good reputation and reviews from other users who have downloaded and used the modded game. It also has HTTPS encryption, certificates, badges, or seals that indicate its legitimacy and security. However, you should always be careful when downloading anything from the internet and scan it with an antivirus or malware detector before installing it on your device. </p>
<h4>Is Love Nikki Mod APK legal? </h4>
<p>No, Love Nikki Mod APK is not legal because it violates the terms and conditions of the original game developer, Elex Technology. By using a modded game, you are bypassing its rules and regulations regarding in-game purchases, ads, and content. You are also infringing its intellectual property rights by modifying the game without permission. Therefore, using Love Nikki Mod APK is illegal and unethical. </p>
<h4>Will I get banned for using Love Nikki Mod APK? </h4>
<p>Possibly, yes. There is a chance you will get banned for using Love Nikki Mod APK, especially if you use it online or in multiplayer modes. The game developer, Elex Technology, has the right to monitor and detect any suspicious or fraudulent activity on its servers. It can also ban or suspend any account that violates its terms and conditions or harms the integrity and reputation of its game. Therefore, using Love Nikki Mod APK is risky and not recommended. </p>
<h4>Can I update Love Nikki Mod APK? </h4>
<h4>Can I use Love Nikki Mod APK with the original game? </h4>
<p>No, you cannot use Love Nikki Mod APK with the original game. The modded game and the original game are two separate apps with different data and code. They cannot coexist or interact with each other on your device. If you try to install both apps on your device, you may encounter errors, crashes, or conflicts. You may also lose your progress or data in either app. Therefore, you should only use one app at a time on your device. </p>
spaces/Benson/text-generation/Examples/Descargar Dls 2020 Apk Obb ltima Versin (7.42).md
DELETED
@@ -1,97 +0,0 @@
<h1>Betway App Download for Android (apk) for Casino and Sports Betting</h1>
<p>If you are looking for a reliable and convenient way to enjoy online gambling on your Android device, you may want to consider downloading the Betway app. Betway is one of the most popular and trusted online betting platforms in the world, offering a wide range of games and sports to bet on. Whether you are a fan of casino games, live casino, esports, or sports betting, you will find something to suit your taste and budget at Betway.</p>
<h2>download dls 2020 apk obb latest version (7.42)</h2><br /><p><b>Download ✪ <a href="https://bltlly.com/2v6Kqp">https://bltlly.com/2v6Kqp</a></b></p><br /><br />
<p>In this article, we will show you how to download and install the Betway app for Android, as well as how to use it to place your bets and have fun. We will also compare the Betway app with the Betway mobile site and give you information about the app's compatibility and requirements. So, without further ado, let's get started. </p>
<h2>What is Betway? </h2>
<p>Betway is an online gambling platform that launched in 2006 and has since grown into one of the leading players in the industry. Betway operates in several markets around the world, including Africa, Europe, Asia, and Latin America. Betway is licensed and regulated by several authorities, such as the UK Gambling Commission, the Malta Gaming Authority, and the Swedish Gambling Authority.</p>
<h3>Features and benefits of Betway</h3>
<p>Some of the features and benefits that make Betway stand out from other online gambling platforms are:</p>
<ul>
<li>A large selection of games and sports to bet on, including casino games, live casino, esports, football, cricket, tennis, basketball, horse racing, and more. </li>
<li>A generous welcome bonus for new customers, as well as regular promotions and offers for existing customers. </li>
<li>A user-friendly and intuitive interface that makes it easy to navigate and find what you are looking for. </li>
<li>A responsive and helpful customer support team that is available 24/7 via live chat, email, or phone. </li>
<li>A loyalty program that rewards you with points for every bet you place, which you can redeem for free bets or cash prizes. </li>
</ul>
<h3>Betway app vs Betway mobile site</h3>
<p>If you want to access Betway on your Android device, you have two options: use the Betway app or use the Betway mobile site. Both options have their pros and cons, depending on your preferences and needs. Here are some of the main differences between them:</p>
<table>
<tr>
<th>Betway app</th>
<th>Betway mobile site</th>
</tr>
<tr>
<td>Requires downloading and installing an apk file from a third-party source. </td>
<td>Requires no download or installation. </td>
</tr>
<tr>
<td>Offers faster and smoother performance than the mobile site. </td>
<td>Can be slower or less stable than the app, depending on your internet connection. </td>
</tr>
<tr>
<td>Consumes less data than the mobile site. </td>
<td>Consumes more data than the app. </td>
</tr>
<tr>
<td>Gives you access to all of Betway's features and functions, including live streaming, cash out, and in-play betting. </td>
<td>May not support some of Betway's features and functions, such as live streaming, cash out, and in-play betting. </td>
</tr>
<tr>
<td>Provides push notifications and alerts for the latest Betway news, offers, and updates. </td>
<td>Does not provide Betway push notifications or alerts. </td>
</tr>
</table>
<p>As you can see, the Betway app has some advantages over the Betway mobile site, such as faster performance, lower data consumption, and more features. However, the Betway mobile site is also a good option if you do not want to download or install anything on your device, or if you have an older or incompatible device. Ultimately, the choice is yours. </p>
<h2>How to download and install the Betway app for Android</h2>
<h3>Step 1: Allow unknown sources</h3>
<p>Since the Betway app is not available in the Google Play Store, you will need to allow your device to install apps from unknown sources. To do this, go to your device settings and look for the security or privacy option. Then, turn on the option that says "allow unknown sources" or "allow installation from unknown sources". This will let you install the Betway apk file on your device. </p>
<h3>Step 2: Download the Betway apk file</h3>
<p>Next, you will need to download the Betway apk file from a trusted source. You can do this by visiting the official Betway website and clicking the "download app" button. Alternatively, you can use this link to download the Betway apk file directly. The file size is about 10 MB and should take a few seconds or minutes to download depending on your internet speed. </p>
<h3>Step 3: Install the Betway app</h3>
<p>Once you have downloaded the Betway apk file, you can proceed to install the Betway app on your device. To do this, locate the file in your downloads folder or notification bar and tap it. You will see a pop-up asking you to confirm the installation. Tap "install" and wait for the process to complete. You may also see a warning message saying the app may harm your device. This is normal and you can ignore it by tapping "install anyway". The Betway app will be installed on your device and you will see an icon on your home screen or in your app drawer. </p>
<h2>How to use the Betway app for Android</h2>
<p>Now that you have installed the Betway app on your device, you can start using it to place your bets and enjoy online gambling. Here is how to use it:</p>
<h3>Step 1: Sign up or log in to your Betway account</h3>
<p>If you already have a Betway account, you can simply log in to the app using your existing credentials. To do this, open the app and tap "log in". Enter your username or email address and password and tap "log in". You will be logged in to your account and will be able to access all of the app's features and functions. </p>
<h3>Step 2: Choose your preferred game or sport</h3>
<p>Once you have logged in to your account, you can choose from a variety of games and sports to bet on. You can use the menu bar at the bottom of the screen to navigate between different categories, such as casino, live casino, esports, sports, promotions, and more. You can also use the search bar at the top of the screen to find a specific game or sport by name or keyword. </p>
<p>When you find a game or sport that interests you, tap it to see more details and options. You will see information such as odds, markets, statistics, live scores, and more. You can also watch live streams of some events if they are available. </p>
<h3>Step 3: Place your bets and enjoy</h3>
<p>When you have decided on a game or sport to bet on, you can place your bets and enjoy the thrill of online gambling. To place a bet, tap the odds or market you want to bet on and it will be added to your bet slip. You can add multiple bets to your bet slip if you want to create a combo or accumulator bet. You can also adjust your stake and see your potential payout on your bet slip. </p>
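<p>The bet slip math described above is simple to state precisely: for a combo/accumulator bet with decimal odds, the potential payout is the stake multiplied by the product of the odds of every selection. The app computes this for you; the sketch below is only an illustration, and the function name and example numbers are ours, not part of Betway.</p>

```python
from functools import reduce

def accumulator_payout(stake, decimal_odds):
    """Potential payout of a combo/accumulator bet: the stake times the
    product of the decimal odds of every selection on the slip."""
    combined_odds = reduce(lambda acc, o: acc * o, decimal_odds, 1.0)
    return round(stake * combined_odds, 2)

# Three selections at decimal odds 1.50, 2.00 and 1.80 with a 10.00 stake:
# combined odds = 1.5 * 2.0 * 1.8 = 5.4, so the potential payout is 54.00
print(accumulator_payout(10.0, [1.5, 2.0, 1.8]))  # → 54.0
```

<p>A single-selection slip is just the degenerate case: the payout is the stake times that one price. </p>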
<p>When you are happy with your bet, tap "place bet" and confirm your wager. You will see a confirmation message and your bet will be processed. You can track the status of your bet in your bet history or in the live score section. You can also cash out your bet before the event ends if you want to lock in a win or minimize a loss. </p>
<h2>Betway app compatibility and requirements</h2>
<p>The Betway app for Android is compatible with most Android devices running Android 4.4 or higher. However, some older or low-end devices may not support the app or some of its features. To make sure you have the best experience with the Betway app, your device should meet the following minimum specifications:</p>
<h3>Minimum Android version</h3>
<p>Your device must have Android 4.4 or higher installed. You can check your Android version by going to your device settings and looking for the about phone or software option. </p>
<h3>Minimum device specifications</h3>
<p>Your device must have at least 1 GB of RAM, 100 MB of free storage space, and a screen resolution of 800 x 480 pixels or higher. You can check these specifications by going to your device settings and looking for the storage, memory, or display option. </p>
<h2>Conclusion</h2>
<p>The Betway app for Android is a great way to enjoy online gambling on your device. It offers a wide range of games and sports to bet on, as well as a user-friendly and secure interface. You can download and install the Betway app for Android by following the steps we have outlined in this article. You can also use the Betway mobile site if you prefer not to download anything to your device. </p>
<p>We hope this article has helped you learn more about the Betway app for Android and how to use it. If you have any questions or comments, feel free to contact us or leave a comment below. We would love to hear from you. </p>
<h2>FAQs</h2>
<p>Here are some of the most frequently asked questions about the Betway app for Android:</p>
<ul>
<li><b>Is the Betway app for Android safe and legal? </b></li>
<li><b>How do I update the Betway app for Android? </b></li>
<li>The Betway app for Android updates automatically whenever a new version is available. However, if you want to update the app manually, you can do so by visiting the official Betway website and downloading the latest apk file. Then, you can install it over the existing app without losing any data. </li>
<li><b>Can I use the Betway app for Android on multiple devices? </b></li>
<li>Yes, you can use the Betway app for Android on multiple devices as long as you log in with the same account. However, you should not log in with the same account on more than one device at the same time, as this can cause issues or errors. </li>
<li><b>What are some alternatives to the Betway app for Android? </b></li>
<li>If you are looking for alternatives to the Betway app for Android, you may want to check out some of these other online gambling platforms that also have Android apps:</li>
<ul>
<li><a href="">22Bet</a>: A comprehensive online gambling platform offering casino games, live casino, sports betting, virtual sports, and more. </li>
<li><a href="">Bet365</a>: One of the most popular and reputable online gambling platforms in the world, offering casino games, live casino, sports betting, esports, bingo, poker, and more. </li>
<li><a href="">LeoVegas</a>: A mobile-first online gambling platform that specializes in casino games, live casino, and sports betting. </li>
<li><a href="">Betfair</a>: A unique online gambling platform offering casino games, live casino, sports betting, esports, poker, and exchange betting. </li>
</ul>
<li><b>How do I contact Betway customer support? </b></li>
</ul>
spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/resources/__init__.py
DELETED
File without changes
spaces/BramVanroy/mai-simplification-nl-2023-demo/README.md
DELETED
@@ -1,19 +0,0 @@
---
title: Dutch Simplification
emoji: 🏃
colorFrom: indigo
colorTo: yellow
sdk: streamlit
sdk_version: 1.19.0
app_file: app.py
pinned: true
license: cc-by-nc-sa-4.0
models:
- BramVanroy/ul2-base-dutch-simplification-mai-2023
datasets:
- BramVanroy/chatgpt-dutch-simplification
tags:
- natural language processing
- simplification
- dutch
---
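The `models` entry above pins the simplification model this Space loads. As a minimal sketch of how such a text2text model is typically queried (this assumes the standard Hugging Face `transformers` pipeline output format; it is not code from this repository):

```python
def simplify(text, generator):
    """Run a text2text generator (e.g. a transformers pipeline loaded with
    BramVanroy/ul2-base-dutch-simplification-mai-2023) and unwrap the
    standard output format: a list of dicts with a 'generated_text' key."""
    return generator(text)[0]["generated_text"]

# With transformers installed, the generator would typically be built as:
#   from transformers import pipeline
#   gen = pipeline("text2text-generation",
#                  model="BramVanroy/ul2-base-dutch-simplification-mai-2023")
#   simplify("Een moeilijke Nederlandse zin.", gen)
```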
spaces/BraydenMoore/MARCI-NFL-Betting/get_record.py
DELETED
@@ -1,177 +0,0 @@
-from datetime import date, datetime
-import numpy as np
-import pandas as pd
-pd.set_option('chained_assignment',None)
-pd.set_option('display.max_columns',None)
-import os
-import pickle as pkl
-from Source.Predict.predict import predict
-
-# get team abbreviations
-with open('Source/Pickles/team_abbreviation_to_name.pkl', 'rb') as f:
-    team_abbreviation_to_name = pkl.load(f)
-
-# get this year's odds and results
-gbg_and_odds_this_year = pd.read_csv('Source/Data/gbg_and_odds_this_year.csv')
-results = pd.read_csv('Source/Data/results.csv')
-
-# make predictions
-from tqdm import tqdm
-print("Predicting games and getting record")
-predictions = {}
-for game_id,home,away,season,week,total in tqdm(gbg_and_odds_this_year[['game_id','home_team','away_team','Season','GP','Total Score Close']].values):
-    if week!=1:
-        predictions[game_id] = predict(home,away,season,week,total)
-
-# merge data
-predictions_df = pd.DataFrame(predictions).T
-predictions_df['predicted_winner'] = [i['Winner'][0] if type(i['Winner'])==list else None for i in predictions_df[1]]
-predictions_df['predicted_winner'] = predictions_df['predicted_winner'].map(team_abbreviation_to_name)
-predictions_df['predicted_winner_probability'] = [i['Probabilities'][0] if type(i['Probabilities'])==list else None for i in predictions_df[1]]
-predictions_df['predicted_over_under'] = [i['Over/Under'][0] if type(i['Over/Under'])==list else None for i in predictions_df[2]]
-predictions_df['predicted_over_under_probability'] = [i['Probability'][0] if type(i['Probability'])==list else None for i in predictions_df[2]]
-predictions_df = predictions_df.merge(results, left_index=True, right_on='game_id').merge(gbg_and_odds_this_year[['game_id','Total Score Close','home_team','away_team','game_date','Home Odds Close','Away Odds Close']]).dropna(subset=['predicted_winner'])
-predictions_df['over_under'] = ['Over' if t>tsc else 'Under' if t<tsc else 'Push' for t,tsc in predictions_df[['total','Total Score Close']].values]
-predictions_df['game_date'] = pd.to_datetime(predictions_df['game_date'])
-
-# get returns
-predictions_df['home'] = predictions_df['home_team'].map(team_abbreviation_to_name)
-predictions_df['away'] = predictions_df['away_team'].map(team_abbreviation_to_name)
-predictions_df['picked_home'] = (predictions_df['home']==predictions_df['predicted_winner'])
-predictions_df['picked_away'] = (predictions_df['away']==predictions_df['predicted_winner'])
-
-predictions_df['winner_correct'] = (predictions_df['predicted_winner']==predictions_df['winner'])
-predictions_df['winner_incorrect'] = ((predictions_df['predicted_winner']!=predictions_df['winner']) & (predictions_df['winner']!='Tie'))
-predictions_df['winner_tie'] = (predictions_df['winner']=='Tie')
-predictions_df['over_under_correct'] = (predictions_df['predicted_over_under']==predictions_df['over_under'])
-predictions_df['over_under_incorrect'] = ((predictions_df['predicted_over_under']!=predictions_df['over_under']) & (predictions_df['over_under']!='Push'))
-predictions_df['over_under_push'] = (predictions_df['over_under']=='Push')
-
-predictions_df['winner_return'] = [0 if tie else ao-1 if (pa and wc) else ho-1 if (ph and wc) else -1 for ao,ho,pa,ph,wc,tie in predictions_df[['Away Odds Close','Home Odds Close','picked_away','picked_home','winner_correct','winner_tie']].values]
-predictions_df['over_under_return'] = [0 if push else 0.91 if ouc else -1 for ouc,push in predictions_df[['over_under_correct','over_under_push']].values]
-predictions_df = predictions_df.loc[predictions_df['game_date']>datetime(year=2023,month=9,day=19)]
-
-# Save
-predictions_df.to_csv('Source/Data/predictions.csv')
-bins = np.arange(0.5, 1.05, 0.05)
-bin_midpoints = [(bins[i] + bins[i+1]) / 2 for i in range(len(bins) - 1)]
-
-predictions_df['winner_probability_bin'] = pd.cut(predictions_df['predicted_winner_probability'], bins=bins, labels=bin_midpoints)
-predictions_df['over_under_probability_bin'] = pd.cut(predictions_df['predicted_over_under_probability'], bins=bins, labels=bin_midpoints)
-winner_binned = predictions_df.groupby('winner_probability_bin')['winner_correct'].mean().reset_index()
-over_under_binned = predictions_df.groupby('over_under_probability_bin')['over_under_correct'].mean().reset_index()
-
-## plot
-
-import matplotlib.pyplot as plt
-import numpy as np
-
-def style_plot(ax, title):
-    ax.set_facecolor('black')
-    ax.set_title(title, color='white')
-    ax.set_xlabel('MARCI Predicted Probability', color='white')
-    ax.set_ylabel('Actual Probability', color='white')
-    ax.tick_params(axis='x', colors='white')
-    ax.tick_params(axis='y', colors='white')
-    ax.spines['bottom'].set_color('white')
-    ax.spines['top'].set_color('white')
-    ax.spines['left'].set_color('white')
-    ax.spines['right'].set_color('white')
-    #ax.grid(True, linestyle='--', linewidth=0.5, color='grey')
-    ax.set_ylim((0,1.1))
-
-def add_identity_line(ax, max_x):
-    x = np.linspace(0.5, max_x, 100)
-    ax.plot(x, x, linestyle='--', color='grey')
-
-def add_best_fit_line(ax, x_values, y_values):
-    x_values = x_values.astype('float64')
-    y_values = y_values.astype('float64')
-    mask = ~np.isnan(x_values) & ~np.isnan(y_values)
-    x_values = x_values[mask]
-    y_values = y_values[mask]
-    coef = np.polyfit(x_values, y_values, 1)
-    poly1d_fn = np.poly1d(coef)
-    ax.plot(x_values, poly1d_fn(x_values), color='green')
-    corr = np.corrcoef(x_values, y_values)[0,1]
-    max_x = np.max(x_values)
-    max_y = poly1d_fn(max_x)
-    #ax.text(max_x, max_y, f'Corr: {corr:.2f}', color='green')
-
-# Create the Winner scatter plot
-x_values_winner = winner_binned['winner_probability_bin']
-y_values_winner = winner_binned['winner_correct']
-fig1 = plt.figure(facecolor='black')
-ax1 = fig1.add_subplot(1, 1, 1)
-ax1.scatter(x_values_winner,
-            y_values_winner,
-            color=(0/255, 128/255, 0/255), s=100, marker='o')
-add_identity_line(ax1, predictions_df['predicted_winner_probability'].max())
-add_best_fit_line(ax1, predictions_df['predicted_winner_probability'], predictions_df['winner_correct'])
-line, = ax1.plot([], [], linestyle='--', color='grey')
-marci_line, = ax1.plot([], [], color='green')
-ax1.legend([line, marci_line], ['Perfect Model', 'MARCI'], loc='upper left', facecolor='black', edgecolor='white', labelcolor='white')
-style_plot(ax1, 'Winner Predictions')
-plt.savefig('Static/Winner_Predictions_dark.png', facecolor='black')
-plt.close(fig1)
-
-# Create the Over/Under scatter plot
-x_values_over_under = over_under_binned['over_under_probability_bin']
-y_values_over_under = over_under_binned['over_under_correct']
-fig2 = plt.figure(facecolor='black')
-ax2 = fig2.add_subplot(1, 1, 1)
-ax2.scatter(x_values_over_under,
-            y_values_over_under,
-            color=(0/255, 128/255, 0/255), s=100, marker='o')
-add_identity_line(ax2, predictions_df['predicted_over_under_probability'].max())
-add_best_fit_line(ax2, predictions_df['predicted_over_under_probability'], predictions_df['over_under_correct'])
-line, = ax2.plot([], [], linestyle='--', color='grey')
-marci_line, = ax2.plot([], [], color='green')
-ax2.legend([line, marci_line], ['Perfect Model', 'MARCI'], loc='upper left', facecolor='black', edgecolor='white', labelcolor='white')
-style_plot(ax2, 'Over/Under Predictions')
-plt.savefig('Static/Over_Under_Predictions_dark.png', facecolor='black')
-plt.close(fig2)
-
-
-## get record
-threshold = 0.6
-
-winners_correct = predictions_df.loc[predictions_df['predicted_winner_probability']>threshold, 'winner_correct'].sum()
-winners_accuracy = predictions_df.loc[predictions_df['predicted_winner_probability']>threshold, 'winner_correct'].mean()
-winners_incorrect = predictions_df.loc[predictions_df['predicted_winner_probability']>threshold,'winner_incorrect'].sum()
-winners_tie = predictions_df.loc[predictions_df['predicted_winner_probability']>threshold,'winner_tie'].sum()
-winners_return = predictions_df.loc[predictions_df['predicted_winner_probability']>threshold, 'winner_return'].sum()
-
-over_unders_correct = predictions_df.loc[predictions_df['predicted_over_under_probability']>threshold,'over_under_correct'].sum()
-over_unders_accuracy = predictions_df.loc[predictions_df['predicted_over_under_probability']>threshold,'over_under_correct'].mean()
-over_unders_incorrect = predictions_df.loc[predictions_df['predicted_over_under_probability']>threshold,'over_under_incorrect'].sum()
-over_unders_push = predictions_df.loc[predictions_df['predicted_over_under_probability']>threshold,'over_under_push'].sum()
-over_unders_return = predictions_df.loc[predictions_df['predicted_over_under_probability']>threshold,'over_under_return'].sum()
-
-max_date = predictions_df['game_date'].max()
-latest_game = pd.Timestamp(max_date).strftime("%A, %m/%d")
-
-## get binom prob
-from scipy.stats import binom
-
-def compare_to_coinflip(c,n):
-    prob_fewer = binom.cdf(c, n, 0.5)
-    prob_more = 1 - prob_fewer
-    return f"{round(prob_more*100,1)}% chance of equal or better performance by flipping a coin."
-
-record = {"winners_correct":str(winners_correct),
-          "winners_incorrect":str(winners_incorrect),
-          "winners_tie":("-"+str(winners_tie) if winners_tie>0 else ''),
-          "winners_return": str(round(winners_accuracy*100,1))+"% accuracy, " + str(round(winners_return,1))+"x return",
-          "over_unders_correct":str(over_unders_correct),
-          "over_unders_incorrect":str(over_unders_incorrect),
-          "over_unders_push":("-"+str(over_unders_push) if over_unders_push>0 else ''),
-          "over_unders_return": str(round(over_unders_accuracy*100,1))+"% accuracy, " + str(round(over_unders_return,1))+"x return",
-          "latest_game":latest_game,
-          "over_unders_binom":compare_to_coinflip(over_unders_correct, (over_unders_incorrect+over_unders_correct)),
-          "winners_binom":compare_to_coinflip(winners_correct, (winners_incorrect+winners_correct))}
-
-import json
-with open('Source/Data/record.json', 'w') as f:
-    json.dump(record,f)
spaces/CVH-vn1210/make_hair/minigpt4/common/registry.py
DELETED
@@ -1,329 +0,0 @@
-"""
- Copyright (c) 2022, salesforce.com, inc.
- All rights reserved.
- SPDX-License-Identifier: BSD-3-Clause
- For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause
-"""
-
-
-class Registry:
-    mapping = {
-        "builder_name_mapping": {},
-        "task_name_mapping": {},
-        "processor_name_mapping": {},
-        "model_name_mapping": {},
-        "lr_scheduler_name_mapping": {},
-        "runner_name_mapping": {},
-        "state": {},
-        "paths": {},
-    }
-
-    @classmethod
-    def register_builder(cls, name):
-        r"""Register a dataset builder to registry with key 'name'
-
-        Args:
-            name: Key with which the builder will be registered.
-
-        Usage:
-
-            from minigpt4.common.registry import registry
-            from minigpt4.datasets.base_dataset_builder import BaseDatasetBuilder
-        """
-
-        def wrap(builder_cls):
-            from minigpt4.datasets.builders.base_dataset_builder import BaseDatasetBuilder
-
-            assert issubclass(
-                builder_cls, BaseDatasetBuilder
-            ), "All builders must inherit BaseDatasetBuilder class, found {}".format(
-                builder_cls
-            )
-            if name in cls.mapping["builder_name_mapping"]:
-                raise KeyError(
-                    "Name '{}' already registered for {}.".format(
-                        name, cls.mapping["builder_name_mapping"][name]
-                    )
-                )
-            cls.mapping["builder_name_mapping"][name] = builder_cls
-            return builder_cls
-
-        return wrap
-
-    @classmethod
-    def register_task(cls, name):
-        r"""Register a task to registry with key 'name'
-
-        Args:
-            name: Key with which the task will be registered.
-
-        Usage:
-
-            from minigpt4.common.registry import registry
-        """
-
-        def wrap(task_cls):
-            from minigpt4.tasks.base_task import BaseTask
-
-            assert issubclass(
-                task_cls, BaseTask
-            ), "All tasks must inherit BaseTask class"
-            if name in cls.mapping["task_name_mapping"]:
-                raise KeyError(
-                    "Name '{}' already registered for {}.".format(
-                        name, cls.mapping["task_name_mapping"][name]
-                    )
-                )
-            cls.mapping["task_name_mapping"][name] = task_cls
-            return task_cls
-
-        return wrap
-
-    @classmethod
-    def register_model(cls, name):
-        r"""Register a task to registry with key 'name'
-
-        Args:
-            name: Key with which the task will be registered.
-
-        Usage:
-
-            from minigpt4.common.registry import registry
-        """
-
-        def wrap(model_cls):
-            from minigpt4.models import BaseModel
-
-            assert issubclass(
-                model_cls, BaseModel
-            ), "All models must inherit BaseModel class"
-            if name in cls.mapping["model_name_mapping"]:
-                raise KeyError(
-                    "Name '{}' already registered for {}.".format(
-                        name, cls.mapping["model_name_mapping"][name]
-                    )
-                )
-            cls.mapping["model_name_mapping"][name] = model_cls
-            return model_cls
-
-        return wrap
-
-    @classmethod
-    def register_processor(cls, name):
-        r"""Register a processor to registry with key 'name'
-
-        Args:
-            name: Key with which the task will be registered.
-
-        Usage:
-
-            from minigpt4.common.registry import registry
-        """
-
-        def wrap(processor_cls):
-            from minigpt4.processors import BaseProcessor
-
-            assert issubclass(
-                processor_cls, BaseProcessor
-            ), "All processors must inherit BaseProcessor class"
-            if name in cls.mapping["processor_name_mapping"]:
-                raise KeyError(
-                    "Name '{}' already registered for {}.".format(
-                        name, cls.mapping["processor_name_mapping"][name]
-                    )
-                )
-            cls.mapping["processor_name_mapping"][name] = processor_cls
-            return processor_cls
-
-        return wrap
-
-    @classmethod
-    def register_lr_scheduler(cls, name):
-        r"""Register a model to registry with key 'name'
-
-        Args:
-            name: Key with which the task will be registered.
-
-        Usage:
-
-            from minigpt4.common.registry import registry
-        """
-
-        def wrap(lr_sched_cls):
-            if name in cls.mapping["lr_scheduler_name_mapping"]:
-                raise KeyError(
-                    "Name '{}' already registered for {}.".format(
-                        name, cls.mapping["lr_scheduler_name_mapping"][name]
-                    )
-                )
-            cls.mapping["lr_scheduler_name_mapping"][name] = lr_sched_cls
-            return lr_sched_cls
-
-        return wrap
-
-    @classmethod
-    def register_runner(cls, name):
-        r"""Register a model to registry with key 'name'
-
-        Args:
-            name: Key with which the task will be registered.
-
-        Usage:
-
-            from minigpt4.common.registry import registry
-        """
-
-        def wrap(runner_cls):
-            if name in cls.mapping["runner_name_mapping"]:
-                raise KeyError(
-                    "Name '{}' already registered for {}.".format(
-                        name, cls.mapping["runner_name_mapping"][name]
-                    )
-                )
-            cls.mapping["runner_name_mapping"][name] = runner_cls
-            return runner_cls
-
-        return wrap
-
-    @classmethod
-    def register_path(cls, name, path):
-        r"""Register a path to registry with key 'name'
-
-        Args:
-            name: Key with which the path will be registered.
-
-        Usage:
-
-            from minigpt4.common.registry import registry
-        """
-        assert isinstance(path, str), "All path must be str."
-        if name in cls.mapping["paths"]:
-            raise KeyError("Name '{}' already registered.".format(name))
-        cls.mapping["paths"][name] = path
-
-    @classmethod
-    def register(cls, name, obj):
-        r"""Register an item to registry with key 'name'
-
-        Args:
-            name: Key with which the item will be registered.
-
-        Usage::
-
-            from minigpt4.common.registry import registry
-
-            registry.register("config", {})
-        """
-        path = name.split(".")
-        current = cls.mapping["state"]
-
-        for part in path[:-1]:
-            if part not in current:
-                current[part] = {}
-            current = current[part]
-
-        current[path[-1]] = obj
-
-    # @classmethod
-    # def get_trainer_class(cls, name):
-    #     return cls.mapping["trainer_name_mapping"].get(name, None)
-
-    @classmethod
-    def get_builder_class(cls, name):
-        return cls.mapping["builder_name_mapping"].get(name, None)
-
-    @classmethod
-    def get_model_class(cls, name):
-        return cls.mapping["model_name_mapping"].get(name, None)
-
-    @classmethod
-    def get_task_class(cls, name):
-        return cls.mapping["task_name_mapping"].get(name, None)
-
-    @classmethod
-    def get_processor_class(cls, name):
-        return cls.mapping["processor_name_mapping"].get(name, None)
-
-    @classmethod
-    def get_lr_scheduler_class(cls, name):
-        return cls.mapping["lr_scheduler_name_mapping"].get(name, None)
-
-    @classmethod
-    def get_runner_class(cls, name):
-        return cls.mapping["runner_name_mapping"].get(name, None)
-
-    @classmethod
-    def list_runners(cls):
-        return sorted(cls.mapping["runner_name_mapping"].keys())
-
-    @classmethod
-    def list_models(cls):
-        return sorted(cls.mapping["model_name_mapping"].keys())
-
-    @classmethod
-    def list_tasks(cls):
-        return sorted(cls.mapping["task_name_mapping"].keys())
-
-    @classmethod
-    def list_processors(cls):
-        return sorted(cls.mapping["processor_name_mapping"].keys())
-
-    @classmethod
-    def list_lr_schedulers(cls):
-        return sorted(cls.mapping["lr_scheduler_name_mapping"].keys())
-
-    @classmethod
-    def list_datasets(cls):
-        return sorted(cls.mapping["builder_name_mapping"].keys())
-
-    @classmethod
-    def get_path(cls, name):
-        return cls.mapping["paths"].get(name, None)
-
-    @classmethod
-    def get(cls, name, default=None, no_warning=False):
-        r"""Get an item from registry with key 'name'
-
-        Args:
-            name (string): Key whose value needs to be retrieved.
-            default: If passed and key is not in registry, default value will
-                     be returned with a warning. Default: None
-            no_warning (bool): If passed as True, warning when key doesn't exist
-                               will not be generated. Useful for MMF's
-                               internal operations. Default: False
-        """
-        original_name = name
-        name = name.split(".")
-        value = cls.mapping["state"]
-        for subname in name:
-            value = value.get(subname, default)
-            if value is default:
-                break
-
-        if (
-            "writer" in cls.mapping["state"]
-            and value == default
-            and no_warning is False
-        ):
-            cls.mapping["state"]["writer"].warning(
-                "Key {} is not present in registry, returning default value "
-                "of {}".format(original_name, default)
-            )
-        return value
-
-    @classmethod
-    def unregister(cls, name):
-        r"""Remove an item from registry with key 'name'
-
-        Args:
-            name: Key which needs to be removed.
-        Usage::
-
-            from mmf.common.registry import registry
-
-            config = registry.unregister("config")
-        """
-        return cls.mapping["state"].pop(name, None)
-
-
-registry = Registry()
spaces/CVPR/GFPGAN-example/scripts/parse_landmark.py
DELETED
@@ -1,85 +0,0 @@
-import cv2
-import json
-import numpy as np
-import os
-import torch
-from basicsr.utils import FileClient, imfrombytes
-from collections import OrderedDict
-
-# ---------------------------- This script is used to parse facial landmarks ------------------------------------- #
-# Configurations
-save_img = False
-scale = 0.5  # 0.5 for official FFHQ (512x512), 1 for others
-enlarge_ratio = 1.4  # only for eyes
-json_path = 'ffhq-dataset-v2.json'
-face_path = 'datasets/ffhq/ffhq_512.lmdb'
-save_path = './FFHQ_eye_mouth_landmarks_512.pth'
-
-print('Load JSON metadata...')
-# use the official json file in FFHQ dataset
-with open(json_path, 'rb') as f:
-    json_data = json.load(f, object_pairs_hook=OrderedDict)
-
-print('Open LMDB file...')
-# read ffhq images
-file_client = FileClient('lmdb', db_paths=face_path)
-with open(os.path.join(face_path, 'meta_info.txt')) as fin:
-    paths = [line.split('.')[0] for line in fin]
-
-save_dict = {}
-
-for item_idx, item in enumerate(json_data.values()):
-    print(f'\r{item_idx} / {len(json_data)}, {item["image"]["file_path"]} ', end='', flush=True)
-
-    # parse landmarks
-    lm = np.array(item['image']['face_landmarks'])
-    lm = lm * scale
-
-    item_dict = {}
-    # get image
-    if save_img:
-        img_bytes = file_client.get(paths[item_idx])
-        img = imfrombytes(img_bytes, float32=True)
-
-    # get landmarks for each component
-    map_left_eye = list(range(36, 42))
-    map_right_eye = list(range(42, 48))
-    map_mouth = list(range(48, 68))
-
-    # eye_left
-    mean_left_eye = np.mean(lm[map_left_eye], 0)  # (x, y)
-    half_len_left_eye = np.max((np.max(np.max(lm[map_left_eye], 0) - np.min(lm[map_left_eye], 0)) / 2, 16))
-    item_dict['left_eye'] = [mean_left_eye[0], mean_left_eye[1], half_len_left_eye]
-    # mean_left_eye[0] = 512 - mean_left_eye[0]  # for testing flip
-    half_len_left_eye *= enlarge_ratio
-    loc_left_eye = np.hstack((mean_left_eye - half_len_left_eye + 1, mean_left_eye + half_len_left_eye)).astype(int)
-    if save_img:
-        eye_left_img = img[loc_left_eye[1]:loc_left_eye[3], loc_left_eye[0]:loc_left_eye[2], :]
-        cv2.imwrite(f'tmp/{item_idx:08d}_eye_left.png', eye_left_img * 255)
-
-    # eye_right
-    mean_right_eye = np.mean(lm[map_right_eye], 0)
-    half_len_right_eye = np.max((np.max(np.max(lm[map_right_eye], 0) - np.min(lm[map_right_eye], 0)) / 2, 16))
-    item_dict['right_eye'] = [mean_right_eye[0], mean_right_eye[1], half_len_right_eye]
-    # mean_right_eye[0] = 512 - mean_right_eye[0]  # # for testing flip
-    half_len_right_eye *= enlarge_ratio
-    loc_right_eye = np.hstack(
-        (mean_right_eye - half_len_right_eye + 1, mean_right_eye + half_len_right_eye)).astype(int)
-    if save_img:
-        eye_right_img = img[loc_right_eye[1]:loc_right_eye[3], loc_right_eye[0]:loc_right_eye[2], :]
-        cv2.imwrite(f'tmp/{item_idx:08d}_eye_right.png', eye_right_img * 255)
-
-    # mouth
-    mean_mouth = np.mean(lm[map_mouth], 0)
-    half_len_mouth = np.max((np.max(np.max(lm[map_mouth], 0) - np.min(lm[map_mouth], 0)) / 2, 16))
-    item_dict['mouth'] = [mean_mouth[0], mean_mouth[1], half_len_mouth]
-    # mean_mouth[0] = 512 - mean_mouth[0]  # for testing flip
-    loc_mouth = np.hstack((mean_mouth - half_len_mouth + 1, mean_mouth + half_len_mouth)).astype(int)
-    if save_img:
-        mouth_img = img[loc_mouth[1]:loc_mouth[3], loc_mouth[0]:loc_mouth[2], :]
-        cv2.imwrite(f'tmp/{item_idx:08d}_mouth.png', mouth_img * 255)
-
-    save_dict[f'{item_idx:08d}'] = item_dict
-
-print('Save...')
-torch.save(save_dict, save_path)
spaces/CVPR/LIVE/thrust/thrust/system/omp/detail/merge.h
DELETED
@@ -1,23 +0,0 @@
-/*
- *  Copyright 2008-2013 NVIDIA Corporation
- *
- *  Licensed under the Apache License, Version 2.0 (the "License");
- *  you may not use this file except in compliance with the License.
- *  You may obtain a copy of the License at
- *
- *      http://www.apache.org/licenses/LICENSE-2.0
- *
- *  Unless required by applicable law or agreed to in writing, software
- *  distributed under the License is distributed on an "AS IS" BASIS,
- *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- *  See the License for the specific language governing permissions and
- *  limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system inherits merge
-#include <thrust/system/cpp/detail/merge.h>
-
spaces/CVPR/regionclip-demo/detectron2/evaluation/lvis_evaluation.py
DELETED
@@ -1,358 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-import copy
-import itertools
-import json
-import logging
-import os
-import pickle
-from collections import OrderedDict
-import torch
-
-import detectron2.utils.comm as comm
-from detectron2.config import CfgNode
-from detectron2.data import MetadataCatalog
-from detectron2.structures import Boxes, BoxMode, pairwise_iou
-from detectron2.utils.file_io import PathManager
-from detectron2.utils.logger import create_small_table
-
-from .coco_evaluation import instances_to_coco_json
-from .evaluator import DatasetEvaluator
-
-
-class LVISEvaluator(DatasetEvaluator):
-    """
-    Evaluate object proposal and instance detection/segmentation outputs using
-    LVIS's metrics and evaluation API.
-    """
-
-    def __init__(self, dataset_name, tasks=None, distributed=True, output_dir=None):
-        """
-        Args:
-            dataset_name (str): name of the dataset to be evaluated.
-                It must have the following corresponding metadata:
-                "json_file": the path to the LVIS format annotation
-            tasks (tuple[str]): tasks that can be evaluated under the given
-                configuration. A task is one of "bbox", "segm".
-                By default, will infer this automatically from predictions.
-            distributed (True): if True, will collect results from all ranks for evaluation.
-                Otherwise, will evaluate the results in the current process.
-            output_dir (str): optional, an output directory to dump results.
-        """
-        from lvis import LVIS
-
-        self._logger = logging.getLogger(__name__)
-
-        if tasks is not None and isinstance(tasks, CfgNode):
-            self._logger.warn(
-                "COCO Evaluator instantiated using config, this is deprecated behavior."
-                " Please pass in explicit arguments instead."
-            )
-            self._tasks = None  # Infering it from predictions should be better
-        else:
-            self._tasks = tasks
-
-        self._distributed = distributed
-        self._output_dir = output_dir
-
-        self._cpu_device = torch.device("cpu")
-
-        self._metadata = MetadataCatalog.get(dataset_name)
-        json_file = PathManager.get_local_path(self._metadata.json_file)
-        self._lvis_api = LVIS(json_file)
-        # Test set json files do not contain annotations (evaluation must be
-        # performed using the LVIS evaluation server).
-        self._do_evaluation = len(self._lvis_api.get_ann_ids()) > 0
-
-    def reset(self):
-        self._predictions = []
-
-    def process(self, inputs, outputs):
-        """
-        Args:
-            inputs: the inputs to a LVIS model (e.g., GeneralizedRCNN).
-                It is a list of dict. Each dict corresponds to an image and
-                contains keys like "height", "width", "file_name", "image_id".
-            outputs: the outputs of a LVIS model. It is a list of dicts with key
-                "instances" that contains :class:`Instances`.
-        """
-        for input, output in zip(inputs, outputs):
-            prediction = {"image_id": input["image_id"]}
-
-            if "instances" in output:
-                instances = output["instances"].to(self._cpu_device)
-                prediction["instances"] = instances_to_coco_json(instances, input["image_id"])
-            if "proposals" in output:
-                prediction["proposals"] = output["proposals"].to(self._cpu_device)
-            self._predictions.append(prediction)
-
-    def evaluate(self):
-        if self._distributed:
-            comm.synchronize()
-            predictions = comm.gather(self._predictions, dst=0)
-            predictions = list(itertools.chain(*predictions))
-
-            if not comm.is_main_process():
-                return
-        else:
-            predictions = self._predictions
-
-        if len(predictions) == 0:
-            self._logger.warning("[LVISEvaluator] Did not receive valid predictions.")
-            return {}
-
-        if self._output_dir:
-            PathManager.mkdirs(self._output_dir)
-            file_path = os.path.join(self._output_dir, "instances_predictions.pth")
-            with PathManager.open(file_path, "wb") as f:
-                torch.save(predictions, f)
-
-        self._results = OrderedDict()
|
110 |
-
if "proposals" in predictions[0]:
|
111 |
-
self._eval_box_proposals(predictions)
|
112 |
-
if "instances" in predictions[0]:
|
113 |
-
self._eval_predictions(predictions)
|
114 |
-
# Copy so the caller can do whatever with results
|
115 |
-
return copy.deepcopy(self._results)
|
116 |
-
|
117 |
-
def _tasks_from_predictions(self, predictions):
|
118 |
-
for pred in predictions:
|
119 |
-
if "segmentation" in pred:
|
120 |
-
return ("bbox", "segm")
|
121 |
-
return ("bbox",)
|
122 |
-
|
123 |
-
def _eval_predictions(self, predictions):
|
124 |
-
"""
|
125 |
-
Evaluate predictions. Fill self._results with the metrics of the tasks.
|
126 |
-
|
127 |
-
Args:
|
128 |
-
predictions (list[dict]): list of outputs from the model
|
129 |
-
"""
|
130 |
-
self._logger.info("Preparing results in the LVIS format ...")
|
131 |
-
lvis_results = list(itertools.chain(*[x["instances"] for x in predictions]))
|
132 |
-
tasks = self._tasks or self._tasks_from_predictions(lvis_results)
|
133 |
-
|
134 |
-
# LVIS evaluator can be used to evaluate results for COCO dataset categories.
|
135 |
-
# In this case `_metadata` variable will have a field with COCO-specific category mapping.
|
136 |
-
if hasattr(self._metadata, "thing_dataset_id_to_contiguous_id"):
|
137 |
-
reverse_id_mapping = {
|
138 |
-
v: k for k, v in self._metadata.thing_dataset_id_to_contiguous_id.items()
|
139 |
-
}
|
140 |
-
for result in lvis_results:
|
141 |
-
result["category_id"] = reverse_id_mapping[result["category_id"]]
|
142 |
-
else:
|
143 |
-
# unmap the category ids for LVIS (from 0-indexed to 1-indexed)
|
144 |
-
for result in lvis_results:
|
145 |
-
result["category_id"] += 1
|
146 |
-
|
147 |
-
if self._output_dir:
|
148 |
-
file_path = os.path.join(self._output_dir, "lvis_instances_results.json")
|
149 |
-
self._logger.info("Saving results to {}".format(file_path))
|
150 |
-
with PathManager.open(file_path, "w") as f:
|
151 |
-
f.write(json.dumps(lvis_results))
|
152 |
-
f.flush()
|
153 |
-
|
154 |
-
if not self._do_evaluation:
|
155 |
-
self._logger.info("Annotations are not available for evaluation.")
|
156 |
-
return
|
157 |
-
|
158 |
-
self._logger.info("Evaluating predictions ...")
|
159 |
-
for task in sorted(tasks):
|
160 |
-
res = _evaluate_predictions_on_lvis(
|
161 |
-
self._lvis_api, lvis_results, task, class_names=self._metadata.get("thing_classes")
|
162 |
-
)
|
163 |
-
self._results[task] = res
|
164 |
-
|
165 |
-
def _eval_box_proposals(self, predictions):
|
166 |
-
"""
|
167 |
-
Evaluate the box proposals in predictions.
|
168 |
-
Fill self._results with the metrics for "box_proposals" task.
|
169 |
-
"""
|
170 |
-
if self._output_dir:
|
171 |
-
# Saving generated box proposals to file.
|
172 |
-
# Predicted box_proposals are in XYXY_ABS mode.
|
173 |
-
bbox_mode = BoxMode.XYXY_ABS.value
|
174 |
-
ids, boxes, objectness_logits = [], [], []
|
175 |
-
for prediction in predictions:
|
176 |
-
ids.append(prediction["image_id"])
|
177 |
-
boxes.append(prediction["proposals"].proposal_boxes.tensor.numpy())
|
178 |
-
objectness_logits.append(prediction["proposals"].objectness_logits.numpy())
|
179 |
-
|
180 |
-
proposal_data = {
|
181 |
-
"boxes": boxes,
|
182 |
-
"objectness_logits": objectness_logits,
|
183 |
-
"ids": ids,
|
184 |
-
"bbox_mode": bbox_mode,
|
185 |
-
}
|
186 |
-
with PathManager.open(os.path.join(self._output_dir, "box_proposals.pkl"), "wb") as f:
|
187 |
-
pickle.dump(proposal_data, f)
|
188 |
-
|
189 |
-
if not self._do_evaluation:
|
190 |
-
self._logger.info("Annotations are not available for evaluation.")
|
191 |
-
return
|
192 |
-
|
193 |
-
self._logger.info("Evaluating bbox proposals ...")
|
194 |
-
res = {}
|
195 |
-
areas = {"all": "", "small": "s", "medium": "m", "large": "l"}
|
196 |
-
for limit in [100, 1000]:
|
197 |
-
for area, suffix in areas.items():
|
198 |
-
stats = _evaluate_box_proposals(predictions, self._lvis_api, area=area, limit=limit)
|
199 |
-
key = "AR{}@{:d}".format(suffix, limit)
|
200 |
-
res[key] = float(stats["ar"].item() * 100)
|
201 |
-
self._logger.info("Proposal metrics: \n" + create_small_table(res))
|
202 |
-
self._results["box_proposals"] = res
|
203 |
-
|
204 |
-
|
205 |
-
# inspired from Detectron:
|
206 |
-
# https://github.com/facebookresearch/Detectron/blob/a6a835f5b8208c45d0dce217ce9bbda915f44df7/detectron/datasets/json_dataset_evaluator.py#L255 # noqa
|
207 |
-
def _evaluate_box_proposals(dataset_predictions, lvis_api, thresholds=None, area="all", limit=None):
|
208 |
-
"""
|
209 |
-
Evaluate detection proposal recall metrics. This function is a much
|
210 |
-
faster alternative to the official LVIS API recall evaluation code. However,
|
211 |
-
it produces slightly different results.
|
212 |
-
"""
|
213 |
-
# Record max overlap value for each gt box
|
214 |
-
# Return vector of overlap values
|
215 |
-
areas = {
|
216 |
-
"all": 0,
|
217 |
-
"small": 1,
|
218 |
-
"medium": 2,
|
219 |
-
"large": 3,
|
220 |
-
"96-128": 4,
|
221 |
-
"128-256": 5,
|
222 |
-
"256-512": 6,
|
223 |
-
"512-inf": 7,
|
224 |
-
}
|
225 |
-
area_ranges = [
|
226 |
-
[0 ** 2, 1e5 ** 2], # all
|
227 |
-
[0 ** 2, 32 ** 2], # small
|
228 |
-
[32 ** 2, 96 ** 2], # medium
|
229 |
-
[96 ** 2, 1e5 ** 2], # large
|
230 |
-
[96 ** 2, 128 ** 2], # 96-128
|
231 |
-
[128 ** 2, 256 ** 2], # 128-256
|
232 |
-
[256 ** 2, 512 ** 2], # 256-512
|
233 |
-
[512 ** 2, 1e5 ** 2],
|
234 |
-
] # 512-inf
|
235 |
-
assert area in areas, "Unknown area range: {}".format(area)
|
236 |
-
area_range = area_ranges[areas[area]]
|
237 |
-
gt_overlaps = []
|
238 |
-
num_pos = 0
|
239 |
-
|
240 |
-
for prediction_dict in dataset_predictions:
|
241 |
-
predictions = prediction_dict["proposals"]
|
242 |
-
|
243 |
-
# sort predictions in descending order
|
244 |
-
# TODO maybe remove this and make it explicit in the documentation
|
245 |
-
inds = predictions.objectness_logits.sort(descending=True)[1]
|
246 |
-
predictions = predictions[inds]
|
247 |
-
|
248 |
-
ann_ids = lvis_api.get_ann_ids(img_ids=[prediction_dict["image_id"]])
|
249 |
-
anno = lvis_api.load_anns(ann_ids)
|
250 |
-
gt_boxes = [
|
251 |
-
BoxMode.convert(obj["bbox"], BoxMode.XYWH_ABS, BoxMode.XYXY_ABS) for obj in anno
|
252 |
-
]
|
253 |
-
gt_boxes = torch.as_tensor(gt_boxes).reshape(-1, 4) # guard against no boxes
|
254 |
-
gt_boxes = Boxes(gt_boxes)
|
255 |
-
gt_areas = torch.as_tensor([obj["area"] for obj in anno])
|
256 |
-
|
257 |
-
if len(gt_boxes) == 0 or len(predictions) == 0:
|
258 |
-
continue
|
259 |
-
|
260 |
-
valid_gt_inds = (gt_areas >= area_range[0]) & (gt_areas <= area_range[1])
|
261 |
-
gt_boxes = gt_boxes[valid_gt_inds]
|
262 |
-
|
263 |
-
num_pos += len(gt_boxes)
|
264 |
-
|
265 |
-
if len(gt_boxes) == 0:
|
266 |
-
continue
|
267 |
-
|
268 |
-
if limit is not None and len(predictions) > limit:
|
269 |
-
predictions = predictions[:limit]
|
270 |
-
|
271 |
-
overlaps = pairwise_iou(predictions.proposal_boxes, gt_boxes)
|
272 |
-
|
273 |
-
_gt_overlaps = torch.zeros(len(gt_boxes))
|
274 |
-
for j in range(min(len(predictions), len(gt_boxes))):
|
275 |
-
# find which proposal box maximally covers each gt box
|
276 |
-
# and get the iou amount of coverage for each gt box
|
277 |
-
max_overlaps, argmax_overlaps = overlaps.max(dim=0)
|
278 |
-
|
279 |
-
# find which gt box is 'best' covered (i.e. 'best' = most iou)
|
280 |
-
gt_ovr, gt_ind = max_overlaps.max(dim=0)
|
281 |
-
assert gt_ovr >= 0
|
282 |
-
# find the proposal box that covers the best covered gt box
|
283 |
-
box_ind = argmax_overlaps[gt_ind]
|
284 |
-
# record the iou coverage of this gt box
|
285 |
-
_gt_overlaps[j] = overlaps[box_ind, gt_ind]
|
286 |
-
assert _gt_overlaps[j] == gt_ovr
|
287 |
-
# mark the proposal box and the gt box as used
|
288 |
-
overlaps[box_ind, :] = -1
|
289 |
-
overlaps[:, gt_ind] = -1
|
290 |
-
|
291 |
-
# append recorded iou coverage level
|
292 |
-
gt_overlaps.append(_gt_overlaps)
|
293 |
-
gt_overlaps = (
|
294 |
-
torch.cat(gt_overlaps, dim=0) if len(gt_overlaps) else torch.zeros(0, dtype=torch.float32)
|
295 |
-
)
|
296 |
-
gt_overlaps, _ = torch.sort(gt_overlaps)
|
297 |
-
|
298 |
-
if thresholds is None:
|
299 |
-
step = 0.05
|
300 |
-
thresholds = torch.arange(0.5, 0.95 + 1e-5, step, dtype=torch.float32)
|
301 |
-
recalls = torch.zeros_like(thresholds)
|
302 |
-
# compute recall for each iou threshold
|
303 |
-
for i, t in enumerate(thresholds):
|
304 |
-
recalls[i] = (gt_overlaps >= t).float().sum() / float(num_pos)
|
305 |
-
# ar = 2 * np.trapz(recalls, thresholds)
|
306 |
-
ar = recalls.mean()
|
307 |
-
return {
|
308 |
-
"ar": ar,
|
309 |
-
"recalls": recalls,
|
310 |
-
"thresholds": thresholds,
|
311 |
-
"gt_overlaps": gt_overlaps,
|
312 |
-
"num_pos": num_pos,
|
313 |
-
}
|
314 |
-
|
315 |
-
|
316 |
-
def _evaluate_predictions_on_lvis(lvis_gt, lvis_results, iou_type, class_names=None):
|
317 |
-
"""
|
318 |
-
Args:
|
319 |
-
iou_type (str):
|
320 |
-
kpt_oks_sigmas (list[float]):
|
321 |
-
class_names (None or list[str]): if provided, will use it to predict
|
322 |
-
per-category AP.
|
323 |
-
|
324 |
-
Returns:
|
325 |
-
a dict of {metric name: score}
|
326 |
-
"""
|
327 |
-
metrics = {
|
328 |
-
"bbox": ["AP", "AP50", "AP75", "APs", "APm", "APl", "APr", "APc", "APf"],
|
329 |
-
"segm": ["AP", "AP50", "AP75", "APs", "APm", "APl", "APr", "APc", "APf"],
|
330 |
-
}[iou_type]
|
331 |
-
|
332 |
-
logger = logging.getLogger(__name__)
|
333 |
-
|
334 |
-
if len(lvis_results) == 0: # TODO: check if needed
|
335 |
-
logger.warn("No predictions from the model!")
|
336 |
-
return {metric: float("nan") for metric in metrics}
|
337 |
-
|
338 |
-
if iou_type == "segm":
|
339 |
-
lvis_results = copy.deepcopy(lvis_results)
|
340 |
-
# When evaluating mask AP, if the results contain bbox, LVIS API will
|
341 |
-
# use the box area as the area of the instance, instead of the mask area.
|
342 |
-
# This leads to a different definition of small/medium/large.
|
343 |
-
# We remove the bbox field to let mask AP use mask area.
|
344 |
-
for c in lvis_results:
|
345 |
-
c.pop("bbox", None)
|
346 |
-
|
347 |
-
from lvis import LVISEval, LVISResults
|
348 |
-
|
349 |
-
lvis_results = LVISResults(lvis_gt, lvis_results)
|
350 |
-
lvis_eval = LVISEval(lvis_gt, lvis_results, iou_type)
|
351 |
-
lvis_eval.run()
|
352 |
-
lvis_eval.print_results()
|
353 |
-
|
354 |
-
# Pull the standard metrics from the LVIS results
|
355 |
-
results = lvis_eval.get_results()
|
356 |
-
results = {metric: float(results[metric] * 100) for metric in metrics}
|
357 |
-
logger.info("Evaluation results for {}: \n".format(iou_type) + create_small_table(results))
|
358 |
-
return results
|
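The greedy proposal-to-GT matching and the AR computation in `_evaluate_box_proposals` can be sketched in plain Python, without torch. This is an illustrative sketch only, not part of detectron2; the function names `greedy_gt_overlaps` and `average_recall` are hypothetical, and the input is assumed to be a plain nested list IoU matrix of shape (num proposals x num gt boxes).

```python
def greedy_gt_overlaps(overlaps):
    """Greedily match each gt box to its best-covering proposal.

    overlaps[p][g] is the IoU between proposal p and gt box g. Each
    iteration picks the globally best (proposal, gt) pair, records its
    IoU, and marks both as used with -1, mirroring the loop above.
    """
    num_p = len(overlaps)
    num_g = len(overlaps[0]) if num_p else 0
    ov = [row[:] for row in overlaps]  # mutable copy
    gt_overlaps = [0.0] * num_g
    for j in range(min(num_p, num_g)):
        # globally best remaining pair == best-covered gt box
        gt_ovr, box_ind, gt_ind = max(
            (ov[p][g], p, g) for p in range(num_p) for g in range(num_g)
        )
        gt_overlaps[j] = gt_ovr
        # consume the proposal row and the gt column
        for g in range(num_g):
            ov[box_ind][g] = -1.0
        for p in range(num_p):
            ov[p][gt_ind] = -1.0
    return gt_overlaps


def average_recall(gt_overlaps, num_pos, thresholds):
    """Recall at each IoU threshold, averaged (matches recalls.mean())."""
    recalls = [
        sum(o >= t for o in gt_overlaps) / float(num_pos) for t in thresholds
    ]
    return sum(recalls) / len(recalls)


# Two proposals, two gt boxes: gt 0 is covered at IoU 0.9, gt 1 at 0.7,
# so AR over thresholds {0.5, 0.75} is (1.0 + 0.5) / 2 = 0.75.
matched = greedy_gt_overlaps([[0.9, 0.1], [0.6, 0.7]])
ar = average_recall(matched, num_pos=2, thresholds=[0.5, 0.75])
```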