parquet-converter committed
Commit 2fad3fa · 1 Parent(s): 05c426d

Update parquet files (step 10 of 476)
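For context, a conversion step like this one rewrites dataset rows into parquet shards. The parquet-converter bot's actual tooling is not part of this commit, so the library choice (pyarrow), the row schema, and the shard name below are all illustrative assumptions — a minimal sketch only:

```python
# Hypothetical sketch of a rows-to-parquet step; the real converter's code,
# schema, and shard naming are not shown in this commit.
import pyarrow as pa
import pyarrow.parquet as pq

rows = [
    {"repo": "spaces/17TheWord/RealESRGAN", "path": "FAQ.md", "deleted_lines": 9},
]  # toy rows standing in for whatever the converter actually extracts
table = pa.Table.from_pylist(rows)                     # build an Arrow table from Python dicts
pq.write_table(table, "train-00000-of-00001.parquet")  # conventional HF shard name (assumed)
```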

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. spaces/17TheWord/RealESRGAN/FAQ.md +0 -9
  2. spaces/17TheWord/RealESRGAN/inference_realesrgan.py +0 -128
  3. spaces/1acneusushi/gradio-2dmoleculeeditor/data/FIFA 22 on Mac Two Ways to Experience the Game of Football.md +0 -50
  4. spaces/1gistliPinn/ChatGPT4/Examples/Cadence Allegro Extracta Exe Downloa.md +0 -6
  5. spaces/1gistliPinn/ChatGPT4/Examples/FaceRig Pro V1.312 (Inclu ALL DLC) Cheats Tool Download Free.md +0 -6
  6. spaces/1line/AutoGPT/tests/test_config.py +0 -84
  7. spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Arena of Valor Global and Enjoy Fast and Fun Matches in 15 Minutes or Less.md +0 -155
  8. spaces/1phancelerku/anime-remove-background/Chess Board Offline 2 Player - A Simple and Fun Chess App for Everyone.md +0 -110
  9. spaces/1phancelerku/anime-remove-background/Download Incredibox Now and Join the Merry Crew of Beatboxers on Your Android Device.md +0 -131
  10. spaces/1phancelerku/anime-remove-background/Experience Nintendo GameCube and Wii Games on Xbox with Dolphin Emulator A Step-by-Step Guide.md +0 -119
  11. spaces/1toTree/lora_test/ppdiffusers/experimental/rl/value_guided_sampling.py +0 -146
  12. spaces/A00001/bingothoo/src/pages/api/healthz.ts +0 -7
  13. spaces/AI-ZTH-03-23/README/README.md +0 -20
  14. spaces/AI-Zero-to-Hero/01-H5-Play-Canvas-Sim-Physics/style.css +0 -28
  15. spaces/AIFILMS/generate_human_motion/pyrender/tests/unit/__init__.py +0 -0
  16. spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/normalizing_flow/res_flow.py +0 -61
  17. spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/normalizing_flow/utils.py +0 -29
  18. spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/ps_flow.py +0 -135
  19. spaces/AIWaves/SOP_Generation-single/Agent/__init__.py +0 -1
  20. spaces/AIZeroToHero/04-Image2OCR/README.md +0 -13
  21. spaces/ASJMO/freegpt/g4f/Provider/Providers/Dfehub.py +0 -49
  22. spaces/Aashir01/Live_Transcription/app.py +0 -236
  23. spaces/Abhaykoul/Wizard-AI/README.md +0 -12
  24. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/layermanager.d.ts +0 -2
  25. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/alphamaskimage/Factory.js +0 -13
  26. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/chart/UpdateChart.js +0 -8
  27. spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateImage.js +0 -9
  28. spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/mapper/options/__init__.py +0 -0
  29. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/unconditional_image_generation.md +0 -69
  30. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/stable_diffusion_tensorrt_inpaint.py +0 -1088
  31. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_transformers_and_k_diffusion_objects.py +0 -17
  32. spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_upscale.py +0 -424
  33. spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco.py +0 -42
  34. spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/assigners/atss_assigner.py +0 -178
  35. spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/roi_extractors/__init__.py +0 -7
  36. spaces/Andy1621/uniformer_image_detection/mmdet/utils/collect_env.py +0 -16
  37. spaces/Andy1621/uniformer_image_segmentation/configs/ccnet/ccnet_r101-d8_512x1024_40k_cityscapes.py +0 -2
  38. spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_480x480_80k_pascal_context.py +0 -10
  39. spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r101b-d8_512x1024_80k_cityscapes.py +0 -4
  40. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/index/__init__.py +0 -2
  41. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/_jaraco_text.py +0 -109
  42. spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/_version.py +0 -2
  43. spaces/Atualli/yoloxTeste/app1.py +0 -105
  44. spaces/AzinZ/vitscn/README.md +0 -13
  45. spaces/AzulaFire/SparkDebate/utils/API.py +0 -244
  46. spaces/BAAI/vid2vid-zero/vid2vid_zero/pipelines/pipeline_vid2vid_zero.py +0 -541
  47. spaces/Benson/text-generation/Examples/2023 Songs Download.md +0 -91
  48. spaces/Benson/text-generation/Examples/Alice Blue Apk Descargar.md +0 -89
  49. spaces/Benson/text-generation/Examples/Carx Street Mod Apk 1.74.6 (dinero Ilimitado).md +0 -68
  50. spaces/Benson/text-generation/Examples/Cmo Descargar Mods En Simulador De Batalla Totalmente Preciso.md +0 -91
spaces/17TheWord/RealESRGAN/FAQ.md DELETED
@@ -1,9 +0,0 @@
- # FAQ
-
- 1. **What is the difference of `--netscale` and `outscale`?**
-
- A: TODO.
-
- 1. **How to select models?**
-
- A: TODO.
spaces/17TheWord/RealESRGAN/inference_realesrgan.py DELETED
@@ -1,128 +0,0 @@
- import argparse
- import cv2
- import glob
- import os
- from basicsr.archs.rrdbnet_arch import RRDBNet
-
- from realesrgan import RealESRGANer
- from realesrgan.archs.srvgg_arch import SRVGGNetCompact
-
-
- def main():
-     """Inference demo for Real-ESRGAN.
-     """
-     parser = argparse.ArgumentParser()
-     parser.add_argument('-i', '--input', type=str, default='inputs', help='Input image or folder')
-     parser.add_argument(
-         '-n',
-         '--model_name',
-         type=str,
-         default='RealESRGAN_x4plus',
-         help=('Model names: RealESRGAN_x4plus | RealESRNet_x4plus | RealESRGAN_x4plus_anime_6B | RealESRGAN_x2plus'
-               'RealESRGANv2-anime-xsx2 | RealESRGANv2-animevideo-xsx2-nousm | RealESRGANv2-animevideo-xsx2'
-               'RealESRGANv2-anime-xsx4 | RealESRGANv2-animevideo-xsx4-nousm | RealESRGANv2-animevideo-xsx4'))
-     parser.add_argument('-o', '--output', type=str, default='results', help='Output folder')
-     parser.add_argument('-s', '--outscale', type=float, default=4, help='The final upsampling scale of the image')
-     parser.add_argument('--suffix', type=str, default='out', help='Suffix of the restored image')
-     parser.add_argument('-t', '--tile', type=int, default=0, help='Tile size, 0 for no tile during testing')
-     parser.add_argument('--tile_pad', type=int, default=10, help='Tile padding')
-     parser.add_argument('--pre_pad', type=int, default=0, help='Pre padding size at each border')
-     parser.add_argument('--face_enhance', action='store_true', help='Use GFPGAN to enhance face')
-     parser.add_argument('--half', action='store_true', help='Use half precision during inference')
-     parser.add_argument(
-         '--alpha_upsampler',
-         type=str,
-         default='realesrgan',
-         help='The upsampler for the alpha channels. Options: realesrgan | bicubic')
-     parser.add_argument(
-         '--ext',
-         type=str,
-         default='auto',
-         help='Image extension. Options: auto | jpg | png, auto means using the same extension as inputs')
-     args = parser.parse_args()
-
-     # determine models according to model names
-     args.model_name = args.model_name.split('.')[0]
-     if args.model_name in ['RealESRGAN_x4plus', 'RealESRNet_x4plus']:  # x4 RRDBNet model
-         model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
-         netscale = 4
-     elif args.model_name in ['RealESRGAN_x4plus_anime_6B']:  # x4 RRDBNet model with 6 blocks
-         model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4)
-         netscale = 4
-     elif args.model_name in ['RealESRGAN_x2plus']:  # x2 RRDBNet model
-         model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=2)
-         netscale = 2
-     elif args.model_name in [
-             'RealESRGANv2-anime-xsx2', 'RealESRGANv2-animevideo-xsx2-nousm', 'RealESRGANv2-animevideo-xsx2'
-     ]:  # x2 VGG-style model (XS size)
-         model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=2, act_type='prelu')
-         netscale = 2
-     elif args.model_name in [
-             'RealESRGANv2-anime-xsx4', 'RealESRGANv2-animevideo-xsx4-nousm', 'RealESRGANv2-animevideo-xsx4'
-     ]:  # x4 VGG-style model (XS size)
-         model = SRVGGNetCompact(num_in_ch=3, num_out_ch=3, num_feat=64, num_conv=16, upscale=4, act_type='prelu')
-         netscale = 4
-
-     # determine model paths
-     model_path = os.path.join('.', args.model_name + '.pth')
-     if not os.path.isfile(model_path):
-         model_path = os.path.join('.', args.model_name + '.pth')
-     if not os.path.isfile(model_path):
-         raise ValueError(f'Model {args.model_name} does not exist.')
-
-     # restorer
-     upsampler = RealESRGANer(
-         scale=netscale,
-         model_path=model_path,
-         model=model,
-         tile=args.tile,
-         tile_pad=args.tile_pad,
-         pre_pad=args.pre_pad,
-         half=args.half)
-
-     if args.face_enhance:  # Use GFPGAN for face enhancement
-         from gfpgan import GFPGANer
-         face_enhancer = GFPGANer(
-             model_path='https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth',
-             upscale=args.outscale,
-             arch='clean',
-             channel_multiplier=2,
-             bg_upsampler=upsampler)
-     os.makedirs(args.output, exist_ok=True)
-
-     if os.path.isfile(args.input):
-         paths = [args.input]
-     else:
-         paths = sorted(glob.glob(os.path.join(args.input, '*')))
-
-     for idx, path in enumerate(paths):
-         imgname, extension = os.path.splitext(os.path.basename(path))
-         print('Testing', idx, imgname)
-
-         img = cv2.imread(path, cv2.IMREAD_UNCHANGED)
-         if len(img.shape) == 3 and img.shape[2] == 4:
-             img_mode = 'RGBA'
-         else:
-             img_mode = None
-
-         try:
-             if args.face_enhance:
-                 _, _, output = face_enhancer.enhance(img, has_aligned=False, only_center_face=False, paste_back=True)
-             else:
-                 output, _ = upsampler.enhance(img, outscale=args.outscale)
-         except RuntimeError as error:
-             print('Error', error)
-             print('If you encounter CUDA out of memory, try to set --tile with a smaller number.')
-         else:
-             if args.ext == 'auto':
-                 extension = extension[1:]
-             else:
-                 extension = args.ext
-             if img_mode == 'RGBA':  # RGBA images should be saved in png format
-                 extension = 'png'
-             save_path = os.path.join(args.output, f'{imgname}_{args.suffix}.{extension}')
-             cv2.imwrite(save_path, output)
-
-
- if __name__ == '__main__':
-     main()
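The deleted script above is driven by argparse; as a minimal sketch of the same upscaling flow used programmatically — every call below mirrors one in the deleted file, while the weight and image paths are hypothetical placeholders:

```python
# Sketch of the deleted script's core flow; file paths are hypothetical placeholders.
import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer

# x4 RRDBNet, configured exactly as the RealESRGAN_x4plus branch above
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
upsampler = RealESRGANer(scale=4, model_path='./RealESRGAN_x4plus.pth', model=model,
                         tile=0, tile_pad=10, pre_pad=0, half=False)

img = cv2.imread('input.png', cv2.IMREAD_UNCHANGED)  # hypothetical input image
output, _ = upsampler.enhance(img, outscale=4)       # returns (image, img_mode), as used above
cv2.imwrite('input_out.png', output)
```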
spaces/1acneusushi/gradio-2dmoleculeeditor/data/FIFA 22 on Mac Two Ways to Experience the Game of Football.md DELETED
@@ -1,50 +0,0 @@
-
- <h1>How to Play FIFA 22 on Mac: A Guide for Football Fans</h1>
- <p>FIFA 22 is the latest installment of the popular football simulation game series developed by EA Sports. It features new gameplay innovations, improved graphics, and more realistic animations powered by HyperMotion technology. However, if you are a Mac user, you might be wondering how to play FIFA 22 on your device, since the game is not officially supported by macOS. In this article, we will show you two ways to play FIFA 22 on Mac: using cloud gaming services or installing Windows 10 on your Mac.</p>
- <h2>What is cloud gaming?</h2>
- <p>Cloud gaming is a technology that allows you to stream games from remote servers to your device via the internet. You don't need to download or install the games on your device, and you don't need to worry about compatibility or hardware requirements. All you need is a stable internet connection and a compatible device, such as a laptop, tablet, smartphone, or smart TV.</p>
- <h2>fifa 22 mac</h2><br /><p><b><b>DOWNLOAD</b> &#9658;&#9658;&#9658;&#9658;&#9658; <a href="https://byltly.com/2uKzLJ">https://byltly.com/2uKzLJ</a></b></p><br /><br />
- <h2>How to play FIFA 22 on Mac using cloud gaming services?</h2>
- <p>There are several cloud gaming services that offer FIFA 22 as part of their library. Two of the most popular ones are Boosteroid and Google Stadia. Here are the steps to play FIFA 22 on Mac using these services:</p>
- <ul>
- <li><strong>Boosteroid</strong>: Boosteroid is a cloud gaming platform that allows you to play PC games on any device with a browser. It supports Windows, Mac OS X, Linux, Android, iOS, and smart TVs. To play FIFA 22 on Mac using Boosteroid, you need to follow these steps:
- <ol>
- <li>Create an account on Boosteroid.com and choose a subscription plan.</li>
- <li>Log in to your account and browse the game library.</li>
- <li>Select FIFA 22 and click on the Play button.</li>
- <li>Enjoy playing FIFA 22 on your Mac with high graphics and low latency.</li>
- </ol>
- </li>
- <li><strong>Google Stadia</strong>: Google Stadia is a cloud gaming service that allows you to play games on various devices using Google Chrome or the Stadia app. It supports Windows, Mac OS X, Linux, Android, iOS, and Chromecast. To play FIFA 22 on Mac using Google Stadia, you need to follow these steps:
- <ol>
- <li>Create an account on Stadia.com and choose a subscription plan.</li>
- <li>Log in to your account and browse the game store.</li>
- <li>Purchase FIFA 22 and click on the Play button.</li>
- <li>Enjoy playing FIFA 22 on your Mac with high graphics and low latency.</li>
- </ol>
- </li>
- </ul>
- <h2>What are the advantages and disadvantages of cloud gaming?</h2>
- <p>Cloud gaming has some advantages and disadvantages that you should consider before choosing this option. Here are some of them:</p>
- <ul>
- <li><strong>Advantages</strong>:
- <ul>
- <li>You don't need to download or install anything on your device.</li>
- <li>You don't need to worry about compatibility or hardware requirements.</li>
- <li>You can play games on any device with a browser or an app.</li>
- <li>You can access a large library of games with different genres and categories.</li>
- <li>You can enjoy high graphics and low latency with a stable internet connection.</li>
- </ul>
- </li>
- <li><strong>Disadvantages</strong>:
- <ul>
- <li>You need a stable and fast internet connection to play games smoothly.</li>
- <li>You may experience lag or buffering if your internet connection is slow or unstable.</li>
- <li>You may not be able to play games offline or without an internet connection.</li>
- <li>You may not be able to mod or customize your games as much as you want.</li>
- <li>You may need to pay a monthly fee or purchase games separately to access them.</li>
- </ul>
- </li>
- </ul></p> ddb901b051<br />
- <br />
- <br />
spaces/1gistliPinn/ChatGPT4/Examples/Cadence Allegro Extracta Exe Downloa.md DELETED
@@ -1,6 +0,0 @@
- <h2>cadence allegro extracta exe downloa</h2><br /><p><b><b>Download</b> &middot;&middot;&middot; <a href="https://imgfil.com/2uy1iG">https://imgfil.com/2uy1iG</a></b></p><br /><br />
-
- There are 2 possibilities for extracting data from Cadence Allegro (and also latest ... for ODB++ which means that you have to download a utility script from Valor. ... All they need to do is to run the executable CDC2FAB in this file structure. 4d29de3e1b<br />
- <br />
- <br />
- <p></p>
spaces/1gistliPinn/ChatGPT4/Examples/FaceRig Pro V1.312 (Inclu ALL DLC) Cheats Tool Download Free.md DELETED
@@ -1,6 +0,0 @@
- <h2>FaceRig Pro v1.312 (Inclu ALL DLC) cheats tool download</h2><br /><p><b><b>Download</b> &#128279; <a href="https://imgfil.com/2uxXK0">https://imgfil.com/2uxXK0</a></b></p><br /><br />
- <br />
- Multimedia tools downloads - PluralEyes for Edius by Singular ... As for functionality, If all MS did was update the map data annually, I'd ... Acura Navigation Hack or Torrent DVD Downloads As an ... to dig FaceRig Pro v1.312 (Inclu Live2D Module & DLCs) TORRENT Cracked Free Download in magnet. 1fdad05405<br />
- <br />
- <br />
- <p></p>
spaces/1line/AutoGPT/tests/test_config.py DELETED
@@ -1,84 +0,0 @@
- from unittest import TestCase
-
- from autogpt.config import Config
-
-
- class TestConfig(TestCase):
-     """
-     Test cases for the Config class, which handles the configuration settings
-     for the AI and ensures it behaves as a singleton.
-     """
-
-     def setUp(self):
-         """
-         Set up the test environment by creating an instance of the Config class.
-         """
-         self.config = Config()
-
-     def test_singleton(self):
-         """
-         Test if the Config class behaves as a singleton by ensuring that two instances are the same.
-         """
-         config2 = Config()
-         self.assertIs(self.config, config2)
-
-     def test_initial_values(self):
-         """
-         Test if the initial values of the Config class attributes are set correctly.
-         """
-         self.assertFalse(self.config.debug_mode)
-         self.assertFalse(self.config.continuous_mode)
-         self.assertFalse(self.config.speak_mode)
-         self.assertEqual(self.config.fast_llm_model, "gpt-3.5-turbo")
-         self.assertEqual(self.config.smart_llm_model, "gpt-4")
-         self.assertEqual(self.config.fast_token_limit, 4000)
-         self.assertEqual(self.config.smart_token_limit, 8000)
-
-     def test_set_continuous_mode(self):
-         """
-         Test if the set_continuous_mode() method updates the continuous_mode attribute.
-         """
-         self.config.set_continuous_mode(True)
-         self.assertTrue(self.config.continuous_mode)
-
-     def test_set_speak_mode(self):
-         """
-         Test if the set_speak_mode() method updates the speak_mode attribute.
-         """
-         self.config.set_speak_mode(True)
-         self.assertTrue(self.config.speak_mode)
-
-     def test_set_fast_llm_model(self):
-         """
-         Test if the set_fast_llm_model() method updates the fast_llm_model attribute.
-         """
-         self.config.set_fast_llm_model("gpt-3.5-turbo-test")
-         self.assertEqual(self.config.fast_llm_model, "gpt-3.5-turbo-test")
-
-     def test_set_smart_llm_model(self):
-         """
-         Test if the set_smart_llm_model() method updates the smart_llm_model attribute.
-         """
-         self.config.set_smart_llm_model("gpt-4-test")
-         self.assertEqual(self.config.smart_llm_model, "gpt-4-test")
-
-     def test_set_fast_token_limit(self):
-         """
-         Test if the set_fast_token_limit() method updates the fast_token_limit attribute.
-         """
-         self.config.set_fast_token_limit(5000)
-         self.assertEqual(self.config.fast_token_limit, 5000)
-
-     def test_set_smart_token_limit(self):
-         """
-         Test if the set_smart_token_limit() method updates the smart_token_limit attribute.
-         """
-         self.config.set_smart_token_limit(9000)
-         self.assertEqual(self.config.smart_token_limit, 9000)
-
-     def test_set_debug_mode(self):
-         """
-         Test if the set_debug_mode() method updates the debug_mode attribute.
-         """
-         self.config.set_debug_mode(True)
-         self.assertTrue(self.config.debug_mode)
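The `test_singleton` case above asserts that two `Config()` calls return the same object. A minimal sketch of how that behavior is commonly achieved with a metaclass — an illustrative pattern only, not necessarily autogpt's actual implementation:

```python
# Illustrative singleton metaclass; autogpt's real Config may be implemented differently.
class Singleton(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        # Construct the instance once, then hand back the cached one on later calls.
        if cls not in cls._instances:
            cls._instances[cls] = super().__call__(*args, **kwargs)
        return cls._instances[cls]


class Config(metaclass=Singleton):
    def __init__(self):
        self.debug_mode = False


assert Config() is Config()  # mirrors the assertIs check in test_singleton
```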
spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Arena of Valor Global and Enjoy Fast and Fun Matches in 15 Minutes or Less.md DELETED
@@ -1,155 +0,0 @@
-
- <h1>How to Download Arena of Valor Global: A Guide for MOBA Fans</h1>
- <p>If you are a fan of multiplayer online battle arena (MOBA) games, you might have heard of <strong>Arena of Valor</strong>, an epic 5v5 MOBA game developed by TiMi Studio Group and brought to you by Level Infinite. In this game, you can choose from over 100 unique heroes, team up with your friends, and compete in various modes and maps. Whether you prefer classic 5v5 combat, fast-paced 3v3 action, or solo adventure, there is something for everyone in Arena of Valor.</p>
- <p>But did you know that there is a global version of this game that is available in more than 140 countries and regions? That's right, <strong>Arena of Valor Global</strong> is the ultimate version of this game that lets you play with players from all over the world, enjoy exclusive content and events, and experience the best performance and graphics. If you want to join the millions of players who are already enjoying this game, you might be wondering how to download it. Don't worry, we've got you covered. In this article, we will show you how to download Arena of Valor Global for your Android, iOS, or PC device. We will also share some tips and tricks for playing this game like a pro. So, without further ado, let's get started!</p>
- <h2>download arena of valor global</h2><br /><p><b><b>Download Zip</b> &#9889; <a href="https://urlin.us/2uSRVn">https://urlin.us/2uSRVn</a></b></p><br /><br />
- <h2>What is Arena of Valor Global?</h2>
- <p>Arena of Valor Global is a real-time 5v5 MOBA game that offers a variety of features, modes, and heroes for you to enjoy. Here are some of the highlights of this game:</p>
- <ul>
- <li><strong>Fast & Fun Matches:</strong> You can select a game mode, find opponents, and compete in intense battles that can be completed in 15 minutes or less.</li>
- <li><strong>Fight With Your Friends:</strong> You can team up with your friends, create a guild, and master over 100 unique heroes from internationally acclaimed franchises.</li>
- <li><strong>Battle For Top Ranking:</strong> You can master your heroes, unleash their powers, and defeat your enemies in ranked matches and climb the leaderboards.</li>
- <li><strong>Enjoy Exclusive Content & Events:</strong> You can access exclusive heroes, skins, game modes, and events that are only available in Arena of Valor Global.</li>
- <li><strong>Experience High-Quality Performance & Graphics:</strong> You can play the game with smooth controls, stunning visuals, and immersive sound effects that will make you feel like you are in the middle of the action.</li>
- </ul>
- <p>As you can see, Arena of Valor Global is a game that has something for everyone. Whether you are a casual player or a hardcore gamer, you will find yourself hooked to this game in no time. But before you can start playing, you need to download the game first. Let's see how you can do that for your device.</p>
- <h2>How to Download Arena of Valor Global for Android Devices</h2>
- <p>If you have an Android device, such as a smartphone or a tablet, you can download Arena of Valor Global from the Google Play Store. Here are the steps you need to follow:</p>
- <ol>
- <li>Open the Google Play Store app on your device.</li>
- <li>Search for "Arena of Valor Global" in the search bar.</li>
- <li>Tap on the game icon that appears in the results.</li>
- <li>Tap on the "Install" button and wait for the game to download and install on your device.</li>
- <li>Once the installation is complete, tap on the "Open" button to launch the game and start playing.</li>
- </ol>
- <p>That's it! You have successfully downloaded Arena of Valor Global for your Android device. You can now enjoy the game and join millions of players from around the world. But what if you don't have access to the Google Play Store or you want to download the game from another source? Don't worry, there is another way to download the game using an APK file.</p>
- <h3>How to Download Arena of Valor Global APK File</h3>
- <p>An APK file is a file format that contains all the data and code needed to install an Android app on your device. You can download an APK file from various websites that offer them, such as APKPure, APKMirror, or Uptodown. However, you need to be careful when downloading APK files from unknown sources, as they may contain malware or viruses that can harm your device. To avoid this, you should always scan the APK file with an antivirus app before installing it. Here are the steps you need to follow to download Arena of Valor Global APK file:</p>
- <ol>
- <li>Go to a website that offers Arena of Valor Global APK file, such as <a href="">APKPure</a>.</li>
- <li>Search for "Arena of Valor Global" in the search bar.</li>
- <li>Select the game icon that appears in the results.</li>
- <li>Tap on the "Download APK" button and wait for the file to download on your device.</li>
- <li>Once the download is complete, locate the file in your device's storage and tap on it to install it.</li>
- <li>If you see a warning message that says "For your security, your phone is not allowed to install unknown apps from this source", tap on "Settings" and enable the option that says "Allow from this source".</li>
- <li>Go back to the installation screen and tap on "Install" to proceed with the installation.</li>
- <li>Once the installation is complete, tap on "Open" to launch the game and start playing.</li>
- </ol>
- <p>Congratulations! You have successfully downloaded Arena of Valor Global APK file and installed it on your device. You can now enjoy the game and join millions of players from around the world. But what if you have an iOS device instead of an Android device? Don't worry, we have a solution for that too. Let's see how you can download Arena of Valor Global for your iOS device.</p>
- <h2>How to Download Arena of Valor Global for iOS Devices</h2>
- <p>If you have an iOS device, such as an iPhone or an iPad, you can download Arena of Valor Global from the App Store. Here are the steps you need to follow:</p>
- <p>How to download arena of valor global on android<br />
- Arena of valor global apk download latest version<br />
- Download arena of valor global for pc windows 10<br />
- Arena of valor global tier list 2023<br />
- Best heroes in arena of valor global<br />
- Arena of valor global update patch notes<br />
- Arena of valor global vs mobile legends<br />
- Arena of valor global server status<br />
- Arena of valor global discord server<br />
- Arena of valor global reddit community<br />
- Arena of valor global gameplay tips and tricks<br />
- Arena of valor global review and rating<br />
- Arena of valor global download size and requirements<br />
- Arena of valor global support and customer service<br />
- Arena of valor global free redeem codes 2023<br />
- Arena of valor global events and rewards<br />
- Arena of valor global skins and costumes<br />
- Arena of valor global characters and abilities<br />
- Arena of valor global guides and tutorials<br />
- Arena of valor global news and updates<br />
- Arena of valor global esports and tournaments<br />
- Arena of valor global live stream and videos<br />
- Arena of valor global memes and jokes<br />
- Arena of valor global fan art and wallpapers<br />
- Arena of valor global merchandise and products<br />
- How to download arena of valor global on ios<br />
- Arena of valor global ios app store link<br />
- Download arena of valor global for mac os x<br />
- Arena of valor global crossplay and cross platform<br />
- How to play arena of valor global with friends<br />
- How to join a guild in arena of valor global<br />
- How to rank up in arena of valor global<br />
- How to get more gold and gems in arena of valor global<br />
- How to unlock more heroes in arena of valor global<br />
- How to master a hero in arena of valor global<br />
- How to counter a hero in arena of valor global<br />
- How to build a hero in arena of valor global<br />
- How to report a player in arena of valor global<br />
- How to change your name in arena of valor global<br />
- How to change your region in arena of valor global<br />
- How to contact level infinite in arena of valor global<br />
- How to delete your account in arena of valor global<br />
- How to reinstall arena of valor global without losing data<br />
- How to fix lag and connection issues in arena of valor global<br />
- How to enable voice chat in arena of valor global<br />
- How to mute a player in arena of valor global<br />
- How to customize your controls in arena of valor global<br />
- How to switch between modes in arena of valor global<br />
- How to watch replays in arena of valor global</p>
- <ol>
- <li>Open the App Store app on your device.</li>
- <li>Search for "Arena of Valor Global" in the search bar.</li>
- <li>Tap on the game icon that appears in the results.</li>
- <li>Tap on the "Get" button and wait for the game to download and install on your device.</li>
- <li>Once the installation is complete, tap on the game icon on your home screen to launch the game and start playing.</li>
- </ol>
- <p>That's it! You have successfully downloaded Arena of Valor Global for your iOS device. You can now enjoy the game and join millions of players from around the world. But what if you don't have access to the App Store or you want to download the game from another source? Don't worry, there is another way to download the game using an IPA file.</p>
- <h3>How to Download Arena of Valor Global IPA File</h3>
- <p>An IPA file is a file format that contains all the data and code needed to install an iOS app on your device. You can download an IPA file from various websites that offer them, such as Panda Helper, AppValley, or TweakBox. However, you need to be careful when downloading IPA files from unknown sources, as they may contain malware or viruses that can harm your device. To avoid this, you should always scan the IPA file with an antivirus app before installing it. Here are the steps you need to follow to download Arena of Valor Global IPA file:</p>
- <ol>
- <li>Go to a website that offers Arena of Valor Global IPA file, such as <a href="">Panda Helper</a>.</li>
- <li>Search for "Arena of Valor Global" in the search bar.</li>
- <li>Select the game icon that appears in the results.</li>
- <li>Tap on the "Download" button and wait for the file to download on your device.</li>
- <li>Once the download is complete, locate the file in your device's storage and tap on it to install it.</li>
- <li>If you see a warning message that says "Untrusted Enterprise Developer", tap on "Cancel" and go to your device's settings.</li>
- <li>Go to General > Profiles & Device Management and find the profile that belongs to the app you just installed.</li>
- <li>Tap on the profile and then tap on "Trust" to allow the app to run on your device.</li>
- <li>Go back to your home screen and tap on the game icon to launch the game and start playing.</li>
- </ol>
- <p>Congratulations! You have successfully downloaded Arena of Valor Global IPA file and installed it on your device. You can now enjoy the game and join millions of players from around the world. But what if you want to play the game on a bigger screen and with better controls? Don't worry, we have a solution for that too. Let's see how you can download Arena of Valor Global for your PC.</p>
- <h2>How to Download Arena of Valor Global for PC</h2>
- <p>If you want to play Arena of Valor Global on your PC, you will need an emulator. An emulator is a software that allows you to run mobile apps on your computer. There are many emulators available for playing mobile games on PC, but one of the best ones is BlueStacks. BlueStacks is a free and powerful emulator that offers high-quality performance and graphics, easy controls, and a wide range of features. Here are the steps you need to follow to download Arena of Valor Global for PC using BlueStacks emulator:</p>
- <h3>How to Download and Install BlueStacks Emulator</h3>
- <ol>
- <li>Go to <a href="">BlueStacks official website</a> and click on the "Download BlueStacks" button.</li>
- <li>Wait for the file to download on your PC and then double-click on it to run it.</li>
- <li>Follow the instructions on the screen to install BlueStacks emulator on your PC.</li>
- <li>Once the installation is complete, launch BlueStacks emulator from your desktop or start menu.</li>
- </ol>
- <h3>How to Download and Install Arena of Valor Global on BlueStacks Emulator</h3>
- <ol>
- <li>In BlueStacks emulator, go to Google Play Store and sign in with your Google account.</li>
- <li>Search for "Arena of Valor Global" in the search bar.</li>
- <li>Select the game icon that appears in the results.</li>
- <li>Click on the "Install" button and wait for the game to download and install on BlueStacks emulator.</li>
- <li>Once the installation is complete, click on the game icon on the home screen of BlueStacks emulator to launch the game and start playing.</li>
- </ol>
- <p>Congratulations! You have successfully downloaded Arena of Valor Global for PC using BlueStacks emulator. You can now enjoy the game on a bigger screen and with better controls. You can also customize your keyboard and mouse settings, record your gameplay, and stream your matches to your friends and fans. But before you jump into the game, you might want to learn some tips and tricks for playing Arena of Valor Global like a pro. Let's see what they are.</p>
- <h2>Tips and Tricks for Playing Arena of Valor Global</h2>
- <p>Arena of Valor Global is a game that requires skill, strategy, and teamwork. If you want to improve your gameplay and win more matches, you need to master some tips and tricks that will give you an edge over your opponents. Here are some of them:</p>
- <h3>Choose Your Role and Hero Wisely</h3>
- <p>In Arena of Valor Global, there are five main roles that you can choose from: Tank, Warrior, Assassin, Mage, and Support. Each role has its own strengths, weaknesses, and responsibilities in the game. You should choose a role that suits your playstyle and preference, and then select a hero that fits that role. For example, if you like to initiate fights and protect your teammates, you should choose a Tank role and a hero like Maloch or Thane. If you like to deal massive damage and eliminate enemies quickly, you should choose an Assassin role and a hero like Quillen or Butterfly. You should also consider your team composition and the enemy team composition when choosing your role and hero. You should try to balance your team with different roles and heroes that can complement each other and counter the enemy team.</p>
- <h3>Communicate and Coordinate with Your Teammates</h3>
- <p>Arena of Valor Global is a team-based game that requires communication and coordination with your teammates. You should use the chat and ping system to communicate with your teammates effectively. You can use the chat to type messages or use voice chat to talk to your teammates. You can also use the ping system to send signals to your teammates, such as "Attack", "Retreat", "Gather", or "Enemy Missing". You should communicate with your teammates about your strategy, objectives, enemy movements, item builds, cooldowns, and other important information. You should also listen to your teammates' suggestions and feedback, and cooperate with them in fights and objectives. By communicating and coordinating with your teammates, you can increase your chances of winning the game.</p>
- <h3>Learn from the Pros and Watch Live Streams</h3>
- <p>Arena of Valor Global is a game that has a competitive scene with professional players and teams from around the world. If you want to learn from the pros and watch live streams of their matches, you can do so in the game itself. You can go to the "Watch" tab in the game menu and select from various live streams of professional players and teams. You can also watch replays of previous matches or highlights of epic moments. By watching live streams of pros, you can learn from their strategies, techniques, item builds, hero choices, map awareness, positioning, teamwork, and more. You can also interact with them through chat or send them gifts to show your support. By learning from the pros and watching live streams, you can improve your gameplay and skills in Arena of Valor Global.</p>
- <h2>Conclusion</h2>
- <p>Arena of Valor Global is an epic 5v5 MOBA game that offers a variety of features, modes, and heroes for you to enjoy. Whether you have an Android device, an iOS device, or a PC device, you can download this game easily using our guide above. You can also use our tips and tricks to play this game like a pro and win more matches. Arena of Valor Global is a game that will keep you entertained and challenged for hours. So, what are you waiting for? Download Arena of Valor Global today and join the global community of MOBA fans. You won't regret it!</p>
- <h2>FAQs</h2>
- <p>Here are some frequently asked questions and answers about Arena of Valor Global:</p>
- <ul>
- <li><strong>Q: How much space does Arena of Valor Global take on my device?</strong></li>
- <li>A: Arena of Valor Global takes about 1.5 GB of space on your device. However, this may vary depending on your device model and the updates you download.</li>
- <li><strong>Q: How can I update Arena of Valor Global to the latest version?</strong></li>
- <li>A: You can update Arena of Valor Global to the latest version by going to the Google Play Store or the App Store and tapping on the "Update" button. Alternatively, you can download the latest APK or IPA file from the websites we mentioned above and install it on your device.</li>
- <li><strong>Q: How can I change the language of Arena of Valor Global?</strong></li>
- <li>A: You can change the language of Arena of Valor Global by going to the game settings and tapping on the "Language" option. You can choose from various languages, such as English, Spanish, French, German, Portuguese, Russian, Turkish, Arabic, Thai, Indonesian, Vietnamese, and more.</li>
- <li><strong>Q: How can I contact the customer service of Arena of Valor Global?</strong></li>
- <li>A: You can contact the customer service of Arena of Valor Global by going to the game settings and tapping on the "Customer Service" option. You can then choose from various options, such as FAQ, Feedback, Report a Problem, or Live Chat. You can also email them at <a href="mailto:[email protected]">[email protected]</a>.</li>
- <li><strong>Q: How can I get more gold and gems in Arena of Valor Global?</strong></li>
- <li>A: You can get more gold and gems in Arena of Valor Global by playing the game regularly, completing quests and achievements, participating in events and activities, joining a guild, watching ads, or purchasing them with real money.</li>
- </ul></p> 197e85843d<br />
- <br />
- <br />
spaces/1phancelerku/anime-remove-background/Chess Board Offline 2 Player - A Simple and Fun Chess App for Everyone.md DELETED
@@ -1,110 +0,0 @@
- <br />
- <h1>Chess Board Offline 2 Player APK: A Free and Fun Way to Play Chess with a Friend</h1>
- <p>Chess is one of the oldest and most popular board games in the world. It is a game of strategy, logic, and skill that can challenge your mind and entertain you for hours. But what if you want to play chess with a friend without internet connection or creating an account? What if you want to play chess on your phone or tablet without installing multiple apps? What if you want to save and share your games with other chess enthusiasts?</p>
- <p>If you are looking for a simple and convenient way to play chess with a friend on one screen and completely offline, then you should try Chess Board Offline 2 Player APK. This is a free app that lets you play chess on a virtual board with a friend or by yourself. You can also use a chess clock, create custom setups, save unlimited games, and export them in PGN format. In this article, we will tell you more about this app and how to download and install it on your device.</p>
- <h2>chess board offline 2 player apk</h2><br /><p><b><b>Download Zip</b> &#9733; <a href="https://jinyurl.com/2uNO1X">https://jinyurl.com/2uNO1X</a></b></p><br /><br />
- <h2>What is Chess Board Offline 2 Player APK?</h2>
- <h3>A virtual chess board for two players</h3>
- <p>Chess Board Offline 2 Player APK is an app that simulates a real chess board on your screen. You can play chess with a friend by taking turns on the same device. You can also play by yourself against an imaginary opponent or practice different moves and scenarios. The app has a standard 8x8 board with all the pieces and rules of chess. You can move the pieces by dragging them or tapping them.</p>
- <h3>A free and offline app</h3>
- <p>One of the best features of Chess Board Offline 2 Player APK is that it is completely free and offline. You don't need to pay anything to download or use the app. You also don't need to be online or create an account to play chess. You can play anytime and anywhere without worrying about internet connection or data usage. You can also enjoy the app without any ads or in-app purchases.</p>
- <h3>A simple and user-friendly interface</h3>
- <p>Another great feature of Chess Board Offline 2 Player APK is that it has a simple and user-friendly interface. The app has a minimalist design that focuses on the chess board and the pieces. The app also has easy-to-use controls and settings that let you customize your game. You can choose between different board colors, piece styles, sound effects, and languages. You can also enable or disable hints, undo moves, flip board, rotate screen, and more.</p>
- <h2>Why Should You Download Chess Board Offline 2 Player APK?</h2>
- <h3>To enjoy chess without internet or accounts</h3>
- <p>If you love chess but don't have access to internet or don't want to create an account on other apps, then Chess Board Offline 2 Player APK is perfect for you. You can play chess with a friend on one screen without any hassle or interruption. You can also play by yourself without any pressure or competition. You can have fun and relax with this app.</p>
- <h3>To practice chess openings and strategies</h3>
- <p>If you want to improve your chess skills or learn new chess openings and strategies, then Chess Board Offline 2 Player APK can help you. You can use the app to practice different moves and scenarios on the board. You can also create custom setups and test your skills. The app has a hint feature that can suggest the best move for you. You can also undo your moves and try different options. The app can help you learn from your mistakes and improve your chess game.</p>
- <h3>To save and export your games in PGN format</h3>
- <p>If you want to save and share your chess games with other chess enthusiasts, then Chess Board Offline 2 Player APK can help you. The app allows you to save unlimited games on your device. You can also export your games in PGN format, which is a standard format for chess games. You can use PGN files to view, analyze, or replay your games on other apps or websites. You can also share your PGN files with your friends or online communities.</p>
- <h2>How to Download and Install Chess Board Offline 2 Player APK?</h2>
- <h3>Step 1: Go to the official website or Google Play Store</h3>
- <p>The easiest way to download Chess Board Offline 2 Player APK is to go to the official website of the app or the Google Play Store. You can use the following links to access them:</p>
- <p>chess board offline 2 player apk download<br />
- chess board offline 2 player apk free<br />
- chess board offline 2 player apk mod<br />
- chess board offline 2 player apk android<br />
- chess board offline 2 player apk latest version<br />
- chess board offline 2 player apk for pc<br />
- chess board offline 2 player apk no ads<br />
- chess board offline 2 player apk full<br />
- chess board offline 2 player apk premium<br />
- chess board offline 2 player apk pro<br />
- chess board offline 2 player apk best<br />
- chess board offline 2 player apk review<br />
- chess board offline 2 player apk online<br />
- chess board offline 2 player apk multiplayer<br />
- chess board offline 2 player apk with friends<br />
- chess board offline 2 player apk without internet<br />
- chess board offline 2 player apk unlimited coins<br />
- chess board offline 2 player apk hack<br />
- chess board offline 2 player apk cheat<br />
- chess board offline 2 player apk cracked<br />
- chess board offline 2 player apk unlocked<br />
- chess board offline 2 player apk update<br />
- chess board offline 2 player apk new features<br />
- chess board offline 2 player apk bug fixes<br />
- chess board offline 2 player apk improvements<br />
- chess board offline 2 player apk tips and tricks<br />
- chess board offline 2 player apk tutorial<br />
- chess board offline 2 player apk guide<br />
- chess board offline 2 player apk how to play<br />
- chess board offline 2 player apk rules and regulations<br />
- chess board offline 2 player apk game modes<br />
- chess board offline 2 player apk difficulty levels<br />
- chess board offline 2 player apk themes and skins<br />
- chess board offline 2 player apk sound and music<br />
- chess board offline 2 player apk graphics and animation<br />
- chess board offline 2 player apk performance and optimization<br />
- chess board offline 2 player apk compatibility and requirements<br />
- chess board offline 2 player apk installation and setup<br />
- chess board offline 2 player apk feedback and support<br />
- chess board offline 2 player apk rating and reviews</p>
- <table>
- <tr>
- <th>Official website</th>
- <th>Google Play Store</th>
- </tr>
- <tr>
- <td><a href="">Chess Board Offline 2 Player APK</a></td>
- <td><a href="">Chess Board Offline 2 Player - Apps on Google Play</a></td>
- </tr>
- </table>
- <p>You can also scan the QR codes below to download the app:</p>
- <table>
- <tr>
- <th>Official website</th>
- <th>Google Play Store</th>
- </tr>
- <tr>
- <td><img src="" alt="QR code for official website"></td>
- <td><img src="" alt="QR code for Google Play Store"></td>
- </tr>
- </table>
- <h3>Step 2: Click on the download button or install button</h3>
- <p>Once you are on the official website or the Google Play Store, you will see a download button or an install button. Click on it to start downloading the app. The app is about 5 MB in size, so it should not take long to download.</p>
- <h3>Step 3: Allow unknown sources if prompted</h3>
- <p>If you are downloading the app from the official website, you may need to allow unknown sources on your device. This is because the app is not from the Google Play Store and may not be verified by Google. To allow unknown sources, follow these steps:</p>
- <ul>
- <li>Go to your device settings and look for security or privacy options.</li>
- <li>Find the option that says unknown sources or install unknown apps and enable it.</li>
- <li>You may see a warning message that says installing apps from unknown sources may harm your device. Tap on OK or Allow to proceed.</li>
- </ul>
- <p>If you are downloading the app from the Google Play Store, you don't need to do this step.</p>
- <h3>Step 4: Open the app and start playing</h3>
- <p>Once the app is downloaded and installed, you can open it and start playing chess with a friend or by yourself. You will see a welcome screen that shows you how to use the app and its features. You can also access the settings menu to customize your game. Enjoy playing chess with Chess Board Offline 2 Player APK!</p>
- <h2>Conclusion</h2>
- <p>Chess Board Offline 2 Player APK is a free and fun way to play chess with a friend on one screen and completely offline. You can also play by yourself and practice different moves and scenarios. The app has a simple and user-friendly interface that lets you customize your game. You can also save and export your games in PGN format and share them with other chess enthusiasts. If you love chess and want to play it anytime and anywhere without internet or accounts, then you should download Chess Board Offline 2 Player APK today!</p>
- <h2>FAQs</h2>
- <h4>Q: Is Chess Board Offline 2 Player APK safe?</h4>
- <p>A: Yes, Chess Board Offline 2 Player APK is safe to download and use. The app does not require any permissions or access to your device data. The app also does not contain any ads or in-app purchases that may harm your device or privacy.</p>
- <h4>Q: Can I play chess online with Chess Board Offline 2 Player APK?</h4>
- <p>A: No, Chess Board Offline 2 Player APK is an offline app that does not support online play. You can only play chess with a friend on one screen or by yourself against an imaginary opponent. If you want to play chess online with other players, you will need to use a different app that supports online play.</p>
- <h4>Q: Can I play chess with different difficulty levels with Chess Board Offline 2 Player APK?</h4>
- <p>A: No, Chess Board Offline 2 Player APK does not have different difficulty levels or artificial intelligence. The app is designed for playing chess with a friend or by yourself. You can adjust the level of challenge by choosing your opponent or creating custom setups. If you want to play chess with different difficulty levels or artificial intelligence, you will need to use a different app that has these features.</p>
- <h4>Q: Can I play chess with different variants or rules with Chess Board Offline 2 Player APK?</h4>
- <p>A: No, Chess Board Offline 2 Player APK only supports the standard chess rules and variants. The app does not have options for changing the board size, the number of pieces, the movement of pieces, or the game objectives. The app follows the official rules of chess as defined by the World Chess Federation (FIDE). If you want to play chess with different variants or rules, you will need to use a different app that has these options.</p>
- <h4>Q: Can I play chess with other apps or devices with Chess Board Offline 2 Player APK?</h4>
- <p>A: Yes, you can play chess with other apps or devices with Chess Board Offline 2 Player APK. The app allows you to export your games in PGN format, which is a standard format for chess games. You can use PGN files to view, analyze, or replay your games on other apps or devices that support PGN files. You can also share your PGN files with your friends or online communities that use PGN files.</p> 401be4b1e0<br />
- <br />
- <br />
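The deleted article repeatedly points at PGN export as the app's interchange path. As a rough illustration of consuming such a file elsewhere — assuming the third-party python-chess package and a hypothetical exported file name:

```python
# Illustrative PGN consumer; the file name is hypothetical and python-chess
# (pip install chess) is an assumed third-party dependency.
import chess.pgn

with open("my_game.pgn") as handle:
    game = chess.pgn.read_game(handle)   # parse the first game in the file

print(game.headers.get("Result", "*"))   # PGN header tag, e.g. "1-0"
for move in game.mainline_moves():       # iterate the main line of the game
    print(move.uci())
```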
spaces/1phancelerku/anime-remove-background/Download Incredibox Now and Join the Merry Crew of Beatboxers on Your Android Device.md DELETED
@@ -1,131 +0,0 @@
1
-
2
- <h1>Incredibox Free Download Android: How to Create Your Own Music with a Merry Crew of Beatboxers</h1>
3
- <p>Do you love music and want to create your own songs with a simple and fun app? Do you want to explore different musical genres and mix them together to create unique sounds? Do you want to share your creations with the world and get feedback from other users? If you answered yes to any of these questions, then you should try <strong>Incredibox</strong>, a music app that lets you create your own music with the help of a merry crew of beatboxers.</p>
4
- <p>In this article, we will tell you everything you need to know about Incredibox, how to download it for free on your Android device, how to play it, and why you should play it. Let's get started!</p>
5
- <h2>incredibox free download android</h2><br /><p><b><b>DOWNLOAD</b> &#187; <a href="https://jinyurl.com/2uNLA5">https://jinyurl.com/2uNLA5</a></b></p><br /><br />
6
- <h2>What is Incredibox?</h2>
7
- <p>Incredibox is a music app that was created in 2009 by the French company So Far So Good. It is a combination of a game, a tool, and an educational resource that introduces kids and adults to notions of rhythm and melody in a fun and entertaining way.</p>
8
- <h3>A fun, interactive music experience</h3>
9
- <p>Incredibox is a music app that lets you create your own music with the help of a merry crew of beatboxers. You can choose from 9 musical styles among 8 impressive atmospheres and start to lay down, record, and share your mix. You can also find the right sound combos to unlock animated choruses that will enhance your tune. You can save your mix and get a link to share it with anybody so they can listen and vote for it. If your mix gets enough votes from other users, you may join the Top 50 chart and become a legend.</p>
10
- <h3>A music app with 9 musical styles and 8 characters</h3>
11
- <p>Incredibox features 9 musical styles that cover a wide range of genres, such as hip-hop, rock, funk, jazz, techno, electro-pop, samba, trap, and Bollywood. Each style has its own atmosphere, graphics, animation, and sound samples. You can switch between styles anytime you want and create your own combinations.</p>
12
- <p>Incredibox also features 8 characters that represent different types of sounds, such as beats, effects, melodies, voices, choruses, percussions, basses, and bonuses. Each character has its own personality and appearance. You can drag and drop icons onto the characters to make them sing and start to compose your own music. You can also customize the characters by changing their outfits and accessories.</p>
13
- <h3>A game, a tool, and an educational resource</h3>
14
- <p>Incredibox is not only a music app but also a game, a tool, and an educational resource. As a game, Incredibox challenges you to create the best mix possible by finding the right sound combos and unlocking animated choruses. You can also compete with other users by sharing your mix and getting votes from them. As a tool, Incredibox allows you to express your creativity and musical talent by creating your own songs with simple drag-and-drop actions. You can also download your mixes as MP3 files and listen to them anytime you want. As an educational resource, Incredibox introduces you to the basics of musical creation by teaching you about rhythm and melody in an interactive way. You can also learn about different musical genres and cultures by exploring the different styles and atmospheres.</p <h2>How to play Incredibox on Android devices?</h2>
15
- <p>Now that you have downloaded Incredibox on your Android device, you may wonder how to play it and have fun with it. Incredibox is a very easy and intuitive app that anyone can use, regardless of their age or musical skills. Here are some steps and tips to help you play Incredibox on your Android device.</p>
16
- <h3>The basic gameplay</h3>
17
- <p>The basic gameplay of Incredibox is very simple and straightforward. You just need to follow these steps:</p>
18
- <ol>
19
- <li>Open the app and choose a musical style from the 9 available ones. You can swipe left or right to see all the options.</li>
20
- <li>Tap on the play button to start the music and see the 8 characters on the screen. Each character represents a type of sound, such as beats, effects, melodies, voices, choruses, percussions, basses, and bonuses.</li>
21
- <li>Drag and drop icons from the bottom of the screen onto the characters to make them sing and create your own mix. You can use up to 20 icons at a time, and you can change them anytime you want.</li>
22
- <li>Find the right sound combos to unlock animated choruses that will enhance your mix. You can see the progress of the combos on the top of the screen.</li>
23
- <li>Tap on the record button to record your mix and save it on your device. You can also share it with other users by tapping on the share button.</li>
24
- </ol>
25
- <h3>The advanced features</h3>
26
- <p>Incredibox also has some advanced features that you can use to make your mix more interesting and unique. Here are some of them:</p>
67
- <ul>
68
- <li>You can mute or solo any character by tapping on them. This will allow you to focus on a specific sound or create different variations of your mix.</li>
69
- <li>You can shuffle your mix by tapping on the shuffle button. This will randomly change the icons on the characters and create a new mix.</li>
70
- <li>You can customize your characters by tapping on the customize button. This will let you change their outfits and accessories according to your taste and style.</li>
71
- <li>You can access the bonus mode by tapping on the bonus button. This will let you play with special sound effects that are not available in the normal mode.</li>
72
- </ul>
73
- <h3>The tips and tricks</h3>
74
- <p>If you want to improve your skills and enjoy Incredibox more, here are some tips and tricks that you can follow:</p>
75
- <ul>
76
- <li>Experiment with different musical styles and sound combinations. You may discover new sounds and genres that you like or that inspire you.</li>
77
- <li>Listen to other users' mixes and vote for them. You may learn from their techniques and ideas, and also get feedback from them for your own mixes.</li>
78
- <li>Try to complete all the combos and unlock all the choruses. This will challenge your creativity and musical sense, and also reward you with amazing animations and sounds.</li>
79
- <li>Have fun and express yourself. Incredibox is a music app that lets you create your own music with no rules or limitations. You can be as creative and original as you want, and share your emotions and feelings through music.</li>
80
- </ul> <h2>Why should you play Incredibox on Android devices?</h2>
81
- <p>By now, you may have a clear idea of what Incredibox is and how to play it on your Android device. But you may still wonder why you should play it and what benefits it can bring you. Here are some reasons why you should play Incredibox on your Android device.</p>
82
- <h3>The benefits of playing Incredibox</h3>
83
- <p>Incredibox is not just a music app, but also a game, a tool, and an educational resource that can offer you many benefits, such as:</p>
84
- <ul>
85
- <li>It can stimulate your creativity and musical talent by letting you create your own songs with simple drag-and-drop actions.</li>
86
- <li>It can improve your musical knowledge and skills by introducing you to different musical genres and cultures, and teaching you about rhythm and melody.</li>
87
- <li>It can enhance your mood and well-being by providing you with a fun and entertaining experience that can make you laugh, smile, and relax.</li>
88
- <li>It can boost your confidence and self-expression by allowing you to share your creations with the world and get feedback from other users.</li>
89
- <li>It can foster your social interaction and communication by enabling you to connect with other users who share your passion for music and Incredibox.</li>
90
- </ul>
91
- <h3>The reviews and ratings of Incredibox</h3>
92
- <p>If you are still not convinced by the benefits of playing Incredibox, you may want to check out the reviews and ratings of Incredibox from other users who have tried it. Incredibox has received overwhelmingly positive feedback from its users, who have praised its originality, simplicity, quality, and fun factor. Here are some examples of what users have said about Incredibox:</p>
93
- <blockquote>
94
- <p>"This is the best app ever! I love making music with this app. It's so easy and fun. The graphics are amazing and the sounds are awesome. I recommend this app to everyone who loves music."</p>
95
- </blockquote>
96
- <blockquote>
97
- <p>"Incredibox is a masterpiece. It's not just a game, it's an art. It's a way to express yourself through music. It's a way to learn about different musical styles and cultures. It's a way to have fun and relax."</p>
98
- </blockquote>
99
- <blockquote>
100
- <p>"I'm addicted to this app. I can't stop playing it. It's so cool and creative. I love how I can mix different sounds and create my own songs. I also love how I can share my mixes with other people and listen to theirs."</p>
101
- </blockquote>
102
- <p>Incredibox has also received high ratings from its users, who have given it an average of 4.8 out of 5 stars on the Google Play Store. This shows that Incredibox is a highly rated and popular app that many users enjoy and appreciate.</p>
103
- <h3>The alternatives to Incredibox</h3>
104
- <p>If you are looking for some alternatives to Incredibox, you may want to try some other music apps that are similar or related to Incredibox. Here are some of them:</p>
105
- <ul>
106
- <li><strong>Groovepad</strong>: A music app that lets you create your own beats and music tracks with various sound effects, loops, samples, and genres.</li>
107
- <li><strong>Beat Snap</strong>: A music app that lets you make your own music with drums, synths, vocals, FX, and more.</li>
108
- <li><strong>Music Maker Jam</strong>: A music app that lets you create your own songs with thousands of studio-quality loops, beats, and samples.</li>
109
- <li><strong>DJ Loop Pads</strong>: A music app that lets you remix your favorite songs or create your own music with various pads, loops, FX, and more.</li>
110
- <li><strong>BandLab</strong>: A music app that lets you record, edit, mix, and share your own music with millions of creators and fans.</li>
111
- </ul>
112
- <h2>Conclusion</h2>
113
- <p>In conclusion, Incredibox is a music app that lets you create your own music with the help of a merry crew of beatboxers. You can choose from 9 musical styles among 8 impressive atmospheres and start to lay down, record, and share your mix. You can also find the right sound combos to unlock animated choruses that will enhance your tune.</p>
114
- <p>In this article, we have told you everything you need to know about Incredibox, how to download it for free on your Android device, how to play it, and why you should play it. We hope that this article has been helpful and informative for you, and that you have enjoyed reading it as much as we have enjoyed writing it.</p> <p>Now that you have reached the end of the article, you may have some questions or doubts about Incredibox or anything related to it. To help you with that, we have prepared a list of 5 frequently asked questions (FAQs) that may answer some of your queries. Here they are:</p>
115
- <h2>FAQs</h2>
116
- <ol>
117
- <li><strong>Is Incredibox safe for kids?</strong></li>
118
- <p>Yes, Incredibox is safe for kids, as it does not contain any inappropriate or harmful content. It is also suitable for kids, as it is easy to use, fun to play, and educational. In fact, Incredibox is often used by teachers and parents as a way to introduce kids to music and creativity.</p>
119
- <li><strong>Is Incredibox available for other devices?</strong></li>
120
- <p>Yes, Incredibox is available for other devices besides Android devices. You can also play Incredibox on iOS devices, such as iPhones and iPads, by downloading it from the App Store. You can also play Incredibox on your web browser, such as Chrome, Firefox, or Safari, by visiting the official website of Incredibox.</p>
121
- <li><strong>How can I contact the developer of Incredibox?</strong></li>
122
- <p>If you want to contact the developer of Incredibox, you can do so by visiting their website and filling out the contact form. You can also follow them on their social media accounts, such as Facebook, Twitter, Instagram, and YouTube. You can also send them an email at [email protected].</p>
123
- <li><strong>How can I support the developer of Incredibox?</strong></li>
124
- <p>If you want to support the developer of Incredibox, you can do so by buying the app from the app store, leaving a positive review and rating for the app, sharing your mixes with other users and friends, and following their social media accounts. You can also donate to them via PayPal or Patreon.</p>
125
- <li><strong>How can I learn more about Incredibox?</strong></li>
126
- <p>If you want to learn more about Incredibox, you can do so by visiting their website and reading their blog posts and news articles. You can also watch their videos and tutorials on their YouTube channel. You can also join their community and forum on their website and interact with other users and fans.</p>
127
- </ol>
128
- <p>We hope that these FAQs have been useful and informative for you. If you have any other questions or comments about Incredibox or this article, please feel free to leave them below. We would love to hear from you and help you out.</p>
129
- <p>Thank you for reading this article and playing Incredibox. We hope that you have enjoyed it as much as we have enjoyed writing it and creating music with it. Have a great day and keep on making music!</p>
spaces/1phancelerku/anime-remove-background/Experience Nintendo GameCube and Wii Games on Xbox with Dolphin Emulator A Step-by-Step Guide.md DELETED
@@ -1,119 +0,0 @@
1
- <br />
2
- <h1>How to Download Dolphin Emulator on Xbox</h1>
3
- <p>If you are a fan of Nintendo GameCube and Wii games, you might be wondering if there is a way to play them on your Xbox console. The answer is yes, thanks to a powerful emulator called Dolphin. Dolphin is a software that can run GameCube and Wii games on various platforms, including Windows, Linux, Android, and even Xbox. In this article, we will show you how to download and install Dolphin Emulator on your Xbox Series X/S or Xbox One, and how to configure it for the best performance and compatibility. We will also share some tips and tricks to enhance your gaming experience with Dolphin Emulator on Xbox.</p>
4
- <h2>Requirements for Dolphin Emulator on Xbox</h2>
5
- <p>Before you start, you will need the following things:</p>
7
- <ul>
8
- <li>An Xbox Series X/S or Xbox One console with enough storage space.</li>
9
- <li>A USB drive formatted as NTFS with at least 4 GB of free space.</li>
10
- <li>A PC with internet access and a web browser.</li>
11
- <li>A copy of the latest version of Dolphin Emulator for UWP (Universal Windows Platform) from <a href="">here</a>.</li>
12
- <li>Some GameCube or Wii game ROMs or ISOs that you legally own. You can rip them from your original discs using a compatible disc drive and software like CleanRip or RawDump.</li>
13
- </ul>
14
- <h2>How to Enable Developer Mode on Xbox</h2>
15
- <p>The first step is to enable developer mode on your Xbox console. This will allow you to install apps that are not available on the Microsoft Store, such as Dolphin Emulator. Developer mode is free for anyone to use, but it has some limitations and risks. For example, you will not be able to play online multiplayer games or use some features like achievements or game DVR while in developer mode. You will also need to switch back to retail mode if you want to use those features again. To enable developer mode, follow these steps:</p>
16
- <ol>
17
- <li>Go to <a href="https://developer.microsoft.com/en-us/xboxactivate">https://developer.microsoft.com/en-us/xboxactivate</a> on your PC and sign in with your Microsoft account.</li>
18
- <li>Select Activate Console and follow the instructions to register your console as a developer device.</li>
19
- <li>On your console, go to Settings > System > Console info and select Reset console.</li>
20
- <li>Select Reset and keep my games & apps.</li>
21
- <li>Wait for the reset process to complete and sign in with your Microsoft account again.</li>
22
- <li>Go to Settings > System > Developer settings and select Enable developer mode.</li>
23
- <li>Wait for the console to reboot into developer mode.</li>
24
- </ol>
25
- <h2>How to Download and Install Dolphin Emulator on Xbox</h2>
26
- <p>Now that you have enabled developer mode, you can download and install Dolphin Emulator on your console. To do this, follow these steps:</p>
27
- <ol>
28
- <li>Copy the Dolphin Emulator app file (DolphinUWP_<version>.appx) from your PC to your USB drive.</li>
29
- <li>Plug the USB drive into your console.</li>
30
- <li>On your console, go to Settings > System > Developer settings and select Remote Access Settings.</li>
31
- <li>Enable Remote Access and set a username and password for authentication.</li>
32
- <li>Note down the IP address of your console shown under Remote Access Settings.</li>
33
- <li>On your PC, open a web browser and enter the IP address of your console followed by :11443 in the address bar. For example, https://192.168.1.100:11443.</li>
34
- <li>You will see a security warning about an untrusted certificate. Click on Advanced and proceed to the website.</li>
35
- <li>Enter the username and password that you set for your console and click on Log in.</li>
36
- <li>Click on Add and browse to the location of the Dolphin Emulator app file on your USB drive.</li>
37
- <li>Select the app file and click on Next.</li>
38
- <li>Wait for the app to be uploaded and installed on your console.</li>
39
- <li>Once the installation is complete, you will see Dolphin Emulator listed under Installed Apps on the web page.</li>
40
- <li>On your console, go to My games & apps > See all > Apps and launch Dolphin Emulator.</li>
41
- </ol>
42
- <h2>How to Configure Dolphin Emulator Settings on Xbox</h2>
43
- <p>Before you start playing games, you will need to configure some settings in Dolphin Emulator to optimize its performance and compatibility. To do this, follow these steps:</p>
44
- <ol>
45
- <li>On the main menu of Dolphin Emulator, select Config.</li>
46
- <li>Under the General tab, you can adjust some basic settings such as language, theme, and interface options.</li>
47
- <li>Under the Graphics tab, you can change some settings related to video output, such as resolution, aspect ratio, vsync, and enhancements. For the best performance, we recommend using the native resolution of your console (1080p for Xbox One and 4K for Xbox Series X/S) and disabling any unnecessary enhancements such as anti-aliasing or anisotropic filtering.</li>
48
- <li>Under the Audio tab, you can change some settings related to sound output, such as volume, backend, and latency. For the best compatibility, we recommend using the XAudio2 backend and lowering the latency to 20 ms or less.</li>
49
- <li>Under the GameCube tab, you can change some settings related to GameCube emulation, such as system language, memory card size, and controller type. For the best compatibility, we recommend using a standard controller for port 1 and leaving the other ports empty.</li>
50
- <li>Under the Wii tab, you can change some settings related to Wii emulation, such as system language, aspect ratio, sensor bar position, and speaker volume. For the best compatibility, we recommend using a horizontal aspect ratio and placing the sensor bar above or below your TV screen.</li>
51
- <li>Under the Paths tab, you can add or remove folders where Dolphin Emulator will look for game files. By default, it will scan the internal storage of your console and any USB drives connected to it. You can also add network paths if you have game files stored on a PC or a NAS device.</li>
52
- <li>Under the Advanced tab, you can change some settings related to advanced features such as CPU overclocking, dual core mode, cheats, and debug options. For the best stability, we recommend leaving these settings at their default values unless you know what you are doing.</li>
53
- </ol>
54
- <h2>How to Play GameCube and Wii Games on Xbox with Dolphin Emulator</h2>
55
- <p>Now that you have configured Dolphin Emulator settings on your console, you are ready to play some games. To do this, follow these steps:</p>
56
- <ol>
57
- <li>Make sure that you have some GameCube or Wii game files (ROMs or ISOs) stored on your console's internal storage or a USB drive. You can also use network paths if you have game files stored on a PC or a NAS device.</li>
58
- <li>On the main menu of Dolphin Emulator, select Browse.</li>
59
- <li>Navigate to the folder where your game files are located and select one of them.</li>
60
- <li>The game will start loading and you will see some information about it on the screen. You can press the Menu button on your controller to access some options such as save states, screenshots, cheats, and more.</li>
61
- <li>You can use your controller to play the game as if it was a native Xbox game. You can also use a keyboard and mouse if you prefer. You can customize the controller mappings in Dolphin Emulator by going to Controllers > Configure Controller in the main menu.</li>
62
- <li>To exit the game, press the View button on your controller and select Exit from the menu that appears.</li>
63
- </ol>
64
- <h2>Tips and Tricks for Dolphin Emulator on Xbox</h2>
65
- <p>To make the most out of Dolphin Emulator on Xbox, here are some tips and tricks that you might find useful:</p>
66
- <ul>
67
- <li>If you encounter any issues with a game, such as graphical glitches, audio problems, or crashes, you can try changing some settings in Dolphin Emulator to fix them. You can also check <a href="">the official compatibility list</a> for more information about how well each game works with Dolphin Emulator.</li>
68
- <li>If you want to play games that require motion controls or pointer input, such as Wii Sports or Super Mario Galaxy, you can use a smartphone as a virtual controller. To do this, you will need to download the <a href="">Dolphin Controller app</a> on your smartphone and connect it to the same Wi-Fi network as your console. Then, you can scan the QR code shown on Dolphin Emulator on your console and pair your smartphone as a controller. You can also customize the layout and sensitivity of the virtual buttons and sensors on your smartphone.</li>
69
- <li>If you want to play games that support online multiplayer, such as Mario Kart Wii or Super Smash Bros. Brawl, you can use a service called <a href="">Dolphin Netplay</a>. This will allow you to play with other Dolphin Emulator users over the internet. To do this, you will need to create or join a Netplay session on your PC and then connect your console to it using the IP address and port number shown on Dolphin Emulator on your PC. You will also need to have the same game file and settings as the other players in the session.</li>
70
- <li>If you want to enhance the graphics and sound of your games, you can use some features such as shaders, texture packs, HD audio packs, and more. These are optional add-ons that can improve the quality and fidelity of your games. You can download them from various sources online and place them in the appropriate folders on your console or USB drive. You can also enable or disable them in Dolphin Emulator by going to Graphics > Enhancements or Audio > DSP in the main menu.</li>
71
- </ul>
72
- <h2>Conclusion</h2>
73
- <p>Dolphin Emulator is a great way to enjoy GameCube and Wii games on your Xbox console. It is easy to download and install, and it offers a lot of customization and optimization options. You can play hundreds of games with high compatibility and performance, and even use some features that are not available on the original consoles, such as online multiplayer, motion controls, and graphical enhancements. Dolphin Emulator is a must-have for any Nintendo fan who owns an Xbox console.</p>
74
- <h2>FAQs</h2>
75
- <p>Here are some frequently asked questions about Dolphin Emulator on Xbox:</p>
111
- <ol>
112
- <li><b>Is Dolphin Emulator legal?</b><br>Dolphin Emulator itself is legal, as it is a software that emulates the hardware and software of GameCube and Wii consoles. However, downloading or distributing game files (ROMs or ISOs) that you do not own is illegal, as it violates the copyright laws of the game developers and publishers. You should only use game files that you have legally obtained from your own discs or digital purchases.</li>
113
- <li><b>Is Dolphin Emulator safe?</b><br>Dolphin Emulator is safe, as long as you download it from its official website or GitHub repository. It does not contain any viruses, malware, or spyware that could harm your console or PC. However, you should be careful when downloading any add-ons or game files from other sources online, as they might contain harmful or malicious content.</li>
114
- <li><b>Does Dolphin Emulator work on Xbox One S or Xbox One X?</b><br>Yes, Dolphin Emulator works on any Xbox One model, including Xbox One S and Xbox One X. However, you might notice some differences in performance and compatibility depending on the model of your console. For example, Xbox One X has more power and memory than Xbox One S, which means it can run some games faster and smoother than Xbox One S.</li>
115
- <li><b>Can I use an external hard drive instead of a USB drive for Dolphin Emulator?</b><br>Yes, you can use an external hard drive instead of a USB drive for Dolphin Emulator, as long as it is formatted as NTFS and has enough space for your game files. However, you might experience some issues with loading times or compatibility depending on the speed and quality of your external hard drive.</li>
116
- <li><b>Can I use a wireless controller instead of a wired controller for Dolphin Emulator?</b><br>Yes, you can use a wireless controller instead of a wired controller for Dolphin Emulator, as long as it is compatible with your console and has enough battery life. However, you might experience some issues with input lag or responsiveness depending on the quality and signal strength of your wireless controller.</li>
117
- </ol>
spaces/1toTree/lora_test/ppdiffusers/experimental/rl/value_guided_sampling.py DELETED
@@ -1,146 +0,0 @@
1
- # Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
2
- # Copyright 2022 The HuggingFace Team. All rights reserved.
3
- #
4
- # Licensed under the Apache License, Version 2.0 (the "License");
5
- # you may not use this file except in compliance with the License.
6
- # You may obtain a copy of the License at
7
- #
8
- # http://www.apache.org/licenses/LICENSE-2.0
9
- #
10
- # Unless required by applicable law or agreed to in writing, software
11
- # distributed under the License is distributed on an "AS IS" BASIS,
12
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- # See the License for the specific language governing permissions and
14
- # limitations under the License.
15
-
16
- import numpy as np
17
- import paddle
18
-
19
- from ...models.unet_1d import UNet1DModel
20
- from ...pipeline_utils import DiffusionPipeline
21
- from ...utils.dummy_paddle_objects import DDPMScheduler
22
-
23
-
24
- class ValueGuidedRLPipeline(DiffusionPipeline):
25
- r"""
26
- This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
27
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
28
- Pipeline for sampling actions from a diffusion model trained to predict sequences of states.
29
- Original implementation inspired by this repository: https://github.com/jannerm/diffuser.
30
-
31
- Parameters:
32
- value_function ([`UNet1DModel`]): A specialized UNet for fine-tuning trajectories based on reward.
33
- unet ([`UNet1DModel`]): U-Net architecture to denoise the encoded trajectories.
34
- scheduler ([`SchedulerMixin`]):
35
- A scheduler to be used in combination with `unet` to denoise the encoded trajectories. Default for this
36
- application is [`DDPMScheduler`].
37
- env: An environment following the OpenAI gym API to act in. For now only Hopper has pretrained models.
38
- """
39
-
40
- def __init__(
41
- self,
42
- value_function: UNet1DModel,
43
- unet: UNet1DModel,
44
- scheduler: DDPMScheduler,
45
- env,
46
- ):
47
- super().__init__()
48
- self.value_function = value_function
49
- self.unet = unet
50
- self.scheduler = scheduler
51
- self.env = env
52
- self.data = env.get_dataset()
53
- self.means = dict()
54
- for key in self.data.keys():
55
- try:
56
- self.means[key] = self.data[key].mean()
57
- except Exception:
58
- pass
59
- self.stds = dict()
60
- for key in self.data.keys():
61
- try:
62
- self.stds[key] = self.data[key].std()
63
- except Exception:
64
- pass
65
- self.state_dim = env.observation_space.shape[0]
66
- self.action_dim = env.action_space.shape[0]
67
-
68
- def normalize(self, x_in, key):
69
- return (x_in - self.means[key]) / self.stds[key]
70
-
71
- def de_normalize(self, x_in, key):
72
- return x_in * self.stds[key] + self.means[key]
73
-
74
- def to_paddle(self, x_in):
75
- if type(x_in) is dict:
76
- return {k: self.to_paddle(v) for k, v in x_in.items()}
77
- elif paddle.is_tensor(x_in):
78
- return x_in
79
- return paddle.to_tensor(x_in)
80
-
81
- def reset_x0(self, x_in, cond, act_dim):
82
- for key, val in cond.items():
83
- x_in[:, key, act_dim:] = val.clone()
84
- return x_in
85
-
86
- def run_diffusion(self, x, conditions, n_guide_steps, scale):
87
- batch_size = x.shape[0]
88
- y = None
89
- for i in self.progress_bar(self.scheduler.timesteps):
90
- # create batch of timesteps to pass into model
91
- timesteps = paddle.full((batch_size,), i, dtype="int64")
92
- for _ in range(n_guide_steps):
93
- with paddle.set_grad_enabled(True):
94
- x.stop_gradient = False
95
- # permute to match dimension for pre-trained models
96
- y = self.value_function(x.transpose([0, 2, 1]), timesteps).sample
97
- grad = paddle.autograd.grad([y.sum()], [x])[0]
98
-
99
- posterior_variance = self.scheduler._get_variance(i)
100
- model_std = paddle.exp(0.5 * posterior_variance)
101
- grad = model_std * grad
102
-
103
- grad[timesteps < 2] = 0
104
- x = x.detach()
105
- x = x + scale * grad
106
- x = self.reset_x0(x, conditions, self.action_dim)
107
- prev_x = self.unet(x.transpose([0, 2, 1]), timesteps).sample.transpose([0, 2, 1])
108
- # TODO: verify deprecation of this kwarg
109
- x = self.scheduler.step(prev_x, i, x, predict_epsilon=False)["prev_sample"]
110
-
111
- # apply conditions to the trajectory (set the initial state)
112
- x = self.reset_x0(x, conditions, self.action_dim)
113
- x = self.to_paddle(x)
114
- return x, y
115
-
116
- def __call__(self, obs, batch_size=64, planning_horizon=32, n_guide_steps=2, scale=0.1):
117
- # normalize the observations and create batch dimension
118
- obs = self.normalize(obs, "observations")
119
- obs = obs[None].repeat(batch_size, axis=0)
120
-
121
- conditions = {0: self.to_paddle(obs)}
122
- shape = [batch_size, planning_horizon, self.state_dim + self.action_dim]
123
-
124
- # generate initial noise and apply our conditions (to make the trajectories start at current state)
125
- x1 = paddle.randn(shape)
126
- x = self.reset_x0(x1, conditions, self.action_dim)
127
- x = self.to_paddle(x)
128
-
129
- # run the diffusion process
130
- x, y = self.run_diffusion(x, conditions, n_guide_steps, scale)
131
-
132
- # sort output trajectories by value
133
- sorted_idx = paddle.argsort(y, 0, descending=True).squeeze()
134
- sorted_values = x[sorted_idx]
135
- actions = sorted_values[:, :, : self.action_dim]
136
- actions = actions.detach().numpy()
137
- denorm_actions = self.de_normalize(actions, key="actions")
138
-
139
- # select the action with the highest value
140
- if y is not None:
141
- selected_index = 0
142
- else:
143
- # if we didn't run value guiding, select a random action
144
- selected_index = np.random.randint(0, batch_size)
145
- denorm_actions = denorm_actions[selected_index, 0]
146
- return denorm_actions
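For orientation, here is a minimal usage sketch of the pipeline defined above. It mirrors the upstream diffusers example this file was ported from; the checkpoint id, the d4rl hopper environment, and the gym API shape are assumptions carried over from that example and may not load as-is in this Paddle port.

```python
# Sketch only: checkpoint id and env follow the upstream diffusers example
# and are assumptions for this Paddle port.
import gym  # d4rl must also be installed to register "hopper-medium-v2"

env = gym.make("hopper-medium-v2")
pipeline = ValueGuidedRLPipeline.from_pretrained(
    "bglick13/hopper-medium-v2-value-function-hor32", env=env
)

obs = env.reset()
total_reward = 0.0
for _ in range(100):
    # Plan a batch of trajectories, guide them toward high value,
    # and act on the first action of the best one.
    action = pipeline(obs, planning_horizon=32)
    obs, reward, done, _ = env.step(action)
    total_reward += reward
    if done:
        break
```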
spaces/A00001/bingothoo/src/pages/api/healthz.ts DELETED
@@ -1,7 +0,0 @@
1
- 'use server'
2
-
3
- import { NextApiRequest, NextApiResponse } from 'next'
4
-
5
- export default async function handler(req: NextApiRequest, res: NextApiResponse) {
6
- res.status(200).end('ok')
7
- }
spaces/AI-ZTH-03-23/README/README.md DELETED
@@ -1,20 +0,0 @@
1
- ---
2
- title: README
3
- emoji: 🐠
4
- colorFrom: gray
5
- colorTo: purple
6
- sdk: static
7
- pinned: false
8
- ---
9
-
10
- # 03-23-2023 Code Examples:
11
- 1. Classroom: https://huggingface.co/AI-ZTH-03-23
12
- 2. Dynamic Architecture Modeling: https://huggingface.co/spaces/awacke1/Streamlit.GraphViz.Dynamic.Architecture.Diagram
13
- 3. Aframe VR IOT Motion Sensor WASD: https://huggingface.co/spaces/awacke1/HTML5-Aframe-3dMap-Flight
14
- 4. MediaPipe: https://huggingface.co/spaces/awacke1/RealTime-MediaPipe-AI-From-Video-On-Any-Device
15
- 5. Wikipedia Fact Check Chat: https://huggingface.co/spaces/awacke1/StreamlitWikipediaChat
16
- 6. Dashboard - Tweet, Wiki, Memory: https://huggingface.co/spaces/awacke1/AI.Dashboard.Wiki.Chat.Cognitive.HTML5
17
- 7. Dashboard - Chat, Download, Image Search, OCR, StoryGen, Q, Mermaid HTML5: https://huggingface.co/spaces/awacke1/AI.Dashboard.Gradio.Streamlit.HTML5
18
- 8. Datasets - Biomed NER: https://huggingface.co/spaces/DataScienceEngineering/7-NER-Biomed-ClinicalTerms
19
- 9. MN Hospitals Comparative Maps: https://huggingface.co/spaces/awacke1/MN.Map.Hospitals.Top.Five
20
- 10. Find Mental Health Providers, Maps, Location: https://huggingface.co/spaces/awacke1/Gradio-Maps-Latitude-Longitude
spaces/AI-Zero-to-Hero/01-H5-Play-Canvas-Sim-Physics/style.css DELETED
@@ -1,28 +0,0 @@
1
- body {
2
- padding: 2rem;
3
- font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif;
4
- }
5
-
6
- h1 {
7
- font-size: 16px;
8
- margin-top: 0;
9
- }
10
-
11
- p {
12
- color: rgb(107, 114, 128);
13
- font-size: 15px;
14
- margin-bottom: 10px;
15
- margin-top: 5px;
16
- }
17
-
18
- .card {
19
- max-width: 620px;
20
- margin: 0 auto;
21
- padding: 16px;
22
- border: 1px solid lightgray;
23
- border-radius: 16px;
24
- }
25
-
26
- .card p:last-child {
27
- margin-bottom: 0;
28
- }
spaces/AIFILMS/generate_human_motion/pyrender/tests/unit/__init__.py DELETED
File without changes
spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/normalizing_flow/res_flow.py DELETED
@@ -1,61 +0,0 @@
1
- import torch
2
- from torch import nn
3
- from text_to_speech.modules.commons.conv import ConditionalConvBlocks
4
- from text_to_speech.modules.commons.wavenet import WN
5
-
6
-
7
- class FlipLayer(nn.Module):
8
- def forward(self, x, nonpadding, cond=None, reverse=False):
9
- x = torch.flip(x, [1])
10
- return x
11
-
12
-
13
- class CouplingLayer(nn.Module):
14
- def __init__(self, c_in, hidden_size, kernel_size, n_layers, p_dropout=0, c_in_g=0, nn_type='wn'):
15
- super().__init__()
16
- self.channels = c_in
17
- self.hidden_size = hidden_size
18
- self.kernel_size = kernel_size
19
- self.n_layers = n_layers
20
- self.c_half = c_in // 2
21
-
22
- self.pre = nn.Conv1d(self.c_half, hidden_size, 1)
23
- if nn_type == 'wn':
24
- self.enc = WN(hidden_size, kernel_size, 1, n_layers, p_dropout=p_dropout,
25
- c_cond=c_in_g)
26
- elif nn_type == 'conv':
27
- self.enc = ConditionalConvBlocks(
28
- hidden_size, c_in_g, hidden_size, None, kernel_size,
29
- layers_in_block=1, is_BTC=False, num_layers=n_layers)
30
- self.post = nn.Conv1d(hidden_size, self.c_half, 1)
31
-
32
- def forward(self, x, nonpadding, cond=None, reverse=False):
33
- x0, x1 = x[:, :self.c_half], x[:, self.c_half:]
34
- x_ = self.pre(x0) * nonpadding
35
- x_ = self.enc(x_, nonpadding=nonpadding, cond=cond)
36
- m = self.post(x_)
37
- x1 = m + x1 if not reverse else x1 - m
38
- x = torch.cat([x0, x1], 1)
39
- return x * nonpadding
40
-
41
-
42
- class ResFlow(nn.Module):
43
- def __init__(self,
44
- c_in,
45
- hidden_size,
46
- kernel_size,
47
- n_flow_layers,
48
- n_flow_steps=4,
49
- c_cond=0,
50
- nn_type='wn'):
51
- super().__init__()
52
- self.flows = nn.ModuleList()
53
- for i in range(n_flow_steps):
54
- self.flows.append(
55
- CouplingLayer(c_in, hidden_size, kernel_size, n_flow_layers, c_in_g=c_cond, nn_type=nn_type))
56
- self.flows.append(FlipLayer())
57
-
58
- def forward(self, x, nonpadding, cond=None, reverse=False):
59
- for flow in (self.flows if not reverse else reversed(self.flows)):
60
- x = flow(x, nonpadding, cond=cond, reverse=reverse)
61
- return x
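The CouplingLayer above uses the standard additive-coupling construction: only half of the channels are shifted, by an amount computed from the untouched half, so the transform is exactly invertible. Below is a self-contained sketch of that idea in plain torch, independent of this repo's WN/conv blocks (the Conv1d stand-in for the shift network is an illustration, not the module used above).

```python
import torch

def coupling(x, m_net, reverse=False):
    # Split channels; shift the second half by a function of the first.
    c_half = x.shape[1] // 2
    x0, x1 = x[:, :c_half], x[:, c_half:]
    m = m_net(x0)
    x1 = x1 - m if reverse else x1 + m
    return torch.cat([x0, x1], dim=1)

m_net = torch.nn.Conv1d(4, 4, kernel_size=3, padding=1)  # toy shift network
x = torch.randn(2, 8, 16)                                # (batch, channels, time)
y = coupling(x, m_net)                                   # forward pass
x_rec = coupling(y, m_net, reverse=True)                 # exact inverse
assert torch.allclose(x, x_rec)
```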
spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/normalizing_flow/utils.py DELETED
@@ -1,29 +0,0 @@
1
- import torch
2
-
3
-
4
- def squeeze(x, x_mask=None, n_sqz=2):
5
- b, c, t = x.size()
6
-
7
- t = (t // n_sqz) * n_sqz
8
- x = x[:, :, :t]
9
- x_sqz = x.view(b, c, t // n_sqz, n_sqz)
10
- x_sqz = x_sqz.permute(0, 3, 1, 2).contiguous().view(b, c * n_sqz, t // n_sqz)
11
-
12
- if x_mask is not None:
13
- x_mask = x_mask[:, :, n_sqz - 1::n_sqz]
14
- else:
15
- x_mask = torch.ones(b, 1, t // n_sqz).to(device=x.device, dtype=x.dtype)
16
- return x_sqz * x_mask, x_mask
17
-
18
-
19
- def unsqueeze(x, x_mask=None, n_sqz=2):
20
- b, c, t = x.size()
21
-
22
- x_unsqz = x.view(b, n_sqz, c // n_sqz, t)
23
- x_unsqz = x_unsqz.permute(0, 2, 3, 1).contiguous().view(b, c // n_sqz, t * n_sqz)
24
-
25
- if x_mask is not None:
26
- x_mask = x_mask.unsqueeze(-1).repeat(1, 1, 1, n_sqz).view(b, 1, t * n_sqz)
27
- else:
28
- x_mask = torch.ones(b, 1, t * n_sqz).to(device=x.device, dtype=x.dtype)
29
- return x_unsqz * x_mask, x_mask
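`squeeze` trades time resolution for channels, mapping (b, c, t) to (b, c·n_sqz, t//n_sqz), and `unsqueeze` inverts it exactly when t is divisible by n_sqz. A quick round-trip check, assuming the two functions above are importable:

```python
import torch

x = torch.arange(2 * 4 * 6, dtype=torch.float32).reshape(2, 4, 6)  # (b, c, t)

y, y_mask = squeeze(x, n_sqz=2)    # y: (2, 8, 3); a mask of ones is created
z, z_mask = unsqueeze(y, n_sqz=2)  # z: (2, 4, 6)

assert y.shape == (2, 8, 3)
assert torch.equal(x, z)  # exact round trip, no information lost
```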
spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/ps_flow.py DELETED
@@ -1,135 +0,0 @@
1
- import torch
2
- from text_to_speech.modules.tts.portaspeech.portaspeech_flow import PortaSpeechFlow
3
- from tasks.tts.fs import FastSpeechTask
4
- from tasks.tts.ps import PortaSpeechTask
5
- from text_to_speech.utils.audio.pitch.utils import denorm_f0
6
- from text_to_speech.utils.commons.hparams import hparams
7
-
8
-
9
- class PortaSpeechFlowTask(PortaSpeechTask):
10
- def __init__(self):
11
- super().__init__()
12
- self.training_post_glow = False
13
-
14
- def build_tts_model(self):
15
- ph_dict_size = len(self.token_encoder)
16
- word_dict_size = len(self.word_encoder)
17
- self.model = PortaSpeechFlow(ph_dict_size, word_dict_size, hparams)
18
-
19
- def _training_step(self, sample, batch_idx, opt_idx):
20
- self.training_post_glow = self.global_step >= hparams['post_glow_training_start'] \
21
- and hparams['use_post_flow']
22
- if hparams['two_stage'] and \
23
- ((opt_idx == 0 and self.training_post_glow) or (opt_idx == 1 and not self.training_post_glow)):
24
- return None
25
- loss_output, _ = self.run_model(sample)
26
- total_loss = sum([v for v in loss_output.values() if isinstance(v, torch.Tensor) and v.requires_grad])
27
- loss_output['batch_size'] = sample['txt_tokens'].size()[0]
28
- if 'postflow' in loss_output and loss_output['postflow'] is None:
29
- return None
30
- return total_loss, loss_output
31
-
32
- def run_model(self, sample, infer=False, *args, **kwargs):
33
- if not infer:
34
- training_post_glow = self.training_post_glow
35
- spk_embed = sample.get('spk_embed')
36
- spk_id = sample.get('spk_ids')
37
- output = self.model(sample['txt_tokens'],
38
- sample['word_tokens'],
39
- ph2word=sample['ph2word'],
40
- mel2word=sample['mel2word'],
41
- mel2ph=sample['mel2ph'],
42
- word_len=sample['word_lengths'].max(),
43
- tgt_mels=sample['mels'],
44
- pitch=sample.get('pitch'),
45
- spk_embed=spk_embed,
46
- spk_id=spk_id,
47
- infer=False,
48
- forward_post_glow=training_post_glow,
49
- two_stage=hparams['two_stage'],
50
- global_step=self.global_step,
51
- bert_feats=sample.get('bert_feats'))
52
- losses = {}
53
- self.add_mel_loss(output['mel_out'], sample['mels'], losses)
54
- if (training_post_glow or not hparams['two_stage']) and hparams['use_post_flow']:
55
- losses['postflow'] = output['postflow']
56
- losses['l1'] = losses['l1'].detach()
57
- losses['ssim'] = losses['ssim'].detach()
58
- if not training_post_glow or not hparams['two_stage'] or not self.training:
59
- losses['kl'] = output['kl']
60
- if self.global_step < hparams['kl_start_steps']:
61
- losses['kl'] = losses['kl'].detach()
62
- else:
63
- losses['kl'] = torch.clamp(losses['kl'], min=hparams['kl_min'])
64
- losses['kl'] = losses['kl'] * hparams['lambda_kl']
65
- if hparams['dur_level'] == 'word':
66
- self.add_dur_loss(
67
- output['dur'], sample['mel2word'], sample['word_lengths'], sample['txt_tokens'], losses)
68
- self.get_attn_stats(output['attn'], sample, losses)
69
- else:
70
- super().add_dur_loss(output['dur'], sample['mel2ph'], sample['txt_tokens'], losses)
71
- return losses, output
72
- else:
73
- use_gt_dur = kwargs.get('infer_use_gt_dur', hparams['use_gt_dur'])
74
- forward_post_glow = self.global_step >= hparams['post_glow_training_start'] + 1000 \
75
- and hparams['use_post_flow']
76
- spk_embed = sample.get('spk_embed')
77
- spk_id = sample.get('spk_ids')
78
- output = self.model(
79
- sample['txt_tokens'],
80
- sample['word_tokens'],
81
- ph2word=sample['ph2word'],
82
- word_len=sample['word_lengths'].max(),
83
- pitch=sample.get('pitch'),
84
- mel2ph=sample['mel2ph'] if use_gt_dur else None,
85
- mel2word=sample['mel2word'] if hparams['profile_infer'] or hparams['use_gt_dur'] else None,
86
- infer=True,
87
- forward_post_glow=forward_post_glow,
88
- spk_embed=spk_embed,
89
- spk_id=spk_id,
90
- two_stage=hparams['two_stage'],
91
- bert_feats=sample.get('bert_feats'))
92
- return output
93
-
94
- def validation_step(self, sample, batch_idx):
95
- self.training_post_glow = self.global_step >= hparams['post_glow_training_start'] \
96
- and hparams['use_post_flow']
97
- return super().validation_step(sample, batch_idx)
98
-
99
- def save_valid_result(self, sample, batch_idx, model_out):
100
- super(PortaSpeechFlowTask, self).save_valid_result(sample, batch_idx, model_out)
101
- sr = hparams['audio_sample_rate']
102
- f0_gt = None
103
- if sample.get('f0') is not None:
104
- f0_gt = denorm_f0(sample['f0'][0].cpu(), sample['uv'][0].cpu())
105
- if self.global_step > 0:
106
- # save FVAE result
107
- if hparams['use_post_flow']:
108
- wav_pred = self.vocoder.spec2wav(model_out['mel_out_fvae'][0].cpu(), f0=f0_gt)
109
- self.logger.add_audio(f'wav_fvae_{batch_idx}', wav_pred, self.global_step, sr)
110
- self.plot_mel(batch_idx, sample['mels'], model_out['mel_out_fvae'][0],
111
- f'mel_fvae_{batch_idx}', f0s=f0_gt)
112
-
113
- def build_optimizer(self, model):
114
- if hparams['two_stage'] and hparams['use_post_flow']:
115
- self.optimizer = torch.optim.AdamW(
116
- [p for name, p in self.model.named_parameters() if 'post_flow' not in name],
117
- lr=hparams['lr'],
118
- betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
119
- weight_decay=hparams['weight_decay'])
120
- self.post_flow_optimizer = torch.optim.AdamW(
121
- self.model.post_flow.parameters(),
122
- lr=hparams['post_flow_lr'],
123
- betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
124
- weight_decay=hparams['weight_decay'])
125
- return [self.optimizer, self.post_flow_optimizer]
126
- else:
127
- self.optimizer = torch.optim.AdamW(
128
- self.model.parameters(),
129
- lr=hparams['lr'],
130
- betas=(hparams['optimizer_adam_beta1'], hparams['optimizer_adam_beta2']),
131
- weight_decay=hparams['weight_decay'])
132
- return [self.optimizer]
133
-
134
- def build_scheduler(self, optimizer):
135
- return FastSpeechTask.build_scheduler(self, optimizer[0])
spaces/AIWaves/SOP_Generation-single/Agent/__init__.py DELETED
@@ -1 +0,0 @@
1
- from .Agent import Agent
 
 
spaces/AIZeroToHero/04-Image2OCR/README.md DELETED
@@ -1,13 +0,0 @@
1
- ---
2
- title: 04 Image2OCR
3
- emoji: 🚀
4
- colorFrom: yellow
5
- colorTo: yellow
6
- sdk: gradio
7
- sdk_version: 3.1.5
8
- app_file: app.py
9
- pinned: false
10
- license: mit
11
- ---
12
-
13
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/ASJMO/freegpt/g4f/Provider/Providers/Dfehub.py DELETED
@@ -1,49 +0,0 @@
1
- import os
2
- import requests
3
- from ...typing import sha256, Dict, get_type_hints
4
-
5
- url = "https://chat.dfehub.com"
6
- model = ['gpt-3.5-turbo', 'gpt-3.5-turbo-16k', 'gpt-4']
7
- supports_stream = True
8
- needs_auth = False
9
-
10
-
11
- def _create_completion(model: str, messages: list, stream: bool, **kwargs):
12
- headers = {
13
- 'Authority': 'chat.dfehub.com',
14
- 'Content-Type': 'application/json',
15
- 'Method': 'POST',
16
- 'Path': '/api/openai/v1/chat/completions',
17
- 'Scheme': 'https',
18
- 'Accept': 'text/event-stream',
19
- 'Accept-Language': 'pt-BR,pt;q=0.9,en-US;q=0.8,en;q=0.7,zh-CN;q=0.6,zh;q=0.5',
20
- 'Content-Type': 'application/json',
21
- 'Origin': 'https://chat.dfehub.com',
22
- 'Referer': 'https://chat.dfehub.com/',
23
- 'Sec-Ch-Ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"',
24
- 'Sec-Ch-Ua-Mobile': '?0',
25
- 'Sec-Ch-Ua-Platform': '"Windows"',
26
- 'Sec-Fetch-Dest': 'empty',
27
- 'Sec-Fetch-Mode': 'cors',
28
- 'Sec-Fetch-Site': 'same-origin',
29
- 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36',
30
- 'X-Requested-With': 'XMLHttpRequest',
31
- }
32
-
33
- data = {
34
- 'model': model,
35
- 'temperature': 0.7,
36
- 'max_tokens': '8000',
37
- 'presence_penalty': 0,
38
- 'messages': messages,
39
- }
40
-
41
- response = requests.post(url + '/api/openai/v1/chat/completions',
42
- headers=headers, json=data, stream=stream)
43
-
44
- yield response.json()['choices'][0]['message']['content']
45
-
46
-
47
- params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \
48
- '(%s)' % ', '.join(
49
- [f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]])
spaces/Aashir01/Live_Transcription/app.py DELETED
@@ -1,236 +0,0 @@
1
- import base64
2
- import math
3
- import os
4
- import time
5
- from functools import partial
6
- from multiprocessing import Pool
7
-
8
- import gradio as gr
9
- import numpy as np
10
- import pytube
11
- import requests
12
- from processing_whisper import WhisperPrePostProcessor
13
- from transformers.models.whisper.tokenization_whisper import TO_LANGUAGE_CODE
14
- from transformers.pipelines.audio_utils import ffmpeg_read
15
-
16
-
17
- title = "Whisper JAX: The Fastest Whisper API ⚡️"
18
-
19
- description = """Whisper JAX is an optimised implementation of the [Whisper model](https://huggingface.co/openai/whisper-large-v2) by OpenAI. It runs on JAX with a TPU v4-8 in the backend. Compared to PyTorch on an A100 GPU, it is over [**70x faster**](https://github.com/sanchit-gandhi/whisper-jax#benchmarks), making it the fastest Whisper API available.
20
-
21
- Note that at peak times, you may find yourself in the queue for this demo. When you submit a request, your queue position will be shown in the top right-hand side of the demo pane. Once you reach the front of the queue, your audio file will be sent to the TPU and then transcribed, with the progress displayed through a progress bar.
22
-
23
- To skip the queue, you may wish to create your own inference endpoint, details for which can be found in the [Whisper JAX repository](https://github.com/sanchit-gandhi/whisper-jax#creating-an-endpoint).
24
- """
25
-
26
- article = "Whisper large-v2 model by OpenAI. Backend running JAX on a TPU v4-8 through the generous support of the [TRC](https://sites.research.google/trc/about/) programme. Whisper JAX [code](https://github.com/sanchit-gandhi/whisper-jax) and Gradio demo by 🤗 Hugging Face."
27
-
28
- API_SEND_URL = os.getenv("API_SEND_URL")
29
- API_FORWARD_URL = os.getenv("API_FORWARD_URL")
30
-
31
- language_names = sorted(TO_LANGUAGE_CODE.keys())
32
- CHUNK_LENGTH_S = 30
33
- BATCH_SIZE = 16
34
- NUM_PROC = 16
35
- FILE_LIMIT_MB = 1000
36
-
37
-
38
- def query(url, payload):
39
- response = requests.post(url, json=payload)
40
- return response.json(), response.status_code
41
-
42
-
43
- def inference(batch_id, idx, task=None, return_timestamps=False):
44
- payload = {"batch_id": batch_id, "idx": idx, "task": task, "return_timestamps": return_timestamps}
45
-
46
- data, status_code = query(API_FORWARD_URL, payload)
47
-
48
- if status_code == 200:
49
- tokens = {"tokens": np.asarray(data["tokens"])}
50
- return tokens
51
- else:
52
- gr.Error(data["detail"])
53
-
54
-
55
- def send_chunks(batch, batch_id):
56
- feature_shape = batch["input_features"].shape
57
- batch["input_features"] = base64.b64encode(batch["input_features"].tobytes()).decode()
58
- query(API_SEND_URL, {"batch": batch, "feature_shape": feature_shape, "batch_id": batch_id})
59
-
60
-
61
- def forward(batch_id, idx, task=None, return_timestamps=False):
62
- outputs = inference(batch_id, idx, task, return_timestamps)
63
- return outputs
64
-
65
-
66
- # Copied from https://github.com/openai/whisper/blob/c09a7ae299c4c34c5839a76380ae407e7d785914/whisper/utils.py#L50
67
- def format_timestamp(seconds: float, always_include_hours: bool = False, decimal_marker: str = "."):
68
- if seconds is not None:
69
- milliseconds = round(seconds * 1000.0)
70
-
71
- hours = milliseconds // 3_600_000
72
- milliseconds -= hours * 3_600_000
73
-
74
- minutes = milliseconds // 60_000
75
- milliseconds -= minutes * 60_000
76
-
77
- seconds = milliseconds // 1_000
78
- milliseconds -= seconds * 1_000
79
-
80
- hours_marker = f"{hours:02d}:" if always_include_hours or hours > 0 else ""
81
- return f"{hours_marker}{minutes:02d}:{seconds:02d}{decimal_marker}{milliseconds:03d}"
82
- else:
83
- # we have a malformed timestamp so just return it as is
84
- return seconds
85
-
86
-
87
- if __name__ == "__main__":
88
- processor = WhisperPrePostProcessor.from_pretrained("openai/whisper-large-v2")
89
- stride_length_s = CHUNK_LENGTH_S / 6
90
- chunk_len = round(CHUNK_LENGTH_S * processor.feature_extractor.sampling_rate)
91
- stride_left = stride_right = round(stride_length_s * processor.feature_extractor.sampling_rate)
92
- step = chunk_len - stride_left - stride_right
93
- pool = Pool(NUM_PROC)
94
-
95
- def tqdm_generate(inputs: dict, task: str, return_timestamps: bool, progress: gr.Progress):
96
- inputs_len = inputs["array"].shape[0]
97
- all_chunk_start_batch_id = np.arange(0, inputs_len, step)
98
- num_samples = len(all_chunk_start_batch_id)
99
- num_batches = math.ceil(num_samples / BATCH_SIZE)
100
- dummy_batches = list(range(num_batches))
101
-
102
- dataloader = processor.preprocess_batch(inputs, chunk_length_s=CHUNK_LENGTH_S, batch_size=BATCH_SIZE)
103
- progress(0, desc="Sending audio to TPU...")
104
- batch_id = np.random.randint(
105
- 1000000
106
- ) # TODO(SG): swap to an iterator - currently taking our 1 in a million chances
107
- pool.map(partial(send_chunks, batch_id=batch_id), dataloader)
108
-
109
- model_outputs = []
110
- start_time = time.time()
111
- # iterate over our chunked audio samples
112
- for idx in progress.tqdm(dummy_batches, desc="Transcribing..."):
113
- model_outputs.append(forward(batch_id, idx, task=task, return_timestamps=return_timestamps))
114
- runtime = time.time() - start_time
115
-
116
- post_processed = processor.postprocess(model_outputs, return_timestamps=return_timestamps)
117
- text = post_processed["text"]
118
- timestamps = post_processed.get("chunks")
119
- if timestamps is not None:
120
- timestamps = [
121
- f"[{format_timestamp(chunk['timestamp'][0])} -> {format_timestamp(chunk['timestamp'][1])}] {chunk['text']}"
122
- for chunk in timestamps
123
- ]
124
- text = "\n".join(str(feature) for feature in timestamps)
125
- return text, runtime
126
-
127
- def transcribe_chunked_audio(inputs, task, return_timestamps, progress=gr.Progress()):
128
- progress(0, desc="Loading audio file...")
129
- if inputs is None:
130
- raise gr.Error("No audio file submitted! Please upload an audio file before submitting your request.")
131
- file_size_mb = os.stat(inputs).st_size / (1024 * 1024)
132
- if file_size_mb > FILE_LIMIT_MB:
133
- raise gr.Error(
134
- f"File size exceeds file size limit. Got file of size {file_size_mb:.2f}MB for a limit of {FILE_LIMIT_MB}MB."
135
- )
136
-
137
- with open(inputs, "rb") as f:
138
- inputs = f.read()
139
-
140
- inputs = ffmpeg_read(inputs, processor.feature_extractor.sampling_rate)
141
- inputs = {"array": inputs, "sampling_rate": processor.feature_extractor.sampling_rate}
142
- text, runtime = tqdm_generate(inputs, task=task, return_timestamps=return_timestamps, progress=progress)
143
- return text, runtime
144
-
145
- def _return_yt_html_embed(yt_url):
146
- video_id = yt_url.split("?v=")[-1]
147
- HTML_str = (
148
- f'<center> <iframe width="500" height="320" src="https://www.youtube.com/embed/{video_id}"> </iframe>'
149
- " </center>"
150
- )
151
- return HTML_str
152
-
153
- def transcribe_youtube(yt_url, task, return_timestamps, progress=gr.Progress(), max_filesize=75.0):
154
- progress(0, desc="Loading audio file...")
155
- html_embed_str = _return_yt_html_embed(yt_url)
156
- try:
157
- yt = pytube.YouTube(yt_url)
158
- stream = yt.streams.filter(only_audio=True)[0]
159
- except:
160
- raise gr.Error("An error occurred while loading the YouTube video. Please try again.")
161
-
162
- if stream.filesize_mb > max_filesize:
163
- raise gr.Error(f"Maximum YouTube file size is {max_filesize}MB, got {stream.filesize_mb:.2f}MB.")
164
-
165
- stream.download(filename="audio.mp3")
166
-
167
- with open("audio.mp3", "rb") as f:
168
- inputs = f.read()
169
-
170
- inputs = ffmpeg_read(inputs, processor.feature_extractor.sampling_rate)
171
- inputs = {"array": inputs, "sampling_rate": processor.feature_extractor.sampling_rate}
172
- text, runtime = tqdm_generate(inputs, task=task, return_timestamps=return_timestamps, progress=progress)
173
- return html_embed_str, text, runtime
174
-
175
- microphone_chunked = gr.Interface(
176
- fn=transcribe_chunked_audio,
177
- inputs=[
178
- gr.inputs.Audio(source="microphone", optional=True, type="filepath"),
179
- gr.inputs.Radio(["transcribe", "translate"], label="Task", default="transcribe"),
180
- gr.inputs.Checkbox(default=False, label="Return timestamps"),
181
- ],
182
- outputs=[
183
- gr.outputs.Textbox(label="Transcription").style(show_copy_button=True),
184
- gr.outputs.Textbox(label="Transcription Time (s)"),
185
- ],
186
- allow_flagging="never",
187
- title=title,
188
- description=description,
189
- article=article,
190
- )
191
-
192
- audio_chunked = gr.Interface(
193
- fn=transcribe_chunked_audio,
194
- inputs=[
195
- gr.inputs.Audio(source="upload", optional=True, label="Audio file", type="filepath"),
196
- gr.inputs.Radio(["transcribe", "translate"], label="Task", default="transcribe"),
197
- gr.inputs.Checkbox(default=False, label="Return timestamps"),
198
- ],
199
- outputs=[
200
- gr.outputs.Textbox(label="Transcription").style(show_copy_button=True),
201
- gr.outputs.Textbox(label="Transcription Time (s)"),
202
- ],
203
- allow_flagging="never",
204
- title=title,
205
- description=description,
206
- article=article,
207
- )
208
-
209
- youtube = gr.Interface(
210
- fn=transcribe_youtube,
211
- inputs=[
212
- gr.inputs.Textbox(lines=1, placeholder="Paste the URL to a YouTube video here", label="YouTube URL"),
213
- gr.inputs.Radio(["transcribe", "translate"], label="Task", default="transcribe"),
214
- gr.inputs.Checkbox(default=False, label="Return timestamps"),
215
- ],
216
- outputs=[
217
- gr.outputs.HTML(label="Video"),
218
- gr.outputs.Textbox(label="Transcription").style(show_copy_button=True),
219
- gr.outputs.Textbox(label="Transcription Time (s)"),
220
- ],
221
- allow_flagging="never",
222
- title=title,
223
- examples=[["https://www.youtube.com/watch?v=m8u-18Q0s7I", "transcribe", False]],
224
- cache_examples=False,
225
- description=description,
226
- article=article,
227
- )
228
-
229
- demo = gr.Blocks()
230
-
231
- with demo:
232
- gr.TabbedInterface([microphone_chunked, audio_chunked, youtube], ["Microphone", "Audio File", "YouTube"])
233
-
234
- demo.queue(max_size=10)
235
- demo.launch(show_api=False, max_threads=10)
236
-
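For reference, a minimal sketch of the timestamp formatting the deleted app builds from post-processed chunks. The `format_timestamp` helper below is a hypothetical stand-in, not the app's own helper, and the chunk values are illustrative:

```python
# Minimal sketch, assuming chunk dicts shaped like
# {"timestamp": (start_s, end_s), "text": "..."}; format_timestamp is a
# hypothetical stand-in for the helper the app defines elsewhere.
def format_timestamp(seconds: float) -> str:
    milliseconds = round(seconds * 1000)
    hours, milliseconds = divmod(milliseconds, 3_600_000)
    minutes, milliseconds = divmod(milliseconds, 60_000)
    secs, milliseconds = divmod(milliseconds, 1_000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d}.{milliseconds:03d}"

chunks = [
    {"timestamp": (0.0, 2.5), "text": "Hello"},
    {"timestamp": (2.5, 5.0), "text": "world."},
]
lines = [
    f"[{format_timestamp(c['timestamp'][0])} -> {format_timestamp(c['timestamp'][1])}] {c['text']}"
    for c in chunks
]
print("\n".join(lines))
# [00:00:00.000 -> 00:00:02.500] Hello
# [00:00:02.500 -> 00:00:05.000] world.
```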
 
spaces/Abhaykoul/Wizard-AI/README.md DELETED
@@ -1,12 +0,0 @@
1
- ---
2
- title: Wizard AI
3
- emoji: 🏃
4
- colorFrom: gray
5
- colorTo: indigo
6
- sdk: streamlit
7
- sdk_version: 1.28.1
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
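The card points at an `app.py` Streamlit entry point that is not shown in this diff; a minimal placeholder consistent with the card might look like the sketch below (purely illustrative, not the Space's actual code):

```python
# Hypothetical minimal app.py matching the card above (sdk: streamlit).
import streamlit as st

st.title("Wizard AI")
prompt = st.text_input("Ask the wizard something")
if prompt:
    st.write(f"You asked: {prompt}")
```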
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/layermanager.d.ts DELETED
@@ -1,2 +0,0 @@
1
- import LayerManager from './gameobjects/layer/layermanager/LayerManager';
2
- export default LayerManager;
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/alphamaskimage/Factory.js DELETED
@@ -1,13 +0,0 @@
1
- import AlphaMaskImage from './AlphaMaskImage.js';
2
- import ObjectFactory from '../ObjectFactory.js';
3
- import SetValue from '../../../plugins/utils/object/SetValue.js';
4
-
5
- ObjectFactory.register('alphaMaskImage', function (x, y, key, frame, config) {
6
- var gameObject = new AlphaMaskImage(this.scene, x, y, key, frame, config);
7
- this.scene.add.existing(gameObject);
8
- return gameObject;
9
- });
10
-
11
- SetValue(window, 'RexPlugins.UI.AlphaMaskImage', AlphaMaskImage);
12
-
13
- export default AlphaMaskImage;
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/chart/UpdateChart.js DELETED
@@ -1,8 +0,0 @@
1
- var UpdateChart = function () {
2
- if (this.chart === undefined) {
3
- return this;
4
- }
5
- this.chart.update();
6
- return this;
7
- }
8
- export default UpdateChart;
 
spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/maker/builders/CreateImage.js DELETED
@@ -1,9 +0,0 @@
1
- import CreateAnyImage from './utils/CreateAnyImage.js';
2
-
3
- const PhaserImage = Phaser.GameObjects.Image;
4
-
5
- var CreateImage = function (scene, data, view, styles, customBuilders) {
6
- return CreateAnyImage(scene, data, view, styles, customBuilders, PhaserImage);
7
- }
8
-
9
- export default CreateImage;
 
spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/mapper/options/__init__.py DELETED
File without changes
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/unconditional_image_generation.md DELETED
@@ -1,69 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Unconditional image generation
14
-
15
- [[open-in-colab]]
16
-
17
- Unconditional image generation is a relatively straightforward task: the model generates images resembling the data it was trained on, without any additional context like text or another image.
18
-
19
- The [`DiffusionPipeline`] is the easiest way to use a pre-trained diffusion system for inference.
20
-
21
- Start by creating an instance of [`DiffusionPipeline`] and specify which pipeline checkpoint you would like to download.
22
- You can use any of the 🧨 Diffusers [checkpoints](https://huggingface.co/models?library=diffusers&sort=downloads) from the Hub (the checkpoint you'll use generates images of butterflies).
23
-
24
- <Tip>
25
-
26
- 💡 Want to train your own unconditional image generation model? Take a look at the training [guide](training/unconditional_training) to learn how to generate your own images.
27
-
28
- </Tip>
29
-
30
- In this guide, you'll use [`DiffusionPipeline`] for unconditional image generation with [DDPM](https://arxiv.org/abs/2006.11239):
31
-
32
- ```python
33
- >>> from diffusers import DiffusionPipeline
34
-
35
- >>> generator = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128")
36
- ```
37
-
38
- The [`DiffusionPipeline`] downloads and caches all modeling, tokenization, and scheduling components.
39
- Because running the full denoising loop is compute-intensive, we strongly recommend running it on a GPU.
40
- You can move the generator object to a GPU, just like you would in PyTorch:
41
-
42
- ```python
43
- >>> generator.to("cuda")
44
- ```
45
-
46
- Now you can use the `generator` to generate an image:
47
-
48
- ```python
49
- >>> image = generator().images[0]
50
- ```
51
-
52
- The output is by default wrapped into a [`PIL.Image`](https://pillow.readthedocs.io/en/stable/reference/Image.html?highlight=image#the-image-class) object.
53
-
54
- You can save the image by calling:
55
-
56
- ```python
57
- >>> image.save("generated_image.png")
58
- ```
59
-
60
- Try out the Spaces below, and feel free to play around with the inference steps parameter to see how it affects the image quality!
61
-
62
- <iframe
63
- src="https://stevhliu-ddpm-butterflies-128.hf.space"
64
- frameborder="0"
65
- width="850"
66
- height="500"
67
- ></iframe>
68
-
69
-
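Building on the deleted guide above, a short sketch of seeded, reproducible generation with the same checkpoint (assumes a CUDA device is available; use a CPU generator and skip `.to("cuda")` otherwise):

```python
# Sketch: seeded, reproducible generation with the butterflies checkpoint.
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("anton-l/ddpm-butterflies-128")
pipeline.to("cuda")

seed = torch.Generator(device="cuda").manual_seed(0)
image = pipeline(generator=seed, num_inference_steps=100).images[0]
image.save("butterflies_seed0.png")
```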
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/examples/community/stable_diffusion_tensorrt_inpaint.py DELETED
@@ -1,1088 +0,0 @@
1
- #
2
- # Copyright 2023 The HuggingFace Inc. team.
3
- # SPDX-FileCopyrightText: Copyright (c) 1993-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
4
- # SPDX-License-Identifier: Apache-2.0
5
- #
6
- # Licensed under the Apache License, Version 2.0 (the "License");
7
- # you may not use this file except in compliance with the License.
8
- # You may obtain a copy of the License at
9
- #
10
- # http://www.apache.org/licenses/LICENSE-2.0
11
- #
12
- # Unless required by applicable law or agreed to in writing, software
13
- # distributed under the License is distributed on an "AS IS" BASIS,
14
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15
- # See the License for the specific language governing permissions and
16
- # limitations under the License.
17
-
18
- import gc
19
- import os
20
- from collections import OrderedDict
21
- from copy import copy
22
- from typing import List, Optional, Union
23
-
24
- import numpy as np
25
- import onnx
26
- import onnx_graphsurgeon as gs
27
- import PIL
28
- import tensorrt as trt
29
- import torch
30
- from huggingface_hub import snapshot_download
31
- from onnx import shape_inference
32
- from polygraphy import cuda
33
- from polygraphy.backend.common import bytes_from_path
34
- from polygraphy.backend.onnx.loader import fold_constants
35
- from polygraphy.backend.trt import (
36
- CreateConfig,
37
- Profile,
38
- engine_from_bytes,
39
- engine_from_network,
40
- network_from_onnx_path,
41
- save_engine,
42
- )
43
- from polygraphy.backend.trt import util as trt_util
44
- from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
45
-
46
- from diffusers.models import AutoencoderKL, UNet2DConditionModel
47
- from diffusers.pipelines.stable_diffusion import (
48
- StableDiffusionInpaintPipeline,
49
- StableDiffusionPipelineOutput,
50
- StableDiffusionSafetyChecker,
51
- )
52
- from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_inpaint import prepare_mask_and_masked_image
53
- from diffusers.schedulers import DDIMScheduler
54
- from diffusers.utils import DIFFUSERS_CACHE, logging
55
-
56
-
57
- """
58
- Installation instructions
59
- python3 -m pip install --upgrade transformers diffusers>=0.16.0
60
- python3 -m pip install --upgrade tensorrt>=8.6.1
61
- python3 -m pip install --upgrade polygraphy>=0.47.0 onnx-graphsurgeon --extra-index-url https://pypi.ngc.nvidia.com
62
- python3 -m pip install onnxruntime
63
- """
64
-
65
- TRT_LOGGER = trt.Logger(trt.Logger.ERROR)
66
- logger = logging.get_logger(__name__) # pylint: disable=invalid-name
67
-
68
- # Map of numpy dtype -> torch dtype
69
- numpy_to_torch_dtype_dict = {
70
- np.uint8: torch.uint8,
71
- np.int8: torch.int8,
72
- np.int16: torch.int16,
73
- np.int32: torch.int32,
74
- np.int64: torch.int64,
75
- np.float16: torch.float16,
76
- np.float32: torch.float32,
77
- np.float64: torch.float64,
78
- np.complex64: torch.complex64,
79
- np.complex128: torch.complex128,
80
- }
81
- if np.version.full_version >= "1.24.0":
82
- numpy_to_torch_dtype_dict[np.bool_] = torch.bool
83
- else:
84
- numpy_to_torch_dtype_dict[np.bool] = torch.bool
85
-
86
- # Map of torch dtype -> numpy dtype
87
- torch_to_numpy_dtype_dict = {value: key for (key, value) in numpy_to_torch_dtype_dict.items()}
88
-
89
-
90
- def device_view(t):
91
- return cuda.DeviceView(ptr=t.data_ptr(), shape=t.shape, dtype=torch_to_numpy_dtype_dict[t.dtype])
92
-
93
-
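A minimal sketch of what `device_view` produces: a zero-copy polygraphy `DeviceView` over a CUDA tensor's memory (assumes a CUDA device and polygraphy installed):

```python
# Sketch only: wraps existing device memory without copying, mirroring the
# device_view helper above.
import numpy as np
import torch
from polygraphy import cuda

latent = torch.randn(1, 4, 64, 64, dtype=torch.float16, device="cuda")
view = cuda.DeviceView(ptr=latent.data_ptr(), shape=latent.shape, dtype=np.float16)
print(view.shape, view.dtype)  # shape and dtype of the wrapped buffer
```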
94
- def preprocess_image(image):
95
- """
96
- image: PIL.Image.Image; H/W are floored to multiples of 32 and the result is returned as a torch.Tensor scaled to [-1, 1]
97
- """
98
- w, h = image.size
99
- w, h = (x - x % 32 for x in (w, h)) # resize to integer multiple of 32
100
- image = image.resize((w, h))
101
- image = np.array(image).astype(np.float32) / 255.0
102
- image = image[None].transpose(0, 3, 1, 2)
103
- image = torch.from_numpy(image).contiguous()
104
- return 2.0 * image - 1.0
105
-
106
-
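A quick, self-contained check of the normalization `preprocess_image` performs, mapping pixel values from [0, 255] to [-1, 1] after flooring H/W to multiples of 32 (sizes here are illustrative):

```python
# Sketch of the preprocessing math above with a tiny PIL image.
import numpy as np
from PIL import Image

img = Image.new("RGB", (70, 40), color=(255, 0, 0))
w, h = (x - x % 32 for x in img.size)            # 64, 32
arr = np.array(img.resize((w, h))).astype(np.float32) / 255.0
out = 2.0 * arr - 1.0
print(out.min(), out.max())                      # -1.0 1.0
```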
107
- class Engine:
108
- def __init__(self, engine_path):
109
- self.engine_path = engine_path
110
- self.engine = None
111
- self.context = None
112
- self.buffers = OrderedDict()
113
- self.tensors = OrderedDict()
114
-
115
- def __del__(self):
116
- [buf.free() for buf in self.buffers.values() if isinstance(buf, cuda.DeviceArray)]
117
- del self.engine
118
- del self.context
119
- del self.buffers
120
- del self.tensors
121
-
122
- def build(
123
- self,
124
- onnx_path,
125
- fp16,
126
- input_profile=None,
127
- enable_preview=False,
128
- enable_all_tactics=False,
129
- timing_cache=None,
130
- workspace_size=0,
131
- ):
132
- logger.warning(f"Building TensorRT engine for {onnx_path}: {self.engine_path}")
133
- p = Profile()
134
- if input_profile:
135
- for name, dims in input_profile.items():
136
- assert len(dims) == 3
137
- p.add(name, min=dims[0], opt=dims[1], max=dims[2])
138
-
139
- config_kwargs = {}
140
-
141
- config_kwargs["preview_features"] = [trt.PreviewFeature.DISABLE_EXTERNAL_TACTIC_SOURCES_FOR_CORE_0805]
142
- if enable_preview:
143
- # Faster dynamic shapes made optional since it increases engine build time.
144
- config_kwargs["preview_features"].append(trt.PreviewFeature.FASTER_DYNAMIC_SHAPES_0805)
145
- if workspace_size > 0:
146
- config_kwargs["memory_pool_limits"] = {trt.MemoryPoolType.WORKSPACE: workspace_size}
147
- if not enable_all_tactics:
148
- config_kwargs["tactic_sources"] = []
149
-
150
- engine = engine_from_network(
151
- network_from_onnx_path(onnx_path, flags=[trt.OnnxParserFlag.NATIVE_INSTANCENORM]),
152
- config=CreateConfig(fp16=fp16, profiles=[p], load_timing_cache=timing_cache, **config_kwargs),
153
- save_timing_cache=timing_cache,
154
- )
155
- save_engine(engine, path=self.engine_path)
156
-
157
- def load(self):
158
- logger.warning(f"Loading TensorRT engine: {self.engine_path}")
159
- self.engine = engine_from_bytes(bytes_from_path(self.engine_path))
160
-
161
- def activate(self):
162
- self.context = self.engine.create_execution_context()
163
-
164
- def allocate_buffers(self, shape_dict=None, device="cuda"):
165
- for idx in range(trt_util.get_bindings_per_profile(self.engine)):
166
- binding = self.engine[idx]
167
- if shape_dict and binding in shape_dict:
168
- shape = shape_dict[binding]
169
- else:
170
- shape = self.engine.get_binding_shape(binding)
171
- dtype = trt.nptype(self.engine.get_binding_dtype(binding))
172
- if self.engine.binding_is_input(binding):
173
- self.context.set_binding_shape(idx, shape)
174
- tensor = torch.empty(tuple(shape), dtype=numpy_to_torch_dtype_dict[dtype]).to(device=device)
175
- self.tensors[binding] = tensor
176
- self.buffers[binding] = cuda.DeviceView(ptr=tensor.data_ptr(), shape=shape, dtype=dtype)
177
-
178
- def infer(self, feed_dict, stream):
179
- start_binding, end_binding = trt_util.get_active_profile_bindings(self.context)
180
- # shallow copy of ordered dict
181
- device_buffers = copy(self.buffers)
182
- for name, buf in feed_dict.items():
183
- assert isinstance(buf, cuda.DeviceView)
184
- device_buffers[name] = buf
185
- bindings = [0] * start_binding + [buf.ptr for buf in device_buffers.values()]
186
- noerror = self.context.execute_async_v2(bindings=bindings, stream_handle=stream.ptr)
187
- if not noerror:
188
- raise ValueError("ERROR: inference failed.")
189
-
190
- return self.tensors
191
-
192
-
193
- class Optimizer:
194
- def __init__(self, onnx_graph):
195
- self.graph = gs.import_onnx(onnx_graph)
196
-
197
- def cleanup(self, return_onnx=False):
198
- self.graph.cleanup().toposort()
199
- if return_onnx:
200
- return gs.export_onnx(self.graph)
201
-
202
- def select_outputs(self, keep, names=None):
203
- self.graph.outputs = [self.graph.outputs[o] for o in keep]
204
- if names:
205
- for i, name in enumerate(names):
206
- self.graph.outputs[i].name = name
207
-
208
- def fold_constants(self, return_onnx=False):
209
- onnx_graph = fold_constants(gs.export_onnx(self.graph), allow_onnxruntime_shape_inference=True)
210
- self.graph = gs.import_onnx(onnx_graph)
211
- if return_onnx:
212
- return onnx_graph
213
-
214
- def infer_shapes(self, return_onnx=False):
215
- onnx_graph = gs.export_onnx(self.graph)
216
- if onnx_graph.ByteSize() > 2147483648:
217
- raise TypeError("ERROR: model size exceeds supported 2GB limit")
218
- else:
219
- onnx_graph = shape_inference.infer_shapes(onnx_graph)
220
-
221
- self.graph = gs.import_onnx(onnx_graph)
222
- if return_onnx:
223
- return onnx_graph
224
-
225
-
226
- class BaseModel:
227
- def __init__(self, model, fp16=False, device="cuda", max_batch_size=16, embedding_dim=768, text_maxlen=77):
228
- self.model = model
229
- self.name = "SD Model"
230
- self.fp16 = fp16
231
- self.device = device
232
-
233
- self.min_batch = 1
234
- self.max_batch = max_batch_size
235
- self.min_image_shape = 256 # min image resolution: 256x256
236
- self.max_image_shape = 1024 # max image resolution: 1024x1024
237
- self.min_latent_shape = self.min_image_shape // 8
238
- self.max_latent_shape = self.max_image_shape // 8
239
-
240
- self.embedding_dim = embedding_dim
241
- self.text_maxlen = text_maxlen
242
-
243
- def get_model(self):
244
- return self.model
245
-
246
- def get_input_names(self):
247
- pass
248
-
249
- def get_output_names(self):
250
- pass
251
-
252
- def get_dynamic_axes(self):
253
- return None
254
-
255
- def get_sample_input(self, batch_size, image_height, image_width):
256
- pass
257
-
258
- def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
259
- return None
260
-
261
- def get_shape_dict(self, batch_size, image_height, image_width):
262
- return None
263
-
264
- def optimize(self, onnx_graph):
265
- opt = Optimizer(onnx_graph)
266
- opt.cleanup()
267
- opt.fold_constants()
268
- opt.infer_shapes()
269
- onnx_opt_graph = opt.cleanup(return_onnx=True)
270
- return onnx_opt_graph
271
-
272
- def check_dims(self, batch_size, image_height, image_width):
273
- assert batch_size >= self.min_batch and batch_size <= self.max_batch
274
- assert image_height % 8 == 0 and image_width % 8 == 0
275
- latent_height = image_height // 8
276
- latent_width = image_width // 8
277
- assert latent_height >= self.min_latent_shape and latent_height <= self.max_latent_shape
278
- assert latent_width >= self.min_latent_shape and latent_width <= self.max_latent_shape
279
- return (latent_height, latent_width)
280
-
281
- def get_minmax_dims(self, batch_size, image_height, image_width, static_batch, static_shape):
282
- min_batch = batch_size if static_batch else self.min_batch
283
- max_batch = batch_size if static_batch else self.max_batch
284
- latent_height = image_height // 8
285
- latent_width = image_width // 8
286
- min_image_height = image_height if static_shape else self.min_image_shape
287
- max_image_height = image_height if static_shape else self.max_image_shape
288
- min_image_width = image_width if static_shape else self.min_image_shape
289
- max_image_width = image_width if static_shape else self.max_image_shape
290
- min_latent_height = latent_height if static_shape else self.min_latent_shape
291
- max_latent_height = latent_height if static_shape else self.max_latent_shape
292
- min_latent_width = latent_width if static_shape else self.min_latent_shape
293
- max_latent_width = latent_width if static_shape else self.max_latent_shape
294
- return (
295
- min_batch,
296
- max_batch,
297
- min_image_height,
298
- max_image_height,
299
- min_image_width,
300
- max_image_width,
301
- min_latent_height,
302
- max_latent_height,
303
- min_latent_width,
304
- max_latent_width,
305
- )
306
-
307
-
308
- def getOnnxPath(model_name, onnx_dir, opt=True):
309
- return os.path.join(onnx_dir, model_name + (".opt" if opt else "") + ".onnx")
310
-
311
-
312
- def getEnginePath(model_name, engine_dir):
313
- return os.path.join(engine_dir, model_name + ".plan")
314
-
315
-
316
- def build_engines(
317
- models: dict,
318
- engine_dir,
319
- onnx_dir,
320
- onnx_opset,
321
- opt_image_height,
322
- opt_image_width,
323
- opt_batch_size=1,
324
- force_engine_rebuild=False,
325
- static_batch=False,
326
- static_shape=True,
327
- enable_preview=False,
328
- enable_all_tactics=False,
329
- timing_cache=None,
330
- max_workspace_size=0,
331
- ):
332
- built_engines = {}
333
- if not os.path.isdir(onnx_dir):
334
- os.makedirs(onnx_dir)
335
- if not os.path.isdir(engine_dir):
336
- os.makedirs(engine_dir)
337
-
338
- # Export models to ONNX
339
- for model_name, model_obj in models.items():
340
- engine_path = getEnginePath(model_name, engine_dir)
341
- if force_engine_rebuild or not os.path.exists(engine_path):
342
- logger.warning("Building Engines...")
343
- logger.warning("Engine build can take a while to complete")
344
- onnx_path = getOnnxPath(model_name, onnx_dir, opt=False)
345
- onnx_opt_path = getOnnxPath(model_name, onnx_dir)
346
- if force_engine_rebuild or not os.path.exists(onnx_opt_path):
347
- if force_engine_rebuild or not os.path.exists(onnx_path):
348
- logger.warning(f"Exporting model: {onnx_path}")
349
- model = model_obj.get_model()
350
- with torch.inference_mode(), torch.autocast("cuda"):
351
- inputs = model_obj.get_sample_input(opt_batch_size, opt_image_height, opt_image_width)
352
- torch.onnx.export(
353
- model,
354
- inputs,
355
- onnx_path,
356
- export_params=True,
357
- opset_version=onnx_opset,
358
- do_constant_folding=True,
359
- input_names=model_obj.get_input_names(),
360
- output_names=model_obj.get_output_names(),
361
- dynamic_axes=model_obj.get_dynamic_axes(),
362
- )
363
- del model
364
- torch.cuda.empty_cache()
365
- gc.collect()
366
- else:
367
- logger.warning(f"Found cached model: {onnx_path}")
368
-
369
- # Optimize onnx
370
- if force_engine_rebuild or not os.path.exists(onnx_opt_path):
371
- logger.warning(f"Generating optimizing model: {onnx_opt_path}")
372
- onnx_opt_graph = model_obj.optimize(onnx.load(onnx_path))
373
- onnx.save(onnx_opt_graph, onnx_opt_path)
374
- else:
375
- logger.warning(f"Found cached optimized model: {onnx_opt_path} ")
376
-
377
- # Build TensorRT engines
378
- for model_name, model_obj in models.items():
379
- engine_path = getEnginePath(model_name, engine_dir)
380
- engine = Engine(engine_path)
381
- onnx_path = getOnnxPath(model_name, onnx_dir, opt=False)
382
- onnx_opt_path = getOnnxPath(model_name, onnx_dir)
383
-
384
- if force_engine_rebuild or not os.path.exists(engine.engine_path):
385
- engine.build(
386
- onnx_opt_path,
387
- fp16=True,
388
- input_profile=model_obj.get_input_profile(
389
- opt_batch_size,
390
- opt_image_height,
391
- opt_image_width,
392
- static_batch=static_batch,
393
- static_shape=static_shape,
394
- ),
395
- enable_preview=enable_preview,
396
- timing_cache=timing_cache,
397
- workspace_size=max_workspace_size,
398
- )
399
- built_engines[model_name] = engine
400
-
401
- # Load and activate TensorRT engines
402
- for model_name, model_obj in models.items():
403
- engine = built_engines[model_name]
404
- engine.load()
405
- engine.activate()
406
-
407
- return built_engines
408
-
409
-
410
- def runEngine(engine, feed_dict, stream):
411
- return engine.infer(feed_dict, stream)
412
-
413
-
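The `input_profile` dicts consumed by `Engine.build` and produced by the `get_input_profile` methods below follow a simple min/opt/max convention; a toy illustration:

```python
# Toy illustration; 77 is the CLIP text_maxlen used throughout this file.
example_profile = {
    "input_ids": [(1, 77), (2, 77), (16, 77)],  # min / opt / max (batch, seq_len)
}
for name, dims in example_profile.items():
    assert len(dims) == 3  # the same invariant Engine.build asserts
    print(name, "min:", dims[0], "opt:", dims[1], "max:", dims[2])
```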
414
- class CLIP(BaseModel):
415
- def __init__(self, model, device, max_batch_size, embedding_dim):
416
- super(CLIP, self).__init__(
417
- model=model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim
418
- )
419
- self.name = "CLIP"
420
-
421
- def get_input_names(self):
422
- return ["input_ids"]
423
-
424
- def get_output_names(self):
425
- return ["text_embeddings", "pooler_output"]
426
-
427
- def get_dynamic_axes(self):
428
- return {"input_ids": {0: "B"}, "text_embeddings": {0: "B"}}
429
-
430
- def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
431
- self.check_dims(batch_size, image_height, image_width)
432
- min_batch, max_batch, _, _, _, _, _, _, _, _ = self.get_minmax_dims(
433
- batch_size, image_height, image_width, static_batch, static_shape
434
- )
435
- return {
436
- "input_ids": [(min_batch, self.text_maxlen), (batch_size, self.text_maxlen), (max_batch, self.text_maxlen)]
437
- }
438
-
439
- def get_shape_dict(self, batch_size, image_height, image_width):
440
- self.check_dims(batch_size, image_height, image_width)
441
- return {
442
- "input_ids": (batch_size, self.text_maxlen),
443
- "text_embeddings": (batch_size, self.text_maxlen, self.embedding_dim),
444
- }
445
-
446
- def get_sample_input(self, batch_size, image_height, image_width):
447
- self.check_dims(batch_size, image_height, image_width)
448
- return torch.zeros(batch_size, self.text_maxlen, dtype=torch.int32, device=self.device)
449
-
450
- def optimize(self, onnx_graph):
451
- opt = Optimizer(onnx_graph)
452
- opt.select_outputs([0]) # delete graph output#1
453
- opt.cleanup()
454
- opt.fold_constants()
455
- opt.infer_shapes()
456
- opt.select_outputs([0], names=["text_embeddings"]) # rename network output
457
- opt_onnx_graph = opt.cleanup(return_onnx=True)
458
- return opt_onnx_graph
459
-
460
-
461
- def make_CLIP(model, device, max_batch_size, embedding_dim, inpaint=False):
462
- return CLIP(model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim)
463
-
464
-
465
- class UNet(BaseModel):
466
- def __init__(
467
- self, model, fp16=False, device="cuda", max_batch_size=16, embedding_dim=768, text_maxlen=77, unet_dim=4
468
- ):
469
- super(UNet, self).__init__(
470
- model=model,
471
- fp16=fp16,
472
- device=device,
473
- max_batch_size=max_batch_size,
474
- embedding_dim=embedding_dim,
475
- text_maxlen=text_maxlen,
476
- )
477
- self.unet_dim = unet_dim
478
- self.name = "UNet"
479
-
480
- def get_input_names(self):
481
- return ["sample", "timestep", "encoder_hidden_states"]
482
-
483
- def get_output_names(self):
484
- return ["latent"]
485
-
486
- def get_dynamic_axes(self):
487
- return {
488
- "sample": {0: "2B", 2: "H", 3: "W"},
489
- "encoder_hidden_states": {0: "2B"},
490
- "latent": {0: "2B", 2: "H", 3: "W"},
491
- }
492
-
493
- def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
494
- latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
495
- (
496
- min_batch,
497
- max_batch,
498
- _,
499
- _,
500
- _,
501
- _,
502
- min_latent_height,
503
- max_latent_height,
504
- min_latent_width,
505
- max_latent_width,
506
- ) = self.get_minmax_dims(batch_size, image_height, image_width, static_batch, static_shape)
507
- return {
508
- "sample": [
509
- (2 * min_batch, self.unet_dim, min_latent_height, min_latent_width),
510
- (2 * batch_size, self.unet_dim, latent_height, latent_width),
511
- (2 * max_batch, self.unet_dim, max_latent_height, max_latent_width),
512
- ],
513
- "encoder_hidden_states": [
514
- (2 * min_batch, self.text_maxlen, self.embedding_dim),
515
- (2 * batch_size, self.text_maxlen, self.embedding_dim),
516
- (2 * max_batch, self.text_maxlen, self.embedding_dim),
517
- ],
518
- }
519
-
520
- def get_shape_dict(self, batch_size, image_height, image_width):
521
- latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
522
- return {
523
- "sample": (2 * batch_size, self.unet_dim, latent_height, latent_width),
524
- "encoder_hidden_states": (2 * batch_size, self.text_maxlen, self.embedding_dim),
525
- "latent": (2 * batch_size, 4, latent_height, latent_width),
526
- }
527
-
528
- def get_sample_input(self, batch_size, image_height, image_width):
529
- latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
530
- dtype = torch.float16 if self.fp16 else torch.float32
531
- return (
532
- torch.randn(
533
- 2 * batch_size, self.unet_dim, latent_height, latent_width, dtype=torch.float32, device=self.device
534
- ),
535
- torch.tensor([1.0], dtype=torch.float32, device=self.device),
536
- torch.randn(2 * batch_size, self.text_maxlen, self.embedding_dim, dtype=dtype, device=self.device),
537
- )
538
-
539
-
540
- def make_UNet(model, device, max_batch_size, embedding_dim, inpaint=False, unet_dim=4):
541
- return UNet(
542
- model,
543
- fp16=True,
544
- device=device,
545
- max_batch_size=max_batch_size,
546
- embedding_dim=embedding_dim,
547
- unet_dim=unet_dim,
548
- )
549
-
550
-
551
- class VAE(BaseModel):
552
- def __init__(self, model, device, max_batch_size, embedding_dim):
553
- super(VAE, self).__init__(
554
- model=model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim
555
- )
556
- self.name = "VAE decoder"
557
-
558
- def get_input_names(self):
559
- return ["latent"]
560
-
561
- def get_output_names(self):
562
- return ["images"]
563
-
564
- def get_dynamic_axes(self):
565
- return {"latent": {0: "B", 2: "H", 3: "W"}, "images": {0: "B", 2: "8H", 3: "8W"}}
566
-
567
- def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
568
- latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
569
- (
570
- min_batch,
571
- max_batch,
572
- _,
573
- _,
574
- _,
575
- _,
576
- min_latent_height,
577
- max_latent_height,
578
- min_latent_width,
579
- max_latent_width,
580
- ) = self.get_minmax_dims(batch_size, image_height, image_width, static_batch, static_shape)
581
- return {
582
- "latent": [
583
- (min_batch, 4, min_latent_height, min_latent_width),
584
- (batch_size, 4, latent_height, latent_width),
585
- (max_batch, 4, max_latent_height, max_latent_width),
586
- ]
587
- }
588
-
589
- def get_shape_dict(self, batch_size, image_height, image_width):
590
- latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
591
- return {
592
- "latent": (batch_size, 4, latent_height, latent_width),
593
- "images": (batch_size, 3, image_height, image_width),
594
- }
595
-
596
- def get_sample_input(self, batch_size, image_height, image_width):
597
- latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
598
- return torch.randn(batch_size, 4, latent_height, latent_width, dtype=torch.float32, device=self.device)
599
-
600
-
601
- def make_VAE(model, device, max_batch_size, embedding_dim, inpaint=False):
602
- return VAE(model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim)
603
-
604
-
605
- class TorchVAEEncoder(torch.nn.Module):
606
- def __init__(self, model):
607
- super().__init__()
608
- self.vae_encoder = model
609
-
610
- def forward(self, x):
611
- return self.vae_encoder.encode(x).latent_dist.sample()
612
-
613
-
614
- class VAEEncoder(BaseModel):
615
- def __init__(self, model, device, max_batch_size, embedding_dim):
616
- super(VAEEncoder, self).__init__(
617
- model=model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim
618
- )
619
- self.name = "VAE encoder"
620
-
621
- def get_model(self):
622
- vae_encoder = TorchVAEEncoder(self.model)
623
- return vae_encoder
624
-
625
- def get_input_names(self):
626
- return ["images"]
627
-
628
- def get_output_names(self):
629
- return ["latent"]
630
-
631
- def get_dynamic_axes(self):
632
- return {"images": {0: "B", 2: "8H", 3: "8W"}, "latent": {0: "B", 2: "H", 3: "W"}}
633
-
634
- def get_input_profile(self, batch_size, image_height, image_width, static_batch, static_shape):
635
- assert batch_size >= self.min_batch and batch_size <= self.max_batch
636
- min_batch = batch_size if static_batch else self.min_batch
637
- max_batch = batch_size if static_batch else self.max_batch
638
- self.check_dims(batch_size, image_height, image_width)
639
- (
640
- min_batch,
641
- max_batch,
642
- min_image_height,
643
- max_image_height,
644
- min_image_width,
645
- max_image_width,
646
- _,
647
- _,
648
- _,
649
- _,
650
- ) = self.get_minmax_dims(batch_size, image_height, image_width, static_batch, static_shape)
651
-
652
- return {
653
- "images": [
654
- (min_batch, 3, min_image_height, min_image_width),
655
- (batch_size, 3, image_height, image_width),
656
- (max_batch, 3, max_image_height, max_image_width),
657
- ]
658
- }
659
-
660
- def get_shape_dict(self, batch_size, image_height, image_width):
661
- latent_height, latent_width = self.check_dims(batch_size, image_height, image_width)
662
- return {
663
- "images": (batch_size, 3, image_height, image_width),
664
- "latent": (batch_size, 4, latent_height, latent_width),
665
- }
666
-
667
- def get_sample_input(self, batch_size, image_height, image_width):
668
- self.check_dims(batch_size, image_height, image_width)
669
- return torch.randn(batch_size, 3, image_height, image_width, dtype=torch.float32, device=self.device)
670
-
671
-
672
- def make_VAEEncoder(model, device, max_batch_size, embedding_dim, inpaint=False):
673
- return VAEEncoder(model, device=device, max_batch_size=max_batch_size, embedding_dim=embedding_dim)
674
-
675
-
676
- class TensorRTStableDiffusionInpaintPipeline(StableDiffusionInpaintPipeline):
677
- r"""
678
- Pipeline for inpainting using TensorRT accelerated Stable Diffusion.
679
-
680
- This model inherits from [`StableDiffusionInpaintPipeline`]. Check the superclass documentation for the generic methods the
681
- library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
682
-
683
- Args:
684
- vae ([`AutoencoderKL`]):
685
- Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
686
- text_encoder ([`CLIPTextModel`]):
687
- Frozen text-encoder. Stable Diffusion uses the text portion of
688
- [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
689
- the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
690
- tokenizer (`CLIPTokenizer`):
691
- Tokenizer of class
692
- [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
693
- unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
694
- scheduler ([`SchedulerMixin`]):
695
- A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
696
- [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
697
- safety_checker ([`StableDiffusionSafetyChecker`]):
698
- Classification module that estimates whether generated images could be considered offensive or harmful.
699
- Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
700
- feature_extractor ([`CLIPFeatureExtractor`]):
701
- Model that extracts features from generated images to be used as inputs for the `safety_checker`.
702
- """
703
-
704
- def __init__(
705
- self,
706
- vae: AutoencoderKL,
707
- text_encoder: CLIPTextModel,
708
- tokenizer: CLIPTokenizer,
709
- unet: UNet2DConditionModel,
710
- scheduler: DDIMScheduler,
711
- safety_checker: StableDiffusionSafetyChecker,
712
- feature_extractor: CLIPFeatureExtractor,
713
- requires_safety_checker: bool = True,
714
- stages=["clip", "unet", "vae", "vae_encoder"],
715
- image_height: int = 512,
716
- image_width: int = 512,
717
- max_batch_size: int = 16,
718
- # ONNX export parameters
719
- onnx_opset: int = 17,
720
- onnx_dir: str = "onnx",
721
- # TensorRT engine build parameters
722
- engine_dir: str = "engine",
723
- build_preview_features: bool = True,
724
- force_engine_rebuild: bool = False,
725
- timing_cache: str = "timing_cache",
726
- ):
727
- super().__init__(
728
- vae, text_encoder, tokenizer, unet, scheduler, safety_checker, feature_extractor, requires_safety_checker
729
- )
730
-
731
- self.vae.forward = self.vae.decode
732
-
733
- self.stages = stages
734
- self.image_height, self.image_width = image_height, image_width
735
- self.inpaint = True
736
- self.onnx_opset = onnx_opset
737
- self.onnx_dir = onnx_dir
738
- self.engine_dir = engine_dir
739
- self.force_engine_rebuild = force_engine_rebuild
740
- self.timing_cache = timing_cache
741
- self.build_static_batch = False
742
- self.build_dynamic_shape = False
743
- self.build_preview_features = build_preview_features
744
-
745
- self.max_batch_size = max_batch_size
746
- # TODO: Restrict batch size to 4 for larger image dimensions as a WAR for TensorRT limitation.
747
- if self.build_dynamic_shape or self.image_height > 512 or self.image_width > 512:
748
- self.max_batch_size = 4
749
-
750
- self.stream = None # loaded in loadResources()
751
- self.models = {} # loaded in __loadModels()
752
- self.engine = {} # loaded in build_engines()
753
-
754
- def __loadModels(self):
755
- # Load pipeline models
756
- self.embedding_dim = self.text_encoder.config.hidden_size
757
- models_args = {
758
- "device": self.torch_device,
759
- "max_batch_size": self.max_batch_size,
760
- "embedding_dim": self.embedding_dim,
761
- "inpaint": self.inpaint,
762
- }
763
- if "clip" in self.stages:
764
- self.models["clip"] = make_CLIP(self.text_encoder, **models_args)
765
- if "unet" in self.stages:
766
- self.models["unet"] = make_UNet(self.unet, **models_args, unet_dim=self.unet.config.in_channels)
767
- if "vae" in self.stages:
768
- self.models["vae"] = make_VAE(self.vae, **models_args)
769
- if "vae_encoder" in self.stages:
770
- self.models["vae_encoder"] = make_VAEEncoder(self.vae, **models_args)
771
-
772
- @classmethod
773
- def set_cached_folder(cls, pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], **kwargs):
774
- cache_dir = kwargs.pop("cache_dir", DIFFUSERS_CACHE)
775
- resume_download = kwargs.pop("resume_download", False)
776
- proxies = kwargs.pop("proxies", None)
777
- local_files_only = kwargs.pop("local_files_only", False)
778
- use_auth_token = kwargs.pop("use_auth_token", None)
779
- revision = kwargs.pop("revision", None)
780
-
781
- cls.cached_folder = (
782
- pretrained_model_name_or_path
783
- if os.path.isdir(pretrained_model_name_or_path)
784
- else snapshot_download(
785
- pretrained_model_name_or_path,
786
- cache_dir=cache_dir,
787
- resume_download=resume_download,
788
- proxies=proxies,
789
- local_files_only=local_files_only,
790
- use_auth_token=use_auth_token,
791
- revision=revision,
792
- )
793
- )
794
-
795
- def to(self, torch_device: Optional[Union[str, torch.device]] = None, silence_dtype_warnings: bool = False):
796
- super().to(torch_device, silence_dtype_warnings=silence_dtype_warnings)
797
-
798
- self.onnx_dir = os.path.join(self.cached_folder, self.onnx_dir)
799
- self.engine_dir = os.path.join(self.cached_folder, self.engine_dir)
800
- self.timing_cache = os.path.join(self.cached_folder, self.timing_cache)
801
-
802
- # set device
803
- self.torch_device = self._execution_device
804
- logger.warning(f"Running inference on device: {self.torch_device}")
805
-
806
- # load models
807
- self.__loadModels()
808
-
809
- # build engines
810
- self.engine = build_engines(
811
- self.models,
812
- self.engine_dir,
813
- self.onnx_dir,
814
- self.onnx_opset,
815
- opt_image_height=self.image_height,
816
- opt_image_width=self.image_width,
817
- force_engine_rebuild=self.force_engine_rebuild,
818
- static_batch=self.build_static_batch,
819
- static_shape=not self.build_dynamic_shape,
820
- enable_preview=self.build_preview_features,
821
- timing_cache=self.timing_cache,
822
- )
823
-
824
- return self
825
-
826
- def __initialize_timesteps(self, timesteps, strength):
827
- self.scheduler.set_timesteps(timesteps)
828
- offset = self.scheduler.steps_offset if hasattr(self.scheduler, "steps_offset") else 0
829
- init_timestep = int(timesteps * strength) + offset
830
- init_timestep = min(init_timestep, timesteps)
831
- t_start = max(timesteps - init_timestep + offset, 0)
832
- timesteps = self.scheduler.timesteps[t_start:].to(self.torch_device)
833
- return timesteps, t_start
834
-
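The strength-to-timestep arithmetic above, in plain numbers (values illustrative):

```python
# Illustrative values: 50 steps, strength 0.75, no scheduler offset.
num_steps, strength, offset = 50, 0.75, 0
init_timestep = min(int(num_steps * strength) + offset, num_steps)  # 37
t_start = max(num_steps - init_timestep + offset, 0)                # 13
print(init_timestep, t_start)  # 37 13 -> the last 37 of 50 timesteps are run
```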
835
- def __preprocess_images(self, batch_size, images=()):
836
- init_images = []
837
- for image in images:
838
- image = image.to(self.torch_device).float()
839
- image = image.repeat(batch_size, 1, 1, 1)
840
- init_images.append(image)
841
- return tuple(init_images)
842
-
843
- def __encode_image(self, init_image):
844
- init_latents = runEngine(self.engine["vae_encoder"], {"images": device_view(init_image)}, self.stream)[
845
- "latent"
846
- ]
847
- init_latents = 0.18215 * init_latents
848
- return init_latents
849
-
850
- def __encode_prompt(self, prompt, negative_prompt):
851
- r"""
852
- Encodes the prompt into text encoder hidden states.
853
-
854
- Args:
855
- prompt (`str` or `List[str]`, *optional*):
856
- prompt to be encoded
857
- negative_prompt (`str` or `List[str]`, *optional*):
858
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
859
- `negative_prompt_embeds` instead.
860
- Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
861
- """
862
- # Tokenize prompt
863
- text_input_ids = (
864
- self.tokenizer(
865
- prompt,
866
- padding="max_length",
867
- max_length=self.tokenizer.model_max_length,
868
- truncation=True,
869
- return_tensors="pt",
870
- )
871
- .input_ids.type(torch.int32)
872
- .to(self.torch_device)
873
- )
874
-
875
- text_input_ids_inp = device_view(text_input_ids)
876
- # NOTE: output tensor for CLIP must be cloned because it will be overwritten when called again for negative prompt
877
- text_embeddings = runEngine(self.engine["clip"], {"input_ids": text_input_ids_inp}, self.stream)[
878
- "text_embeddings"
879
- ].clone()
880
-
881
- # Tokenize negative prompt
882
- uncond_input_ids = (
883
- self.tokenizer(
884
- negative_prompt,
885
- padding="max_length",
886
- max_length=self.tokenizer.model_max_length,
887
- truncation=True,
888
- return_tensors="pt",
889
- )
890
- .input_ids.type(torch.int32)
891
- .to(self.torch_device)
892
- )
893
- uncond_input_ids_inp = device_view(uncond_input_ids)
894
- uncond_embeddings = runEngine(self.engine["clip"], {"input_ids": uncond_input_ids_inp}, self.stream)[
895
- "text_embeddings"
896
- ]
897
-
898
- # Concatenate the unconditional and text embeddings into a single batch to avoid doing two forward passes for classifier free guidance
899
- text_embeddings = torch.cat([uncond_embeddings, text_embeddings]).to(dtype=torch.float16)
900
-
901
- return text_embeddings
902
-
903
- def __denoise_latent(
904
- self, latents, text_embeddings, timesteps=None, step_offset=0, mask=None, masked_image_latents=None
905
- ):
906
- if not isinstance(timesteps, torch.Tensor):
907
- timesteps = self.scheduler.timesteps
908
- for step_index, timestep in enumerate(timesteps):
909
- # Expand the latents if we are doing classifier free guidance
910
- latent_model_input = torch.cat([latents] * 2)
911
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, timestep)
912
- if isinstance(mask, torch.Tensor):
913
- latent_model_input = torch.cat([latent_model_input, mask, masked_image_latents], dim=1)
914
-
915
- # Predict the noise residual
916
- timestep_float = timestep.float() if timestep.dtype != torch.float32 else timestep
917
-
918
- sample_inp = device_view(latent_model_input)
919
- timestep_inp = device_view(timestep_float)
920
- embeddings_inp = device_view(text_embeddings)
921
- noise_pred = runEngine(
922
- self.engine["unet"],
923
- {"sample": sample_inp, "timestep": timestep_inp, "encoder_hidden_states": embeddings_inp},
924
- self.stream,
925
- )["latent"]
926
-
927
- # Perform guidance
928
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
929
- noise_pred = noise_pred_uncond + self.guidance_scale * (noise_pred_text - noise_pred_uncond)
930
-
931
- latents = self.scheduler.step(noise_pred, timestep, latents).prev_sample
932
-
933
- latents = 1.0 / 0.18215 * latents
934
- return latents
935
-
936
- def __decode_latent(self, latents):
937
- images = runEngine(self.engine["vae"], {"latent": device_view(latents)}, self.stream)["images"]
938
- images = (images / 2 + 0.5).clamp(0, 1)
939
- return images.cpu().permute(0, 2, 3, 1).float().numpy()
940
-
941
- def __loadResources(self, image_height, image_width, batch_size):
942
- self.stream = cuda.Stream()
943
-
944
- # Allocate buffers for TensorRT engine bindings
945
- for model_name, obj in self.models.items():
946
- self.engine[model_name].allocate_buffers(
947
- shape_dict=obj.get_shape_dict(batch_size, image_height, image_width), device=self.torch_device
948
- )
949
-
950
- @torch.no_grad()
951
- def __call__(
952
- self,
953
- prompt: Union[str, List[str]] = None,
954
- image: Union[torch.FloatTensor, PIL.Image.Image] = None,
955
- mask_image: Union[torch.FloatTensor, PIL.Image.Image] = None,
956
- strength: float = 0.75,
957
- num_inference_steps: int = 50,
958
- guidance_scale: float = 7.5,
959
- negative_prompt: Optional[Union[str, List[str]]] = None,
960
- generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
961
- ):
962
- r"""
963
- Function invoked when calling the pipeline for generation.
964
-
965
- Args:
966
- prompt (`str` or `List[str]`, *optional*):
967
- The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
968
- instead.
969
- image (`PIL.Image.Image`):
970
- `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
971
- be masked out with `mask_image` and repainted according to `prompt`.
972
- mask_image (`PIL.Image.Image`):
973
- `Image`, or tensor representing an image batch, to mask `image`. White pixels in the mask will be
974
- repainted, while black pixels will be preserved. If `mask_image` is a PIL image, it will be converted
975
- to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
976
- instead of 3, so the expected shape would be `(B, H, W, 1)`.
977
- strength (`float`, *optional*, defaults to 0.75):
978
- Conceptually, indicates how much to transform the reference `image`. Must be between 0 and 1. `image`
979
- will be used as a starting point, adding more noise to it the larger the `strength`. The number of
980
- denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise will
981
- be maximum and the denoising process will run for the full number of iterations specified in
982
- `num_inference_steps`. A value of 1, therefore, essentially ignores `image`.
983
- num_inference_steps (`int`, *optional*, defaults to 50):
984
- The number of denoising steps. More denoising steps usually lead to a higher quality image at the
985
- expense of slower inference.
986
- guidance_scale (`float`, *optional*, defaults to 7.5):
987
- Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
988
- `guidance_scale` is defined as `w` of equation 2. of [Imagen
989
- Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
990
- 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
991
- usually at the expense of lower image quality.
992
- negative_prompt (`str` or `List[str]`, *optional*):
993
- The prompt or prompts not to guide the image generation. If not defined, one has to pass
994
- `negative_prompt_embeds` instead.
995
- Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
996
- generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
997
- One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
998
- to make generation deterministic.
999
-
1000
- """
1001
- self.generator = generator
1002
- self.denoising_steps = num_inference_steps
1003
- self.guidance_scale = guidance_scale
1004
-
1005
- # Pre-compute latent input scales and linear multistep coefficients
1006
- self.scheduler.set_timesteps(self.denoising_steps, device=self.torch_device)
1007
-
1008
- # Define call parameters
1009
- if prompt is not None and isinstance(prompt, str):
1010
- batch_size = 1
1011
- prompt = [prompt]
1012
- elif prompt is not None and isinstance(prompt, list):
1013
- batch_size = len(prompt)
1014
- else:
1015
- raise ValueError(f"Expected prompt to be of type list or str but got {type(prompt)}")
1016
-
1017
- if negative_prompt is None:
1018
- negative_prompt = [""] * batch_size
1019
-
1020
- if negative_prompt is not None and isinstance(negative_prompt, str):
1021
- negative_prompt = [negative_prompt]
1022
-
1023
- assert len(prompt) == len(negative_prompt)
1024
-
1025
- if batch_size > self.max_batch_size:
1026
- raise ValueError(
1027
- f"Batch size {len(prompt)} is larger than allowed {self.max_batch_size}. If dynamic shape is used, then maximum batch size is 4"
1028
- )
1029
-
1030
- # Validate image dimensions
1031
- mask_width, mask_height = mask_image.size
1032
- if mask_height != self.image_height or mask_width != self.image_width:
1033
- raise ValueError(
1034
- f"Input image height and width {self.image_height} and {self.image_width} are not equal to "
1035
- f"the respective dimensions of the mask image {mask_height} and {mask_width}"
1036
- )
1037
-
1038
- # load resources
1039
- self.__loadResources(self.image_height, self.image_width, batch_size)
1040
-
1041
- with torch.inference_mode(), torch.autocast("cuda"), trt.Runtime(TRT_LOGGER):
1042
- # Spatial dimensions of latent tensor
1043
- latent_height = self.image_height // 8
1044
- latent_width = self.image_width // 8
1045
-
1046
- # Pre-initialize latents
1047
- num_channels_latents = self.vae.config.latent_channels
1048
- latents = self.prepare_latents(
1049
- batch_size,
1050
- num_channels_latents,
1051
- self.image_height,
1052
- self.image_width,
1053
- torch.float32,
1054
- self.torch_device,
1055
- generator,
1056
- )
1057
-
1058
- # Pre-process input images
1059
- mask, masked_image = self.__preprocess_images(batch_size, prepare_mask_and_masked_image(image, mask_image))
1060
- # print(mask)
1061
- mask = torch.nn.functional.interpolate(mask, size=(latent_height, latent_width))
1062
- mask = torch.cat([mask] * 2)
1063
-
1064
- # Initialize timesteps
1065
- timesteps, t_start = self.__initialize_timesteps(self.denoising_steps, strength)
1066
-
1067
- # VAE encode masked image
1068
- masked_latents = self.__encode_image(masked_image)
1069
- masked_latents = torch.cat([masked_latents] * 2)
1070
-
1071
- # CLIP text encoder
1072
- text_embeddings = self.__encode_prompt(prompt, negative_prompt)
1073
-
1074
- # UNet denoiser
1075
- latents = self.__denoise_latent(
1076
- latents,
1077
- text_embeddings,
1078
- timesteps=timesteps,
1079
- step_offset=t_start,
1080
- mask=mask,
1081
- masked_image_latents=masked_latents,
1082
- )
1083
-
1084
- # VAE decode latent
1085
- images = self.__decode_latent(latents)
1086
-
1087
- images = self.numpy_to_pil(images)
1088
- return StableDiffusionPipelineOutput(images=images, nsfw_content_detected=None)
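A hedged end-to-end sketch of how the pipeline above is meant to be driven. The model id and image URLs are placeholders, and a CUDA device plus TensorRT, polygraphy, and the other packages from the installation notes near the top of the file are assumed; this is a sketch, not a verified recipe:

```python
# Sketch under stated assumptions; uses the TensorRTStableDiffusionInpaintPipeline
# class defined above.
from io import BytesIO

import requests
import torch
from PIL import Image


def download_image(url):
    return Image.open(BytesIO(requests.get(url).content)).convert("RGB")


model_id = "stabilityai/stable-diffusion-2-inpainting"  # assumed-compatible checkpoint
pipe = TensorRTStableDiffusionInpaintPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe.set_cached_folder(model_id)  # ONNX models and engines are cached next to the weights
pipe = pipe.to("cuda")  # triggers ONNX export and TensorRT engine builds on first run

image = download_image("https://example.com/dog_on_bench.png").resize((512, 512))  # placeholder URL
mask = download_image("https://example.com/bench_mask.png").resize((512, 512))     # placeholder URL
result = pipe(
    prompt="a mecha robot sitting on a bench",
    image=image,
    mask_image=mask,
    strength=0.75,
).images[0]
result.save("inpainted.png")
```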
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_transformers_and_k_diffusion_objects.py DELETED
@@ -1,17 +0,0 @@
1
- # This file is autogenerated by the command `make fix-copies`, do not edit.
2
- from ..utils import DummyObject, requires_backends
3
-
4
-
5
- class StableDiffusionKDiffusionPipeline(metaclass=DummyObject):
6
- _backends = ["torch", "transformers", "k_diffusion"]
7
-
8
- def __init__(self, *args, **kwargs):
9
- requires_backends(self, ["torch", "transformers", "k_diffusion"])
10
-
11
- @classmethod
12
- def from_config(cls, *args, **kwargs):
13
- requires_backends(cls, ["torch", "transformers", "k_diffusion"])
14
-
15
- @classmethod
16
- def from_pretrained(cls, *args, **kwargs):
17
- requires_backends(cls, ["torch", "transformers", "k_diffusion"])
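The pattern this autogenerated file relies on: the `DummyObject` metaclass defers the ImportError for missing optional backends until the class is actually used, so imports stay cheap. A self-contained re-creation of the idea, not diffusers' actual implementation:

```python
# Self-contained re-creation of the dummy-object pattern, not diffusers' code.
class DummyObject(type):
    def __call__(cls, *args, **kwargs):
        raise ImportError(f"{cls.__name__} requires the backends {cls._backends}")


class NeedsKDiffusion(metaclass=DummyObject):
    _backends = ["torch", "transformers", "k_diffusion"]


try:
    NeedsKDiffusion()  # importing succeeded; *using* the class is what fails
except ImportError as err:
    print(err)
```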
 
spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/pipelines/stable_diffusion_2/test_stable_diffusion_upscale.py DELETED
@@ -1,424 +0,0 @@
- # coding=utf-8
- # Copyright 2023 HuggingFace Inc.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- import gc
- import random
- import unittest
-
- import numpy as np
- import torch
- from PIL import Image
- from transformers import CLIPTextConfig, CLIPTextModel, CLIPTokenizer
-
- from diffusers import AutoencoderKL, DDIMScheduler, DDPMScheduler, StableDiffusionUpscalePipeline, UNet2DConditionModel
- from diffusers.utils import floats_tensor, load_image, load_numpy, slow, torch_device
- from diffusers.utils.testing_utils import enable_full_determinism, require_torch_gpu
-
-
- enable_full_determinism()
-
-
- class StableDiffusionUpscalePipelineFastTests(unittest.TestCase):
-     def tearDown(self):
-         # clean up the VRAM after each test
-         super().tearDown()
-         gc.collect()
-         torch.cuda.empty_cache()
-
-     @property
-     def dummy_image(self):
-         batch_size = 1
-         num_channels = 3
-         sizes = (32, 32)
-
-         image = floats_tensor((batch_size, num_channels) + sizes, rng=random.Random(0)).to(torch_device)
-         return image
-
-     @property
-     def dummy_cond_unet_upscale(self):
-         torch.manual_seed(0)
-         model = UNet2DConditionModel(
-             block_out_channels=(32, 32, 64),
-             layers_per_block=2,
-             sample_size=32,
-             in_channels=7,
-             out_channels=4,
-             down_block_types=("DownBlock2D", "CrossAttnDownBlock2D", "CrossAttnDownBlock2D"),
-             up_block_types=("CrossAttnUpBlock2D", "CrossAttnUpBlock2D", "UpBlock2D"),
-             cross_attention_dim=32,
-             # SD2-specific config below
-             attention_head_dim=8,
-             use_linear_projection=True,
-             only_cross_attention=(True, True, False),
-             num_class_embeds=100,
-         )
-         return model
-
-     @property
-     def dummy_vae(self):
-         torch.manual_seed(0)
-         model = AutoencoderKL(
-             block_out_channels=[32, 32, 64],
-             in_channels=3,
-             out_channels=3,
-             down_block_types=["DownEncoderBlock2D", "DownEncoderBlock2D", "DownEncoderBlock2D"],
-             up_block_types=["UpDecoderBlock2D", "UpDecoderBlock2D", "UpDecoderBlock2D"],
-             latent_channels=4,
-         )
-         return model
-
-     @property
-     def dummy_text_encoder(self):
-         torch.manual_seed(0)
-         config = CLIPTextConfig(
-             bos_token_id=0,
-             eos_token_id=2,
-             hidden_size=32,
-             intermediate_size=37,
-             layer_norm_eps=1e-05,
-             num_attention_heads=4,
-             num_hidden_layers=5,
-             pad_token_id=1,
-             vocab_size=1000,
-             # SD2-specific config below
-             hidden_act="gelu",
-             projection_dim=512,
-         )
-         return CLIPTextModel(config)
-
-     def test_stable_diffusion_upscale(self):
-         device = "cpu"  # ensure determinism for the device-dependent torch.Generator
-         unet = self.dummy_cond_unet_upscale
-         low_res_scheduler = DDPMScheduler()
-         scheduler = DDIMScheduler(prediction_type="v_prediction")
-         vae = self.dummy_vae
-         text_encoder = self.dummy_text_encoder
-         tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
-         image = self.dummy_image.cpu().permute(0, 2, 3, 1)[0]
-         low_res_image = Image.fromarray(np.uint8(image)).convert("RGB").resize((64, 64))
-
-         # make sure here that pndm scheduler skips prk
-         sd_pipe = StableDiffusionUpscalePipeline(
-             unet=unet,
-             low_res_scheduler=low_res_scheduler,
-             scheduler=scheduler,
-             vae=vae,
-             text_encoder=text_encoder,
-             tokenizer=tokenizer,
-             max_noise_level=350,
-         )
-         sd_pipe = sd_pipe.to(device)
-         sd_pipe.set_progress_bar_config(disable=None)
-
-         prompt = "A painting of a squirrel eating a burger"
-         generator = torch.Generator(device=device).manual_seed(0)
-         output = sd_pipe(
-             [prompt],
-             image=low_res_image,
-             generator=generator,
-             guidance_scale=6.0,
-             noise_level=20,
-             num_inference_steps=2,
-             output_type="np",
-         )
-
-         image = output.images
-
-         generator = torch.Generator(device=device).manual_seed(0)
-         image_from_tuple = sd_pipe(
-             [prompt],
-             image=low_res_image,
-             generator=generator,
-             guidance_scale=6.0,
-             noise_level=20,
-             num_inference_steps=2,
-             output_type="np",
-             return_dict=False,
-         )[0]
-
-         image_slice = image[0, -3:, -3:, -1]
-         image_from_tuple_slice = image_from_tuple[0, -3:, -3:, -1]
-
-         expected_height_width = low_res_image.size[0] * 4
-         assert image.shape == (1, expected_height_width, expected_height_width, 3)
-         expected_slice = np.array([0.3113, 0.3910, 0.4272, 0.4859, 0.5061, 0.4652, 0.5362, 0.5715, 0.5661])
-
-         assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-         assert np.abs(image_from_tuple_slice.flatten() - expected_slice).max() < 1e-2
-
-     def test_stable_diffusion_upscale_batch(self):
-         device = "cpu"  # ensure determinism for the device-dependent torch.Generator
-         unet = self.dummy_cond_unet_upscale
-         low_res_scheduler = DDPMScheduler()
-         scheduler = DDIMScheduler(prediction_type="v_prediction")
-         vae = self.dummy_vae
-         text_encoder = self.dummy_text_encoder
-         tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
-         image = self.dummy_image.cpu().permute(0, 2, 3, 1)[0]
-         low_res_image = Image.fromarray(np.uint8(image)).convert("RGB").resize((64, 64))
-
-         # make sure here that pndm scheduler skips prk
-         sd_pipe = StableDiffusionUpscalePipeline(
-             unet=unet,
-             low_res_scheduler=low_res_scheduler,
-             scheduler=scheduler,
-             vae=vae,
-             text_encoder=text_encoder,
-             tokenizer=tokenizer,
-             max_noise_level=350,
-         )
-         sd_pipe = sd_pipe.to(device)
-         sd_pipe.set_progress_bar_config(disable=None)
-
-         prompt = "A painting of a squirrel eating a burger"
-         output = sd_pipe(
-             2 * [prompt],
-             image=2 * [low_res_image],
-             guidance_scale=6.0,
-             noise_level=20,
-             num_inference_steps=2,
-             output_type="np",
-         )
-         image = output.images
-         assert image.shape[0] == 2
-
-         generator = torch.Generator(device=device).manual_seed(0)
-         output = sd_pipe(
-             [prompt],
-             image=low_res_image,
-             generator=generator,
-             num_images_per_prompt=2,
-             guidance_scale=6.0,
-             noise_level=20,
-             num_inference_steps=2,
-             output_type="np",
-         )
-         image = output.images
-         assert image.shape[0] == 2
-
-     def test_stable_diffusion_upscale_prompt_embeds(self):
-         device = "cpu"  # ensure determinism for the device-dependent torch.Generator
-         unet = self.dummy_cond_unet_upscale
-         low_res_scheduler = DDPMScheduler()
-         scheduler = DDIMScheduler(prediction_type="v_prediction")
-         vae = self.dummy_vae
-         text_encoder = self.dummy_text_encoder
-         tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
-         image = self.dummy_image.cpu().permute(0, 2, 3, 1)[0]
-         low_res_image = Image.fromarray(np.uint8(image)).convert("RGB").resize((64, 64))
-
-         # make sure here that pndm scheduler skips prk
-         sd_pipe = StableDiffusionUpscalePipeline(
-             unet=unet,
-             low_res_scheduler=low_res_scheduler,
-             scheduler=scheduler,
-             vae=vae,
-             text_encoder=text_encoder,
-             tokenizer=tokenizer,
-             max_noise_level=350,
-         )
-         sd_pipe = sd_pipe.to(device)
-         sd_pipe.set_progress_bar_config(disable=None)
-
-         prompt = "A painting of a squirrel eating a burger"
-         generator = torch.Generator(device=device).manual_seed(0)
-         output = sd_pipe(
-             [prompt],
-             image=low_res_image,
-             generator=generator,
-             guidance_scale=6.0,
-             noise_level=20,
-             num_inference_steps=2,
-             output_type="np",
-         )
-
-         image = output.images
-
-         generator = torch.Generator(device=device).manual_seed(0)
-         prompt_embeds = sd_pipe._encode_prompt(prompt, device, 1, False)
-         image_from_prompt_embeds = sd_pipe(
-             prompt_embeds=prompt_embeds,
-             image=[low_res_image],
-             generator=generator,
-             guidance_scale=6.0,
-             noise_level=20,
-             num_inference_steps=2,
-             output_type="np",
-             return_dict=False,
-         )[0]
-
-         image_slice = image[0, -3:, -3:, -1]
-         image_from_prompt_embeds_slice = image_from_prompt_embeds[0, -3:, -3:, -1]
-
-         expected_height_width = low_res_image.size[0] * 4
-         assert image.shape == (1, expected_height_width, expected_height_width, 3)
-         expected_slice = np.array([0.3113, 0.3910, 0.4272, 0.4859, 0.5061, 0.4652, 0.5362, 0.5715, 0.5661])
-
-         assert np.abs(image_slice.flatten() - expected_slice).max() < 1e-2
-         assert np.abs(image_from_prompt_embeds_slice.flatten() - expected_slice).max() < 1e-2
-
-     @unittest.skipIf(torch_device != "cuda", "This test requires a GPU")
-     def test_stable_diffusion_upscale_fp16(self):
-         """Test that stable diffusion upscale works with fp16"""
-         unet = self.dummy_cond_unet_upscale
-         low_res_scheduler = DDPMScheduler()
-         scheduler = DDIMScheduler(prediction_type="v_prediction")
-         vae = self.dummy_vae
-         text_encoder = self.dummy_text_encoder
-         tokenizer = CLIPTokenizer.from_pretrained("hf-internal-testing/tiny-random-clip")
-
-         image = self.dummy_image.cpu().permute(0, 2, 3, 1)[0]
-         low_res_image = Image.fromarray(np.uint8(image)).convert("RGB").resize((64, 64))
-
-         # put models in fp16, except vae as it overflows in fp16
-         unet = unet.half()
-         text_encoder = text_encoder.half()
-
-         # make sure here that pndm scheduler skips prk
-         sd_pipe = StableDiffusionUpscalePipeline(
-             unet=unet,
-             low_res_scheduler=low_res_scheduler,
-             scheduler=scheduler,
-             vae=vae,
-             text_encoder=text_encoder,
-             tokenizer=tokenizer,
-             max_noise_level=350,
-         )
-         sd_pipe = sd_pipe.to(torch_device)
-         sd_pipe.set_progress_bar_config(disable=None)
-
-         prompt = "A painting of a squirrel eating a burger"
-         generator = torch.manual_seed(0)
-         image = sd_pipe(
-             [prompt],
-             image=low_res_image,
-             generator=generator,
-             num_inference_steps=2,
-             output_type="np",
-         ).images
-
-         expected_height_width = low_res_image.size[0] * 4
-         assert image.shape == (1, expected_height_width, expected_height_width, 3)
-
-
- @slow
- @require_torch_gpu
- class StableDiffusionUpscalePipelineIntegrationTests(unittest.TestCase):
-     def tearDown(self):
-         # clean up the VRAM after each test
-         super().tearDown()
-         gc.collect()
-         torch.cuda.empty_cache()
-
-     def test_stable_diffusion_upscale_pipeline(self):
-         image = load_image(
-             "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
-             "/sd2-upscale/low_res_cat.png"
-         )
-         expected_image = load_numpy(
-             "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale"
-             "/upsampled_cat.npy"
-         )
-
-         model_id = "stabilityai/stable-diffusion-x4-upscaler"
-         pipe = StableDiffusionUpscalePipeline.from_pretrained(model_id)
-         pipe.to(torch_device)
-         pipe.set_progress_bar_config(disable=None)
-         pipe.enable_attention_slicing()
-
-         prompt = "a cat sitting on a park bench"
-
-         generator = torch.manual_seed(0)
-         output = pipe(
-             prompt=prompt,
-             image=image,
-             generator=generator,
-             output_type="np",
-         )
-         image = output.images[0]
-
-         assert image.shape == (512, 512, 3)
-         assert np.abs(expected_image - image).max() < 1e-3
-
-     def test_stable_diffusion_upscale_pipeline_fp16(self):
-         image = load_image(
-             "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
-             "/sd2-upscale/low_res_cat.png"
-         )
-         expected_image = load_numpy(
-             "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale"
-             "/upsampled_cat_fp16.npy"
-         )
-
-         model_id = "stabilityai/stable-diffusion-x4-upscaler"
-         pipe = StableDiffusionUpscalePipeline.from_pretrained(
-             model_id,
-             torch_dtype=torch.float16,
-         )
-         pipe.to(torch_device)
-         pipe.set_progress_bar_config(disable=None)
-         pipe.enable_attention_slicing()
-
-         prompt = "a cat sitting on a park bench"
-
-         generator = torch.manual_seed(0)
-         output = pipe(
-             prompt=prompt,
-             image=image,
-             generator=generator,
-             output_type="np",
-         )
-         image = output.images[0]
-
-         assert image.shape == (512, 512, 3)
-         assert np.abs(expected_image - image).max() < 5e-1
-
-     def test_stable_diffusion_pipeline_with_sequential_cpu_offloading(self):
-         torch.cuda.empty_cache()
-         torch.cuda.reset_max_memory_allocated()
-         torch.cuda.reset_peak_memory_stats()
-
-         image = load_image(
-             "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
-             "/sd2-upscale/low_res_cat.png"
-         )
-
-         model_id = "stabilityai/stable-diffusion-x4-upscaler"
-         pipe = StableDiffusionUpscalePipeline.from_pretrained(
-             model_id,
-             torch_dtype=torch.float16,
-         )
-         pipe.to(torch_device)
-         pipe.set_progress_bar_config(disable=None)
-         pipe.enable_attention_slicing(1)
-         pipe.enable_sequential_cpu_offload()
-
-         prompt = "a cat sitting on a park bench"
-
-         generator = torch.manual_seed(0)
-         _ = pipe(
-             prompt=prompt,
-             image=image,
-             generator=generator,
-             num_inference_steps=5,
-             output_type="np",
-         )
-
-         mem_bytes = torch.cuda.max_memory_allocated()
-         # make sure that less than 2.9 GB is allocated
-         assert mem_bytes < 2.9 * 10**9
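For reference, the integration tests above reduce to this end-to-end usage (model id, image URL, and prompt are taken directly from the tests; a CUDA device is assumed):

import torch
from diffusers import StableDiffusionUpscalePipeline
from diffusers.utils import load_image

# Load the x4 upscaler in fp16 and enable attention slicing to save memory.
pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
)
pipe.to("cuda")
pipe.enable_attention_slicing()

low_res = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main"
    "/sd2-upscale/low_res_cat.png"
)
image = pipe(
    prompt="a cat sitting on a park bench",
    image=low_res,
    generator=torch.manual_seed(0),
).images[0]  # a PIL image at 4x the input resolution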
spaces/Andy1621/uniformer_image_detection/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco.py DELETED
@@ -1,42 +0,0 @@
- _base_ = './faster_rcnn_r50_fpn_1x_coco.py'
- model = dict(
-     pretrained='open-mmlab://detectron2/resnet50_caffe',
-     backbone=dict(
-         norm_cfg=dict(requires_grad=False), norm_eval=True, style='caffe'))
- # use caffe img_norm
- img_norm_cfg = dict(
-     mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)
- train_pipeline = [
-     dict(type='LoadImageFromFile'),
-     dict(type='LoadAnnotations', with_bbox=True),
-     dict(
-         type='Resize',
-         img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736),
-                    (1333, 768), (1333, 800)],
-         multiscale_mode='value',
-         keep_ratio=True),
-     dict(type='RandomFlip', flip_ratio=0.5),
-     dict(type='Normalize', **img_norm_cfg),
-     dict(type='Pad', size_divisor=32),
-     dict(type='DefaultFormatBundle'),
-     dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']),
- ]
- test_pipeline = [
-     dict(type='LoadImageFromFile'),
-     dict(
-         type='MultiScaleFlipAug',
-         img_scale=(1333, 800),
-         flip=False,
-         transforms=[
-             dict(type='Resize', keep_ratio=True),
-             dict(type='RandomFlip'),
-             dict(type='Normalize', **img_norm_cfg),
-             dict(type='Pad', size_divisor=32),
-             dict(type='ImageToTensor', keys=['img']),
-             dict(type='Collect', keys=['img']),
-         ])
- ]
- data = dict(
-     train=dict(pipeline=train_pipeline),
-     val=dict(pipeline=test_pipeline),
-     test=dict(pipeline=test_pipeline))
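This file only overrides fields of its `_base_` config; mmcv merges the two at load time. A sketch of inspecting the merged result (it assumes being run from an mmdetection checkout; `Config.fromfile` is mmcv's standard loader):

from mmcv import Config

# Load this config merged with its _base_ ancestor chain.
cfg = Config.fromfile('configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco.py')
print(cfg.model.backbone.style)              # 'caffe', set by the override above
print(len(cfg.train_pipeline[2].img_scale))  # 6 multi-scale training resolutions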
spaces/Andy1621/uniformer_image_detection/mmdet/core/bbox/assigners/atss_assigner.py DELETED
@@ -1,178 +0,0 @@
- import torch
-
- from ..builder import BBOX_ASSIGNERS
- from ..iou_calculators import build_iou_calculator
- from .assign_result import AssignResult
- from .base_assigner import BaseAssigner
-
-
- @BBOX_ASSIGNERS.register_module()
- class ATSSAssigner(BaseAssigner):
-     """Assign a corresponding gt bbox or background to each bbox.
-
-     Each proposal will be assigned with `0` or a positive integer
-     indicating the ground truth index.
-
-     - 0: negative sample, no assigned gt
-     - positive integer: positive sample, index (1-based) of assigned gt
-
-     Args:
-         topk (int): number of bboxes selected on each level
-     """
-
-     def __init__(self,
-                  topk,
-                  iou_calculator=dict(type='BboxOverlaps2D'),
-                  ignore_iof_thr=-1):
-         self.topk = topk
-         self.iou_calculator = build_iou_calculator(iou_calculator)
-         self.ignore_iof_thr = ignore_iof_thr
-
-     # https://github.com/sfzhang15/ATSS/blob/master/atss_core/modeling/rpn/atss/loss.py
-
-     def assign(self,
-                bboxes,
-                num_level_bboxes,
-                gt_bboxes,
-                gt_bboxes_ignore=None,
-                gt_labels=None):
-         """Assign gt to bboxes.
-
-         The assignment is done in the following steps
-
-         1. compute iou between all bboxes (bboxes of all pyramid levels) and gt
-         2. compute center distance between all bboxes and gt
-         3. on each pyramid level, for each gt, select k bboxes whose centers
-            are closest to the gt center, so we select k*l bboxes in total as
-            candidates for each gt
-         4. get the corresponding iou for these candidates, compute their
-            mean and std, and set mean + std as the iou threshold
-         5. select candidates whose iou is greater than or equal to
-            the threshold as positive
-         6. limit the positive samples' centers to lie inside the gt
-
-         Args:
-             bboxes (Tensor): Bounding boxes to be assigned, shape (n, 4).
-             num_level_bboxes (List): num of bboxes on each level
-             gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4).
-             gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are
-                 labelled as `ignored`, e.g., crowd boxes in COCO.
-             gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ).
-
-         Returns:
-             :obj:`AssignResult`: The assign result.
-         """
-         INF = 100000000
-         bboxes = bboxes[:, :4]
-         num_gt, num_bboxes = gt_bboxes.size(0), bboxes.size(0)
-
-         # compute iou between all bbox and gt
-         overlaps = self.iou_calculator(bboxes, gt_bboxes)
-
-         # assign 0 by default
-         assigned_gt_inds = overlaps.new_full((num_bboxes, ),
-                                              0,
-                                              dtype=torch.long)
-
-         if num_gt == 0 or num_bboxes == 0:
-             # No ground truth or boxes, return empty assignment
-             max_overlaps = overlaps.new_zeros((num_bboxes, ))
-             if num_gt == 0:
-                 # No truth, assign everything to background
-                 assigned_gt_inds[:] = 0
-             if gt_labels is None:
-                 assigned_labels = None
-             else:
-                 assigned_labels = overlaps.new_full((num_bboxes, ),
-                                                     -1,
-                                                     dtype=torch.long)
-             return AssignResult(
-                 num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels)
-
-         # compute center distance between all bbox and gt
-         gt_cx = (gt_bboxes[:, 0] + gt_bboxes[:, 2]) / 2.0
-         gt_cy = (gt_bboxes[:, 1] + gt_bboxes[:, 3]) / 2.0
-         gt_points = torch.stack((gt_cx, gt_cy), dim=1)
-
-         bboxes_cx = (bboxes[:, 0] + bboxes[:, 2]) / 2.0
-         bboxes_cy = (bboxes[:, 1] + bboxes[:, 3]) / 2.0
-         bboxes_points = torch.stack((bboxes_cx, bboxes_cy), dim=1)
-
-         distances = (bboxes_points[:, None, :] -
-                      gt_points[None, :, :]).pow(2).sum(-1).sqrt()
-
-         if (self.ignore_iof_thr > 0 and gt_bboxes_ignore is not None
-                 and gt_bboxes_ignore.numel() > 0 and bboxes.numel() > 0):
-             ignore_overlaps = self.iou_calculator(
-                 bboxes, gt_bboxes_ignore, mode='iof')
-             ignore_max_overlaps, _ = ignore_overlaps.max(dim=1)
-             ignore_idxs = ignore_max_overlaps > self.ignore_iof_thr
-             distances[ignore_idxs, :] = INF
-             assigned_gt_inds[ignore_idxs] = -1
-
-         # Selecting candidates based on the center distance
-         candidate_idxs = []
-         start_idx = 0
-         for level, bboxes_per_level in enumerate(num_level_bboxes):
-             # on each pyramid level, for each gt,
-             # select k bboxes whose centers are closest to the gt center
-             end_idx = start_idx + bboxes_per_level
-             distances_per_level = distances[start_idx:end_idx, :]
-             selectable_k = min(self.topk, bboxes_per_level)
-             _, topk_idxs_per_level = distances_per_level.topk(
-                 selectable_k, dim=0, largest=False)
-             candidate_idxs.append(topk_idxs_per_level + start_idx)
-             start_idx = end_idx
-         candidate_idxs = torch.cat(candidate_idxs, dim=0)
-
-         # get the corresponding iou for these candidates, and compute the
-         # mean and std; set mean + std as the iou threshold
-         candidate_overlaps = overlaps[candidate_idxs, torch.arange(num_gt)]
-         overlaps_mean_per_gt = candidate_overlaps.mean(0)
-         overlaps_std_per_gt = candidate_overlaps.std(0)
-         overlaps_thr_per_gt = overlaps_mean_per_gt + overlaps_std_per_gt
-
-         is_pos = candidate_overlaps >= overlaps_thr_per_gt[None, :]
-
-         # limit the positive sample's center in gt
-         for gt_idx in range(num_gt):
-             candidate_idxs[:, gt_idx] += gt_idx * num_bboxes
-         ep_bboxes_cx = bboxes_cx.view(1, -1).expand(
-             num_gt, num_bboxes).contiguous().view(-1)
-         ep_bboxes_cy = bboxes_cy.view(1, -1).expand(
-             num_gt, num_bboxes).contiguous().view(-1)
-         candidate_idxs = candidate_idxs.view(-1)
-
-         # calculate the left, top, right, bottom distance between positive
-         # bbox center and gt side
-         l_ = ep_bboxes_cx[candidate_idxs].view(-1, num_gt) - gt_bboxes[:, 0]
-         t_ = ep_bboxes_cy[candidate_idxs].view(-1, num_gt) - gt_bboxes[:, 1]
-         r_ = gt_bboxes[:, 2] - ep_bboxes_cx[candidate_idxs].view(-1, num_gt)
-         b_ = gt_bboxes[:, 3] - ep_bboxes_cy[candidate_idxs].view(-1, num_gt)
-         is_in_gts = torch.stack([l_, t_, r_, b_], dim=1).min(dim=1)[0] > 0.01
-         is_pos = is_pos & is_in_gts
-
-         # if an anchor box is assigned to multiple gts,
-         # the one with the highest IoU will be selected.
-         overlaps_inf = torch.full_like(overlaps,
-                                        -INF).t().contiguous().view(-1)
-         index = candidate_idxs.view(-1)[is_pos.view(-1)]
-         overlaps_inf[index] = overlaps.t().contiguous().view(-1)[index]
-         overlaps_inf = overlaps_inf.view(num_gt, -1).t()
-
-         max_overlaps, argmax_overlaps = overlaps_inf.max(dim=1)
-         assigned_gt_inds[
-             max_overlaps != -INF] = argmax_overlaps[max_overlaps != -INF] + 1
-
-         if gt_labels is not None:
-             assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1)
-             pos_inds = torch.nonzero(
-                 assigned_gt_inds > 0, as_tuple=False).squeeze()
-             if pos_inds.numel() > 0:
-                 assigned_labels[pos_inds] = gt_labels[
-                     assigned_gt_inds[pos_inds] - 1]
-         else:
-             assigned_labels = None
-         return AssignResult(
-             num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels)
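A toy invocation of the assigner above; the shapes follow the docstring (n=4 boxes over two pyramid levels, k=1 ground truth) and the coordinate values are illustrative only:

import torch
from mmdet.core.bbox.assigners import ATSSAssigner

assigner = ATSSAssigner(topk=9)
bboxes = torch.tensor([[0., 0., 10., 10.], [5., 5., 20., 20.],
                       [8., 8., 30., 30.], [0., 0., 40., 40.]])
num_level_bboxes = [2, 2]                       # two boxes per pyramid level
gt_bboxes = torch.tensor([[4., 4., 24., 24.]])
gt_labels = torch.tensor([0])

result = assigner.assign(bboxes, num_level_bboxes, gt_bboxes, gt_labels=gt_labels)
print(result.gt_inds)  # 0 = background, i > 0 = assigned to gt i (1-based)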
spaces/Andy1621/uniformer_image_detection/mmdet/models/roi_heads/roi_extractors/__init__.py DELETED
@@ -1,7 +0,0 @@
- from .generic_roi_extractor import GenericRoIExtractor
- from .single_level_roi_extractor import SingleRoIExtractor
-
- __all__ = [
-     'SingleRoIExtractor',
-     'GenericRoIExtractor',
- ]
spaces/Andy1621/uniformer_image_detection/mmdet/utils/collect_env.py DELETED
@@ -1,16 +0,0 @@
- from mmcv.utils import collect_env as collect_base_env
- from mmcv.utils import get_git_hash
-
- import mmdet
-
-
- def collect_env():
-     """Collect the information of the running environments."""
-     env_info = collect_base_env()
-     env_info['MMDetection'] = mmdet.__version__ + '+' + get_git_hash()[:7]
-     return env_info
-
-
- if __name__ == '__main__':
-     for name, val in collect_env().items():
-         print(f'{name}: {val}')
spaces/Andy1621/uniformer_image_segmentation/configs/ccnet/ccnet_r101-d8_512x1024_40k_cityscapes.py DELETED
@@ -1,2 +0,0 @@
- _base_ = './ccnet_r50-d8_512x1024_40k_cityscapes.py'
- model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r50-d8_480x480_80k_pascal_context.py DELETED
@@ -1,10 +0,0 @@
- _base_ = [
-     '../_base_/models/deeplabv3_r50-d8.py',
-     '../_base_/datasets/pascal_context.py', '../_base_/default_runtime.py',
-     '../_base_/schedules/schedule_80k.py'
- ]
- model = dict(
-     decode_head=dict(num_classes=60),
-     auxiliary_head=dict(num_classes=60),
-     test_cfg=dict(mode='slide', crop_size=(480, 480), stride=(320, 320)))
- optimizer = dict(type='SGD', lr=0.004, momentum=0.9, weight_decay=0.0001)
spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r101b-d8_512x1024_80k_cityscapes.py DELETED
@@ -1,4 +0,0 @@
- _base_ = './pspnet_r50-d8_512x1024_80k_cityscapes.py'
- model = dict(
-     pretrained='torchvision://resnet101',
-     backbone=dict(type='ResNet', depth=101))
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/index/__init__.py DELETED
@@ -1,2 +0,0 @@
- """Index interaction code
- """
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/_jaraco_text.py DELETED
@@ -1,109 +0,0 @@
- """Functions brought over from jaraco.text.
-
- These functions are not supposed to be used within `pip._internal`. These are
- helper functions brought over from `jaraco.text` to enable vendoring newer
- copies of `pkg_resources` without having to vendor `jaraco.text` and its entire
- dependency cone; something that our vendoring setup is not currently capable of
- handling.
-
- License reproduced from original source below:
-
- Copyright Jason R. Coombs
-
- Permission is hereby granted, free of charge, to any person obtaining a copy
- of this software and associated documentation files (the "Software"), to
- deal in the Software without restriction, including without limitation the
- rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
- sell copies of the Software, and to permit persons to whom the Software is
- furnished to do so, subject to the following conditions:
-
- The above copyright notice and this permission notice shall be included in
- all copies or substantial portions of the Software.
-
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- IN THE SOFTWARE.
- """
-
- import functools
- import itertools
-
-
- def _nonblank(str):
-     return str and not str.startswith("#")
-
-
- @functools.singledispatch
- def yield_lines(iterable):
-     r"""
-     Yield valid lines of a string or iterable.
-
-     >>> list(yield_lines(''))
-     []
-     >>> list(yield_lines(['foo', 'bar']))
-     ['foo', 'bar']
-     >>> list(yield_lines('foo\nbar'))
-     ['foo', 'bar']
-     >>> list(yield_lines('\nfoo\n#bar\nbaz #comment'))
-     ['foo', 'baz #comment']
-     >>> list(yield_lines(['foo\nbar', 'baz', 'bing\n\n\n']))
-     ['foo', 'bar', 'baz', 'bing']
-     """
-     return itertools.chain.from_iterable(map(yield_lines, iterable))
-
-
- @yield_lines.register(str)
- def _(text):
-     return filter(_nonblank, map(str.strip, text.splitlines()))
-
-
- def drop_comment(line):
-     """
-     Drop comments.
-
-     >>> drop_comment('foo # bar')
-     'foo'
-
-     A hash without a space may be in a URL.
-
-     >>> drop_comment('http://example.com/foo#bar')
-     'http://example.com/foo#bar'
-     """
-     return line.partition(" #")[0]
-
-
- def join_continuation(lines):
-     r"""
-     Join lines continued by a trailing backslash.
-
-     >>> list(join_continuation(['foo \\', 'bar', 'baz']))
-     ['foobar', 'baz']
-     >>> list(join_continuation(['foo \\', 'bar', 'baz']))
-     ['foobar', 'baz']
-     >>> list(join_continuation(['foo \\', 'bar \\', 'baz']))
-     ['foobarbaz']
-
-     Not sure why, but...
-     The character preceding the backslash is also elided.
-
-     >>> list(join_continuation(['goo\\', 'dly']))
-     ['godly']
-
-     A terrible idea, but...
-     If no line is available to continue, suppress the lines.
-
-     >>> list(join_continuation(['foo', 'bar\\', 'baz\\']))
-     ['foo']
-     """
-     lines = iter(lines)
-     for item in lines:
-         while item.endswith("\\"):
-             try:
-                 item = item[:-2].strip() + next(lines)
-             except StopIteration:
-                 return
-         yield item
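The three helpers compose naturally. A small sketch of parsing a requirements-style block with them (it assumes the functions above are in scope; the behavior follows the doctests):

# Strip blanks and comment-only lines, drop trailing comments,
# then join backslash-continued lines.
text = """
# build requirements
setuptools \\
>=40.8 # pinned
wheel
"""
cleaned = join_continuation(map(drop_comment, yield_lines(text)))
print(list(cleaned))  # ['setuptools>=40.8', 'wheel']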
spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/_version.py DELETED
@@ -1,2 +0,0 @@
- # This file is protected via CODEOWNERS
- __version__ = "1.26.15"
spaces/Atualli/yoloxTeste/app1.py DELETED
@@ -1,105 +0,0 @@
- import gradio as gr
- import os
- #os.system("pip -qq install yoloxdetect==0.0.7")
- os.system("pip -qq install yoloxdetect")
- import torch
- import json
- import yoloxdetect2.helpers as yoloxdetect
- #from yoloxdetect import YoloxDetector
-
-
- # Images
- torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg', 'zidane.jpg')
- torch.hub.download_url_to_file('https://raw.githubusercontent.com/obss/sahi/main/tests/data/small-vehicles1.jpeg', 'small-vehicles1.jpeg')
- torch.hub.download_url_to_file('https://raw.githubusercontent.com/Megvii-BaseDetection/YOLOX/main/assets/dog.jpg', 'dog.jpg')
-
- model = yoloxdetect.YoloxDetector2('kadirnar/yolox_s-v0.1.1', 'configs.yolox_s', device="cuda", hf_model=True)
-
- def yolox_inference(
-     image_path: gr.inputs.Image = None,
-     model_path: gr.inputs.Dropdown = 'kadirnar/yolox_s-v0.1.1',
-     config_path: gr.inputs.Textbox = 'configs.yolox_s',
-     image_size: gr.inputs.Slider = 640
- ):
-     """
-     YOLOX inference function
-     Args:
-         image: Input image
-         model_path: Path to the model
-         config_path: Path to the config file
-         image_size: Image size
-     Returns:
-         Rendered image
-     """
-
-     #model = YoloxDetector(model_path, config_path=config_path, device="cpu", hf_model=True)
-     #pred = model.predict(image_path=image_path, image_size=image_size)
-     pred2 = []
-     if model:
-         print(image_path)
-         model.torchyolo = True
-         pred2 = model.predict(image_path=image_path, image_size=image_size)
-     #text = "Ola"
-     #print (vars(model))
-     #print (pred2[0])
-     #print (pred2[1])
-     #print (pred2[2])
-     #os.remove(image_path)
-
-
-     tensor = {
-         "tensorflow": [
-         ]
-     }
-
-     if pred2 is not None:
-         #print (pred2[3])
-         for i, element in enumerate(pred2[0]):
-             object = {}
-             itemclass = round(pred2[2][i].item())
-             object["classe"] = itemclass
-             object["nome"] = pred2[3][itemclass]
-             object["score"] = pred2[1][i].item()
-             object["x"] = element[0].item()
-             object["y"] = element[1].item()
-             object["w"] = element[2].item()
-             object["h"] = element[3].item()
-             tensor["tensorflow"].append(object)
-
-     #print(tensor)
-
-     text = json.dumps(tensor)
-     return text
-
-
- inputs = [
-     gr.inputs.Image(type="filepath", label="Input Image"),
-     gr.inputs.Textbox(lines=1, label="Model Path", default="kadirnar/yolox_s-v0.1.1"),
-     gr.inputs.Textbox(lines=1, label="Config Path", default="configs.yolox_s"),
-     gr.inputs.Slider(minimum=320, maximum=1280, default=640, step=32, label="Image Size"),
- ]
-
- outputs = gr.outputs.Image(type="filepath", label="Output Image")
- title = "SIMULADOR PARA RECONHECIMENTO DE IMAGEM"
-
- examples = [
-     ["small-vehicles1.jpeg", "kadirnar/yolox_m-v0.1.1", "configs.yolox_m", 640],
-     ["zidane.jpg", "kadirnar/yolox_s-v0.1.1", "configs.yolox_s", 640],
-     ["dog.jpg", "kadirnar/yolox_tiny-v0.1.1", "configs.yolox_tiny", 640],
- ]
-
- demo_app = gr.Interface(
-     fn=yolox_inference,
-     inputs=inputs,
-     outputs=["text"],
-     title=title,
-     examples=examples,
-     cache_examples=True,
-     live=True,
-     theme='huggingface',
- )
- try:
-     demo_app.launch(debug=True, server_name="192.168.0.153", server_port=8081, enable_queue=True)
- except:
-     demo_app.close()
-
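Because `outputs=["text"]` is used, the interface returns detections as a JSON string rather than a rendered image; a client can decode it like this (the reply value below is illustrative, while the field names match the code above):

import json

reply = ('{"tensorflow": [{"classe": 16, "nome": "dog", "score": 0.92,'
         ' "x": 130.0, "y": 220.0, "w": 180.0, "h": 310.0}]}')
for det in json.loads(reply)["tensorflow"]:
    print(f'{det["nome"]} ({det["score"]:.2f}) at '
          f'x={det["x"]:.0f} y={det["y"]:.0f} w={det["w"]:.0f} h={det["h"]:.0f}')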
spaces/AzinZ/vitscn/README.md DELETED
@@ -1,13 +0,0 @@
- ---
- title: Vitscn
- emoji: 🌖
- colorFrom: gray
- colorTo: green
- sdk: gradio
- sdk_version: 3.44.2
- app_file: app.py
- pinned: false
- license: mit
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
spaces/AzulaFire/SparkDebate/utils/API.py DELETED
@@ -1,244 +0,0 @@
- import base64
- import hmac
- import json
- from datetime import datetime, timezone
- from urllib.parse import urlencode, urlparse
- from websocket import create_connection, WebSocketConnectionClosedException
- from utils.tools import get_prompt, process_response, init_script, create_script
-
-
- class SparkAPI:
-     __api_url = 'wss://spark-api.xf-yun.com/v1.1/chat'
-     __max_token = 4096
-
-     def __init__(self, app_id, api_key, api_secret):
-         self.__app_id = app_id
-         self.__api_key = api_key
-         self.__api_secret = api_secret
-
-     def __set_max_tokens(self, token):
-         if isinstance(token, int) is False or token < 0:
-             print("set_max_tokens() error: tokens should be a positive integer!")
-             return
-         self.__max_token = token
-
-     def __get_authorization_url(self):
-         authorize_url = urlparse(self.__api_url)
-         # 1. generate data
-         date = datetime.now(timezone.utc).strftime('%a, %d %b %Y %H:%M:%S %Z')
-
-         """
-         Generation rule of Authorization parameters
-         1) Obtain the APIKey and APISecret parameters from the console.
-         2) Use the aforementioned date to dynamically concatenate a string tmp. Here we take Huobi's URL as an example,
-         the actual usage requires replacing the host and path with the specific request URL.
-         """
-         signature_origin = "host: {}\ndate: {}\nGET {} HTTP/1.1".format(
-             authorize_url.netloc, date, authorize_url.path
-         )
-         signature = base64.b64encode(
-             hmac.new(
-                 self.__api_secret.encode(),
-                 signature_origin.encode(),
-                 digestmod='sha256'
-             ).digest()
-         ).decode()
-         authorization_origin = \
-             'api_key="{}",algorithm="{}",headers="{}",signature="{}"'.format(
-                 self.__api_key, "hmac-sha256", "host date request-line", signature
-             )
-         authorization = base64.b64encode(
-             authorization_origin.encode()).decode()
-         params = {
-             "authorization": authorization,
-             "date": date,
-             "host": authorize_url.netloc
-         }
-
-         ws_url = self.__api_url + "?" + urlencode(params)
-         return ws_url
-
-     def __build_inputs(
-             self,
-             message: dict,
-             user_id: str = "001",
-             domain: str = "general",
-             temperature: float = 0.5,
-             max_tokens: int = 4096
-     ):
-         input_dict = {
-             "header": {
-                 "app_id": self.__app_id,
-                 "uid": user_id,
-             },
-             "parameter": {
-                 "chat": {
-                     "domain": domain,
-                     "temperature": temperature,
-                     "max_tokens": max_tokens,
-                 }
-             },
-             "payload": {
-                 "message": message
-             }
-         }
-         return json.dumps(input_dict)
-
-     def chat(
-             self,
-             query: str,
-             history: list = None,  # store the conversation history
-             user_id: str = "001",
-             domain: str = "general",
-             max_tokens: int = 4096,
-             temperature: float = 0.5,
-     ):
-         if history is None:
-             history = []
-
-         # the max of max_length is 4096
-         max_tokens = min(max_tokens, 4096)
-         url = self.__get_authorization_url()
-         ws = create_connection(url)
-         message = get_prompt(query, history)
-         input_str = self.__build_inputs(
-             message=message,
-             user_id=user_id,
-             domain=domain,
-             temperature=temperature,
-             max_tokens=max_tokens,
-         )
-         ws.send(input_str)
-         response_str = ws.recv()
-         try:
-             while True:
-                 response, history, status = process_response(
-                     response_str, history)
-                 """
-                 The final return result, which means a complete conversation.
-                 doc url: https://www.xfyun.cn/doc/spark/Web.html#_1-%E6%8E%A5%E5%8F%A3%E8%AF%B4%E6%98%8E
-                 """
-                 if len(response) == 0 or status == 2:
-                     break
-                 response_str = ws.recv()
-             return response
-
-         except WebSocketConnectionClosedException:
-             print("Connection closed")
-         finally:
-             ws.close()
-
-     # Stream output statement, used for terminal chat.
-
-     def streaming_output(
-             self,
-             query: str,
-             history: list = None,  # store the conversation history
-             user_id: str = "001",
-             domain: str = "general",
-             max_tokens: int = 4096,
-             temperature: float = 0.5,
-     ):
-         if history is None:
-             history = []
-         # the max of max_length is 4096
-         max_tokens = min(max_tokens, 4096)
-         url = self.__get_authorization_url()
-         ws = create_connection(url)
-
-         message = get_prompt(query, history)
-         input_str = self.__build_inputs(
-             message=message,
-             user_id=user_id,
-             domain=domain,
-             temperature=temperature,
-             max_tokens=max_tokens,
-         )
-         # print(input_str)
-         # send question or prompt to url, and receive the answer
-         ws.send(input_str)
-         response_str = ws.recv()
-
-         # Continuous conversation
-         try:
-             while True:
-                 response, history, status = process_response(
-                     response_str, history)
-                 yield response, history
-                 if len(response) == 0 or status == 2:
-                     break
-                 response_str = ws.recv()
-
-         except WebSocketConnectionClosedException:
-             print("Connection closed")
-         finally:
-             ws.close()
-
-     def chat_stream(self):
-         history = []
-         try:
-             print("输入init来初始化剧本,输入create来创作剧本,输入exit或stop来终止对话\n")
-             while True:
-                 query = input("Ask: ")
-                 if query == 'init':
-                     jsonfile = input("请输入剧本文件路径:")
-                     script_data = init_script(history, jsonfile)
-                     print(
-                         f"正在导入剧本{script_data['name']},角色信息:{script_data['characters']},剧情介绍:{script_data['summary']}")
-                     query = f"我希望你能够扮演这个剧本杀游戏的主持人,我希望你能够逐步引导玩家到达最终结局,同时希望你在游戏中设定一些随机事件,需要玩家依靠自身的能力解决,当玩家做出偏离主线的行为或者与剧本无关的行为时,你需要委婉地将玩家引导至正常游玩路线中,对于玩家需要决策的事件,你需要提供一些行动推荐,下面是剧本介绍:{script_data}"
-                 if query == 'create':
-                     name = input('请输入剧本名称:')
-                     characters = input('请输入角色信息:')
-                     summary = input('请输入剧情介绍:')
-                     details = input('请输入剧本细节')
-                     create_script(name, characters, summary, details)
-                     print('剧本创建成功!')
-                     continue
-                 if query == "exit" or query == "stop":
-                     break
-                 for response, _ in self.streaming_output(query, history):
-                     print("\r" + response, end="")
-                 print("\n")
-         finally:
-             print("\nThank you for using the SparkDesk AI. Welcome to use it again!")
-
-
- from langchain.llms.base import LLM
- from typing import Any, List, Mapping, Optional
-
-
- class Spark_forlangchain(LLM):
-
-     # Class member variables; `n` is an integer
-     n: int
-     app_id: str
-     api_key: str
-     api_secret: str
-     # Used to identify the type of this LLM subclass
-
-     @property
-     def _llm_type(self) -> str:
-         return "Spark"
-
-     # Override the base-class method: respond to the user's prompt and return a string
-     def _call(
-             self,
-             query: str,
-             history: list = None,  # store the conversation history
-             user_id: str = "001",
-             domain: str = "general",
-             max_tokens: int = 4096,
-             temperature: float = 0.7,
-             stop: Optional[List[str]] = None,
-     ) -> str:
-         if stop is not None:
-             raise ValueError("stop kwargs are not permitted.")
-         bot = SparkAPI(app_id=self.app_id, api_key=self.api_key,
-                        api_secret=self.api_secret)
-         response = bot.chat(query, history, user_id,
-                             domain, max_tokens, temperature)
-         return response
-
-     # Return a dict containing the unique identifier of the LLM
-     @property
-     def _identifying_params(self) -> Mapping[str, Any]:
-         """Get the identifying parameters."""
-         return {"n": self.n}
spaces/BAAI/vid2vid-zero/vid2vid_zero/pipelines/pipeline_vid2vid_zero.py DELETED
@@ -1,541 +0,0 @@
- # Copyright 2022 The HuggingFace Team. All rights reserved.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- import inspect
- from typing import Callable, List, Optional, Union
- from dataclasses import dataclass
-
- import numpy as np
- import torch
-
- from diffusers.utils import is_accelerate_available
- from packaging import version
- from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer
-
- from diffusers.configuration_utils import FrozenDict
- from diffusers.models import AutoencoderKL  # UNet2DConditionModel
- from diffusers.pipeline_utils import DiffusionPipeline
- from diffusers.schedulers import (
-     DDIMScheduler,
-     DPMSolverMultistepScheduler,
-     EulerAncestralDiscreteScheduler,
-     EulerDiscreteScheduler,
-     LMSDiscreteScheduler,
-     PNDMScheduler,
- )
- from diffusers.utils import deprecate, logging, BaseOutput
- from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
-
- from einops import rearrange
-
- from ..models.unet_2d_condition import UNet2DConditionModel
-
-
- logger = logging.get_logger(__name__)  # pylint: disable=invalid-name
-
-
- @dataclass
- class Vid2VidZeroPipelineOutput(BaseOutput):
-     images: Union[torch.Tensor, np.ndarray]
-
-
- class Vid2VidZeroPipeline(DiffusionPipeline):
-     r"""
-     Pipeline for text-to-image generation using Stable Diffusion.
-
-     This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
-     library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
-
-     Args:
-         vae ([`AutoencoderKL`]):
-             Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
-         text_encoder ([`CLIPTextModel`]):
-             Frozen text-encoder. Stable Diffusion uses the text portion of
-             [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
-             the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
-         tokenizer (`CLIPTokenizer`):
-             Tokenizer of class
-             [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
-         unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
-         scheduler ([`SchedulerMixin`]):
-             A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
-             [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
-         safety_checker ([`StableDiffusionSafetyChecker`]):
-             Classification module that estimates whether generated images could be considered offensive or harmful.
-             Please, refer to the [model card](https://huggingface.co/runwayml/stable-diffusion-v1-5) for details.
-         feature_extractor ([`CLIPFeatureExtractor`]):
-             Model that extracts features from generated images to be used as inputs for the `safety_checker`.
-     """
-     _optional_components = ["safety_checker", "feature_extractor"]
-
-     def __init__(
-         self,
-         vae: AutoencoderKL,
-         text_encoder: CLIPTextModel,
-         tokenizer: CLIPTokenizer,
-         unet: UNet2DConditionModel,
-         scheduler: Union[
-             DDIMScheduler,
-             PNDMScheduler,
-             LMSDiscreteScheduler,
-             EulerDiscreteScheduler,
-             EulerAncestralDiscreteScheduler,
-             DPMSolverMultistepScheduler,
-         ],
-         safety_checker: StableDiffusionSafetyChecker,
-         feature_extractor: CLIPFeatureExtractor,
-         requires_safety_checker: bool = False,
-     ):
-         super().__init__()
-
-         if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1:
-             deprecation_message = (
-                 f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`"
-                 f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure "
-                 "to update the config accordingly as leaving `steps_offset` might lead to incorrect results"
-                 " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,"
-                 " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`"
-                 " file"
-             )
-             deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
-             new_config = dict(scheduler.config)
-             new_config["steps_offset"] = 1
-             scheduler._internal_dict = FrozenDict(new_config)
-
-         if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True:
-             deprecation_message = (
-                 f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`."
-                 " `clip_sample` should be set to False in the configuration file. Please make sure to update the"
-                 " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in"
-                 " future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very"
-                 " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file"
-             )
-             deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False)
-             new_config = dict(scheduler.config)
-             new_config["clip_sample"] = False
-             scheduler._internal_dict = FrozenDict(new_config)
-
-         if safety_checker is None and requires_safety_checker:
-             logger.warning(
-                 f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure"
-                 " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered"
-                 " results in services or applications open to the public. Both the diffusers team and Hugging Face"
-                 " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling"
-                 " it only for use-cases that involve analyzing network behavior or auditing its results. For more"
-                 " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ."
-             )
-
-         if safety_checker is not None and feature_extractor is None:
-             raise ValueError(
-                 f"Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety"
-                 " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
-             )
-
-         is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse(
-             version.parse(unet.config._diffusers_version).base_version
-         ) < version.parse("0.9.0.dev0")
-         is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
-         if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64:
-             deprecation_message = (
-                 "The configuration file of the unet has set the default `sample_size` to smaller than"
-                 " 64 which seems highly unlikely. If your checkpoint is a fine-tuned version of any of the"
-                 " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-"
-                 " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5"
-                 " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the"
-                 " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`"
-                 " in the config might lead to incorrect results in future versions. If you have downloaded this"
-                 " checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for"
-                 " the `unet/config.json` file"
-             )
-             deprecate("sample_size<64", "1.0.0", deprecation_message, standard_warn=False)
-             new_config = dict(unet.config)
-             new_config["sample_size"] = 64
-             unet._internal_dict = FrozenDict(new_config)
-
-         self.register_modules(
-             vae=vae,
-             text_encoder=text_encoder,
-             tokenizer=tokenizer,
-             unet=unet,
-             scheduler=scheduler,
-             safety_checker=safety_checker,
-             feature_extractor=feature_extractor,
-         )
-         self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
-         self.register_to_config(requires_safety_checker=requires_safety_checker)
-
-     def enable_vae_slicing(self):
-         r"""
-         Enable sliced VAE decoding.
-
-         When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several
-         steps. This is useful to save some memory and allow larger batch sizes.
-         """
-         self.vae.enable_slicing()
-
-     def disable_vae_slicing(self):
-         r"""
-         Disable sliced VAE decoding. If `enable_vae_slicing` was previously invoked, this method will go back to
-         computing decoding in one step.
-         """
-         self.vae.disable_slicing()
-
-     def enable_sequential_cpu_offload(self, gpu_id=0):
-         r"""
-         Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet,
-         text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a
-         `torch.device('meta')` and loaded to GPU only when their specific submodule has its `forward` method called.
-         """
-         if is_accelerate_available():
-             from accelerate import cpu_offload
-         else:
-             raise ImportError("Please install accelerate via `pip install accelerate`")
-
-         device = torch.device(f"cuda:{gpu_id}")
-
-         for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae]:
-             if cpu_offloaded_model is not None:
-                 cpu_offload(cpu_offloaded_model, device)
-
-         if self.safety_checker is not None:
-             # TODO(Patrick) - there is currently a bug with cpu offload of nn.Parameter in accelerate
-             # fix by only offloading self.safety_checker for now
-             cpu_offload(self.safety_checker.vision_model, device)
-
-     @property
-     def _execution_device(self):
-         r"""
-         Returns the device on which the pipeline's models will be executed. After calling
-         `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module
-         hooks.
-         """
-         if self.device != torch.device("meta") or not hasattr(self.unet, "_hf_hook"):
-             return self.device
-         for module in self.unet.modules():
-             if (
-                 hasattr(module, "_hf_hook")
-                 and hasattr(module._hf_hook, "execution_device")
-                 and module._hf_hook.execution_device is not None
-             ):
-                 return torch.device(module._hf_hook.execution_device)
-         return self.device
-
-     def _encode_prompt(self, prompt, device, num_videos_per_prompt, do_classifier_free_guidance, negative_prompt, uncond_embeddings=None):
-         r"""
-         Encodes the prompt into text encoder hidden states.
-
-         Args:
-             prompt (`str` or `list(int)`):
-                 prompt to be encoded
-             device: (`torch.device`):
-                 torch device
-             num_images_per_prompt (`int`):
-                 number of images that should be generated per prompt
-             do_classifier_free_guidance (`bool`):
-                 whether to use classifier free guidance or not
-             negative_prompt (`str` or `List[str]`):
-                 The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored
-                 if `guidance_scale` is less than `1`).
-         """
-         batch_size = len(prompt) if isinstance(prompt, list) else 1
-
-         text_inputs = self.tokenizer(
-             prompt,
-             padding="max_length",
-             max_length=self.tokenizer.model_max_length,
-             truncation=True,
-             return_tensors="pt",
-         )
-         text_input_ids = text_inputs.input_ids
-         untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
-
-         if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids):
-             removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1])
-             logger.warning(
-                 "The following part of your input was truncated because CLIP can only handle sequences up to"
-                 f" {self.tokenizer.model_max_length} tokens: {removed_text}"
-             )
-
-         if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
-             attention_mask = text_inputs.attention_mask.to(device)
-         else:
-             attention_mask = None
-
-         text_embeddings = self.text_encoder(
-             text_input_ids.to(device),
-             attention_mask=attention_mask,
-         )
-         text_embeddings = text_embeddings[0]
-
-         # duplicate text embeddings for each generation per prompt, using mps friendly method
-         # num_videos_per_prompt = 1, thus nothing happens here
-         bs_embed, seq_len, _ = text_embeddings.shape
-         text_embeddings = text_embeddings.repeat(1, num_videos_per_prompt, 1)
-         text_embeddings = text_embeddings.view(bs_embed * num_videos_per_prompt, seq_len, -1)
-
-         # get unconditional embeddings for classifier free guidance
-         if do_classifier_free_guidance:
-             uncond_tokens: List[str]
-             if negative_prompt is None:
-                 uncond_tokens = [""] * batch_size
-             elif type(prompt) is not type(negative_prompt):
-                 raise TypeError(
-                     f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
-                     f" {type(prompt)}."
-                 )
-             elif isinstance(negative_prompt, str):
-                 uncond_tokens = [negative_prompt]
-             elif batch_size != len(negative_prompt):
-                 raise ValueError(
-                     f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
-                     f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
-                     " the batch size of `prompt`."
-                 )
-             else:
-                 uncond_tokens = negative_prompt
-
-             max_length = text_input_ids.shape[-1]
-             uncond_input = self.tokenizer(
-                 uncond_tokens,
-                 padding="max_length",
-                 max_length=max_length,
-                 truncation=True,
-                 return_tensors="pt",
-             )
-
-             if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask:
-                 attention_mask = uncond_input.attention_mask.to(device)
-             else:
-                 attention_mask = None
-
-             uncond_embeddings = self.text_encoder(
-                 uncond_input.input_ids.to(device),
-                 attention_mask=attention_mask,
-             )
-             uncond_embeddings = uncond_embeddings[0]
-
-             # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
-             seq_len = uncond_embeddings.shape[1]
-             uncond_embeddings = uncond_embeddings.repeat(1, num_videos_per_prompt, 1)
-             uncond_embeddings = uncond_embeddings.view(batch_size * num_videos_per_prompt, seq_len, -1)
-
-             # For classifier free guidance, we need to do two forward passes.
-             # Here we concatenate the unconditional and text embeddings into a single batch
-             # to avoid doing two forward passes
-             text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
-
-         return text_embeddings
-
-     def run_safety_checker(self, image, device, dtype):
-         if self.safety_checker is not None:
-             safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(device)
-             image, has_nsfw_concept = self.safety_checker(
-                 images=image, clip_input=safety_checker_input.pixel_values.to(dtype)
-             )
-         else:
-             has_nsfw_concept = None
-         return image, has_nsfw_concept
-
-     def decode_latents(self, latents):
-         video_length = latents.shape[2]
-         latents = 1 / 0.18215 * latents
-         latents = rearrange(latents, "b c f h w -> (b f) c h w")
-         video = self.vae.decode(latents).sample
-         video = rearrange(video, "(b f) c h w -> b c f h w", f=video_length)
-         video = (video / 2 + 0.5).clamp(0, 1)
-         # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16
-         video = video.cpu().float().numpy()
-         return video
-
-     def prepare_extra_step_kwargs(self, generator, eta):
-         # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
-         # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
-         # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
-         # and should be between [0, 1]
-
-         accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
-         extra_step_kwargs = {}
-         if accepts_eta:
-             extra_step_kwargs["eta"] = eta
-
-         # check if the scheduler accepts generator
-         accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
-         if accepts_generator:
-             extra_step_kwargs["generator"] = generator
-         return extra_step_kwargs
-
-     def check_inputs(self, prompt, height, width, callback_steps):
-         if not isinstance(prompt, str) and not isinstance(prompt, list):
-             raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
-
-         if height % 8 != 0 or width % 8 != 0:
-             raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
-
-         if (callback_steps is None) or (
-             callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
-         ):
-             raise ValueError(
-                 f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
-                 f" {type(callback_steps)}."
-             )
-
-     def prepare_latents(self, batch_size, num_channels_latents, video_length, height, width, dtype, device, generator, latents=None):
-         shape = (batch_size, num_channels_latents, video_length, height // self.vae_scale_factor, width // self.vae_scale_factor)
-         if isinstance(generator, list) and len(generator) != batch_size:
-             raise ValueError(
-                 f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
-                 f" size of {batch_size}. Make sure the batch size matches the length of the generators."
-             )
-
-         if latents is None:
-             rand_device = "cpu" if device.type == "mps" else device
-
-             if isinstance(generator, list):
-                 shape = (1,) + shape[1:]
-                 latents = [
-                     torch.randn(shape, generator=generator[i], device=rand_device, dtype=dtype)
-                     for i in range(batch_size)
-                 ]
-                 latents = torch.cat(latents, dim=0).to(device)
-             else:
-                 latents = torch.randn(shape, generator=generator, device=rand_device, dtype=dtype).to(device)
-         else:
-             if latents.shape != shape:
-                 raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}")
-             latents = latents.to(device)
-
-         # scale the initial noise by the standard deviation required by the scheduler
-         latents = latents * self.scheduler.init_noise_sigma
-         return latents
-
-     @torch.no_grad()
-     def __call__(
-         self,
-         prompt: Union[str, List[str]],
-         video_length: Optional[int],
-         height: Optional[int] = None,
-         width: Optional[int] = None,
-         num_inference_steps: int = 50,
-         guidance_scale: float = 7.5,
-         negative_prompt: Optional[Union[str, List[str]]] = None,
-         num_videos_per_prompt: Optional[int] = 1,
-         eta: float = 0.0,
-         generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
-         latents: Optional[torch.FloatTensor] = None,
-         output_type: Optional[str] = "tensor",
-         return_dict: bool = True,
-         callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
-         callback_steps: Optional[int] = 1,
-         uncond_embeddings: torch.Tensor = None,
-         null_uncond_ratio: float = 1.0,
-         **kwargs,
-     ):
-         # Default height and width to unet
-         height = height or self.unet.config.sample_size * self.vae_scale_factor
446
- width = width or self.unet.config.sample_size * self.vae_scale_factor
447
-
448
- # Check inputs. Raise error if not correct
449
- self.check_inputs(prompt, height, width, callback_steps)
450
-
451
- # Define call parameters
452
- batch_size = 1 if isinstance(prompt, str) else len(prompt)
453
- device = self._execution_device
454
- # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
455
- # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
456
- # corresponds to doing no classifier free guidance.
457
- do_classifier_free_guidance = guidance_scale > 1.0
458
-
459
- # Encode input prompt
460
- with_uncond_embedding = do_classifier_free_guidance if uncond_embeddings is None else False
461
- text_embeddings = self._encode_prompt(
462
- prompt, device, num_videos_per_prompt, with_uncond_embedding, negative_prompt,
463
- )
464
-
465
- # Prepare timesteps
466
- self.scheduler.set_timesteps(num_inference_steps, device=device)
467
- timesteps = self.scheduler.timesteps
468
-
469
- # Prepare latent variables
470
- num_channels_latents = self.unet.in_channels
471
- latents = self.prepare_latents(
472
- batch_size * num_videos_per_prompt,
473
- num_channels_latents,
474
- video_length,
475
- height,
476
- width,
477
- text_embeddings.dtype,
478
- device,
479
- generator,
480
- latents,
481
- )
482
- latents_dtype = latents.dtype
483
-
484
- # Prepare extra step kwargs.
485
- extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
486
-
487
- # Denoising loop
488
- num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
489
- with self.progress_bar(total=num_inference_steps) as progress_bar:
490
- if uncond_embeddings is not None:
491
- start_time = 50
492
- assert (timesteps[-start_time:] == timesteps).all()
493
- for i, t in enumerate(timesteps):
494
- # expand the latents if we are doing classifier free guidance
495
- latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents
496
- latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
497
-
498
- if uncond_embeddings is not None:
499
- use_uncond_this_step = True
500
- if null_uncond_ratio > 0:
501
- if i > len(timesteps) * null_uncond_ratio:
502
- use_uncond_this_step = False
503
- else:
504
- if i < len(timesteps) * (1 + null_uncond_ratio):
505
- use_uncond_this_step = False
506
- if use_uncond_this_step:
507
- text_embeddings_input = torch.cat([uncond_embeddings[i].expand(*text_embeddings.shape), text_embeddings])
508
- else:
509
- uncond_embeddings_ = self._encode_prompt('', device, num_videos_per_prompt, False, negative_prompt)
510
- text_embeddings_input = torch.cat([uncond_embeddings_.expand(*text_embeddings.shape), text_embeddings])
511
- else:
512
- text_embeddings_input = text_embeddings
513
-
514
- # predict the noise residual
515
- noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings_input).sample.to(dtype=latents_dtype)
516
-
517
- # perform guidance
518
- if do_classifier_free_guidance:
519
- noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
520
- noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
521
-
522
- # compute the previous noisy sample x_t -> x_t-1
523
- latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample
524
-
525
- # call the callback, if provided
526
- if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
527
- progress_bar.update()
528
- if callback is not None and i % callback_steps == 0:
529
- callback(i, t, latents)
530
-
531
- # Post-processing
532
- images = self.decode_latents(latents)
533
-
534
- # Convert to tensor
535
- if output_type == "tensor":
536
- images = torch.from_numpy(images)
537
-
538
- if not return_dict:
539
- return images
540
-
541
- return Vid2VidZeroPipelineOutput(images=images)
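For orientation, the `__call__` above follows the standard diffusers calling convention. A minimal usage sketch — the class name `Vid2VidZeroPipeline` is inferred from the `Vid2VidZeroPipelineOutput` return type, and the import path and checkpoint directory are placeholders, not values taken from this diff:

```python
# Minimal usage sketch; class name, import path, and checkpoint directory are
# assumptions/placeholders rather than values taken from the deleted hunk above.
import torch
from pipeline_vid2vid_zero import Vid2VidZeroPipeline  # assumed module name

pipe = Vid2VidZeroPipeline.from_pretrained(  # standard DiffusionPipeline loader,
    "path/to/stable-diffusion-checkpoint",   # assuming the class subclasses it
    torch_dtype=torch.float16,
).to("cuda")

out = pipe(
    prompt="a jeep driving down a snowy mountain road",
    video_length=8,         # number of frames f in the (b, c, f, h, w) latents
    height=512,             # must be divisible by 8 (enforced by check_inputs)
    width=512,
    num_inference_steps=50,
    guidance_scale=7.5,     # > 1.0 enables classifier-free guidance
    generator=torch.Generator(device="cuda").manual_seed(0),
)
video = out.images          # (b, c, f, h, w) tensor with values in [0, 1]
```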
 
spaces/Benson/text-generation/Examples/2023 Songs Download.md DELETED
@@ -1,91 +0,0 @@
- <br />
- <h1>How to Download Songs in 2023: A Guide for Music Lovers</h1>
- <p>Music is one of the most universal forms of entertainment and expression. Whether you want to relax, dance, study, work out, or simply enjoy some tunes, music can lift your mood and your experience. But how do you get access to your favorite songs in 2023? Do you stream them online or download them to your device?</p>
- <p>In this article, we will explore the pros and cons of downloading music versus streaming it. We will also compare the best music streaming services in 2023 and show you how to download music legally and safely. By the end of this article, you will have a better idea of how to enjoy music in 2023.</p>
- <h2>2023 songs download</h2><br /><p><b>Download</b> &#10031;&#10031;&#10031; <a href="https://bltlly.com/2v6IEv">https://bltlly.com/2v6IEv</a></p><br /><br />
- <h2>Why Download Music Instead of Streaming?</h2>
- <p>Streaming music is convenient and popular. You can listen to millions of songs on demand without having to buy or store them. You can also discover new music based on your preferences and recommendations. However, streaming music has some drawbacks as well.</p>
- <p>First, streaming music requires an internet connection. With a slow or unstable connection, you may experience buffering or interruptions, and with a limited data plan, you may incur extra charges for streaming. Second, streaming depends on the availability and quality of the service: if it changes its terms, pricing, features, or catalog, you may lose access to some songs or playlists. Third, streaming does not give you ownership of the music; you are only renting it for as long as you pay the subscription.</p>
-
- <h2>What Are the Best Music Streaming Services in 2023?</h2>
- <p>If you prefer streaming music over downloading it, you have plenty of options to choose from. There are many music streaming services in 2023 that cater to different tastes and needs. Here are some of the most popular ones:</p>
- <table>
- <tr>
- <th>Service</th> <th>Features</th>
- <th>Price</th>
- <th>Audio quality</th>
- <th>Catalog size</th>
- </tr>
- <tr>
- <td>Spotify</td>
- <td>- Personalized playlists and recommendations - Podcasts and videos - Social features and integrations - Offline mode and cross-device syncing - Spotify Connect and Spotify Kids</td>
- <td>- Free (with ads and limited skips) - Premium: $9.99/month (individual), $14.99/month (family), $4.99/month (student), $12.99/month (duo) - HiFi: $19.99/month (coming soon)</td>
- <td>- Free: 160 kbps - Premium: 320 kbps - HiFi: lossless CD quality</td>
- <td>- Over 70 million songs - Over 2.2 million podcasts</td>
- </tr>
- <tr>
- <td>Apple Music</td>
- <td>- Curated playlists and radio stations - Live and on-demand shows - Lyrics and music videos - Offline mode and cross-device syncing - Apple Music 1, Hits, and Country</td>
- <td>- Free trial for 3 months - Individual: $9.99/month - Family: $14.99/month - Student: $4.99/month - Apple One bundle: from $14.95/month</td>
- <td>- 256 kbps AAC - Spatial Audio with Dolby Atmos</td>
- <td>- Over 75 million songs - Over 1 million podcasts</td>
- </tr>
- <tr>
- <td>Tidal</td>
- <td>- Curated playlists and editorial content - Exclusive releases and concerts - Offline mode and cross-device syncing - Tidal X and Tidal Connect</td>
- <td>- Free trial for 30 days - Premium: $9.99/month (individual), $14.99/month (family), $4.99/month (student), $5.99/month (military), $5.99/month (first responder) - HiFi: $19.99/month (individual), $29.99/month (family), $9.99/month (student), $11.99/month (military), $11.99/month (first responder)</td>
-
- <td>- Over 70 million songs - Over 250,000 music videos</td>
- </tr>
- <tr>
- <td>Amazon Music</td>
- <td>- Personalized playlists and stations - Podcasts and live streams - Lyrics and music videos - Offline mode and cross-device syncing - Alexa voice control</td>
- <td>- Free (with ads and limited skips) - Prime Music: included with a Prime membership ($12.99/month or $119/year) - Unlimited: $9.99/month ($7.99/month for Prime members) or $79/year ($69/year for Prime members) - HD: $14.99/month ($12.99/month for Prime members) or $149/year ($129/year for Prime members)</td>
- <td>- Free: up to 256 kbps - Prime Music: up to 256 kbps - Unlimited: up to 256 kbps - HD: up to 850 kbps, lossless CD quality, and Ultra HD quality</td>
- <td>- Free: over 2 million songs - Prime Music: over 2 million songs - Unlimited: over 75 million songs - HD: over 75 million songs in HD and Ultra HD, and over 7 million songs in 3D Audio</td>
- </tr>
- <tr>
- <td>YouTube Music</td>
- <td>- Personalized playlists and mixes - Music videos and live performances - Offline mode and cross-device syncing - YouTube Premium benefits</td>
- <td>- Free (with ads and no background play) - Premium: $9.99/month (individual), $14.99/month (family), $4.99/month (student)</td>
- <td>- Free: up to 128 kbps AAC - Premium: up to 256 kbps AAC</td>
- <td>- Over 70 million songs - Over 2 billion music videos</td>
- </tr>
- </table>
- <h2>How to Download Music Legally and Safely?</h2>
- <p>Downloading music can be a great way to enjoy your favorite songs offline, but you have to be careful about the sources you use. Not every website that offers music downloads is legal or safe. Some may violate the copyrights of the artists or labels, or may contain malware or viruses that can harm your device.</p>
- <p>To avoid illegal or unsafe downloads, you should follow these tips:</p>
- <ul>
- <li>Check the website's reputation and reviews before downloading anything.</li>
-
- <li>Read the website's terms and conditions and the music license before downloading anything.</li>
- <li>Use reliable antivirus software and scan downloaded files before opening them.</li>
- </ul>
- <p>If you want to download music legally and safely, you can use some of the websites that offer free or paid music downloads with the permission of the artists or labels. Here are some examples:</p>
- <ul>
- <li><a href="">Bandcamp</a>: Bandcamp is a platform that lets independent artists and labels sell their music directly to fans. You can download music in several formats, including MP3, FLAC, and WAV. Some artists offer their music for free or on a name-your-price basis, while others charge a fixed amount.</li>
- <li><a href="">DatPiff</a>: DatPiff is a website that specializes in hip-hop and rap music. You can download mixtapes, albums, and singles for free or for a fee. DatPiff has the permission of the artists and labels to distribute their music.</li>
- <li><a href="">Free Music Archive</a>: Free Music Archive is a website that offers free music downloads across many genres and styles. The music is licensed under Creative Commons or other public-domain licenses, which means you can use it for personal or commercial purposes as long as you follow the license terms.</li>
- <li><a href="">Internet Archive</a>: Internet Archive is a website that preserves digital content from many sources, including music. You can download music from several collections, such as the Live Music Archive, Netlabels, and 78 RPMs and Cylinder Recordings. The music is in the public domain or under Creative Commons or other open licenses.</li>
- <li><a href="">iTunes</a>: iTunes is a software program and website that lets you buy and download music from many artists and labels. You can download music in AAC format, which is compatible with most devices. You can also sync your music library across your devices using iCloud.</li>
- </ul>
- <h2>Conclusion</h2>
- <p>Downloading music in 2023 can be a great way to enjoy your favorite songs offline, but you have to be careful about the sources you use. Not every website that offers music downloads is legal or safe. Always check the website's reputation and reviews, read the terms and conditions of the music license, and scan downloaded files before opening them.</p>
- <p></p>
- <p>If you want to download music legally and safely, you can use some of the websites that offer free or paid music downloads with the permission of the artists or labels, such as Bandcamp, DatPiff, Free Music Archive, Internet Archive, or iTunes. You can also use some of the best music streaming services in 2023, such as Spotify, Apple Music, Tidal, Amazon Music, or YouTube Music, which offer offline mode and high-quality audio features.</p>
- <p>Whether you stream or download music in 2023, you have plenty of options to choose from. You can listen to millions of songs across many genres and styles on demand, and discover new music based on your preferences and recommendations. Music is one of the best ways to enjoy life in 2023.</p>
- <h2>Frequently Asked Questions</h2>
- <ul>
- <li><b>Q: How can I download music for free?</b></li>
- <li>A: You can download music for free from websites that have permission from the artists or host public-domain music, such as Free Music Archive, Internet Archive, or DatPiff.</li>
- <li><b>Q: How can I download music in high-resolution audio?</b></li>
- <li>A: You can download music in high-resolution audio from websites that offer lossless or hi-res formats, such as Tidal, Qobuz, or HDtracks.</li>
- <li><b>Q: How can I download music to my iPhone or Android phone?</b></li>
- <li>A: You can download music to your iPhone or Android phone from music streaming apps that have an offline mode, such as Spotify, Apple Music, or YouTube Music. You can also transfer music from your computer to your phone using iTunes or a USB cable.</li>
- <li><b>Q: How can I download music to my computer or laptop?</b></li>
- <li>A: You can download music to your computer or laptop from websites that offer music downloads, such as Bandcamp, iTunes, or Amazon Music. You can also use a software program that can rip CDs or DVDs to your computer.</li>
- <li><b>Q: How can I download music videos?</b></li>
- <li>A: You can download music videos from websites that offer video downloads, such as YouTube, Vimeo, or Dailymotion. You can also use a software program that can convert videos to audio files.</li>
- </ul>
- <br />
- <br />
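Where a site offers a direct, legally licensed download link (as the sources listed above do), fetching the file is a plain HTTP GET. A minimal sketch in Python; the URL and output file name are placeholders, not a real track:

```python
# Minimal sketch: save a legally offered audio file over HTTP.
# The URL and output name below are placeholders, not a real track.
import requests

url = "https://example.com/artist/track.mp3"
response = requests.get(url, stream=True, timeout=30)
response.raise_for_status()  # fail loudly on 4xx/5xx instead of saving an error page

with open("track.mp3", "wb") as f:
    for chunk in response.iter_content(chunk_size=8192):  # stream to disk in 8 KB pieces
        f.write(chunk)
```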
 
spaces/Benson/text-generation/Examples/Alice Blue Apk Descargar.md DELETED
@@ -1,89 +0,0 @@
-
- <h1>Alice Blue APK Download: How to Trade Stocks and Commodities on Your Phone</h1>
- <p>Are you looking for a way to trade stocks and commodities on your phone? Do you want access to the latest market updates, news, and analysis from anywhere, at any time? If so, you should consider downloading Alice Blue APK, the mobile trading app from Alice Blue, one of the leading online brokers in India.</p>
- <h2>alice blue apk download</h2><br /><p><b>DOWNLOAD</b> &#9889; <a href="https://bltlly.com/2v6Ksl">https://bltlly.com/2v6Ksl</a></p><br /><br />
- <h2>What Is Alice Blue?</h2>
- <h3>A brief introduction to Alice Blue and its services</h3>
- <p>Alice Blue is an online brokerage firm that offers a range of services for traders and investors, such as stocks, commodities, currencies, mutual funds, IPOs, insurance, and more. Alice Blue was founded in 2006 and has more than 20 branches across India. It is known for its low brokerage charges, high exposure, fast execution, and innovative trading platforms.</p>
- <h3>The benefits of trading with Alice Blue</h3>
- <p>Some of the benefits of trading with Alice Blue are:</p>
- <ul>
- <li>You can trade across multiple segments and exchanges, such as NSE, BSE, MCX, NCDEX, etc.</li>
- <li>You can enjoy zero brokerage on equity delivery and a flat Rs.15 per order on other segments.</li>
- <li>You can get up to 20x leverage on intraday trades and up to 5x leverage on delivery trades.</li>
- <li>You can access various research reports, recommendations, tips, and strategies from experts.</li>
- <li>You can use various trading platforms, such as ANT Web, ANT Desk, ANT Meta, etc.</li>
- </ul>
- <h2>What Is ANT Mobi 2.0?</h2>
- <h3>A brief introduction to ANT Mobi 2.0 and its features</h3>
- <p>ANT Mobi 2.0 is Alice Blue's mobile trading app, which lets you trade stocks and commodities on your phone. It is an upgraded version of ANT Mobi with a better user experience, up-to-date technology, and improved trading features. ANT Mobi 2.0 is compatible with Android devices and is available as a free download.</p>
- <h3>The advantages of using ANT Mobi 2.0 for trading</h3>
- <p>Some of the advantages of using ANT Mobi 2.0 for trading are:</p>
- <ul>
- <li>You can trade anytime, anywhere, with a simple tap on your phone.</li>
- <li>You can get real-time market data, charts, indicators, news, alerts, etc.</li>
- <li>You can place orders, modify orders, cancel orders, view your order history, etc.</li>
- <li>You can monitor your portfolio, holdings, positions, margin, etc.</li>
- <li>You can transfer funds easily and securely with UPI or NEFT/RTGS.</li>
- <li>You can use fingerprint login for extra security and convenience.</li>
- <li>You can get predefined support and resistance levels for any stock.</li>
- </ul>
- <h2>How to Download and Install Alice Blue APK on Your Phone?</h2>
- <h3>The steps to download and install Alice Blue APK from the official website</h3>
- <p>If you want to download and install Alice Blue APK from the official website, you can follow these steps:</p>
- <ol>
- <li>Go to the official Alice Blue website at <a href="">https://aliceblueonline.com/</a>.</li>
- <li>Click the "Download" button in the top right corner of the home page.</li>
- <li>Scroll down and find the "ANT Mobi 2.0" section.</li>
- <li>Click the "Download APK" button and save the file to your phone.</li>
- <li>Go to your phone's settings and allow the installation of apps from unknown sources.</li>
- <li>Locate the downloaded file and tap on it to install it.</li>
- <li>Open the app and log in with your Alice Blue credentials or create a new account.</li>
- </ol>
- <h3>The steps to download and install Alice Blue APK from the Google Play Store</h3>
- <p>If you want to download and install Alice Blue APK from the Google Play Store, you can follow these steps:</p>
- <ol>
- <li>Go to the Google Play Store app on your phone or visit <a href="">https://play.google.com/store/apps/details?id=com.aliceblue.antmobi</a>.</li>
- <li>Search for "ANT Mobi 2.0" or "Alice Blue" in the search bar.</li>
- <li>Select the app from the search results and tap the "Install" button.</li>
- <li>Wait for the app to download and install on your phone.</li>
- <li>Open the app and log in with your Alice Blue credentials or create a new account.</li>
- </ol>
- <h2>How to Use Alice Blue APK for Trading?</h2>
- <h3>The basic functions and options of Alice Blue APK</h3>
- <p>Alice Blue APK has a simple, easy-to-use interface that lets you trade easily and efficiently. Some of its basic functions and options are:</p>
- <p></p>
- <ul>
- <li>You can access different segments and exchanges by tapping the menu icon in the top left corner of the app.</li>
- <li>You can add or remove stocks from your watchlist by tapping the "+" or "-" icons in the top right corner of the app.</li>
- <li>You can view the market depth, charts, news, etc. of any stock by tapping on it in your watchlist.</li>
- <li>You can place an order by tapping the "Buy" or "Sell" buttons at the bottom of the app.</li>
- <li>You can modify or cancel an order by tapping the "Orders" option at the bottom of the app.</li>
- <li>You can view your portfolio, holdings, positions, margin, etc. by tapping the "Profile" option at the bottom of the app.</li>
- <li>You can transfer funds by tapping the "Funds" option at the bottom of the app.</li>
- </ul> <h3>Tips and tricks to get the most out of Alice Blue APK</h3>
- <p>Alice Blue APK is a powerful, versatile app that can help you trade better and smarter. Here are some tips and tricks to get the most out of it:</p>
- <ul>
- <li>You can customize your watchlist by adding or removing stocks, changing their order, sorting by different parameters, etc.</li>
- <li>You can use various chart types, time frames, indicators, drawing tools, etc. to analyze the price movements of any stock.</li>
- <li>You can use the "Scanner" option to find stocks that match your criteria based on various filters, such as volume, price, breakout, etc.</li>
- <li>You can use the "Strategy" option to create and test your own trading strategies based on various indicators, conditions, and parameters.</li>
-
- <li>You can use the "News" option to stay up to date with the latest market news, events, announcements, etc.</li>
- <li>You can use the "Support" option to chat with the Alice Blue customer support team or access the FAQ section.</li>
- </ul>
- <h2>Conclusion</h2>
- <h3>A summary of the main points and a call to action</h3>
- <p>Alice Blue APK is a must-have app for anyone who wants to trade stocks and commodities on their phone. It is a fast, easy, and convenient way to access the markets and execute trades, with many features and functions that can help you trade better and smarter. It is also free to download and use. So what are you waiting for? Download Alice Blue APK today and start trading!</p>
- <h2>Frequently Asked Questions</h2>
- <h3>Q1. Is Alice Blue APK safe and secure?</h3>
- <p>A1. Yes, Alice Blue APK is safe and secure. It uses encryption and authentication technologies to protect your data and transactions, and it complies with all the regulatory requirements of SEBI and other authorities.</p>
- <h3>Q2. What are the charges and fees for trading with Alice Blue?</h3>
- <p>A2. Alice Blue charges zero brokerage on equity delivery and a flat Rs.15 per order on other segments. It also charges other statutory fees, such as GST, STT, stamp duty, etc. You can check the detailed brokerage calculator on the Alice Blue website or app.</p>
- <h3>Q3. How can I contact Alice Blue customer support?</h3>
- <p>A3. You can contact Alice Blue customer support using the "Support" option in the app or by calling 080-6155-5000 or 080-6815-5000. You can also email them at [email protected] or visit their website at https://aliceblueonline.com/.</p>
- <h3>Q4. What are the system requirements for Alice Blue APK?</h3>
- <p>A4. Alice Blue APK requires Android 5.0 or higher and at least 50 MB of free space on your phone.</p>
- <h3>Q5. Can I use Alice Blue APK on other devices?</h3>
- <br />
- <br />
 
spaces/Benson/text-generation/Examples/Carx Street Mod Apk 1.74.6 (dinero Ilimitado).md DELETED
@@ -1,68 +0,0 @@
- <br />
- <h1>CarX Street Mod APK 1.74.6 (Unlimited Money) Download for Android</h1>
- <h2>Introduction</h2>
- <p>If you are a fan of car racing games, you must have heard of CarX Street, one of the most exciting and realistic racing games on mobile devices. In this game, you can explore different cities, customize your car, and compete against other players in various modes and challenges. However, if you want to enjoy the game to the fullest, you may need a lot of money to unlock new cars, tracks, and upgrades. That is why we recommend downloading CarX Street Mod APK, a modified version of the game that gives you unlimited money and access to all of the game's features. In this article, we will tell you what CarX Street Mod APK is, why you should download it, and how to install it on your Android device.</p>
- <h3>What Is CarX Street?</h3>
- <p>CarX Street is a car racing game developed by CarX Technologies, the same company that created the popular CarX Drift Racing series. In this game, you can experience the thrill of street racing in various cities around the world, such as Tokyo, Los Angeles, Moscow, and more. You can choose from over 50 cars from different brands and categories, such as sports cars, muscle cars, supercars, and classic cars. You can also customize your car with different parts, colors, stickers, and decals to make it unique and suit your style.</p>
- <h2>carx street mod apk 1.74.6 (unlimited money)</h2><br /><p><b>Download File</b> &#10004; <a href="https://bltlly.com/2v6Li4">https://bltlly.com/2v6Li4</a></p><br /><br />
- <h3>What Is CarX Street Mod APK?</h3>
- <p>CarX Street Mod APK is a modified version of the original CarX Street game that gives you unlimited money and access to all of the game's features. With this mod, you can buy any car you want, upgrade it to the maximum level, and unlock all tracks and modes without spending a cent. You can also enjoy the game without ads or interruptions.</p>
- <h3>Why Download CarX Street Mod APK?</h3>
- <p>There are many reasons why you should download CarX Street Mod APK instead of the original game. Here are some of them:</p>
- <ul>
- <li>You can get unlimited money to buy and upgrade any car you want.</li>
- <li>You can unlock all tracks and modes without completing any missions or achievements.</li>
- <li>You can enjoy the game without ads or interruptions.</li>
- <li>You can play the game without an internet connection.</li>
- <li>You can avoid any bugs or glitches that might affect your gameplay.</li>
- </ul>
- <h2>Features of CarX Street Mod APK</h2>
- <p>CarX Street Mod APK has many features that make it one of the best car racing games on Android. Here are some of them:</p>
- <h3>Unlimited money</h3>
- <p>With CarX Street Mod APK, you get unlimited money to buy and upgrade any car you want. You can also unlock all tracks and modes without completing any missions or achievements. You never have to worry about running out of money or resources in the game.</p>
- <h3>Realistic physics and graphics</h3>
- <p>CarX Street Mod APK has realistic physics and graphics that make you feel as if you were driving a real car on a real street. You can see the details of your car, such as the engine, suspension, tires, brakes, and more. You can also see the effects of weather, lighting, shadows, smoke, dust, and damage on your car and the environment. You can adjust the graphics settings to match your device's performance.</p>
- <h3>Customizable cars and tracks</h3>
- <p>CarX Street Mod APK lets you customize your car with different parts, colors, stickers, and decals to make it unique and suit your style. You can change the engine, transmission, suspension, brakes, tires, wheels, exhaust, body kit, spoiler, hood, lights, and more. You can also paint your car in different colors and patterns, or apply stickers and decals from various brands and themes. You can also customize your tracks with different weather conditions, times of day, traffic density, and obstacles.</p>
- <p></p>
- <h3>Multiple game modes and challenges</h3>
- <p>CarX Street Mod APK has multiple game modes and challenges to keep you entertained. Here are some of them:</p>
- <ul>
- <li>Career mode: In this mode, you can complete various missions and achievements to earn money and reputation. You also unlock new cars, tracks, and upgrades as you progress.</li>
- <li>Free ride mode: In this mode, you can explore the city and enjoy the scenery at your own pace. You can also perform stunts and drifts to earn extra money and reputation.</li>
- <li>Time trial mode: In this mode, you can race against the clock and try to beat your own records or other players'. You can also compare your results on the global leaderboards.</li>
- <li>Drift mode: In this mode, you can show off your drifting skills and earn points based on your speed, angle, and distance. You can also compete against other players in online battles.</li>
- <li>Drag mode: In this mode, you can race in a straight line and try to beat your opponent by shifting gears at the right moment. You can also challenge other players in online drag races.</li>
- </ul>
- <h3>Online multiplayer and leaderboards</h3>
- <p>CarX Street Mod APK lets you play with other players from around the world in online multiplayer mode. You can join or create a room and invite your friends or random players to race or drift with you. You can also chat with other players and send them emojis or stickers. You can check the global leaderboards to see how you rank among the best players in the world, and earn rewards and trophies based on your performance.</p>
- <h2>How to Download and Install CarX Street Mod APK?</h2>
- <p>If you want to download and install CarX Street Mod APK on your Android device, you need to follow these simple steps:</p>
- <h3>Step 1: Download the APK file from a trusted source</h3>
- <p>The first step is to download the CarX Street Mod APK file from a trusted source. You can use the following link to download the latest version of the mod:</p>
- <p><a href="">CarX Street Mod APK 1.74.6 (Unlimited Money) Download for Android</a></p>
-
- <h3>Step 2: Enable unknown sources on your device</h3>
- <p>The second step is to enable unknown sources on your device. This will allow you to install apps that do not come from the Google Play Store. To do this, go to your device's settings > security > unknown sources and turn it on.</p>
- <h3>Step 3: Install the APK file and launch the game</h3>
- <p>The third step is to install the APK file and launch the game. To do this, locate the downloaded file in your device's file manager and tap on it. Then follow the on-screen instructions to complete the installation. Once done, you can launch the game from the app drawer or home screen and enjoy CarX Street Mod APK.</p>
- <h2>Conclusion</h2>
- <p>CarX Street Mod APK is a great car racing game that gives you unlimited money and access to all of the game's features. You can enjoy realistic physics and graphics, customizable cars and tracks, multiple game modes and challenges, online multiplayer and leaderboards, and more. You can also play the game without an internet connection. If you are looking for a fun and exciting car racing game on Android, you should definitely download CarX Street Mod APK.</p>
- <h2>Frequently Asked Questions</h2>
- <p>Here are some frequently asked questions about CarX Street Mod APK:</p>
- <ol>
- <li>Is CarX Street Mod APK safe to download and install?</li>
- <p>Yes, CarX Street Mod APK is safe to download and install as long as you use a trusted source like the one we provided above. The mod does not contain any viruses or malware that could harm your device or data.</p>
- <li>Is CarX Street Mod APK compatible with my device?</li>
-
- <li>Do I need to root my device to use CarX Street Mod APK?</li>
- <p>No, you do not need to root your device to use CarX Street Mod APK. The mod works well on both rooted and unrooted devices. However, if you have a rooted device, you can use some extra features, such as backing up and restoring your game data.</p>
- <li>Can I play CarX Street Mod APK with my friends?</li>
- <p>Yes, you can play CarX Street Mod APK with your friends in online multiplayer mode. You can join or create a room and invite your friends or random players to race or drift with you. You can also chat with them and send them emojis or stickers.</p>
- <li>Can I update CarX Street Mod APK to the latest version?</li>
- <p>Yes, you can update CarX Street Mod APK to the latest version by downloading and installing the new APK file from the same source as before. However, you may lose your progress and data if you do not back up your game before updating. You can also wait for the developer to update the mod.</p>
- <li>How can I contact the developer of CarX Street Mod APK?</li>
- <p>If you have any questions, suggestions, or feedback about CarX Street Mod APK, you can contact the developer by visiting their official website or social media pages. You can also leave a comment or rating on the mod's download page.</p>
- </ol>
- <br />
- <br />
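As an alternative to tapping the APK in a file manager (Step 3 above), an APK can also be sideloaded from a computer with Android's `adb` tool. A minimal sketch driven from Python — it assumes `adb` is installed and on your PATH, USB debugging is enabled on the phone, and the file name matches your download:

```python
# Minimal sketch: sideload an APK via adb. Assumes adb is installed and on PATH,
# USB debugging is enabled, and the APK file name (a placeholder here) is correct.
import subprocess

apk = "carx-street-mod-1.74.6.apk"  # placeholder file name
result = subprocess.run(
    ["adb", "install", "-r", apk],  # -r replaces the app if it is already installed
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```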
 
spaces/Benson/text-generation/Examples/Cmo Descargar Mods En Simulador De Batalla Totalmente Preciso.md DELETED
@@ -1,91 +0,0 @@
-
- <h1>How to Download Mods in Totally Accurate Battle Simulator</h1>
- <p>Totally Accurate Battle Simulator (TABS) is a physics-based simulation game that lets you stage hilarious, epic battles with different units, factions, and scenarios. But if you want to spice up your game even more, you can try downloading mods for TABS. Mods are modifications or additions to the game created by fans or developers. They can add new units, maps, weapons, features, and more, making the game more fun, challenging, or realistic. In this article, we will show you how to download mods for TABS from two popular sources: Nexus Mods and the Steam Workshop. We will also explain how to install and uninstall mods, and give you some tips and warnings to help you avoid problems.</p>
- <h2>how to download mods in totally accurate battle simulator</h2><br /><p><b>Download File</b> &mdash; <a href="https://bltlly.com/2v6Mzf">https://bltlly.com/2v6Mzf</a></p><br /><br />
- <h2>What Are Mods and Why Use Them?</h2>
- <p>Mods, short for modifications, are changes or additions to a game that are not part of the original release. Mods can be created by anyone with the skills and tools to do so, but they are usually made by fans or hobbyists who want to improve or customize their gaming experience. Some mods even grow into fully independent games, such as Counter-Strike, Dota 2, and Team Fortress, all of which started as mods for other games.</p>
- <p>There are many reasons why you might want to use mods for TABS. Some of the benefits of mods are:</p>
- <ul>
- <li>They can add new content to the game, such as units, factions, maps, weapons, scenarios, etc.</li>
- <li>They can improve the game's graphics, sound, or performance.</li>
- <li>They can fix bugs or glitches that the developers have not addressed.</li>
- <li>They can change the gameplay or difficulty to suit your preferences.</li>
- <li>They can make the game more realistic or immersive.</li>
- <li>They can make the game more fun or absurd.</li>
- </ul>
-
- <h2>Where to Find Mods for TABS?</h2>
- <p>There are many websites where you can find mods for TABS, but two of the most popular and reliable are Nexus Mods and the Steam Workshop. These are online platforms where mod creators can upload their work and share it with other players. You can browse thousands of mods for TABS and other games on these sites and download them for free.</p>
- <h3>Nexus Mods</h3>
- <p>Nexus Mods is one of the largest and oldest modding communities on the internet. It hosts over 300,000 mods for more than 1,000 games, including TABS. You can find all kinds of mods for TABS on Nexus Mods, from new units and factions to custom maps and scenarios. To access Nexus Mods, you need to create a free account on their website. You can also use their mod manager software, Vortex, which makes installing and managing mods easier.</p>
- <p></p>
- <h3>Steam Workshop</h3>
- <p>The Steam Workshop is another popular source of mods for TABS and other games that support modding on Steam. The Steam Workshop is integrated with Steam, so you do not need a separate account or software to use it. You can find and subscribe to mods for TABS on the Steam Workshop, and they will be downloaded and installed automatically when you launch the game. You can also rate, comment on, and favorite the mods you like.</p> <h2>How to Install Mods from Nexus Mods?</h2>
- <p>There are two ways to install mods from Nexus Mods: manually or using Vortex. Here are the steps for each method:</p>
- <h3>Manual method</h3>
- <ol>
- <li>Download the mod file from Nexus Mods. It will usually be in ZIP, RAR, or 7Z format.</li>
- <li>Extract the mod file using a program such as WinRAR or 7-Zip. You should see a folder with the mod's name and some files inside.</li>
- <li>Copy the extracted mod folder into the TABS mods folder (a scripted version of these steps is sketched after this list).</li>
- <li>Launch TABS and go to the Mods menu. You should see the mod you installed in the list. Activate it by clicking on it.</li>
- <li>Enjoy your modded game!</li>
- </ol>
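The manual method boils down to unpacking the downloaded archive into the game's mods folder. A minimal script sketch of steps 2 and 3, assuming a ZIP archive; both paths are placeholders that depend on where you saved the file and where TABS is installed:

```python
# Minimal sketch of the manual install: unpack a downloaded mod archive into the
# TABS mods folder. Both paths are placeholders; adjust them for your system.
import zipfile
from pathlib import Path

mod_archive = Path("downloads/my_tabs_mod.zip")  # placeholder download location
mods_dir = Path("C:/Games/TABS/Mods")            # placeholder TABS mods folder

mods_dir.mkdir(parents=True, exist_ok=True)
with zipfile.ZipFile(mod_archive) as zf:
    zf.extractall(mods_dir)  # preserves the mod's own folder structure
print(f"Extracted {mod_archive.name} to {mods_dir}")
```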
- <h3>Vortex method</h3>
- <ol>
- <li>Download and install Vortex from Nexus Mods. It is a free, easy-to-use mod manager that works with many games, including TABS.</li>
- <li>Open Vortex and sign in with your Nexus Mods account.</li>
- <li>Go to the Games section and search for TABS. If you have TABS installed on Steam, Vortex should detect it automatically. If not, you can add it manually by browsing to its location.</li>
- <li>Select TABS and click Manage. Vortex will set the game up for modding and create a Mods folder for it.</li>
- <li>Go to the Mods section and click Install From File. Browse to the mod file you downloaded from Nexus Mods and select it. Vortex will install the mod for you.</li>
- <li>Go to the Plugins section and enable the mod by clicking the toggle next to it.</li>
- <li>Launch TABS from Vortex and go to the Mods menu. You should see the mod you installed in the list. Activate it by clicking on it.</li>
- <li>Enjoy your modded game!</li>
- </ol>
- <h2>How to Install Mods from the Steam Workshop?</h2>
- <p>Installing mods from the Steam Workshop is much simpler than installing mods from Nexus Mods. You do not need to download or extract any files, or use any extra software. All you need to do is:</p>
- <ol>
- <li>Go to the Steam Workshop page for TABS and browse the available mods. You can sort them by popularity, rating, date, etc.</li>
- <li>Find a mod you like and click on it. You will see a description, screenshots, videos, comments, and ratings for the mod.</li>
- <li>If you want to install the mod, click the Subscribe button. This adds the mod to your subscribed list and downloads it automatically.</li>
- <li>Launch TABS and go to the Mods menu. You should see the mod you subscribed to in the list. Activate it by clicking on it.</li>
- <li>Enjoy your modded game!</li>
- </ol>
- <h2>How to Uninstall Mods?</h2>
- <p>If you want to uninstall or remove a mod from your game, you can do so by following these steps:</p>
- <h3>Nexus Mods</h3>
- <p>If you installed mods manually, you can uninstall them by deleting their folders from the TABS mods folder. If you installed mods using Vortex, you can uninstall them by disabling them in the Plugins section and then removing them in the Mods section.</p>
- <h3>Steam Workshop</h3>
- <p>If you subscribed to mods on the Steam Workshop, you can unsubscribe by going to each mod's page and clicking the Unsubscribe button. This removes them from your subscribed list and deletes them automatically.</p>
- <h2>Tips and Warnings</h2>
- <p>Before downloading and installing any mods for TABS, here are some tips and warnings to keep in mind:</p>
- <ul>
- <li>Always read the mod's description, reviews, and instructions carefully before installing it. Some mods have special requirements, instructions, or compatibility issues you should know about.</li>
- <li>Always back up your game files before making any changes. That way, you can restore the game to its original state if something goes wrong or you do not like a mod.</li>
- <li>Do not install too many mods at once. This can cause performance issues, crashes, or conflicts between mods. Try installing only a few mods at a time and testing them before adding more.</li>
- <li>If you run into a problem with a mod, try disabling or uninstalling it and see whether that fixes the issue.</li>
- <li>If you have questions or feedback about a mod, you can contact the mod's creator or leave a comment on its page. They may be able to help you or update the mod accordingly.</li>
-
- </ul>
- <h1>Conclusion</h1>
- <p>Mods are a great way to enhance your gaming experience in TABS. They can add new content, features, and challenges, making the game more fun and varied. You can find and download mods for TABS from Nexus Mods and the Steam Workshop, two of the most popular and reliable sources of mods for many games. You can also install and uninstall mods easily, either manually or with software such as Vortex. However, you should always be careful and responsible when using mods, as they can cause problems or conflicts with the game or with other mods. Always read a mod's description, reviews, and instructions carefully, back up your game files, test your mods, and respect mod creators and other players.</p>
- <p>We hope this article has helped you learn how to download mods in TABS. If you have any questions or suggestions, feel free to leave a comment below. Happy modding!</p>
- <h2>Frequently Asked Questions</h2>
- <p>Here are some of the most frequently asked questions about downloading mods in TABS:</p>
- <h3>Q: What are the best mods for TABS?</h3>
- <p>A: This is a subjective question, as different players have different preferences and tastes when it comes to mods. However, some of the most popular and highly rated mods for TABS are:</p>
- <ul>
- <li>The Dynasty Update: This mod adds a new faction called Dynasty, based on ancient China. It includes new units, weapons, maps, and scenarios.</li>
- <li>The Modern Faction: This mod adds a new faction called Modern, based on modern warfare. It includes new units, weapons, vehicles, and maps.</li>
- <li>The Wild West Update: This mod adds a new faction called Wild West, based on the American frontier. It includes new units, weapons, maps, and scenarios.</li>
- <li>The Fantasy Faction: This mod adds a new faction called Fantasy, based on fantasy and mythology. It includes new units, weapons, creatures, and maps.</li>
-
- </ul>
- <h3>Q: How do I update my mods?</h3>
- <p>A: If you downloaded your mods from Nexus Mods, you can check for updates by going to the Mods section in Vortex and clicking the Check for Updates button. If updates are available, you can download and install them by clicking the Install Update button. If you downloaded your mods from the Steam Workshop, you do not need to do anything: Steam automatically updates your subscribed mods when updates become available.</p>
- <h3>Q: How do I create my own mods?</h3>
- <p>A: If you want to create your own mods for TABS, you will need some skills and tools. You will need to know how to use Unity, the game engine TABS is built on. You will also need to download the TABS Modding Kit, a collection of files and scripts that help you create and test your mods. You can find tutorials and guides on how to use these tools on YouTube or Reddit. You can also join the TABS Modding Discord, where you can chat with other modders and get help or feedback.</p>
- <h3>Q: How do I share my mods with others?</h3>
- <p>A: If you want to share your mods with others, you can upload them to Nexus Mods or the Steam Workshop. To upload your mods to Nexus Mods, you will need to create an account on their website and follow their guidelines for uploading files. To upload your mods to the Steam Workshop, you will need a Steam account and to follow their instructions for publishing items.</p>
- <h3>Q: How do I report a problem with a mod?</h3>
- <br />
- <br />