xueyao committed
Commit 1af5080 · 1 Parent(s): cfc556d
Files changed (4)
  1. README.md +111 -15
  2. ad1.jpg +0 -0
  3. ad2.jpg +0 -0
  4. bk_workflow.json +683 -0
README.md CHANGED
@@ -1,15 +1,111 @@
1
- ---
2
- {}
3
- ---
4
- Bokeh 1.0 Medium
5
- Bokeh Image
6
-
7
- Features
8
- We have pushed text-to-image models to new heights in terms of fidelity. Compared to models with similar capabilities like Flux1.1pro ultra raw/Recraft v3 raw, Bokeh medium demonstrates leading advantages in high fidelity and detail preservation, with capabilities in certain domains that rival human photographers.
9
- The model supports various prompt formats, with photography terminology equally applicable to Bokeh.
10
- We have evaluated the community's needs and training costs for fine-tuning this model. You can easily perform downstream training tasks (Finetune, LoRA, Lycoris, and academic research) on this base model.
11
- The official version of Bokeh medium is coming soon with compatible ControlNet support in development.
12
-
13
- Contact
14
- Website: https://tensor.art https://tusiart.com
15
- Developed by: TensorArt
1
+ # Bokeh 3.5 Medium
2
+ <div align="center">
3
+ <img src="ad2.jpg" alt="00205_" width="620"/>
4
+ </div>
5
+
6
+ Bokeh 3.5 Medium is a **continued-training** model built on the **Stable Diffusion 3.5 Medium** foundation, further refined on a **5-million-image high-resolution open-source dataset** with rigorous **aesthetic curation**. This ensures outstanding image quality, fine detail preservation, and enhanced controllability.
7
+
8
+ This model is released under the Stability Community License.
9
+ For more details, visit [Tensor.Art](https://tensor.art) or [TusiArt](https://tusiart.com) to explore additional resources and useful information.
10
+
11
+ ## Overview
12
+
13
+ - **Continued training on SD3.5M**, leveraging a large-scale **5-million-image high-resolution dataset**, carefully curated for aesthetic quality.
14
+ - **Supports hybrid short/long caption training** for enhanced natural language understanding.
15
+ - **Short Captions:** Focus on core image features.
16
+ - **Long Captions:** Provide broader scene context and atmospheric details.
17
+ - **Recommended Resolutions:**
18
+ `1920x1024`, `1728x1152`, `1152x1728`, `1280x1664`, `1440x1440`
19
+ - **Best Quality Training Resolution:** `1440x1440`
20
+ - **Supports LoRA fine-tuning.**
21
+
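The recommended resolutions above all sit near the same pixel budget, which is why switching aspect ratio (rather than scaling up) is the safe way to change framing. A quick plain-Python check:

```python
# Recommended generation resolutions from the model card.
resolutions = [(1920, 1024), (1728, 1152), (1152, 1728), (1280, 1664), (1440, 1440)]

for w, h in resolutions:
    mp = w * h / 1_000_000  # megapixels
    print(f"{w}x{h}: {mp:.2f} MP")

# Every entry lands in roughly the 1.97-2.13 MP range,
# matching the ~2-megapixel output budget described below.
```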
22
+ ## Advantages
23
+
24
+ ### 🖼️ High-Quality Image Generation
25
+ - **State-of-the-art visual fidelity** with improved detail extraction and **aesthetic consistency**.
26
+ - **Enhanced resolution support** up to **2 megapixels**, ensuring highly detailed image outputs.
27
+ - **Carefully curated dataset** ensures better composition, lighting, and overall artistic appeal.
28
+
29
+ ### 🎯 Powerful Custom Fine-Tuning
30
+ - **Exceptional LoRA training support**, making it highly effective for:
31
+ - Photography
32
+ - 3D Rendering
33
+ - Illustration
34
+ - Concept Art
35
+
36
+ ### ⚡ Efficient Inference & Training
37
+ - **Low hardware requirements for inference:**
38
+ - **Medium model:** 9GB VRAM (without T5)
39
+ - **Full weights inference:** 16GB VRAM (suitable for local deployment)
40
+ - **LoRA fine-tuning VRAM requirement:** 12GB - 32GB
41
+
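The 9 GB figure assumes skipping the T5 encoder, which `diffusers` supports by passing `text_encoder_3=None, tokenizer_3=None` to `StableDiffusion3Pipeline.from_pretrained`. The helper below is a minimal sketch that only assembles those keyword arguments (the checkpoint path in the commented usage is a placeholder, not the real repository id):

```python
def sd3_load_kwargs(drop_t5: bool = True) -> dict:
    """Extra kwargs for diffusers' StableDiffusion3Pipeline.from_pretrained.

    Passing text_encoder_3=None and tokenizer_3=None skips loading the T5
    encoder, which is what brings inference down to roughly 9 GB of VRAM;
    with full weights (T5 included), budget about 16 GB.
    """
    return {"text_encoder_3": None, "tokenizer_3": None} if drop_t5 else {}

# Sketch of use (needs the weights and a GPU, so not run here):
# import torch
# from diffusers import StableDiffusion3Pipeline
# pipe = StableDiffusion3Pipeline.from_pretrained(
#     "path/to/bokeh_3.5_medium",          # placeholder path
#     torch_dtype=torch.bfloat16,
#     **sd3_load_kwargs(drop_t5=True),
# ).to("cuda")
```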
42
+ ## Known Issues
43
+
44
+ - **Potential human anatomy inconsistencies.**
45
+ - **Limited ability to generate photorealistic images.**
46
+ - **Some concepts may suffer from aesthetic quality issues.**
47
+
48
+
49
+ ## Prompting Guide
50
+
51
+ ### Use a structured prompt combining:
52
+ - **Main subject** (e.g., `"Close-up of a macaw"`)
53
+ - **Detailed features** (e.g., `"vivid feathers, sharp beak"`)
54
+ - **Background environment** (e.g., `"dimly lit environment"`)
55
+ - **Atmospheric description** (e.g., `"soft warm lighting, cinematic mood"`)
56
+
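Stitching the four parts together, using the card's own examples (a plain-Python sketch):

```python
# The four building blocks suggested above, using the card's own examples.
subject    = "Close-up of a macaw"
features   = "vivid feathers, sharp beak"
background = "dimly lit environment"
atmosphere = "soft warm lighting, cinematic mood"

prompt = ", ".join([subject, features, background, atmosphere])
print(prompt)
# Close-up of a macaw, vivid feathers, sharp beak, dimly lit environment, soft warm lighting, cinematic mood
```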
57
+ ### Best Practices:
58
+ - **Avoid overly complex prompts**, as the model already has strong text encoding. Overloading details can cause **T5 hallucination artifacts**, reducing image quality.
59
+ - **Do not use excessively short prompts** (e.g., single words or 2-3 tokens) unless combined with **LoRA or Image2Image (i2i)** techniques.
60
+ - **Avoid mixing too many unrelated concepts**, as this can lead to visual distortions and unwanted artifacts.
61
+ - **Optimal token length:** **30-70 tokens**.
62
+
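The 30-70-token target is measured by the text encoders' tokenizers; as a rough proxy you can count words, keeping in mind that the real CLIP/T5 tokenizers usually produce somewhat more tokens than this hypothetical helper does:

```python
def rough_token_count(prompt: str) -> int:
    """Crude proxy for prompt length: comma/whitespace-separated words.

    The model's actual tokenizers (CLIP and T5) typically split some words
    into multiple tokens, so treat the 30-70 window loosely.
    """
    return len(prompt.replace(",", " ").split())

example = ("Close-up of a macaw, vivid feathers, sharp beak, "
           "dimly lit environment, soft warm lighting, cinematic mood")
print(rough_token_count(example))  # 16 words -- room to add detail before 70
```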
63
+ ### Negative Prompting
64
+ - **Negative prompts strongly influence image quality** (the bundled workflow uses `anime,render,cartoon,3d,bad structure`).
65
+ - Ensure they **do not contradict the main subject** to avoid degrading the output.
66
+
67
+
68
+
69
+ ## Example Output
70
+ Using diffusers:
71
+ ```python
72
+ import torch
73
+ from diffusers import StableDiffusion3Pipeline
74
+
75
+ pipe = StableDiffusion3Pipeline.from_pretrained("/mnt/share/pcm_outputs/bokeh_3.5_medium", torch_dtype=torch.bfloat16)  # local checkpoint path; point this at your copy of the weights
76
+ pipe = pipe.to("cuda")
77
+
78
+ image = pipe(
79
+     "Close-up of a macaw, dimly lit environment",
80
+     num_inference_steps=28,
81
+     guidance_scale=4,
82
+     height=1920,
83
+     width=1024,
84
+ ).images[0]
85
+ image.save("macaw.jpg")
86
+ ```
87
+ Using ComfyUI:
88
+ To use this workflow in **ComfyUI**, download the JSON file and load it (the bundled graph samples at 1280x1664 with 28 steps, CFG 4, the `dpmpp_2m` sampler, and the `sgm_uniform` scheduler):
89
+
90
+ [Download Workflow](bk_workflow.json)
91
+
92
+ ## Recommended Training Configuration
93
+
94
+ For **LoRA fine-tuning**, the following tools and settings are recommended:
95
+
96
+ ### 🔧 Training Tools
97
+ - **Kohya_ss:** [GitHub Repository](https://github.com/bmaltais/kohya_ss.git)
98
+ - **Simple Tuner:** [GitHub Repository](https://github.com/bghira/SimpleTuner)
99
+
100
+ ### ⚙️ Suggested Training Settings
101
+ ```bash
102
+ --resolution 1440x1440
103
+ --t5xxl_max_token_length 154
104
+ --optimizer_type AdamW8bit
105
+ --mmdit_lr 1e-4
106
+ --text_encoder_lr 5e-5
107
+ ```
108
+
109
+ ## Contact
110
+ * Website: https://tensor.art https://tusiart.com
111
+ * Developed by: TensorArt
ad1.jpg ADDED
ad2.jpg ADDED
bk_workflow.json ADDED
@@ -0,0 +1,683 @@
1
+ {
2
+ "last_node_id": 63,
3
+ "last_link_id": 258,
4
+ "nodes": [
5
+ {
6
+ "id": 30,
7
+ "type": "CLIPTextEncodeSD3",
8
+ "pos": [
9
+ 518,
10
+ -261
11
+ ],
12
+ "size": [
13
+ 258.8465881347656,
14
+ 200
15
+ ],
16
+ "flags": {
17
+ "collapsed": true
18
+ },
19
+ "order": 8,
20
+ "mode": 0,
21
+ "inputs": [
22
+ {
23
+ "name": "clip",
24
+ "type": "CLIP",
25
+ "link": 258
26
+ },
27
+ {
28
+ "name": "clip_g",
29
+ "type": "STRING",
30
+ "widget": {
31
+ "name": "clip_g"
32
+ },
33
+ "link": 222
34
+ },
35
+ {
36
+ "name": "clip_l",
37
+ "type": "STRING",
38
+ "widget": {
39
+ "name": "clip_l"
40
+ },
41
+ "link": 223
42
+ }
43
+ ],
44
+ "outputs": [
45
+ {
46
+ "name": "CONDITIONING",
47
+ "type": "CONDITIONING",
48
+ "links": [
49
+ 236
50
+ ],
51
+ "slot_index": 0
52
+ }
53
+ ],
54
+ "properties": {
55
+ "Node name for S&R": "CLIPTextEncodeSD3"
56
+ },
57
+ "widgets_values": [
58
+ "anime,render,cartoon,3d,bad structure",
59
+ "anime,render,cartoon,3d,bad structure",
60
+ "",
61
+ "none",
62
+ true,
63
+ true,
64
+ true
65
+ ]
66
+ },
67
+ {
68
+ "id": 32,
69
+ "type": "CLIPTextEncodeSD3",
70
+ "pos": [
71
+ 518,
72
+ -319
73
+ ],
74
+ "size": [
75
+ 262.4820556640625,
76
+ 190
77
+ ],
78
+ "flags": {
79
+ "collapsed": true
80
+ },
81
+ "order": 9,
82
+ "mode": 0,
83
+ "inputs": [
84
+ {
85
+ "name": "clip",
86
+ "type": "CLIP",
87
+ "link": 257
88
+ },
89
+ {
90
+ "name": "clip_g",
91
+ "type": "STRING",
92
+ "widget": {
93
+ "name": "clip_g"
94
+ },
95
+ "link": 122
96
+ },
97
+ {
98
+ "name": "clip_l",
99
+ "type": "STRING",
100
+ "widget": {
101
+ "name": "clip_l"
102
+ },
103
+ "link": 123
104
+ },
105
+ {
106
+ "name": "t5xxl",
107
+ "type": "STRING",
108
+ "widget": {
109
+ "name": "t5xxl"
110
+ },
111
+ "link": 152
112
+ }
113
+ ],
114
+ "outputs": [
115
+ {
116
+ "name": "CONDITIONING",
117
+ "type": "CONDITIONING",
118
+ "links": [
119
+ 237
120
+ ],
121
+ "slot_index": 0
122
+ }
123
+ ],
124
+ "properties": {
125
+ "Node name for S&R": "CLIPTextEncodeSD3"
126
+ },
127
+ "widgets_values": [
128
+ "Close-up of a macaw, dimly lit environment",
129
+ "Close-up of a macaw, dimly lit environment",
130
+ "Close-up of a macaw, dimly lit environment",
131
+ "empty_prompt",
132
+ true,
133
+ true,
134
+ true
135
+ ]
136
+ },
137
+ {
138
+ "id": 17,
139
+ "type": "PrimitiveNode",
140
+ "pos": [
141
+ 487,
142
+ -196
143
+ ],
144
+ "size": [
145
+ 263.0819396972656,
146
+ 82
147
+ ],
148
+ "flags": {},
149
+ "order": 0,
150
+ "mode": 0,
151
+ "inputs": [],
152
+ "outputs": [
153
+ {
154
+ "name": "INT",
155
+ "type": "INT",
156
+ "links": [
157
+ 21
158
+ ],
159
+ "slot_index": 0
160
+ }
161
+ ],
162
+ "title": "seed",
163
+ "properties": {
164
+ "Run widget replace on values": false
165
+ },
166
+ "widgets_values": [
167
+ 220636977427261,
168
+ "randomize"
169
+ ]
170
+ },
171
+ {
172
+ "id": 52,
173
+ "type": "PrimitiveNode",
174
+ "pos": [
175
+ 218,
176
+ -94
177
+ ],
178
+ "size": [
179
+ 210,
180
+ 151.54025268554688
181
+ ],
182
+ "flags": {
183
+ "collapsed": false
184
+ },
185
+ "order": 1,
186
+ "mode": 0,
187
+ "inputs": [],
188
+ "outputs": [
189
+ {
190
+ "name": "STRING",
191
+ "type": "STRING",
192
+ "links": [
193
+ 222,
194
+ 223
195
+ ],
196
+ "slot_index": 0
197
+ }
198
+ ],
199
+ "title": "Negative_prompt",
200
+ "properties": {
201
+ "Run widget replace on values": false
202
+ },
203
+ "widgets_values": [
204
+ "anime,render,cartoon,3d,bad structure"
205
+ ]
206
+ },
207
+ {
208
+ "id": 10,
209
+ "type": "TripleCLIPLoader",
210
+ "pos": [
211
+ -98,
212
+ -265
213
+ ],
214
+ "size": [
215
+ 521.9664916992188,
216
+ 120.35124206542969
217
+ ],
218
+ "flags": {},
219
+ "order": 2,
220
+ "mode": 0,
221
+ "inputs": [],
222
+ "outputs": [
223
+ {
224
+ "name": "CLIP",
225
+ "type": "CLIP",
226
+ "links": [
227
+ 257,
228
+ 258
229
+ ],
230
+ "slot_index": 0
231
+ }
232
+ ],
233
+ "properties": {
234
+ "Node name for S&R": "TripleCLIPLoader"
235
+ },
236
+ "widgets_values": [
237
+ "bokeh_clip_g.safetensors",
238
+ "bokeh_clip_l.safetensors",
239
+ "t5xxl_fp16.safetensors"
240
+ ]
241
+ },
242
+ {
243
+ "id": 62,
244
+ "type": "Note",
245
+ "pos": [
246
+ -101,
247
+ 107
248
+ ],
249
+ "size": [
250
+ 304.94696044921875,
251
+ 114.46440887451172
252
+ ],
253
+ "flags": {},
254
+ "order": 3,
255
+ "mode": 0,
256
+ "inputs": [],
257
+ "outputs": [],
258
+ "properties": {},
259
+ "widgets_values": [
260
+ "Do not enter overly complex prompt words, which will cause serious degradation of image performance,you can use emotional and atmospheric cues to improve the picture quality"
261
+ ],
262
+ "color": "#432",
263
+ "bgcolor": "#653"
264
+ },
265
+ {
266
+ "id": 19,
267
+ "type": "PreviewImage",
268
+ "pos": [
269
+ 1334,
270
+ -468
271
+ ],
272
+ "size": [
273
+ 539.6785278320312,
274
+ 669.1779174804688
275
+ ],
276
+ "flags": {
277
+ "collapsed": false
278
+ },
279
+ "order": 12,
280
+ "mode": 0,
281
+ "inputs": [
282
+ {
283
+ "name": "images",
284
+ "type": "IMAGE",
285
+ "link": 24
286
+ }
287
+ ],
288
+ "outputs": [],
289
+ "properties": {
290
+ "Node name for S&R": "PreviewImage"
291
+ },
292
+ "widgets_values": []
293
+ },
294
+ {
295
+ "id": 18,
296
+ "type": "VAEDecode",
297
+ "pos": [
298
+ 1091,
299
+ -413
300
+ ],
301
+ "size": [
302
+ 200.854736328125,
303
+ 50.05826187133789
304
+ ],
305
+ "flags": {},
306
+ "order": 11,
307
+ "mode": 0,
308
+ "inputs": [
309
+ {
310
+ "name": "samples",
311
+ "type": "LATENT",
312
+ "link": 22
313
+ },
314
+ {
315
+ "name": "vae",
316
+ "type": "VAE",
317
+ "link": 23
318
+ }
319
+ ],
320
+ "outputs": [
321
+ {
322
+ "name": "IMAGE",
323
+ "type": "IMAGE",
324
+ "links": [
325
+ 24
326
+ ],
327
+ "slot_index": 0
328
+ }
329
+ ],
330
+ "properties": {
331
+ "Node name for S&R": "VAEDecode"
332
+ },
333
+ "widgets_values": []
334
+ },
335
+ {
336
+ "id": 59,
337
+ "type": "Note",
338
+ "pos": [
339
+ 487,
340
+ 105
341
+ ],
342
+ "size": [
343
+ 249.3325653076172,
344
+ 104.7717514038086
345
+ ],
346
+ "flags": {},
347
+ "order": 4,
348
+ "mode": 0,
349
+ "inputs": [],
350
+ "outputs": [],
351
+ "properties": {},
352
+ "widgets_values": [
353
+ "1920x1024 1728x1152 1152x1728 1280x1664 1440x1440"
354
+ ],
355
+ "color": "#432",
356
+ "bgcolor": "#653"
357
+ },
358
+ {
359
+ "id": 5,
360
+ "type": "EmptyLatentImage",
361
+ "pos": [
362
+ 475,
363
+ -54
364
+ ],
365
+ "size": [
366
+ 266.8973388671875,
367
+ 116.21234893798828
368
+ ],
369
+ "flags": {},
370
+ "order": 5,
371
+ "mode": 0,
372
+ "inputs": [],
373
+ "outputs": [
374
+ {
375
+ "name": "LATENT",
376
+ "type": "LATENT",
377
+ "links": [
378
+ 243
379
+ ],
380
+ "slot_index": 0
381
+ }
382
+ ],
383
+ "properties": {
384
+ "Node name for S&R": "EmptyLatentImage"
385
+ },
386
+ "widgets_values": [
387
+ 1280,
388
+ 1664,
389
+ 1
390
+ ]
391
+ },
392
+ {
393
+ "id": 12,
394
+ "type": "KSampler",
395
+ "pos": [
396
+ 771,
397
+ -418
398
+ ],
399
+ "size": [
400
+ 279.5604553222656,
401
+ 258
402
+ ],
403
+ "flags": {
404
+ "collapsed": false
405
+ },
406
+ "order": 10,
407
+ "mode": 0,
408
+ "inputs": [
409
+ {
410
+ "name": "model",
411
+ "type": "MODEL",
412
+ "link": 256
413
+ },
414
+ {
415
+ "name": "positive",
416
+ "type": "CONDITIONING",
417
+ "link": 237
418
+ },
419
+ {
420
+ "name": "negative",
421
+ "type": "CONDITIONING",
422
+ "link": 236
423
+ },
424
+ {
425
+ "name": "latent_image",
426
+ "type": "LATENT",
427
+ "link": 243
428
+ },
429
+ {
430
+ "name": "seed",
431
+ "type": "INT",
432
+ "widget": {
433
+ "name": "seed"
434
+ },
435
+ "link": 21
436
+ }
437
+ ],
438
+ "outputs": [
439
+ {
440
+ "name": "LATENT",
441
+ "type": "LATENT",
442
+ "links": [
443
+ 22
444
+ ],
445
+ "slot_index": 0
446
+ }
447
+ ],
448
+ "properties": {
449
+ "Node name for S&R": "KSampler"
450
+ },
451
+ "widgets_values": [
452
+ 220636977427261,
453
+ "randomize",
454
+ 28,
455
+ 4,
456
+ "dpmpp_2m",
457
+ "sgm_uniform",
458
+ 1
459
+ ]
460
+ },
461
+ {
462
+ "id": 20,
463
+ "type": "PrimitiveNode",
464
+ "pos": [
465
+ -102,
466
+ -94
467
+ ],
468
+ "size": [
469
+ 306.11773681640625,
470
+ 152.6992950439453
471
+ ],
472
+ "flags": {
473
+ "collapsed": false
474
+ },
475
+ "order": 6,
476
+ "mode": 0,
477
+ "inputs": [],
478
+ "outputs": [
479
+ {
480
+ "name": "STRING",
481
+ "type": "STRING",
482
+ "links": [
483
+ 122,
484
+ 123,
485
+ 152
486
+ ],
487
+ "slot_index": 0
488
+ }
489
+ ],
490
+ "title": "Positive_prompt",
491
+ "properties": {
492
+ "Run widget replace on values": false
493
+ },
494
+ "widgets_values": [
495
+ "Close-up of a macaw, dimly lit environment"
496
+ ]
497
+ },
498
+ {
499
+ "id": 13,
500
+ "type": "CheckpointLoaderSimple",
501
+ "pos": [
502
+ -95,
503
+ -414
504
+ ],
505
+ "size": [
506
+ 510.9742431640625,
507
+ 107.2224349975586
508
+ ],
509
+ "flags": {},
510
+ "order": 7,
511
+ "mode": 0,
512
+ "inputs": [],
513
+ "outputs": [
514
+ {
515
+ "name": "MODEL",
516
+ "type": "MODEL",
517
+ "links": [
518
+ 256
519
+ ],
520
+ "slot_index": 0
521
+ },
522
+ {
523
+ "name": "CLIP",
524
+ "type": "CLIP",
525
+ "links": [],
526
+ "slot_index": 1
527
+ },
528
+ {
529
+ "name": "VAE",
530
+ "type": "VAE",
531
+ "links": [
532
+ 23
533
+ ],
534
+ "slot_index": 2
535
+ }
536
+ ],
537
+ "properties": {
538
+ "Node name for S&R": "CheckpointLoaderSimple"
539
+ },
540
+ "widgets_values": [
541
+ "bokeh_3.5_medium.safetensors"
542
+ ]
543
+ }
544
+ ],
545
+ "links": [
546
+ [
547
+ 21,
548
+ 17,
549
+ 0,
550
+ 12,
551
+ 4,
552
+ "INT"
553
+ ],
554
+ [
555
+ 22,
556
+ 12,
557
+ 0,
558
+ 18,
559
+ 0,
560
+ "LATENT"
561
+ ],
562
+ [
563
+ 23,
564
+ 13,
565
+ 2,
566
+ 18,
567
+ 1,
568
+ "VAE"
569
+ ],
570
+ [
571
+ 24,
572
+ 18,
573
+ 0,
574
+ 19,
575
+ 0,
576
+ "IMAGE"
577
+ ],
578
+ [
579
+ 122,
580
+ 20,
581
+ 0,
582
+ 32,
583
+ 1,
584
+ "STRING"
585
+ ],
586
+ [
587
+ 123,
588
+ 20,
589
+ 0,
590
+ 32,
591
+ 2,
592
+ "STRING"
593
+ ],
594
+ [
595
+ 152,
596
+ 20,
597
+ 0,
598
+ 32,
599
+ 3,
600
+ "STRING"
601
+ ],
602
+ [
603
+ 222,
604
+ 52,
605
+ 0,
606
+ 30,
607
+ 1,
608
+ "STRING"
609
+ ],
610
+ [
611
+ 223,
612
+ 52,
613
+ 0,
614
+ 30,
615
+ 2,
616
+ "STRING"
617
+ ],
618
+ [
619
+ 236,
620
+ 30,
621
+ 0,
622
+ 12,
623
+ 2,
624
+ "CONDITIONING"
625
+ ],
626
+ [
627
+ 237,
628
+ 32,
629
+ 0,
630
+ 12,
631
+ 1,
632
+ "CONDITIONING"
633
+ ],
634
+ [
635
+ 243,
636
+ 5,
637
+ 0,
638
+ 12,
639
+ 3,
640
+ "LATENT"
641
+ ],
642
+ [
643
+ 256,
644
+ 13,
645
+ 0,
646
+ 12,
647
+ 0,
648
+ "MODEL"
649
+ ],
650
+ [
651
+ 257,
652
+ 10,
653
+ 0,
654
+ 32,
655
+ 0,
656
+ "CLIP"
657
+ ],
658
+ [
659
+ 258,
660
+ 10,
661
+ 0,
662
+ 30,
663
+ 0,
664
+ "CLIP"
665
+ ]
666
+ ],
667
+ "groups": [],
668
+ "config": {},
669
+ "extra": {
670
+ "ds": {
671
+ "scale": 0.7247295000000004,
672
+ "offset": [
673
+ 833.5543134270109,
674
+ 652.7487494515917
675
+ ]
676
+ },
677
+ "VHS_latentpreview": false,
678
+ "VHS_latentpreviewrate": 0,
679
+ "VHS_MetadataImage": true,
680
+ "VHS_KeepIntermediate": true
681
+ },
682
+ "version": 0.4
683
+ }