---
license: creativeml-openrail-m
tags:
  - imagepipeline
  - imagepipeline.io
  - text-to-image
  - ultra-realistic
pinned: false
pipeline_tag: text-to-image

---


## Weight-Slider
<img src="https://image.civitai.com/xG1nkqKTMzGDvpLrqFT7WA/9a60163f-737b-459c-aa6a-db2eb9dc8333/width=450/weight_slider.jpeg" alt="Generated by Image Pipeline" style="border-radius: 10px;">



**This LoRA model is hosted on [imagepipeline.io](https://imagepipeline.io/).**

Model details:

- Weight range: -3.0 to 3.0
- Positive values: more weight
- Negative values: less weight

V2: This version is more capable at both ends of the spectrum, but the weight range you need to give the LoRA is smaller. I uploaded it a few days ago, but something went wrong and I couldn't update the listing.

I have been working non-stop on my training algorithm, and it is becoming far more stable while affecting the core concept of the character and the scene less and less. With this version you get almost no contrast or scenery artifacts as you scale the weight, you can reach much more extreme results at the ends, and more of the range is left usable in the middle.
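In API terms, the slider is simply the `lora_weights` value you pass alongside this LoRA's model ID. Below is a minimal sketch of sweeping it across the supported range; the prompt text and helper function are illustrative, and each payload would be sent with the same POST request shown in the full example further down.

```python
# Sketch: build one payload per slider value. Prompt text is illustrative;
# send each payload with the POST request shown in the full example below.
LORA_ID = "fc752b47-1bf8-4f64-ab52-e4d010c82d1c"  # this Weight-Slider LoRA

def build_payload(lora_weight: float) -> dict:
    return {
        "model_id": "sd1.5",
        "prompt": "ultra realistic full body portrait of a woman, cinematic lighting",
        "width": "512",
        "height": "512",
        "samples": "1",
        "num_inference_steps": "30",
        "guidance_scale": 7.5,
        "lora_models": LORA_ID,
        "lora_weights": str(lora_weight),  # negative = less weight, positive = more weight
    }

# Sample the slider across its supported range (-3.0 to 3.0)
payloads = [build_payload(w) for w in (-3.0, -1.5, 0.0, 1.5, 3.0)]
```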




[![Try this model](https://img.shields.io/badge/try_this_model-image_pipeline-BD9319)](https://imagepipeline.io/models/Weight-Slider?id=fc752b47-1bf8-4f64-ab52-e4d010c82d1c/) 




## How to try this model?

You can try using it locally or send an API call to test the output quality.

Get your `API_KEY` from  [imagepipeline.io](https://imagepipeline.io/). No payment required.





Coding in `php`, `javascript`, `node`, etc.? Check out our documentation:

[![documentation](https://img.shields.io/badge/documentation-image_pipeline-blue)](https://docs.imagepipeline.io/docs/introduction) 


```python
import requests
import json

url = "https://imagepipeline.io/sd/text2image/v1/run"

payload = json.dumps({
    "model_id": "sd1.5",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": False,  # Python boolean; json.dumps serializes it as JSON false
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "embeddings": "",
    "lora_models": "fc752b47-1bf8-4f64-ab52-e4d010c82d1c",  # this Weight-Slider LoRA
    "lora_weights": "0.5"  # slider strength; this model accepts -3.0 to 3.0
})

headers = {
    "Content-Type": "application/json",
    "API-Key": "your_api_key"
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
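
If you want to work with the result rather than print the raw text, a minimal follow-up could look like this; the response schema is not documented here, so it only checks the HTTP status and pretty-prints whatever JSON comes back.

```python
# Continues from the example above (reuses `response` and `json`).
response.raise_for_status()           # raise if the API returned an HTTP error
result = response.json()              # parse the JSON body
print(json.dumps(result, indent=2))   # inspect the returned fields
```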

Get more ready-to-use `MODELS` like this for `SD 1.5` and `SDXL`:

[![All models](https://img.shields.io/badge/Get%20All%20Models-image_pipeline-BD9319)](https://imagepipeline.io/models) 

### API Reference

#### Generate Image

```http
  https://api.imagepipeline.io/sd/text2image/v1
```

| Headers        | Type  | Description                                                            |
| :------------- | :---- | :--------------------------------------------------------------------- |
| `API-Key`      | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/)   |
| `Content-Type` | `str` | `application/json` - content type of the request body                  |


| Parameter | Type     | Description                |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in  [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models, listed on the [models page](https://imagepipeline.io/models) |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
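
Since `lora_models` and `lora_weights` are typed `str, array`, stacking this slider with another LoRA presumably means passing parallel lists. The fragment below is a sketch only: the second model ID is a placeholder, and the list format is an assumption based on the types in the table above.

```python
# Illustrative payload fragment only. "your_other_lora_id" is a placeholder,
# and the use of JSON arrays (rather than comma-separated strings) for
# multiple LoRAs is an assumption based on the "str, array" type above.
payload_fragment = {
    "lora_models": ["fc752b47-1bf8-4f64-ab52-e4d010c82d1c", "your_other_lora_id"],
    "lora_weights": ["2.0", "0.7"],  # one weight per LoRA, in the same order
}
```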


### Feedback 

If you have any feedback, please reach out to us at [email protected]


#### 🔗 Visit Website
[![portfolio](https://img.shields.io/badge/image_pipeline-BD9319?style=for-the-badge&logo=gocd&logoColor=white)](https://imagepipeline.io/)


If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits