Update README.md
README.md CHANGED
@@ -1,5 +1,8 @@
---
license: creativeml-openrail-m
+base_model:
+- stable-diffusion-v1-5/stable-diffusion-v1-5
+base_model_relation: quantized
---

# stable-diffusion-v1-5-int8-ov

@@ -26,8 +29,8 @@ For more information on quantization, check the [OpenVINO model optimization gui

The provided OpenVINO™ IR model is compatible with:

-* OpenVINO version
-* Optimum Intel 1.
+* OpenVINO version 2025.0.0 and higher
+* Optimum Intel 1.22.0 and higher

## Running Model Inference with [Optimum Intel](https://huggingface.co/docs/optimum/intel/index)

@@ -46,7 +49,7 @@ model_id = "OpenVINO/stable-diffusion-v1-5-int8-ov"

pipeline = OVDiffusionPipeline.from_pretrained(model_id)

prompt = "sailing ship in storm by Rembrandt"
-images = pipeline(prompt, num_inference_steps=
+images = pipeline(prompt, num_inference_steps=20).images
```

## Running Model Inference with [OpenVINO GenAI](https://github.com/openvinotoolkit/openvino.genai)

@@ -79,7 +82,7 @@ device = "CPU"

pipe = ov_genai.Text2ImagePipeline(model_path, device)

prompt = "sailing ship in storm by Rembrandt"
-image_tensor = pipe.generate(prompt, num_inference_steps=
+image_tensor = pipe.generate(prompt, num_inference_steps=20)
image = Image.fromarray(image_tensor.data[0])

```
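For context, the Optimum Intel snippet that the `@@ -46,7 +49,7 @@` hunk touches reads, end to end, roughly as follows. This is a minimal sketch: only `model_id`, the `from_pretrained()` call, the prompt and the 20-step pipeline call appear in the diff; the import and the final `save()` are assumptions added for illustration.

```python
# Minimal sketch around the changed line; the import and save() are assumptions.
from optimum.intel import OVDiffusionPipeline

model_id = "OpenVINO/stable-diffusion-v1-5-int8-ov"
pipeline = OVDiffusionPipeline.from_pretrained(model_id)

prompt = "sailing ship in storm by Rembrandt"
# The pipeline call follows the diffusers convention: the returned object
# exposes a .images list of PIL images.
images = pipeline(prompt, num_inference_steps=20).images
images[0].save("sailing_ship.png")  # illustrative output path
```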
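Similarly, the OpenVINO GenAI snippet touched by the `@@ -79,7 +82,7 @@` hunk reads roughly as follows. Only the pipeline construction, the prompt, the 20-step `generate()` call and the `Image.fromarray()` line appear in the diff; the imports, the `snapshot_download()` step used here to obtain `model_path`, and the final `save()` are assumptions added for illustration.

```python
# Minimal sketch around the changed line; the imports, snapshot_download()
# and save() are assumptions.
import huggingface_hub as hf_hub
import openvino_genai as ov_genai
from PIL import Image

model_id = "OpenVINO/stable-diffusion-v1-5-int8-ov"
model_path = "stable-diffusion-v1-5-int8-ov"
hf_hub.snapshot_download(model_id, local_dir=model_path)  # fetch the IR files locally

device = "CPU"
pipe = ov_genai.Text2ImagePipeline(model_path, device)

prompt = "sailing ship in storm by Rembrandt"
image_tensor = pipe.generate(prompt, num_inference_steps=20)
image = Image.fromarray(image_tensor.data[0])  # generate() returns a uint8 HWC tensor batch
image.save("sailing_ship.png")  # illustrative output path
```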
|