The SDXL Turbo model converted to OpenVINO format for fast inference on CPU. This model is intended for research purposes only.
Original model: sdxl-turbo
You can use this model with FastSD CPU.
To run the model yourself, you can use the 🧨 Diffusers integration in Optimum Intel:
- Install the dependencies:

```bash
pip install optimum-intel openvino diffusers onnx
```
- Run the model:

```python
from optimum.intel import OVStableDiffusionXLPipeline

# Load the INT8 OpenVINO pipeline from the Hub
pipeline = OVStableDiffusionXLPipeline.from_pretrained(
    "rupeshs/sdxl-turbo-openvino-int8",
    ov_config={"CACHE_DIR": ""},
)

prompt = "Teddy bears working on new AI research on the moon in the 1980s"
images = pipeline(
    prompt=prompt,
    width=512,
    height=512,
    num_inference_steps=1,  # SDXL Turbo is designed for single-step generation
    guidance_scale=1.0,  # Turbo models are used without classifier-free guidance
).images
images[0].save("out_image.png")
```
## License
The SDXL Turbo Model is licensed under the Stability AI Non-Commercial Research Community License, Copyright (c) Stability AI Ltd. All Rights Reserved.