This model is an INT8-quantized version of stabilityai/sdxl-turbo, exported to the OpenVINO format using optimum-intel via the nncf-quantization space.
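For reference, a similar export can be reproduced with optimum-intel's command-line interface. This is a minimal sketch, not the exact command used to produce this checkpoint: the output directory name is an assumption, and `--weight-format int8` requests 8-bit weight-only quantization via NNCF.

```shell
# Export sdxl-turbo to OpenVINO with INT8 weight compression (output dir is illustrative)
optimum-cli export openvino --model stabilityai/sdxl-turbo --weight-format int8 sdxl-turbo-openvino-int8
```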
First, make sure you have optimum-intel with OpenVINO support installed:

```bash
pip install optimum[openvino]
```
To load the model and run inference:

```python
from optimum.intel import OVStableDiffusionXLPipeline

model_id = "echarlaix/sdxl-turbo-openvino-int8"
pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id)

# SDXL Turbo is a few-step distilled model: use a single inference step and disable guidance.
image = pipeline("a cinematic photo of a baby raccoon", num_inference_steps=1, guidance_scale=0.0).images[0]
```