OpenVINO Stable Diffusion (with LoRA) C++ image generation pipeline

This pure C++ text-to-image pipeline uses the OpenVINO native C++ API to run Stable Diffusion v1.5 with the LMS Discrete Scheduler, and supports both static and dynamic model inference. It includes advanced features such as LoRA integration via safetensors and OpenVINO Tokenizers; loading openvino_tokenizers into ov::Core enables tokenization. The sample uses diffusers for image generation and imwrite for saving images in .bmp format. This demo has been tested on Windows and Unix platforms. There is also a Jupyter notebook that provides an example of image generation in Python.

Note

This tutorial assumes that the current working directory is <openvino.genai repo>/image_generation/stable_diffusion_1_5/cpp/ and all paths are relative to this folder.

Step 1: Prepare build environment

Prerequisites:

C++ packages: a C/C++ compiler, CMake, and Make (installed via Conda below).

Prepare a Python environment and install dependencies:

conda create -n openvino_sd_cpp python==3.10
conda activate openvino_sd_cpp
conda install -c conda-forge openvino c-compiler cxx-compiler make cmake
# Ensure that Conda standard libraries are used
conda env config vars set LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$LD_LIBRARY_PATH

Step 2: Convert Stable Diffusion v1.5 and Tokenizer models

Stable Diffusion v1.5 model:

  1. Install dependencies to import models from HuggingFace:
git submodule update --init
# Reactivate Conda environment after installing dependencies and setting env vars
conda activate openvino_sd_cpp
python -m pip install -r requirements.txt
python -m pip install ../../../thirdparty/openvino_tokenizers/[transformers]
  2. Download a Hugging Face SD v1.5 model, for example:
  • runwayml/stable-diffusion-v1-5

  • dreamlike-anime-1.0 to run Stable Diffusion with LoRA adapters.

    Example command for downloading and exporting the FP16 model:

    export MODEL_PATH="models/dreamlike_anime_1_0_ov/FP16"
    # Using optimum-cli for exporting model to OpenVINO format
    optimum-cli export openvino --model dreamlike-art/dreamlike-anime-1.0 --task stable-diffusion --convert-tokenizer --weight-format fp16 $MODEL_PATH
    # Converting tokenizer manually (`--convert-tokenizer` flag of `optimum-cli` results in "OpenVINO Tokenizer export for CLIPTokenizer is not supported.")
    convert_tokenizer $MODEL_PATH/tokenizer/ --tokenizer-output-type i32 -o $MODEL_PATH/tokenizer/

    You can also choose another precision and export an FP32 or INT8 model.

    Please refer to the official documentation of 🤗 Optimum and optimum-intel for more details.

Note

Currently the pipeline supports batch size = 1 only, i.e. a static model with shape (1, 3, 512, 512).
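
For illustration, here is a minimal C++ sketch of fixing the batch dimension before compilation. This is not code from this repository; the model path and the 77-token CLIP sequence length are assumptions based on the optimum-cli export layout above:

#include "openvino/openvino.hpp"

int main() {
    ov::Core core;
    // Path follows the optimum-cli export layout used above (assumption)
    auto text_encoder = core.read_model(
        "models/dreamlike_anime_1_0_ov/FP16/text_encoder/openvino_model.xml");
    // Pin the single input to batch size 1; 77 is CLIP's max token length
    text_encoder->reshape(ov::PartialShape{1, 77});
    auto compiled = core.compile_model(text_encoder, "CPU");
    return 0;
}

A dynamic pipeline (the --dynamic option below) would skip this reshape and compile the models with dynamic shapes instead.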

Enabling LoRA with safetensors

Refer to the Python pipeline blog for background. The safetensors model is loaded via safetensors.h. The layer names and weights are modified with the Eigen library and then inserted into the SD models with ov::pass::MatcherPass in common/diffusers/src/lora.cpp; a minimal sketch of this pattern is shown at the end of this section.

The SD model dreamlike-anime-1.0 and the LoRA soulcard have been tested with this pipeline.

Download the LoRA safetensors file and put it, together with the model IR, into the models folder.
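
The actual implementation lives in common/diffusers/src/lora.cpp; below is only a minimal, hypothetical sketch of the MatcherPass technique described above. The map of pre-merged weights (W' = W + alpha * B * A, computed beforehand, e.g. with Eigen) and the f32-only handling are assumptions for illustration:

#include <map>
#include <memory>
#include <string>
#include <vector>

#include "openvino/core/graph_util.hpp"
#include "openvino/op/constant.hpp"
#include "openvino/openvino.hpp"
#include "openvino/pass/graph_rewrite.hpp"
#include "openvino/pass/pattern/op/wrap_type.hpp"

// Hypothetical sketch: replace weight Constants whose friendly names appear
// in a map of pre-merged LoRA weights (this is not the repository's lora.cpp).
class InsertLoRA : public ov::pass::MatcherPass {
public:
    OPENVINO_RTTI("InsertLoRA", "0");
    explicit InsertLoRA(std::map<std::string, std::vector<float>> merged) {
        auto weights = ov::pass::pattern::wrap_type<ov::op::v0::Constant>();
        auto callback = [merged = std::move(merged)](ov::pass::pattern::Matcher& m) -> bool {
            auto constant = std::dynamic_pointer_cast<ov::op::v0::Constant>(m.get_match_root());
            if (!constant || constant->get_element_type() != ov::element::f32)
                return false;
            auto it = merged.find(constant->get_friendly_name());
            if (it == merged.end())
                return false;
            // Swap the original weight for the LoRA-merged one
            auto replacement = ov::op::v0::Constant::create(
                ov::element::f32, constant->get_shape(), it->second);
            replacement->set_friendly_name(constant->get_friendly_name());
            ov::replace_node(constant, replacement);
            return true;
        };
        register_matcher(
            std::make_shared<ov::pass::pattern::Matcher>(weights, "InsertLoRA"), callback);
    }
};

Such a pass would be applied to each SD model with ov::pass::Manager (manager.register_pass<InsertLoRA>(weights); manager.run_passes(model);) before compilation.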

Step 3: Build the SD application

conda activate openvino_sd_cpp
cmake -DCMAKE_BUILD_TYPE=Release -S . -B build
cmake --build build --parallel

Step 4: Run Pipeline

./build/stable_diffusion [-p <posPrompt>] [-n <negPrompt>] [-s <seed>] [--height <height>] [--width <width>] [-d <device>] [-r] [-l <lora.safetensors>] [-a <alpha>] [-h] [-m <modelPath>] [-t <modelType>] [--dynamic]

Usage:
  stable_diffusion [OPTION...]
  • -p, --posPrompt arg Initial positive prompt for SD (default: cyberpunk cityscape like Tokyo New York with tall buildings at dusk golden hour cinematic lighting)
  • -n, --negPrompt arg Negative prompt; default is an empty string with a space (default: )
  • -d, --device arg AUTO, CPU, or GPU. Doesn't apply to the tokenizer model; OpenVINO Tokenizers can be inferred on a CPU device only (default: CPU)
  • --step arg Number of diffusion steps (default: 20)
  • -s, --seed arg Random seed to generate the latent (default: 42)
  • --num arg Number of output images (default: 1)
  • --height arg Height of output image (default: 512)
  • --width arg Width of output image (default: 512)
  • -c, --useCache Use model caching
  • -r, --readNPLatent Read numpy-generated latents from file
  • -m, --modelPath arg Specify the path to the SD model IRs (default: ../models/dreamlike_anime_1_0_ov)
  • -t, --type arg Specify the type of SD model IRs (FP32, FP16 or INT8) (default: FP16)
  • --dynamic Use dynamic shapes for the model inputs
  • -l, --loraPath arg Specify the path to the LoRA file (*.safetensors) (default: )
  • -a, --alpha arg Alpha parameter for LoRA (default: 0.75)
  • -h, --help Print usage

Note

The tokenizer model will always be loaded to CPU: OpenVINO Tokenizers can be inferred on a CPU device only.
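
For illustration, here is a minimal sketch of how the tokenizer can be loaded in C++ under this constraint. The extension library name is platform dependent, and the model path is an assumption based on the Step 2 export layout:

#include "openvino/openvino.hpp"

int main() {
    ov::Core core;
    // Register OpenVINO Tokenizers so that tokenizer IRs can be read
    // (use openvino_tokenizers.dll on Windows)
    core.add_extension("libopenvino_tokenizers.so");
    // The tokenizer is always compiled for CPU, regardless of -d
    auto tokenizer = core.compile_model(
        "models/dreamlike_anime_1_0_ov/FP16/tokenizer/openvino_tokenizer.xml", "CPU");
    auto request = tokenizer.create_infer_request();
    // Feeding the prompt and reading the i32 token ids is omitted here, since
    // the input packing depends on the OpenVINO Tokenizers version in use.
    return 0;
}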

Examples

Positive prompt: cyberpunk cityscape like Tokyo New York with tall buildings at dusk golden hour cinematic lighting

Negative prompt: (empty; the OV tokenizer couldn't be used here, see the issues for details)

Read the numpy-generated latent instead of the C++ standard library one for alignment with the Python pipeline:

  • Generate an image without LoRA: ./build/stable_diffusion -r

  • Generate an image with the soulcard LoRA: ./build/stable_diffusion -r -l ./models/soulcard.safetensors

  • Generate an image of a different size with the dynamic model (C++-generated latent): ./build/stable_diffusion -m ./models/dreamlike_anime_1_0_ov -t FP16 --dynamic --height 448 --width 704

Notes:

For generation quality, be careful with the negative prompt and the random latent generation. C++ random generation with MT19937 produces results that differ from numpy.random.randn(). Hence, please use -r, --readNPLatent for alignment with Python (the provided latent file is for a 512x512 output image only).
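
As a sketch of the difference, here is how standard-normal latents can be drawn with the C++ standard library (names here are illustrative, not the pipeline's code):

#include <cstdint>
#include <random>
#include <vector>

// std::normal_distribution's algorithm is implementation-defined, so even with
// the same MT19937 seed the value stream differs from numpy.random.randn();
// hence the -r, --readNPLatent option for exact alignment with Python.
std::vector<float> random_latent(std::size_t n, std::uint32_t seed) {
    std::mt19937 gen(seed);
    std::normal_distribution<float> dist(0.0f, 1.0f);
    std::vector<float> latent(n);
    for (float& v : latent)
        v = dist(gen);
    return latent;
}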