
Flux.1 image generation doesn't yet support iGPU and compile-only mode, but PRs are ready #2646

Closed
JamieVC opened this issue Jan 13, 2025 · 3 comments


JamieVC commented Jan 13, 2025

Describe the bug
We saw the PR that enables Flux.1 to run on iGPU:
openvinotoolkit/openvino#27265

We also saw PRs that make compile-only mode work (a rough sketch of that loading path follows the links below):
huggingface/optimum-intel#873
huggingface/optimum-intel#1101
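
For illustration, a minimal sketch of how compile-only loading might look once those PRs land. The compile_only argument name and the non-empty CACHE_DIR value are assumptions based on the PRs above, not verified notebook code:

from optimum.intel.openvino import OVDiffusionPipeline

# Assumption: compile_only is the load-time flag added by the optimum-intel PRs linked above,
# and a non-empty CACHE_DIR is needed so the precompiled blob can be reused across runs.
# model_dir points at the exported OpenVINO model, as in the notebook.
ov_config = {"INFERENCE_PRECISION_HINT": "f16", "CACHE_DIR": "model_cache"}
ov_pipe = OVDiffusionPipeline.from_pretrained(model_dir, device="GPU", ov_config=ov_config, compile_only=True)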

We then ran some tests with the Flux.1 notebook: https://github.com/openvinotoolkit/openvino_notebooks/blob/latest/notebooks/flux.1-image-generation/flux.1-image-generation.ipynb

Could you help enable the Flux.1 sample code to support GPU as well?
The config below covers CPU, GPU, and GPU with compile-only mode.
(screenshot: configuration table for CPU, GPU, and GPU with compile-only mode)

Expected behavior
The Flux.1 sample code can run on GPU.

andrei-kochin (Collaborator) commented:

@JamieVC compile-only mode is not the default path and will not be present in the notebook, which covers the general model use case. GPU can run Flux.1; that is why it is listed among the supported devices.


JamieVC commented Jan 15, 2025

Hi @andrei-kochin,

Sure, understood that we won't present compile-only mode in the notebook.

However, I believe we need the config below to run inference on GPU. Could we modify the notebook's ipynb code to support GPU as well? Thanks

from optimum.intel.openvino import OVDiffusionPipeline

# model_base_dir, use_quantized_models, and device come from earlier notebook cells
model_dir = model_base_dir / "INT4" if use_quantized_models.value else model_base_dir / "FP16"

# GPU-oriented OpenVINO properties: f16 inference precision, activation scaling, and cache directory
ov_config = {"INFERENCE_PRECISION_HINT": "f16", "ACTIVATIONS_SCALE_FACTOR": "8.0", "CACHE_DIR": ""}
ov_pipe = OVDiffusionPipeline.from_pretrained(model_dir, device=device.value, ov_config=ov_config)
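
For completeness, a minimal usage sketch of the loaded pipeline, assuming the standard diffusers-style call interface; the prompt and generation parameters below are illustrative, not taken from the notebook:

# Illustrative prompt and parameters; the pipeline returns a diffusers-style output with .images
prompt = "A cat holding a sign that says hello world"
image = ov_pipe(prompt, num_inference_steps=4, height=512, width=512).images[0]
image.save("flux_result.png")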

andrei-kochin (Collaborator) commented:

@JamieVC we don't "need" it for running the notebook on GPU; this option "can" be provided, and that is the key point. As I mentioned previously, the notebook demonstrates the general execution scenario and is now aligned with the existing content.
