
[Bug]: "Accelerate with OpenVINO" option not present. #124

Open
1 task done
mcondarelli opened this issue Feb 1, 2025 · 3 comments

@mcondarelli

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What happened?

The "Accelerate with OpenVINO" option is not present in the dropdown menu.

Looking at the first startup log, I can see the error (full listing below):

*** Error loading script: openvino_accelerate.py
    Traceback (most recent call last):
      File "/home/mcon/prove/LLaMa/stable-diffusion-webui/modules/scripts.py", line 382, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
      File "/home/mcon/prove/LLaMa/stable-diffusion-webui/modules/script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 883, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "/home/mcon/prove/LLaMa/stable-diffusion-webui/scripts/openvino_accelerate.py", line 34, in <module>
        from openvino.frontend.pytorch.torchdynamo import backend # noqa: F401
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/openvino/frontend/pytorch/torchdynamo/backend.py", line 15, in <module>
        from torch._inductor.compile_fx import compile_fx
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 21, in <module>
        from . import config, metrics, overrides, pattern_matcher
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/pattern_matcher.py", line 18, in <module>
        from . import config, ir
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/ir.py", line 29, in <module>
        from . import config, dependencies
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/dependencies.py", line 10, in <module>
        from .codegen.common import index_prevent_reordering
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/codegen/common.py", line 13, in <module>
        from ..utils import (
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/utils.py", line 32, in <module>
        from triton.testing import do_bench
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/__init__.py", line 20, in <module>
        from .runtime import (
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/runtime/__init__.py", line 1, in <module>
        from .autotuner import Config, Heuristics, autotune, heuristics
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 7, in <module>
        from ..compiler import OutOfResources
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/compiler.py", line 1888, in <module>
        @static_vars(amdgcn_bitcode_paths = _get_amdgcn_bitcode_paths())
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/compiler.py", line 1867, in _get_amdgcn_bitcode_paths
        gfx_arch = _get_amdgcn_bitcode_paths.discovered_gfx_arch_fulldetails[1]
    TypeError: 'NoneType' object is not subscriptable

I gather that OpenVINO does not recognize my CPU/integrated GPU, but I have no idea how to help it.

Steps to reproduce the problem

  1. Start webui.sh.
  2. Open the script combo box.
  3. No "Accelerate with OpenVINO" entry is present.

What should have happened?

I should see the "Accelerate with OpenVINO" option.

Sysinfo

sysinfo-2025-02-01-20-27.txt

What browsers do you use to access the UI ?

Mozilla Firefox

Console logs

This is the full log to date; the error is the same as above.
I started `webui.sh`, played a bit with the interface, generated a test image (which ran on the CPU), and restarted the GUI to see if anything changed.


mcon@cinderella:~/prove/LLaMa$ . sd_venv/bin/activate
(sd_venv) mcon@cinderella:~/prove/LLaMa$ cd stable-diffusion-webui/
(sd_venv) mcon@cinderella:~/prove/LLaMa/stable-diffusion-webui$ echo $PYTORCH_TRACING_MODE

(sd_venv) mcon@cinderella:~/prove/LLaMa/stable-diffusion-webui$ export PYTORCH_TRACING_MODE=TORCHFX
(sd_venv) mcon@cinderella:~/prove/LLaMa/stable-diffusion-webui$ export COMMANDLINE_ARGS="--skip-torch-cuda-test --precision full --no-half" 
(sd_venv) mcon@cinderella:~/prove/LLaMa/stable-diffusion-webui$ ./webui.sh 

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################

################################################################
Running on mcon user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
python venv already activate or run without venv: /home/mcon/prove/LLaMa/sd_venv
################################################################

################################################################
Launching launch.py...
################################################################
Cannot locate TCMalloc (improves CPU memory usage)
fatal: No names found, cannot describe anything.
Python 3.10.6 (main, Feb  1 2025, 19:14:22) [GCC 14.2.0]
Version: 1.6.0
Commit hash: e5a634da06c62d72dbdc764b16c65ef3408aa588
Installing torch and torchvision
Looking in indexes: https://download.pytorch.org/whl/rocm5.4.2
Collecting torch==2.0.1+rocm5.4.2
  Downloading https://download.pytorch.org/whl/rocm5.4.2/torch-2.0.1%2Brocm5.4.2-cp310-cp310-linux_x86_64.whl (1536.4 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.5/1.5 GB 698.6 kB/s eta 0:00:00
Collecting torchvision==0.15.2+rocm5.4.2
  Downloading https://download.pytorch.org/whl/rocm5.4.2/torchvision-0.15.2%2Brocm5.4.2-cp310-cp310-linux_x86_64.whl (62.4 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.4/62.4 MB 17.3 MB/s eta 0:00:00
Collecting typing-extensions
  Downloading https://download.pytorch.org/whl/typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Collecting pytorch-triton-rocm<2.1,>=2.0.0
  Downloading https://download.pytorch.org/whl/pytorch_triton_rocm-2.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (78.4 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 78.4/78.4 MB 16.3 MB/s eta 0:00:00
Collecting networkx
  Downloading https://download.pytorch.org/whl/networkx-3.3-py3-none-any.whl (1.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 19.3 MB/s eta 0:00:00
Collecting filelock
  Downloading https://download.pytorch.org/whl/filelock-3.13.1-py3-none-any.whl (11 kB)
Collecting sympy
  Downloading https://download.pytorch.org/whl/sympy-1.13.1-py3-none-any.whl (6.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.2/6.2 MB 26.4 MB/s eta 0:00:00
Collecting jinja2
  Downloading https://download.pytorch.org/whl/Jinja2-3.1.4-py3-none-any.whl (133 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.3/133.3 kB 3.3 MB/s eta 0:00:00
Collecting pillow!=8.3.*,>=5.3.0
  Downloading https://download.pytorch.org/whl/pillow-11.0.0-cp310-cp310-manylinux_2_28_x86_64.whl (4.4 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.4/4.4 MB 27.1 MB/s eta 0:00:00
Collecting numpy
  Downloading https://download.pytorch.org/whl/numpy-2.1.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.3 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 16.3/16.3 MB 19.7 MB/s eta 0:00:00
Collecting requests
  Downloading https://download.pytorch.org/whl/requests-2.28.1-py3-none-any.whl (62 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.8/62.8 kB 2.1 MB/s eta 0:00:00
Collecting cmake
  Downloading https://download.pytorch.org/whl/cmake-3.25.0-py2.py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23.7 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 23.7/23.7 MB 32.1 MB/s eta 0:00:00
Collecting lit
  Downloading https://download.pytorch.org/whl/lit-15.0.7.tar.gz (132 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 132.3/132.3 kB 726.1 kB/s eta 0:00:00
  Preparing metadata (setup.py) ... done
Collecting MarkupSafe>=2.0
  Downloading https://download.pytorch.org/whl/MarkupSafe-2.1.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Collecting certifi>=2017.4.17
  Downloading https://download.pytorch.org/whl/certifi-2022.12.7-py3-none-any.whl (155 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 155.3/155.3 kB 3.7 MB/s eta 0:00:00
Collecting urllib3<1.27,>=1.21.1
  Downloading https://download.pytorch.org/whl/urllib3-1.26.13-py2.py3-none-any.whl (140 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 140.6/140.6 kB 2.7 MB/s eta 0:00:00
Collecting idna<4,>=2.5
  Downloading https://download.pytorch.org/whl/idna-3.4-py3-none-any.whl (61 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.5/61.5 kB 1.7 MB/s eta 0:00:00
Collecting charset-normalizer<3,>=2
  Downloading https://download.pytorch.org/whl/charset_normalizer-2.1.1-py3-none-any.whl (39 kB)
Collecting mpmath<1.4,>=1.1.0
  Downloading https://download.pytorch.org/whl/mpmath-1.3.0-py3-none-any.whl (536 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 536.2/536.2 kB 9.7 MB/s eta 0:00:00
Using legacy 'setup.py install' for lit, since package 'wheel' is not installed.
Installing collected packages: mpmath, lit, cmake, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, filelock, charset-normalizer, certifi, requests, jinja2, pytorch-triton-rocm, torch, torchvision
  Running setup.py install for lit ... done
Successfully installed MarkupSafe-2.1.5 certifi-2022.12.7 charset-normalizer-2.1.1 cmake-3.25.0 filelock-3.13.1 idna-3.4 jinja2-3.1.4 lit-15.0.7 mpmath-1.3.0 networkx-3.3 numpy-2.1.2 pillow-11.0.0 pytorch-triton-rocm-2.0.1 requests-2.28.1 sympy-1.13.1 torch-2.0.1+rocm5.4.2 torchvision-0.15.2+rocm5.4.2 typing-extensions-4.12.2 urllib3-1.26.13
WARNING: There was an error checking the latest version of pip.
Installing clip
Installing open_clip
Cloning Stable Diffusion into /home/mcon/prove/LLaMa/stable-diffusion-webui/repositories/stable-diffusion-stability-ai...
Cloning into '/home/mcon/prove/LLaMa/stable-diffusion-webui/repositories/stable-diffusion-stability-ai'...
remote: Enumerating objects: 580, done.
remote: Counting objects: 100% (2/2), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 580 (delta 0), reused 0 (delta 0), pack-reused 578 (from 2)
Receiving objects: 100% (580/580), 73.44 MiB | 39.85 MiB/s, done.
Resolving deltas: 100% (283/283), done.
Cloning Stable Diffusion XL into /home/mcon/prove/LLaMa/stable-diffusion-webui/repositories/generative-models...
Cloning into '/home/mcon/prove/LLaMa/stable-diffusion-webui/repositories/generative-models'...
remote: Enumerating objects: 1064, done.
remote: Counting objects: 100% (477/477), done.
remote: Compressing objects: 100% (124/124), done.
remote: Total 1064 (delta 376), reused 353 (delta 353), pack-reused 587 (from 1)
Receiving objects: 100% (1064/1064), 53.60 MiB | 38.22 MiB/s, done.
Resolving deltas: 100% (562/562), done.
Cloning K-diffusion into /home/mcon/prove/LLaMa/stable-diffusion-webui/repositories/k-diffusion...
Cloning into '/home/mcon/prove/LLaMa/stable-diffusion-webui/repositories/k-diffusion'...
remote: Enumerating objects: 1350, done.
remote: Counting objects: 100% (1350/1350), done.
remote: Compressing objects: 100% (444/444), done.
remote: Total 1350 (delta 951), reused 1254 (delta 899), pack-reused 0 (from 0)
Receiving objects: 100% (1350/1350), 233.36 KiB | 1.91 MiB/s, done.
Resolving deltas: 100% (951/951), done.
Cloning CodeFormer into /home/mcon/prove/LLaMa/stable-diffusion-webui/repositories/CodeFormer...
Cloning into '/home/mcon/prove/LLaMa/stable-diffusion-webui/repositories/CodeFormer'...
remote: Enumerating objects: 614, done.
remote: Counting objects: 100% (297/297), done.
remote: Compressing objects: 100% (114/114), done.
remote: Total 614 (delta 208), reused 183 (delta 183), pack-reused 317 (from 3)
Receiving objects: 100% (614/614), 17.31 MiB | 23.30 MiB/s, done.
Resolving deltas: 100% (296/296), done.
Cloning BLIP into /home/mcon/prove/LLaMa/stable-diffusion-webui/repositories/BLIP...
Cloning into '/home/mcon/prove/LLaMa/stable-diffusion-webui/repositories/BLIP'...
remote: Enumerating objects: 277, done.
remote: Counting objects: 100% (183/183), done.
remote: Compressing objects: 100% (46/46), done.
remote: Total 277 (delta 145), reused 137 (delta 137), pack-reused 94 (from 1)
Receiving objects: 100% (277/277), 7.04 MiB | 18.66 MiB/s, done.
Resolving deltas: 100% (152/152), done.
Installing requirements for CodeFormer
Installing requirements
Launching Web UI with arguments: --skip-torch-cuda-test --precision full --no-half
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'No HIP GPUs are available', memory monitor disabled
Downloading: "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors" to /home/mcon/prove/LLaMa/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors

100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 3.97G/3.97G [00:40<00:00, 105MB/s]
*** Error loading script: openvino_accelerate.py
    Traceback (most recent call last):
      File "/home/mcon/prove/LLaMa/stable-diffusion-webui/modules/scripts.py", line 382, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
      File "/home/mcon/prove/LLaMa/stable-diffusion-webui/modules/script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 883, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "/home/mcon/prove/LLaMa/stable-diffusion-webui/scripts/openvino_accelerate.py", line 34, in <module>
        from openvino.frontend.pytorch.torchdynamo import backend # noqa: F401
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/openvino/frontend/pytorch/torchdynamo/backend.py", line 15, in <module>
        from torch._inductor.compile_fx import compile_fx
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 21, in <module>
        from . import config, metrics, overrides, pattern_matcher
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/pattern_matcher.py", line 18, in <module>
        from . import config, ir
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/ir.py", line 29, in <module>
        from . import config, dependencies
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/dependencies.py", line 10, in <module>
        from .codegen.common import index_prevent_reordering
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/codegen/common.py", line 13, in <module>
        from ..utils import (
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/utils.py", line 32, in <module>
        from triton.testing import do_bench
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/__init__.py", line 20, in <module>
        from .runtime import (
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/runtime/__init__.py", line 1, in <module>
        from .autotuner import Config, Heuristics, autotune, heuristics
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 7, in <module>
        from ..compiler import OutOfResources
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/compiler.py", line 1888, in <module>
        @static_vars(amdgcn_bitcode_paths = _get_amdgcn_bitcode_paths())
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/compiler.py", line 1867, in _get_amdgcn_bitcode_paths
        gfx_arch = _get_amdgcn_bitcode_paths.discovered_gfx_arch_fulldetails[1]
    TypeError: 'NoneType' object is not subscriptable

---
Calculating sha256 for /home/mcon/prove/LLaMa/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors: Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 346.9s (prepare environment: 294.6s, import torch: 5.6s, import gradio: 0.6s, setup paths: 0.6s, other imports: 0.5s, list SD models: 41.3s, load scripts: 1.7s, create ui: 0.9s, gradio launch: 0.9s).
6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa
Loading weights [6ce0161689] from /home/mcon/prove/LLaMa/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: /home/mcon/prove/LLaMa/stable-diffusion-webui/configs/v1-inference.yaml
/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/huggingface_hub/file_download.py:795: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
vocab.json: 100%|██████████████████████████████████████████████████████████████████████████████████████| 961k/961k [00:00<00:00, 3.05MB/s]
merges.txt: 100%|██████████████████████████████████████████████████████████████████████████████████████| 525k/525k [00:00<00:00, 37.6MB/s]
special_tokens_map.json: 100%|███████████████████████████████████████████████████████████████████████████| 389/389 [00:00<00:00, 5.18MB/s]
tokenizer_config.json: 100%|█████████████████████████████████████████████████████████████████████████████| 905/905 [00:00<00:00, 3.19MB/s]
config.json: 100%|███████████████████████████████████████████████████████████████████████████████████| 4.52k/4.52k [00:00<00:00, 10.6MB/s]
Applying attention optimization: InvokeAI... done.
Model loaded in 23.9s (calculate hash: 10.6s, load weights from disk: 0.2s, create model: 2.9s, apply weights to model: 10.1s).
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [02:28<00:00,  7.45s/it]
Total progress: 100%|█████████████████████████████████████████████████████████████████████████████████████| 20/20 [02:26<00:00,  7.34s/it]
Restarting UI...100%|█████████████████████████████████████████████████████████████████████████████████████| 20/20 [02:26<00:00,  7.02s/it]
Closing server running on port: 7860
*** Error loading script: openvino_accelerate.py
    Traceback (most recent call last):
      File "/home/mcon/prove/LLaMa/stable-diffusion-webui/modules/scripts.py", line 382, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
      File "/home/mcon/prove/LLaMa/stable-diffusion-webui/modules/script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 883, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "/home/mcon/prove/LLaMa/stable-diffusion-webui/scripts/openvino_accelerate.py", line 34, in <module>
        from openvino.frontend.pytorch.torchdynamo import backend # noqa: F401
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/openvino/frontend/pytorch/torchdynamo/backend.py", line 15, in <module>
        from torch._inductor.compile_fx import compile_fx
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 21, in <module>
        from . import config, metrics, overrides, pattern_matcher
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/pattern_matcher.py", line 18, in <module>
        from . import config, ir
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/ir.py", line 29, in <module>
        from . import config, dependencies
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/dependencies.py", line 10, in <module>
        from .codegen.common import index_prevent_reordering
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/codegen/common.py", line 13, in <module>
        from ..utils import (
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/utils.py", line 32, in <module>
        from triton.testing import do_bench
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/__init__.py", line 20, in <module>
        from .runtime import (
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/runtime/__init__.py", line 1, in <module>
        from .autotuner import Config, Heuristics, autotune, heuristics
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 7, in <module>
        from ..compiler import OutOfResources
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/compiler.py", line 1888, in <module>
        @static_vars(amdgcn_bitcode_paths = _get_amdgcn_bitcode_paths())
      File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/compiler.py", line 1867, in _get_amdgcn_bitcode_paths
        gfx_arch = _get_amdgcn_bitcode_paths.discovered_gfx_arch_fulldetails[1]
    TypeError: 'NoneType' object is not subscriptable

---
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 0.8s (load scripts: 0.2s, create ui: 0.5s).

Additional information

On top of what is shown by sysinfo, I also have a discrete GPU (it is actually an RX 580X with 8 GB of VRAM, if that matters):

mcon@cinderella:~/prove/LLaMa/stable-diffusion-webui/scripts$ sudo lspci | grep VGA
05:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] (rev e7)

Could the presence of the discrete GPU be "confusing" OpenVINO?

sysinfo also seems to have trouble detecting my CPU type; this is what my system reports:

mcon@cinderella:~/prove/LLaMa/stable-diffusion-webui/scripts$ head -28 /proc/cpuinfo 
processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 158
model name	: Intel(R) Core(TM) i7-8700K CPU @ 3.70GHz
stepping	: 10
microcode	: 0xf8
cpu MHz		: 4469.114
cache size	: 12288 KB
physical id	: 0
siblings	: 12
core id		: 0
cpu cores	: 6
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 22
wp		: yes
flags		: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
vmx flags	: vnmi preemption_timer invvpid ept_x_only ept_ad ept_1gb flexpriority tsc_offset vtpr mtf vapic ept vpid unrestricted_guest ple shadow_vmcs pml ept_violation_ve ept_mode_based_exec
bugs		: cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit srbds mmio_stale_data retbleed gds
bogomips	: 7399.70
clflush size	: 64
cache_alignment	: 64
address sizes	: 39 bits physical, 48 bits virtual
power management:

@TheWonderfulTartiflette

TheWonderfulTartiflette commented Feb 9, 2025

Yep, I can confirm that I have the exact same issue:

fatal: No names found, cannot describe anything.
Python 3.10.0 (tags/v3.10.0:b494f59, Oct  4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)]
Version: 1.6.0
Commit hash: e5a634da06c62d72dbdc764b16c65ef3408aa588
Launching Web UI with arguments: --skip-torch-cuda-test --precision full --no-half
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
*** Error loading script: openvino_accelerate.py
    Traceback (most recent call last):
      File "E:\AI\OpenWebUI OpenVINO\stable-diffusion-webui\modules\scripts.py", line 382, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
      File "E:\AI\OpenWebUI OpenVINO\stable-diffusion-webui\modules\script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 883, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "E:\AI\OpenWebUI OpenVINO\stable-diffusion-webui\scripts\openvino_accelerate.py", line 47, in <module>
        from diffusers import (
      File "E:\AI\OpenWebUI OpenVINO\stable-diffusion-webui\venv\lib\site-packages\diffusers\__init__.py", line 5, in <module>
        from .utils import (
      File "E:\AI\OpenWebUI OpenVINO\stable-diffusion-webui\venv\lib\site-packages\diffusers\utils\__init__.py", line 38, in <module>
        from .dynamic_modules_utils import get_class_from_dynamic_module
      File "E:\AI\OpenWebUI OpenVINO\stable-diffusion-webui\venv\lib\site-packages\diffusers\utils\dynamic_modules_utils.py", line 28, in <module>
        from huggingface_hub import HfFolder, cached_download, hf_hub_download, model_info
    ImportError: cannot import name 'cached_download' from 'huggingface_hub' (E:\AI\OpenWebUI OpenVINO\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\__init__.py)

---
Loading weights [6ce0161689] from E:\AI\OpenWebUI OpenVINO\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: E:\AI\OpenWebUI OpenVINO\stable-diffusion-webui\configs\v1-inference.yaml
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
E:\AI\OpenWebUI OpenVINO\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py:795: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Startup time: 6.7s (prepare environment: 0.5s, import torch: 3.0s, import gradio: 0.7s, setup paths: 0.5s, initialize shared: 0.1s, other imports: 0.4s, load scripts: 0.9s, create ui: 0.4s, gradio launch: 0.3s).
Applying attention optimization: InvokeAI... done.
Model loaded in 1.7s (load weights from disk: 0.5s, create model: 0.2s, apply weights to model: 0.9s)

@picarica

same here

@edricus

edricus commented Mar 26, 2025

Fixed it by downgrading `huggingface_hub`:
pip install huggingface_hub==0.25.0
Thanks to easydiffusion/easydiffusion#1851 (comment)
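For anyone applying this fix to a setup like the one in the original report, the full sequence might look like the following (the venv path is taken from the logs above; adjust it to your own install):

```shell
# Activate the webui's virtualenv, then pin huggingface_hub to a release
# that still ships cached_download (0.25.0, per the linked comment).
source sd_venv/bin/activate
pip install huggingface_hub==0.25.0

# Sanity check: the import that crashed openvino_accelerate.py should now succeed.
python -c "from huggingface_hub import cached_download; print('ok')"
```

This works because the `ImportError: cannot import name 'cached_download'` comes from newer `huggingface_hub` releases having removed that deprecated helper, while the diffusers version bundled with this fork still imports it.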
