While the quantization is running, it prints this warning:

Could not load tokenizer using specified model ID or path. OpenVINO tokenizer/detokenizer models won't be generated. Exception: Invalid version: '2025.0.0.0-476-2b2420220f9'
After quantization, loading the model with ov_genai.LLMPipeline(model_path, "CPU") fails with:
File "C:\Users\gta\Downloads\script\npu_workspace\official_npu_ov\run.py", line 6, in <module>
pipe = ov_genai.LLMPipeline(model_path, "CPU")#, pipeline_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Check 'ov_tokenizer || ov_detokenizer' failed at C:\Jenkins\workspace\private-ci\ie\build-windows-vs2022\b\repos\openvino.genai\src\cpp\src\tokenizer.cpp:196:
Neither tokenizer nor detokenzier models were provided
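The failed check `ov_tokenizer || ov_detokenizer` means the pipeline could not find the converted tokenizer models in the export directory, which is consistent with the warning above saying they would not be generated. As a quick sanity check before constructing the pipeline, you can look for the files openvino_tokenizers normally produces next to the main model; this is a minimal sketch (the file names are the ones the converter emits, `model_path` is a placeholder):

```python
from pathlib import Path

def missing_tokenizer_files(model_path):
    """Return the tokenizer/detokenizer model files absent from model_path.

    LLMPipeline expects these alongside openvino_model.xml; if either is
    missing here, the 'Neither tokenizer nor detokenizer' error will follow.
    """
    required = ["openvino_tokenizer.xml", "openvino_detokenizer.xml"]
    return [f for f in required if not (Path(model_path) / f).exists()]

# Example usage (directory name is hypothetical):
# missing_tokenizer_files("Phi-3-mini-4k-instruct")
```

If the list is non-empty, the export step needs to be fixed or the tokenizer models regenerated before `LLMPipeline` can load the directory.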
Which step did I get wrong?
For context, I am following https://docs.openvino.ai/2025/openvino-workflow-generative/inference-with-genai/inference-with-genai-on-npu.html to convert the phi3 model.
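The `Invalid version: '2025.0.0.0-476-2b2420220f9'` exception suggests openvino-tokenizers could not parse the version string of a developer/nightly OpenVINO build, so it skipped generating the tokenizer models. A common way out is to reinstall matching release wheels and re-run the export. This is a hedged sketch, not a verified fix: the package names are the published PyPI ones, but the model ID, output directory, and quantization flags here are assumptions modeled on the linked guide, and exact compatible versions depend on your environment:

```shell
# Reinstall matching release wheels so the version string parses cleanly.
pip install --upgrade --force-reinstall openvino openvino-tokenizers optimum-intel

# Re-export; model ID and output dir ("phi3-int4") are placeholders.
optimum-cli export openvino --model microsoft/Phi-3-mini-4k-instruct \
    --weight-format int4 --sym --group-size 128 phi3-int4
```

After a successful export, the output directory should contain openvino_tokenizer.xml and openvino_detokenizer.xml alongside openvino_model.xml.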