Hi, I just saw a deprecation warning about the `openvino.runtime` module, which will be removed in OpenVINO 2026 (so we will need to update before then). Could you share more information on what the issue is?
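For context, here is a minimal sketch of the migration that kind of warning usually points at, assuming it is the standard `openvino.runtime` namespace deprecation (the exact module path in your warning may differ):

```python
# Deprecated import style, scheduled for removal (assumption: the warning is
# the usual openvino.runtime namespace deprecation):
# from openvino.runtime import Core

# Replacement: import directly from the top-level openvino package.
from openvino import Core

core = Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU']
```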
The issue is that optimum-cli only printed the warning message and did nothing else when converting the DeepSeek R1 32B model.
@openvino-book A 32B-parameter model is quite large compared to 1.5B, so conversion may take significantly more time than for the 1.5B model. From your logs it is not visible that the conversion process finished, so I assume this is not a bug, just a long export: it is stuck loading the model weights from disk at the PyTorch level, before any export work has started.
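To make that visible, here is a minimal sketch of the same export driven from the Python API instead of the CLI; the long weight-loading phase happens inside the `from_pretrained` call. Paths are placeholders:

```python
from optimum.intel import OVModelForCausalLM

# Loading a 32B checkpoint takes a long time and a lot of RAM; the process
# sits in this call while PyTorch reads the weights from disk, before any
# OpenVINO conversion starts.
model = OVModelForCausalLM.from_pretrained(
    "path/to/DeepSeek-R1-Distill-Qwen-32B",  # placeholder local path
    export=True,                             # convert PyTorch -> OpenVINO IR
)
model.save_pretrained("DeepSeek-R1-Distill-Qwen-32B-ov")  # writes the IR files
```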
The versions of OpenVINO and optimum-intel I used:
openvino 2025.0.0
openvino-genai 2025.0.0.0
openvino-telemetry 2025.0.1
openvino-tokenizers 2025.0.0.0
optimum 1.24.0
optimum-intel 1.23.0.dev0+c8c6beb
transformers 4.48.3
torch 2.6.0
I downloaded DeepSeek R1 1.5B from https://www.modelscope.cn/models/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B and converted it successfully with optimum-cli.
I downloaded DeepSeek R1 32B from https://www.modelscope.cn/models/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B, but failed to convert it with optimum-cli.
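For reference, a quick way to confirm that an export actually completed is to load the resulting IR back and generate a few tokens. This sketch assumes the export wrote its output (including tokenizer files, which optimum-cli saves alongside the model) to `DeepSeek-R1-Distill-Qwen-1.5B-ov`, a placeholder directory name:

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

ov_model_dir = "DeepSeek-R1-Distill-Qwen-1.5B-ov"  # placeholder output dir

tokenizer = AutoTokenizer.from_pretrained(ov_model_dir)
model = OVModelForCausalLM.from_pretrained(ov_model_dir)  # loads the exported IR

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```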