Add version info to OpenVINO models #690
Conversation
```python
model.set_rt_info(_transformers_version, ["optimum", "transformers_version"])
model.set_rt_info(_torch_version, ["optimum", "pytorch_version"])
model.set_rt_info(_optimum_intel_version, ["optimum", "optimum_intel_version"])
model.set_rt_info(_optimum_version, ["optimum", "optimum_version"])
```
Do we also need to add info about NNCF here?
And since other libraries may be used as the model source for conversion (diffusers for Stable Diffusion, timm for image classification, and sentence_transformers), I think it would be useful to include their version info too (when they are available, or based on the source library used for conversion).
+1 for including NNCF version
I added NNCF, diffusers, timm, sentence-transformers and onnx (for models converted from ONNX). I prefer not to add them merely because the dependency is installed; it seems strange to me to have a `timm_version` in a BERT model, for example. For timm and diffusers I now get the library name from the model path. It seems to work well, but let me know if there is a better way.
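As a side note, conditional collection of optional-dependency versions can be sketched with the standard-library `importlib.metadata` API. The helper name and package list here are illustrative, not part of this PR:

```python
from importlib import metadata

def collect_versions(packages):
    """Return {package: version} for only the packages that are installed."""
    versions = {}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            # Skip optional libraries that are not installed, so e.g. a BERT
            # model does not get a timm_version entry.
            pass
    return versions

print(collect_versions(["nncf", "diffusers", "timm", "surely-not-installed"]))
```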
```diff
@@ -33,6 +33,7 @@
 STR_OPERATION_TO_FUNC = {">": op.gt, ">=": op.ge, "==": op.eq, "!=": op.ne, "<=": op.le, "<": op.lt}

 _optimum_version = importlib_metadata.version("optimum")
+_optimum_intel_version = importlib_metadata.version("optimum-intel")
```
Why not `from optimum.intel.version import __version__`?
I liked the consistency with `_optimum_version` :) But also, the `importlib_metadata` version includes the commit ID:

```python
>>> from optimum.intel.version import __version__
>>> from optimum.intel.utils.import_utils import _optimum_intel_version
>>> __version__
'1.17.0.dev0'
>>> _optimum_intel_version
'1.17.0.dev0+9d58b66'
```
Looks great, thanks @helena-intel
```python
if any("token_embeddings" in output.get_names() for output in model.outputs):
    import sentence_transformers

    model.set_rt_info(sentence_transformers.__version__, ["optimum", "sentence_transformers_version"])
if library_name == "diffusers":
```
Why not:

```diff
-if any("token_embeddings" in output.get_names() for output in model.outputs):
-    import sentence_transformers
-    model.set_rt_info(sentence_transformers.__version__, ["optimum", "sentence_transformers_version"])
-if library_name == "diffusers":
+if library_name == "sentence_transformers":
+    import sentence_transformers
+    model.set_rt_info(sentence_transformers.__version__, ["optimum", "sentence_transformers_version"])
+elif library_name == "diffusers":
```
That was my first attempt, but it didn't work: after `optimum-cli export openvino -m BAAI/bge-base-en-v1.5 --library sentence_transformers bge-base-en-ov-st`, `library_name` is detected as `transformers` by `TasksManager.infer_library_from_model`. If I copy `config_sentence_transformers.json` to the model directory, the model is detected as sentence-transformers. Should we save that file for models exported with the sentence_transformers library?
I see. An option could be to re-use the `library_name` inferred from the original model; this solution also works for me.
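For context, the kind of file-based library inference being discussed could be sketched like this. This is a simplified illustration, not optimum's actual `infer_library_from_model` implementation; the marker file names follow the conventions mentioned above:

```python
from pathlib import Path

def infer_library(model_dir: str) -> str:
    """Guess the source library from marker files in an exported model folder."""
    files = {p.name for p in Path(model_dir).iterdir()}
    if "config_sentence_transformers.json" in files:
        return "sentence_transformers"
    if "model_index.json" in files:  # diffusers pipelines save this index file
        return "diffusers"
    return "transformers"  # default fallback
```

With a scheme like this, a sentence-transformers export is only recognized if `config_sentence_transformers.json` was saved alongside the model, which is exactly the gap described in the comment above.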
Versions of optimum-intel and its dependencies can have an impact on performance/accuracy. This PR adds the transformers, torch, optimum and optimum-intel versions to the openvino_model.xml file (the OpenVINO version is already there). This is really useful when debugging issues or looking into differences.
Example (an `<optimum>` section is added): [screenshot]

@eaidova @AlexKoff88
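For reference, the added `<optimum>` section inside the model XML's `rt_info` might look roughly like this. The version values are invented placeholders, and the exact attribute layout is an assumption based on how OpenVINO serializes runtime info, not taken from this PR:

```xml
<rt_info>
    <optimum>
        <transformers_version value="4.40.0" />
        <pytorch_version value="2.2.1" />
        <optimum_version value="1.19.0" />
        <optimum_intel_version value="1.17.0.dev0+9d58b66" />
    </optimum>
</rt_info>
```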